Improve Phase of Lean Six Sigma Black Belt Tutorial

5.1 Welcome

Hello and welcome to the Lean Six Sigma Black Belt program offered by Simplilearn. We will be discussing section five, the Improve phase, in this session. A saying widely quoted in the Six Sigma fraternity goes, “If you did not improve the process, what did you really do?” Before we take up this lesson, please download the Improve phase toolkit, which is provided in a separate folder named ‘Improve.’

5.5 Agenda

In this slide, we will start with the pre-improve considerations, model adequacy checking, and multi-vari charts. Then we will cover the seven management tools, grouped and named as the seven ‘M’ tools. After that, we will learn how to build and use an activity network diagram. Then, we will find out how to do point and interval estimation. We will also cover Porter’s five forces analysis, which is used for industry analysis and business strategy development. This will be followed by Pugh (pronounced as pyuh) analysis, used to evaluate multiple options, and finally, the Lean ‘five S’ concept for organizing the workplace.

5.6 Pre Improve Considerations

Let us look into some pre-improve considerations. One of the key understandings from the Analyze phase is that variation in the output variable is caused by variation in the input variables. We identified the input variables using a variety of tools, like the Cause-Effect Diagram and the Cause-Effect Matrix, and validated the correlation between multiple variables with the help of regression. Apart from that, we also ran a series of confidence intervals and hypothesis tests to confirm whether the variation was due to special causes or common causes. One tool that a lot of Six Sigma practitioners, especially Black Belts, love to use is the multi-vari chart. We will discuss the multi-vari chart and some other ‘pre-improve considerations’ in the next few slides. In the next slide, we will discuss model adequacy checking.

5.7 Model Adequacy Checking

Model adequacy checking is used during regression analysis to check whether the model found by the regression analysis is adequate or not. Adequacy checks are done by the following methods. First, check if all the points fit the regression line, and also look for linearity. Once the r-squared or adjusted r-squared values are calculated, check whether the value is greater than eighty per cent, in which case the model can be considered adequate. After linear regression is performed, check for non-linearity of the residuals; non-linearity shows that the model is not adequate. Finally, check if ninety five per cent of the scaled residuals are within the range of minus one and plus one. We will be continuing with the same in the next two slides as well.

5.8 Model Adequacy Checking(Contd.)

Now let us understand how to check the adequacy of a model using the lack of fit test. A perfect regression model, with all the points fitting the regression line, will have a zero SSE (pronounced as S-S-E). SSE here stands for the sum of squares of errors. When the readings or observations are repeated under identical conditions (that is, a similar environment, measurements, variables, etc.), the error observed is the sum of squares due to pure error, also known as SSPE. In the case of an imperfect regression model with repeated observations, the sum of squares of errors will have two components: the sum of squares due to pure error, or SSPE, and the sum of squares due to lack of fit, or SSLOF (pronounced as S-S-L-O-F). A model adequacy test will determine whether the ‘sum of squares of errors’ contains only pure error or also a lack of fit error. The model is then examined for lack of fit error; if the lack of fit error is found to be at a minimum, the model is adequate, which means the model is a good fit.

5.9 Model Adequacy Checking(Contd.)

SSE (pronounced as S-S-E) is basically the sum of SSPE (pronounced as S-S-P-E) and SSLOF (pronounced as S-S-L-O-F), which means SSLOF will be SSE minus SSPE. Though there are special computer applications that will do these calculations automatically, it is important to know these fundamentals conceptually. On finding the lack of fit sum of squares, it is important that the Black Belt checks the goodness of fit for the data. If the goodness of fit value is less than zero point zero five, the fit of the model should be rejected; the Black Belt should reject the model, claiming lack of fit. If the goodness of fit value is greater than zero point zero five, the Black Belt can accept the fit of the data.
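For reference, the decomposition described above and the corresponding lack-of-fit F statistic are conventionally written as follows (a standard formulation from regression texts, where n is the total number of observations, m the number of distinct input settings, and p the number of model parameters):

```latex
SS_E = SS_{PE} + SS_{LOF}
\quad\Longrightarrow\quad
SS_{LOF} = SS_E - SS_{PE},
\qquad
F_0 = \frac{SS_{LOF}/(m-p)}{SS_{PE}/(n-m)}
```

A large F-zero, equivalently a p-value below zero point zero five, signals lack of fit, which matches the decision rule stated above.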

5.10 Multi Vari Charts

Let us get some concepts cleared on multi-vari charts, a popular tool used by Green Belts and Black Belts, in this slide. Multi-vari charts help us understand the variation within piece, piece-to-piece, and time-to-time. These variations are also known as positional variation, cyclical variation, and temporal variation. With the help of a multi-vari chart, we can clearly identify the major source of variation in our process. This variation has its roots in common and special causes of variation, so Black Belts need not worry about dealing with a new type of variation. Use the tool MVA in the toolkit to conduct a multi-vari analysis. Multi-vari analysis helps us identify the dominant source of variation, i.e., within piece, piece-to-piece, or time-to-time (lot-to-lot).

5.11 7M Tools

In this slide, we will discuss the seven quality management tools, commonly known as the seven “M” tools. These tools are as follows. The first one is affinity diagrams. This tool helps us organize ideas into meaningful categories by recognizing their underlying similarity. It is a means of data reduction in a systematic way; it helps organize a large number of qualitative inputs into a smaller number of major dimensions by collating items that have an affinity to each other. The second tool is tree diagrams. This helps break down ideas into progressively greater detail in a systematic way. The next tool is process decision program charts (also called PDPC, pronounced as P-D-P-C). This tool helps in preparing contingency plans; the emphasis of PDPC is to highlight the impact of problems or issues on the project and how to mitigate them. Tool number four is matrix diagrams. The matrix diagram is constructed to analyze the correlations between two groups of data. After this, we will discuss tool number five, the interrelationship digraph (also called ID). This helps in organizing disparate ideas by arranging related ideas into groups to demonstrate the different ways in which the ideas influence one another. The sixth tool is prioritization matrices. This is predominantly used to help decision makers determine the order of importance of the activities being considered. The last tool is the activity network diagram. This is used for scheduling and monitoring tasks within a project or process that has several dependent tasks and resources. We will discuss this in detail in the next slide.

5.12 Activity Network Diagram

In this slide, we will look into an activity network diagram in detail and understand how to construct one. The activity network diagram is also called an arrow diagram, network diagram, activity chart, node diagram, critical path method (CPM) chart, or program evaluation and review technique (PERT) chart. It is primarily used for scheduling and monitoring tasks within a project or process that has several dependent tasks and resources. The project or process steps are organized in sequence with details of the time each step takes. Now, let us understand the steps involved in building an activity network diagram. First, list all the necessary tasks in the project or process. After this, determine the correct sequence of the tasks and re-arrange them accordingly. The third step is to draw circles for the events between two tasks. Finally, walk through the tasks to see if there are any challenges, and update the sequence accordingly.
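To illustrate the scheduling logic that sits behind the diagram, here is a minimal sketch of the forward pass that computes earliest finish times and the overall project duration; the task names, durations, and dependencies are hypothetical placeholders.

```python
# A minimal sketch of the CPM-style forward pass behind an activity network
# diagram. The tasks, durations (in days), and predecessors are hypothetical.
tasks = {
    "A": {"duration": 3, "predecessors": []},
    "B": {"duration": 2, "predecessors": ["A"]},
    "C": {"duration": 4, "predecessors": ["A"]},
    "D": {"duration": 1, "predecessors": ["B", "C"]},
}

earliest_finish = {}

def finish(name):
    """Earliest finish = duration + latest earliest-finish among predecessors."""
    if name not in earliest_finish:
        preds = tasks[name]["predecessors"]
        start = max((finish(p) for p in preds), default=0)
        earliest_finish[name] = start + tasks[name]["duration"]
    return earliest_finish[name]

project_duration = max(finish(t) for t in tasks)
print("Earliest finish per task:", earliest_finish)
print("Project duration (critical path length):", project_duration, "days")
```

The longest chain of dependent tasks found this way is the critical path that the diagram is meant to highlight.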

5.13 Point and Interval Estimation

In this slide, we will understand point and interval estimation in detail. To begin with, we will discuss point estimation. For point estimation, a random variable is used to estimate a characteristic or relationship in the population. The formula is specified before gathering the sample, and the actual numerical value obtained is called an estimate. For example, if we want to estimate the mean of the population, we could use the standard formula for calculating the sample mean, as mentioned in the slide. Alternatively, we can use the average of the largest and smallest observed values, obtained by adding the maximum and minimum values and dividing by two. Refer to the slide for the formula. Similarly, if we want to estimate the variance of the population, we could use the standard formula to find the variance of the sample. An alternative form is based on the sum of squared differences between each of the values and the mean, as mentioned in the slide.
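For reference, the standard estimators the slide refers to can be written as the sample mean, the midrange alternative, and the sample variance:

```latex
\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i,
\qquad
\hat{\mu}_{\mathrm{midrange}} = \frac{x_{\max} + x_{\min}}{2},
\qquad
s^2 = \frac{1}{n-1}\sum_{i=1}^{n} \left(x_i - \bar{x}\right)^2
```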

5.14 Porter's Five Forces

In this slide, we will discuss Porter’s five forces analysis in detail. Porter’s five forces analysis is a framework for industry analysis and business strategy development. It provides a list of five forces to be used for the analysis; the details are covered in the next slide. It draws upon industrial organization economics to derive the five forces that determine the competitive intensity and, therefore, the attractiveness of a market. Attractiveness directly relates to industry profitability. An unattractive industry is one in which the combination of these five forces acts to drive down overall profitability. In a very unattractive industry that is approaching “pure competition”, the available profits for all firms are driven down to normal profit.

5.15 Porter's Five Forces (Contd.)

In this slide, we will provide details of each of Porter’s five forces. The first force is the “threat of new entrants”. Profitable markets that yield high returns attract new firms; the resulting influx of new entrants eventually decreases profitability for all firms in the industry. The next force is the “threat of substitute products or services”. The existence of alternative products or services outside the realm of the common product boundaries increases the options for the customer and, hence, the likelihood of customers switching to alternatives. The third force is the “bargaining power of customers or buyers”. The bargaining power of customers is also described as the market of outputs. This is the ability of customers to put the supplier of products or services under pressure, and it is also related to the customers' sensitivity to price changes. The fourth force is the “bargaining power of suppliers”. The bargaining power of suppliers is also described as the market of inputs. Suppliers of raw materials, components, labor, and services to the firm can be a source of power over the firm when there are very few substitutes or alternatives available. The last and final force is the “intensity of competitive rivalry”. For most industries, the intensity of competitive rivalry is the major determinant of the competitiveness of the industry.

5.16 Pugh Analysis

In this slide, we will cover Pugh analysis for comparing and evaluating multiple options. Pugh analysis charts are used for evaluating multiple options against each other, in relation to a baseline option. The method was invented by Stuart Pugh and is hence called Pugh analysis. Pugh analysis charts are similar to pros-versus-cons lists; however, when comparing pros and cons, different people follow different approaches and methodologies. Pugh analysis provides a systematic way of selecting between alternatives. The step-by-step method is as follows. First, identify the relevant user requirements and develop engineering specifications for those requirements. Then, assign weights to each of the requirements based on its importance and criticality. After that, generate several viable design concepts. Once this is ready, rank the concepts using Pugh analysis. At the end, synthesize the best elements of each initial concept into a final optimal concept, and continue to iterate until a clearly superior concept emerges.
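A minimal sketch of the weighted scoring step is shown below; the criteria, weights, and concept ratings are hypothetical and only illustrate how candidate concepts are ranked against the baseline (plus one for better than the baseline, zero for the same, minus one for worse, on each criterion).

```python
# A minimal sketch of a weighted Pugh matrix with hypothetical criteria,
# weights, and ratings against a baseline concept.
criteria_weights = {"cost": 3, "reliability": 5, "ease_of_use": 2}

# Ratings of each candidate concept relative to the baseline, per criterion.
concepts = {
    "concept_A": {"cost": +1, "reliability": 0, "ease_of_use": -1},
    "concept_B": {"cost": -1, "reliability": +1, "ease_of_use": +1},
}

def weighted_score(ratings):
    """Sum of (criterion weight x rating) across all criteria."""
    return sum(criteria_weights[c] * r for c, r in ratings.items())

for name, ratings in concepts.items():
    print(name, weighted_score(ratings))
# The baseline scores zero by definition; higher totals indicate stronger concepts.
```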

5.17 Lean 5S

In this slide, we will cover the Lean 5S concept. 5S is a Japanese workplace organization method named after a list of five Japanese words. The first is seiri, the Japanese word for sort. It is the first step in getting things cleaned up, sorted, and organized. Once seiri has been carried out, seiton is implemented to classify items by use and set them in order. The third stage of 5S is seiso, called sweep and shine in English. In this stage, everything is swept and kept clean. Once the first three “S” of the 5Ss have been implemented, the next pillar is to standardize the best practices in the work area. Seiketsu, or standardize, is the method to maintain the first three pillars. Shitsuke, or sustain, is the final step in 5S implementation and represents discipline and commitment to all the other stages. Without “sustaining”, the workplace can easily revert to being dirty and chaotic. It denotes commitment to maintain orderliness and to practice the first four Ss as a way of life. 5S simplifies the workplace environment and assists with the reduction of waste and other forms of non-value-adding activities whilst improving quality, effectiveness, process efficiency, and employee safety. In the next slide, let us summarize the topics discussed in this lesson.

5.18 Summary

In this lesson, we learned about pre-improve considerations, followed by model adequacy checking. Then, we looked into multi-vari charts and the seven ‘M’ tools. We also understood how to construct an activity network diagram and the concept of point and interval estimation. Then, we discussed Porter’s five forces, the use of Pugh analysis and, finally, Lean 5S.

5.19 Section V Lesson 2 Design of Experiments Theory

In this lesson we will understand some theoretical aspects of design of experiments. Let us look into the agenda in the next slide.

5.20 Agenda

We will begin this lesson by looking into the introduction to DOE. Moving on, we will discuss the different types of designed experiments. Then we will look into main and interaction effects, followed by replication, randomization, and blocking. After that, we will discuss confounding, and coding and other DOE terms. We will end our discussion with sum of squares analysis.

5.21 Introduction to DOE

Let us now get introduced to the design of experiments, also known as DOE (pronounced as D-O-E). Design of experiments is a series of scientific experiments planned to measure the optimal response of the output variable, called Y (also referred to as the "response" variable), by varying the input variables, called X (also known as the "factors" influencing the output), at various levels. The objectives of conducting designed experiments are as follows. They determine the variables influencing Y the most. They also determine the optimal levels for X so that Y is always at the optimal output. Conducting a designed experiment will determine the optimal levels for X so that the variability of Y is small. It will also determine the optimal levels for X so that the effects of uncontrolled variables are minimized at all times. In short, we can refer to DOE as a planned series of scientific experiments.

5.22 Introduction to DOE(Contd.)

Now let us understand the objective behind conducting a DOE with the help of a simple example. A golf player wishes to optimize his golf score by hitting birdies all the time. To do this, he understands that a lot of factors could impact the end result: the type of driver used, the type of ball used, the walk through the golf course, the time of the day (morning or afternoon), the type of golf shoe worn, and finally, the strength of the ball. A person who knows the game would know that these factors typically impact golf scores. Importantly, these factors are controllable by the person playing golf. We will be continuing with this example in the next slide as well.

5.23 Introduction to DOE(Contd.)

There are many input or process factors that could impact the output. The factors mentioned in the previous slide could be qualitative as well as quantitative. Experimenters prefer quantitative factors, as these can be measured and set at appropriate levels. Qualitative factors can only be rated; in other words, qualitative factors form the basis of attribute data. For example, playing golf in the mornings or afternoons is often considered an attribute, as here we would tick a yes or a no to set the levels. However, the strength of the ball, another factor, is continuous data, as it can be measured.

5.24 Introduction to DOE(Contd.)

There are many experimental approaches. Over the next few slides, let us learn these approaches in theory. The first approach is the best guess approach. In this approach, the experimenter rules out certain factors and tests arbitrary combinations of factors until optimal results are achieved. The advantage of this approach is that it works well when the experimenter has good knowledge of the process and of the technical dynamics of the process; this allows the experimenter to see through any practical problems that may arise during the conduct of the experiments. For example, a golfer would know best how much energy to use, based on the strength and type of the golf ball, to get the best shot. The disadvantage of this approach is that initial best guesses may not always produce the best results, and the best guess approach could continue for a long time without producing the desired result. Basically, this approach relies on chance: if it works, it works; if it does not, we may have to try another approach. A golfer might use a certain level of energy for a type of golf ball for a long time without exploring other options to see if the results improve. Let us continue with the same in the next slide as well.

5.25 Introduction to DOE(Contd.)

The second approach is the one factor at a time approach, popularly known as OFAT experiments. In this approach to experimenting, one factor is varied within its range, keeping all other factors constant, and the response is measured. Here, the experimenter chooses all the factors to test but tests only one factor at a time. The principal advantage of this approach is that it shows the impact each factor has on the response individually. The principal disadvantage is that it misses out on interactions. For example, if the golf score is studied only against the type of golf ball used, the experimenter may miss a possible interaction between the type of golf ball and the time of day the golf is played.

5.26 Introduction to DOE(Contd.)

Finally, the third approach that we will discuss here, and for much of this lesson, is factorial experiments. In factorial experiments, multiple factors are varied simultaneously, and the corresponding output is noted against each combination. This allows the experimenter to test the interactions between factors, which could have an impact on the response. Factorial experiments are the most commonly used designed experiments and will form a major part of our discussion of DOE. For example, for a particular process, there may be multiple factors (like temperature, duration, and pressure). To do a factorial experiment, we need to check all the combinations of these factors by varying each of them and noting the output. The advantage of this approach is that it provides a list of outputs for all possible inputs, so the most appropriate one can be chosen. The disadvantage is that the number of tests to be done might be too many if there are several factors. Let us look into the factorial experiments in the next slide.
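To illustrate how a full factorial run list is enumerated, and how quickly the run count grows, here is a minimal sketch using the three factors named in the example; the level names are hypothetical placeholders.

```python
# A minimal sketch of enumerating a full factorial run list, assuming
# hypothetical two-level settings for the three factors in the example.
from itertools import product

factors = {
    "temperature": ["low", "high"],
    "duration": ["short", "long"],
    "pressure": ["low", "high"],
}

runs = list(product(*factors.values()))
for i, combo in enumerate(runs, start=1):
    print(i, dict(zip(factors.keys(), combo)))
print("Total runs per replicate:", len(runs))  # 2 x 2 x 2 = 8
```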

5.27 Types of Designed Experiments

The different types of factorial designs are listed as follows. First is the two raised to two full factorial, where two factors are tested at two levels completely. Next is the two raised to three full factorial, where three factors are tested at two levels completely. Third is the three raised to two full factorial, where two factors are tested at three levels completely. After that is the three raised to three full factorial, where three factors are tested at three levels completely. Next is the two raised to four minus one fractional factorial, where four factors are tested at two levels in two raised to three runs. The final one is the two raised to five minus one or two raised to five minus two fractional factorial, where five factors are tested at two levels in two raised to four or two raised to three runs. In addition to these, we will also learn Taguchi designs, Plackett-Burman designs, and response surface designs. The practical working of most of these experimental settings will be discussed in DOE – Practical.

5.28 Main and Interaction Effects

Main effects and interaction effects are the two main types of effects a Black Belt should know in detail. Let us, in this slide, understand the main effect. The main effect is the effect of an individual factor on the response variable. Let us take the example of a golf player’s scores. He chooses two types of drivers, an oversized driver and a regular sized driver. He takes four tries with each driver and records the scores. The scores after the eight tries in total are as follows. With the oversized driver, the player scored ninety two, ninety four, ninety three, and ninety one. With the regular sized driver, the player scored eighty eight, ninety one, another eighty eight, and ninety. We will calculate the main effect in the next slide.

5.29 Main and Interaction Effects(Contd.)

Let us calculate the main effect for the type of driver used. For that, we take the average of all the oversized driver scores and subtract from it the average of all the regular sized driver scores. From the calculations we see on the slide, the driver effect is three point two five. We can therefore say that by using the oversized driver, the golfer is able to increase his score by three point two five per round. Similarly, for the types of balls used, a light ball and a heavy ball, the scores can be seen on the slide.
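Written out, the calculation described above is:

```latex
\text{Driver effect}
= \bar{y}_{\text{oversized}} - \bar{y}_{\text{regular}}
= \frac{92 + 94 + 93 + 91}{4} - \frac{88 + 91 + 88 + 90}{4}
= 92.5 - 89.25
= 3.25
```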

5.30 Main and Interaction Effects(Contd.)

The main effect of the type of ball used on the golf scores is calculated using the same mechanism discussed in the previous slide for the type of driver. As we can see, the main effect for the type of ball used is zero point seven five. We can interpret the result as follows: in terms of individual factor effects, the type of driver used in hitting the golf ball has a greater impact than the type of ball used. Let us now understand the diagram. The X-axis of the diagram gives the effect of the driver and the Y-axis the effect of the ball. When the data is collected and plotted, one can see the impact of each driver and ball combination and find the combination that has the best impact.

5.31 Main and Interaction Effects(Contd.)

In this slide, we will discuss the interaction effect, another important category of effect. It is mainly for this effect that we use factorial designs, resolution designs, and so on. Interaction effects are observed when two or more factors interact and result in a change in the response. For example, the responses resulting from the interaction of the type of ball and the type of driver are given on the slide. The hard ball and oversized driver scores are ninety two and eighty eight. The soft ball and oversized driver scores are ninety four and ninety one. The hard ball and regular driver scores are ninety and ninety three. The soft ball and regular driver scores are eighty eight and ninety one. The interaction effect is shown by the calculation given on the slide and works out to zero point two five. We can say that the interaction effect is negligible.

5.32 Replication

In this slide, we will discuss replication, a very important technique often used in designed experimental settings. Replication means the experimenter has repeated the set of experiments. This is often done to get an estimate of the experimental error. For example, the golf player could have done just four strokes to understand the significance of the type of ball and the type of golf driver used. Instead, he repeats the same settings twice in order to know whether there are any experimental errors at all. Replication helps the experimenter understand variation between separate runs as well as within runs at the same settings. An experiment in a designed experiments setting is known as a run.

5.33 Randomization

In this slide, we will look into randomization. In simple words, randomization means running the experiments in a random order and not in a set order. The allocation of experimental inputs and the order in which the trials, experiments, or runs are conducted are completely random. Randomization is done to support the statistical requirement that the observations, or errors, be independently distributed random variables. By randomizing, the effects of extraneous factors are averaged out. For example, if we conduct continuous trials of golf scores with an oversized driver and a slightly heavier ball, our results could be biased, as we are not conducting trials with any other driver size or ball weight. This bias is eliminated by randomizing the conduct of the golf trials.
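A minimal sketch of randomizing a planned run order is shown below; the trial list is a hypothetical illustration built from the golf example (two driver sizes, two ball weights, two replicates of each combination).

```python
# A minimal sketch of randomizing a run order for a hypothetical set of
# eight planned golf trials (driver size, ball weight).
import random

planned_runs = [
    (driver, ball)
    for driver in ("oversized", "regular")
    for ball in ("light", "heavy")
    for _ in range(2)          # two replicates of each combination
]

random.shuffle(planned_runs)   # conduct the trials in this shuffled order
for order, run in enumerate(planned_runs, start=1):
    print(order, run)
```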

5.34 Blocking

Finally, we come to probably one of the most important techniques used in designed experimental settings, which is blocking. It is a design technique that helps in improving the precision of an experiment. Blocking is done to reduce or eliminate variability due to nuisance factors. A nuisance factor is one that is not considered a factor of interest by the experimenter; it may influence the response, but it is deliberately not studied in the designed experiment. For example, if golf driver A used by the golfer comes from supplier X and golf driver B comes from supplier Y, there could be differences in the golf scores. These differences could be because of supplier variability. Assuming the experimenter is not interested in studying supplier variability, this can be treated as a nuisance factor. In the next slide, we will look into confounding.

5.35 Confounding

What is confounding? Confounding means that high order interaction effects are indistinguishable from, or get mixed with, the blocks. In other words, the blocks overshadow the high order interaction effects. Confounding happens when the block size is smaller than the number of treatment combinations in one replicate. It can happen in any experimental setting, but occurs more often in fractional factorial settings.

5.36 Coding and other DOE Terms

Let us understand some other popular terms used in the design of experiments. Coding refers to transforming the scale of measurement so that the high value of a level becomes plus and the low value becomes minus, also represented as plus one and minus one. This representation is often used to show the designs on a geometric scale. The next term is error; it is the unexplained variation in a collection of observations and includes both pure error and lack of fit error. Fixed effect, the next DOE (pronounced as D-O-E) term, is an effect associated with an input that has a limited number of levels, or an effect in which only a certain number of levels are of interest to the experimenter. Lack of fit error occurs when the analysis excludes one or more important factors from the model; by using replication, the error can be partitioned into lack of fit and pure error. Random error is an error that occurs due to natural variation in the process; it typically follows a normal distribution with a mean of zero and a constant variance.
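The coding transformation described above is usually written as:

```latex
x_{\text{coded}}
= \frac{x_{\text{actual}} - \frac{x_{\text{high}} + x_{\text{low}}}{2}}{\frac{x_{\text{high}} - x_{\text{low}}}{2}},
\qquad
x_{\text{actual}} = x_{\text{low}} \Rightarrow -1,
\quad
x_{\text{actual}} = x_{\text{high}} \Rightarrow +1
```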

5.37 Sum of Squares Analysis

Over the next few slides, we will discuss the sum of squares analysis. The sum of squares is an excellent method for analyzing the fixed effects model. The sum of squares is popularly used in the analysis of variance (ANOVA) technique. Though this technique of partitioning the variances is popularly used in ANOVA (pronounced as ae-noh-vah), it finds immense use in DOE too. The sum of squares table is constructed as given in the slide. As we can see, the terms used here are source of variation, sum of squares, degrees of freedom (represented as DOF), mean squares, and F-zero, also known as the F statistic.

5.38 Sum of Squares Analysis(Contd.)

Let us try to build the sum of squares table manually with the help of an example. Note that a lot of computer packages will do this for us, but it is important for us to know this conceptually. Now let us look into the example. An experimenter has varied the percentage of polyester in a cloth five times, and for each setting, five replicate responses have been taken. We will do the sum of squares analysis to see if the tensile strength in each of the five groups is indeed the same. The data table can be seen on the slide. We will look into the steps in the next slide.

5.39 Sum of Squares Analysis(Contd.)

In step one, calculate the totals of all the readings by adding the observations, and calculate the averages by taking the average of the readings. In step two, calculate the grand total of all the totals and the average of the averages. We can find the data sheet on the slide. Let us see the calculation of the first row to understand how to get the last two columns, titled Totals (denoted by y-i) and Average (denoted by y-i bar), as given in the data sheet. The five readings of observed tensile strength for fifteen per cent polyester are seven, seven, fifteen, eleven, and nine. Adding them up, we get forty nine. Taking the average of these readings, we get an average tensile strength of nine point eight. Do the same calculation for all five rows of data. Then take a grand total of all the totals by adding the data under the column Totals (y-i) and the average of all the averages from the column Average (y-i bar). We get three hundred seventy six for the grand total and fifteen point zero four for the average of the averages. Note that y-i stands for the sum of values within a treatment and y stands for the sum of values across all the treatments.

5.40 Sum of Squares Analysis(Contd.)

The next step is to calculate the squares table and find the sum of squares. Let us understand the calculation by doing it for the first row; please repeat the same for all other rows of data. The data in the first row for tensile strength was seven, seven, fifteen, eleven, and nine. When we square each one of them individually, we get forty nine, forty nine, two hundred twenty five, one hundred twenty one, and eighty one, respectively. Now repeat the same process for the second row, third row, and so on. Finally, add all the squares up. This gives the sum of squares of the individual observations, which here is six thousand two hundred ninety two. Also, square each of the row totals under the column Totals; these will be needed for the treatment sum of squares in a later step. Next, take the grand total of all the group totals, which is three hundred seventy six here, square it, and divide it by twenty five, the total number of observations.

5.41 Sum of Squares Analysis(Contd.)

In step 4, find the difference between these two sums of squares; this is the total sum of squares. The total sum of squares is six thousand two hundred ninety two minus five thousand six hundred fifty five point zero four, which equals six hundred thirty six point nine six. Now, in step 5, find the sum of squares of treatments. The sum of squares of treatments equals the sum of the squared group totals divided by five, minus the square of the grand total divided by twenty five. Doing this calculation, we get six thousand one hundred thirty point eight minus five thousand six hundred fifty five point zero four, which gives us four hundred seventy five point seven six. This is the sum of squares of treatments. In step 6, find the sum of squares of errors. The sum of squares of errors is the total sum of squares minus the sum of squares of treatments. Doing some quick calculations, our sum of squares of errors here is one hundred sixty one point two zero.

5.42 Sum of Squares Analysis(Contd.)

Now let us calculate the mean squares. This is simple. The mean square of treatments is the sum of squares of treatments divided by the degrees of freedom for treatments. Since we have five treatments in all, the degrees of freedom for treatments is four. The mean square of treatments, as we can see on the slide, is one hundred eighteen point nine four. Using the same logic, the mean square of errors is eight point zero six; the degrees of freedom for errors is twenty five observations minus five treatments, which equals twenty. Now, we need to calculate the F statistic in the next step. The formula for the F statistic is the mean square of treatments divided by the mean square of errors. Using the data we calculated, the F statistic is fourteen point seven five six, which can be rounded to fourteen point seven six. We can calculate the critical value of the F statistic using the FINV formula in Excel, with the two degrees of freedom being four and twenty; at an alpha of zero point zero five, the critical value is approximately two point eight seven. As the calculated F statistic is greater than the critical value, we reject the null hypothesis. So, we can conclude that the treatment means are different.
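The whole calculation can be reproduced with a short script. The first row of readings is the one quoted earlier in this lesson; the remaining four rows are the values consistent with the totals quoted here (a grand total of three hundred seventy six and a total sum of squares of six hundred thirty six point nine six), but they should be verified against the slide's data table.

```python
# A minimal sketch reproducing the sum-of-squares (one-way ANOVA) calculation.
# Only the first row is quoted in the lesson; the other rows are assumed and
# should be checked against the slide. scipy is used only for the critical F.
from scipy import stats

groups = {
    15: [7, 7, 15, 11, 9],      # quoted in the lesson (15% polyester)
    20: [12, 17, 12, 18, 18],   # assumed - verify against the slide
    25: [14, 18, 18, 19, 19],   # assumed - verify against the slide
    30: [19, 25, 22, 19, 23],   # assumed - verify against the slide
    35: [7, 10, 11, 15, 11],    # assumed - verify against the slide
}

a = len(groups)                              # number of treatments (5)
n = sum(len(v) for v in groups.values())     # total observations (25)
grand_total = sum(sum(v) for v in groups.values())

ss_individual = sum(y * y for v in groups.values() for y in v)
correction = grand_total ** 2 / n
ss_total = ss_individual - correction                                        # 636.96
ss_treat = sum(sum(v) ** 2 / len(v) for v in groups.values()) - correction   # 475.76
ss_error = ss_total - ss_treat                                               # 161.20

ms_treat = ss_treat / (a - 1)                # 118.94
ms_error = ss_error / (n - a)                # 8.06
f0 = ms_treat / ms_error                     # ~14.76
f_crit = stats.f.ppf(0.95, a - 1, n - a)     # critical F at alpha = 0.05

print(ss_total, ss_treat, ss_error, ms_treat, ms_error, f0, f_crit)
# For these values, f0 exceeds f_crit, so the treatment means differ significantly.
```

With these values, the script reproduces the sums of squares, mean squares, and the F statistic of about fourteen point seven six discussed above.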

5.43 Sum of Squares Analysis(Contd.)

This slide shows the tabular way of representing the sum of squares. Use the tool sheets ‘sum of squares analysis’ and ‘sum of squares table’ in the Simplilearn toolkit to do the calculations and build the summary table.

5.44 Summary

Let us look into the summary of what we have learned in this lesson. We started with understanding how to perform design of experiments, along with the different approaches to experimentation. Then, we moved on to learn the different types of designed experiments. After that, we understood the main and interaction effects between factors in an experiment. Following that, we looked into replication and randomization in detail. We also learned blocking techniques to improve precision and understood the confounding effect. Finally, we explored coding and other DOE terms, and the sum of squares analysis.

5.45 Section V Lesson 3 Design of Experiments Practice

Let us begin with the next lesson, design of experiments – practice, in this slide. This is going to be a rather long session, as we will go through the techniques that we would use in the real world when doing design of experiments. Before we start with this lesson, download the Design Expert software; a free trial version can be found through any search engine. Let us look into the agenda in the next slide.

5.46 Agenda

In this lesson, we will begin with introduction to the two factor factorial design. Then we will move on to two raised to two design, general two raised to k design, single replicate of two raised to k design, half fractional two raised to k minus one design, quarter fraction two raised to k minus two design, and three raised to k design. We will then look into analysis of second order response surface, nested design, split plot design, Taguchi’s L four and L six design, and finally, Plackett Burman’s design.

5.47 Introduction to 2 Factor Factorial Design

Let us first get introduced to the concepts of the two factor factorial design, one of the most basic experimental designs. Experiments that involve the study of two or more factors, each at two levels, are known as two level factorial designs. When the experiment tests two factors at two levels, it is known as a two level, two factor factorial design. More generally, a two factor factorial design has two factors, namely A and B, with a levels of factor A and b levels of factor B. Every replicate in the experiment will contain all a times b treatment combinations, i.e., the number of treatments or runs per replicate is a times b.

5.48 Introduction to 2 Factor Factorial Design(Contd.)

Let us take an example to understand a two factor factorial design. An engineer designing a battery understands that temperature could be a major factor impacting battery life. He also understands that the battery type, that is, the raw materials used for making the battery, often influences the battery's resistance to temperature variation. The experimenter determines that temperature, as a parameter, can be controlled in a laboratory setting. By conducting the experiment, the engineer wishes to answer two questions. One, what are the effects of material type and temperature on battery life? The other question is, is there a type of material that resists temperature regardless of the extremes?

5.49 Introduction to 2 Factor Factorial Design(Contd.)

The main effects and interaction effects are studied and analyzed by conducting a factorial design. The sum of squares analysis method is used to study the main effects and interaction effects in the factorial setting. The analysis of the experimental setting can be done with the effects model, which is the most popular technique for analyzing the experimental setting; the means model; or the regression model. The choice of model should be made by the Black Belt depending on what he chooses to study.
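For reference, the effects model for a two factor factorial is conventionally written as:

```latex
y_{ijk} = \mu + \tau_i + \beta_j + (\tau\beta)_{ij} + \varepsilon_{ijk},
\qquad
i = 1,\dots,a;\quad j = 1,\dots,b;\quad k = 1,\dots,n
```

Here mu is the overall mean, tau and beta are the main effects of factors A and B, the tau-beta term is their interaction, and epsilon is the random error.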

5.50 2² Design

Let us now look into the concept of the two raised to two design. The tool that we will use for design of experiments from here on is an Excel file, DOE, which is provided to you as an Excel worksheet. We can also use Design Expert™, a software package with a freely downloadable trial version, which will allow us to design the experiments. The two raised to two design is a very simple and powerful experiment run at two levels for two factors. Without any replicate, the two raised to two design will have four runs. With two replicates, it will have eight runs; and with three replicates, it will have twelve runs.

5.51 2² Design(Contd.)

Let us look into an example of a two raised to two design in this slide. An investigation is done on the effect of the concentration of the reactant and the amount of chemical catalyst used on the yield of a chemical process. Thus, yield is the response we wish to study. Conduct a two raised to two study with three replicates. The tool to use is the Design Expert free trial. We will look into the steps in the following slides.

5.52 2² Design(Contd.)

In this slide, we will learn how to choose the type of experiment. When we open the Design Expert software, the first thing we need to do is specify the type of design we need. As we can see from the snapshot, the software allows us to choose from a variety of designs. The first step is to choose the type of experiment. According to the table snapshot, the leftmost column contains numbers like four, eight, sixteen, thirty two, and so on. All the cells coloured in red are known as resolution three designs, the cells coloured in yellow are known as resolution four designs, and the cells coloured in green are known as resolution five to nine designs. The cells coloured in white are full factorial designs. Because we want to work with the two raised to two design, we select the first cell immediately next to the column entry four. Right below, also specify the number of replicates as three. This means we can now expect twelve runs for a two raised to two design.

5.53 2² Design(Contd.)

Next, we need to define the factors and set the type of data for the study. This is the second step. As we can see the low and high levels are set at minus one and plus one respectively. These are called coded levels. Minus one indicates that the factor is at a low level and plus one indicates the factor being at high level. By now, you must know that a ‘two raised to two’ experiment is tested at two levels, that is, low and high.

5.54 2² Design(Contd.)

In the third step, set the number of responses you wish to measure for each run. Remember that every run will have the factor levels maintained the same. The difference to detect should also be set. For example, if response one and response two show different readings, we would want the experiment to reveal a possible special cause of variation; this is possible only when we set the difference to detect. In this experiment, we wish to record three responses per run to get an adequate number of observations and to give enough room for a variability check. Let us look into the fourth step in the following slide.

5.55 2² Design(Contd.)

Once we have set the factors, their levels (coded as plus one and minus one), and the number of responses to be measured (R1, R2, and R3), we will find that the design is now set up for running the experiments and measuring the responses. Remember, for every experiment in our setting, we will be measuring three responses. We have a total of twelve runs, and for every run, three responses; that means we will be capturing thirty six responses for the experiment. One key thing to note here is to ensure that the run order is randomized, that is, that there is no particular sequence in conducting these tests.

5.56 2² Design(Contd.)

The design has been constructed. Now, we must analyze the design model. This is the fifth step. Design Expert allows us to choose default options and presents a wide range of plots, graphs, and results to be interpreted. First click on Design and then Summary. This will give us a broad outline of how the responses vary with their means. That means, merely by looking at this summary table, we will be able to conclude whether the means are significantly different or not.

5.57 2² Design(Contd.)

Now click on Design, followed by Evaluation, and then on the f of x Model. Let the order be 2FI, the model be factorial, and the response be design only. By selecting this, we will get the results and graphs on the design as well as the response. See the snapshot given on the slide to see how the screen would look. You can try this out using the Design Expert software.

5.58 2² Design(Contd.)

Now click on Design, followed by Evaluation, and then, on Results. This is the first heavy slide, full of numbers that can help us interpret the design in many ways. Look at the text given in blue colour. Design expert has presented the interpretation of the design in simple words. One of the key things to note here is the value of VIF. VIF stands for variance inflation factor, and a high value of VIF indicates that the data is multi-collinear, which means the variables could be correlated.

5.59 2² Design(Contd.)

From the previous slides, we see how Design Expert makes the job easy for us. A Black Belt should know how to use his DOE knowledge to interpret the results for the business. The main findings from the results page are the following. First, the VIF is one, which is acceptable. Second, the model passes the lack of fit test; any error is thus pure error, and we do not have any lack of fit issues. The final finding is a low Ri-squared value, which indicates that the terms are not correlated with each other.

5.60 2² Design(Contd.)

Let us look into the sixth step in this slide. In this step, select a three D surface plot for the design, and we will get a design plot like the one shown on the slide. Contour plots and three D surface plots are typically used in response surface designs, which we will study shortly.

5.61 2² Design(Contd.)

Let us now understand the seventh step. Here, click on the Analysis option. The Analysis option will give the analysis for each of the responses influenced by the two factors. After selecting R one, click on Effects. This means we are now interested in studying the response R one. As we can see from the table, the variability in response one is due to pure error, which occurs by chance.

5.62 2² Design(Contd.)

Always make it a point to check the half normal plots. This is step number 8. These plots are central to DOE analysis. From the plot given on the slide, we can see that the plots for the design terms seem to be fine.

5.63 2² Design(Contd.)

In step 9, click on the Effects List option on the Effects tool on the toolbar and change the order to design model. Then click on ANOVA. The table we will see in the next slide is one of the most important in making inferences about the model itself. Let us now read the results. Look at the text 'not significant' given next to the value zero point nine one nine six. That tells us there is a ninety one point nine six per cent chance that a "Model F-value" this large could occur due to random variables or noise rather than an assignable cause. If the value in the column "Prob > F" were less than 0.05, we could have concluded that the model terms are significant. In this case, there are no significant model terms.

5.64 2² Design(Contd.)

Let us look into step 10 here. The model also gives the best possible equation. We may not be able to use this equation, as a high degree of the variance here is by chance, and we still have not been able to determine the real reason for variability. As we found out in the earlier slide, ninety one point nine six per cent chance of the model occurring due to noise means most variability is due to noise or by chance.

5.65 2² Design(Contd.)

In the last step, which is step number 11, let us check the diagnostic plots and within this we will check the normal plots of the residuals. The question is whether all the points fall immediately next to the straight line. In this case, the points fall right next to the line.

5.66 2² Design Summary

Now, let us summarize what we have learned from doing the two raised to two experiment. Using the Design Expert software, we were able to set up a two raised to two designed experiment with three replicates. We also entered the responses and found out what is causing the impact on the response. Finally, we analyzed the graphs and understood the variability in the experiment. In addition to these tools and analyses, we can use the Optimization tool of Design Expert to help improve the process further.

5.67 General 2k Design

Let us now learn more about the general two raised to k design. In a two raised to two design, the experiment tests two factors at two levels; the example we tested was reactant concentration and catalyst weight. Let us introduce a third factor into the setting, process time. Now, three factors need to be tested at two levels, resulting in two to the power of three runs in one replicate. This is known as the two raised to three design. All the designs discussed so far follow the generalized norm of two raised to k; thus, they are referred to as two raised to k designs, i.e., k factors are tested at two levels. We can use Design Expert to create the designs. Here, we will use Microsoft Excel to understand them. Choose the Excel file DOE (pronounced as D-O-E) provided in the toolkit, in which you can find the worksheet ‘full factorial’. A snapshot of this worksheet is in the next slide for easy reference.

5.68 General 2k Design(Contd.)

When we click on the sheet name full factorial, we will be able to see the entire design preparation for the full factorial experiment. One of the first things we would see in this sheet is a lot of plus and minus signs. These are the coded levels for three factors A, B, and C, which we are going to test using this experiment.
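A minimal sketch of how those plus and minus columns can be generated for a two raised to three design in standard order, including the interaction columns, is shown below.

```python
# A minimal sketch of the coded (plus/minus one) matrix for a 2^3 full
# factorial in standard order, including the interaction columns.
from itertools import product

rows = []
for c, b, a in product((-1, 1), repeat=3):   # standard order: A varies fastest
    rows.append({
        "A": a, "B": b, "C": c,
        "AB": a * b, "AC": a * c, "BC": b * c, "ABC": a * b * c,
    })

for run, row in enumerate(rows, start=1):
    print(run, row)
```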

5.69 General 2k Design(Contd.)

Let us look into the step-by-step process of using the design of experiments worksheet. Please open the Excel sheet titled DOE that has been provided as part of the toolkit, and open the worksheet named full factorial. In this sheet, we will see some columns coloured yellow and some coloured grey. Do not touch or edit the grey coloured cells; enter the data in the yellow coloured columns. This is step one. Define the factors in cells D two, D three, and D five. Write the factor levels in G two, G three, and G five; and I two, I three, and I five. The worksheet we have opened will already have the factors defined in it. The table given on the slide is a snapshot from the DOE tool that summarizes all possible main and interaction effects.

5.70 General 2k Design(Contd.)

Next, it is important for us to check the run sequence. We saw that Design Expert automatically randomizes the runs for us, while Excel does not; we have to set the run order manually. The run sequence is indicated under the column titled Trial in the DOE full factorial worksheet. In the first run, we keep a low setting of reactant concentration, catalyst weight, and process time. In the second run, we keep a high setting of reactant concentration while the other two factors are kept at low.

5.71 General 2k Design(Contd.)

In the third step, we need to measure the responses for this experimental setting. In ideal circumstances, the suggestion is to measure three responses per trial. In this experiment, we have measured five responses per run and have written them down in the cell numbers R twenty to V twenty seven. The snapshot of the excel file with responses can be seen on the slide.

5.72 General 2k Design(Contd.)

Before we can do the analysis of the experiment, check if everything about the experiment is right. This is step number 4, where we review the experimental setting. Check if the run sequence is proper and double check it with the check sheet to see if all the data has been recorded correctly.

5.73 General 2k Design(Contd.)

In the next step, move to the results table, which is displayed from cell B forty four to L fifty one in the Excel worksheet. The results below cell I fifty one are not applicable here, as we are doing a ‘two raised to three’ design.

5.74 General 2k Design(Contd.)

The graphs in this slide show the main effects of factors a, b, and c on the response. Each graph plots the factor at both its high and low settings and shows the corresponding response.

5.75 General 2k Design(Contd.)

In this slide, we will see the interaction effects between the variables. As we can see, the two-way interactions are very much visible here. On the slide, we have three graphs depicting the interaction effects between the three factors a, b, and c. The first graph shows the effects between a and b. One can observe from the graph that there is an interaction between factors a and b: the response to factor b changes depending on whether a is at its high or low value. We can observe similar behaviour in the other two graphs. Thus, we can conclude that there definitely exist some interactions between the factors a, b, and c on the response. A detailed analysis is shown in the next slide.

5.76 General 2k Design(Contd.)

Let us look into a detailed analysis of the data and charts that we have seen in the last couple of slides. All the p-values are significant, so the main effects and the interaction effects all need to be considered for developing the model. The effects column is interesting; it shows that factor c, which is the process time, has the largest effect on the average response. All three factors, a, b, and c, have a positive effect on the response, while the interactions predominantly have a negative effect on it. The contribution of the error sum of squares is marginal compared to the total sum of squares. Thus, this model can be regressed.

5.77 Single Replicate of 2k Design

After understanding how to do a full factorial two raised to three design, let us now discuss replication and the single replicate two raised to k design. For up to three factors in a design, running a two raised to k design with multiple replicates makes sense. For example, with three factors, a two raised to k design with two replicates will have sixteen runs. A five factor design with one replicate will have thirty two runs; adding another replicate means thirty two plus thirty two, which equals sixty four runs. The experimenter may not have time to experiment with so many runs. Thus, each test combination is tested in only one run, which may expose the experiment or the model to noise. Single replicate designs are used as screening designs, where we have a lot of factors to be considered, out of which the most unimportant ones can be screened out. Linearity in factor effects is an assumption in conducting two raised to k experiments with a single replicate. Adding interaction terms to the main effects of the model could result in curvature, and the linearity assumption would then no longer hold good.

5.78 Half Fractional 2k-1 Design

We will now look into the half fractional two raised to k minus one design, which is one of the popular designs used in DOE. Conducting a full factorial experiment on all factors at all runs is often considered most beneficial, as few interactions will be missed. Running a lot of experiments, though, may take the experimenter a lot of time, as a result of which the experimenter may want a combination with a smaller number of runs in which all the factors are still tested. Setting up an experiment itself is a cumbersome task: on paper, it is all about plus and minus, but in reality setting up an experiment means making the necessary adjustments to the equipment involved. The choice, thus, is the half fractional factorial experiment. For five factors at two levels, a full factorial experiment will need thirty two experiments in one replicate, while a half fractional factorial will only need sixteen experiments. In fractional factorial experiments, higher order interactions are typically aliased with the main effects or the second order interactions. We will continue this in the next slide.

5.79 Half Fractional 2k-1 Design(Contd.)

Using the Design Expert software, let us create a fractional factorial experiment for three factors. New terms are observed in this design. The aliased terms are ABC, BC, and AB; thus, in the final model, we can expect not to see these terms appearing. 'A plus BC' means the interaction effect BC gets aliased with the main effect of A. Similarly, 'B plus AC' means AC gets aliased with B, and so on. Please note that the generic factor names are used here for demonstration purposes; we can use temperature, pressure, size, weight, or any other factors whose interaction effects are to be studied.
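A minimal sketch of how such a half fraction is constructed is shown below; it uses the generator C equals AB (defining relation I equals ABC), which is what produces the alias pairs described above.

```python
# A minimal sketch of constructing the principal half fraction of a 2^3
# design using the generator C = AB (defining relation I = ABC).
from itertools import product

runs = []
for b, a in product((-1, 1), repeat=2):  # full factorial in A and B
    c = a * b                            # generator: C = AB
    runs.append({"A": a, "B": b, "C": c, "BC": b * c, "AC": a * c, "AB": a * b})

for run in runs:
    print(run)
# Note that the A column equals the BC column in every run (and likewise
# B = AC, C = AB), which is exactly what "aliased" means.
```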

5.80 Half Fractional 2k-1 Design(Contd.)

Let us now look into the effects table. The effects table clearly tells us which effects are contributing to the model. In this three factor fractional factorial experiment, all the contribution comes from the main effects, and specifically from the main effects of B and C. Look at the percentage contribution in the extreme right column. The main effect of A is hardly contributing to the response. However, the main effects of B and C do contribute, with B contributing almost two thirds of the variability in the response.

5.81 Half Fractional 2k-1 Design(Contd.)

Defining A:A as an error term, let us move to the ANOVA analysis section. As we can see on the slide, the model itself is significant here, that is, it has a very low chance of happening due to noise alone. The main effects of B and C are shown as significant here, with their low p-values.

5.82 Half Fractional 2k-1 Design(Contd.)

In this slide, let us understand the behavior of response variable R1 across the range. The surface plot will help us understand the behavior across multiple variables.

5.83 Half Fractional 2k-1 Design(Contd.)

Let us now understand designing and then interpreting a two raised to k minus one design, this time for four factors. Using the Design Expert software, let us create a fractional factorial experiment for four factors. New terms are observed in this design. The aliased terms are BC, BD, CD, ABC, ABD, ACD, BCD, and ABCD. Thus, for a fractional factorial experiment with four factors, the fourth order and third order interactions are all aliased, and some second order interactions are also aliased. The table on the slide gives the alias list. As we can see from the alias list, the effects BC, BD, CD, ABC, ABD, ACD, BCD, and ABCD are aliased with the main effects and the second order interactions.

5.84 Half Fractional 2k-1 Design(Contd.)

Let us now study the effects list. Although the model is over specified, let us look at the individual contributions of each of the effects. Factor A contributes seven point five per cent, factor B contributes three point three three, factor C contributes approximately twenty one per cent, factor D contributes approximately three per cent, interaction AB contributes approximately twenty one per cent, interaction AC another three per cent, and finally interaction AD contributing the most with about forty one per cent. The ANOVA analysis for this model cannot be performed, as the model is over specified and all the degrees of freedom are in the model and none are assigned to the residual (or error). There were no calculated p-values here, as without the residual error there is nothing to test against. To fix the problem, one needs to return to the effects or model button and assign at least one term to error.

5.85 Quarter Fractional 2k-2 Design

After understanding how a half fractional factorial design works for four factors, let us now understand quarter fractional factorial designs. These designs are generalized by the notation two raised to k minus two. Quarter fractional two raised to k minus two designs are available from five factors onwards. The basis of this experiment is to test k factors at two levels in two raised to k minus two runs. For five factors at two levels in one replicate, the experimental settings are as follows: a full factorial will have thirty two runs; a half fractional factorial will have sixteen runs; and a quarter fractional factorial will have eight runs. A sample five factor quarter fractional factorial experiment with two replicates, that is, sixteen runs, has been set up with the help of 'Design Expert.' The effects list for this experimental setting is shown in the next slide.
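The run counts quoted above follow directly from the notation. A quick sanity check in plain Python, assuming one replicate and two levels:

```python
# Run counts for k = 5 factors at two levels, one replicate.
k = 5
full_factorial     = 2 ** k        # 32 runs
half_fractional    = 2 ** (k - 1)  # 16 runs, the 2^(k-1) design
quarter_fractional = 2 ** (k - 2)  # 8 runs, the 2^(k-2) design
print(full_factorial, half_fractional, quarter_fractional)  # 32 16 8
```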

5.86 Quarter Fractional 2k-2 Design(Contd.)

In this slide, we will look into the effects list and also the alias representations. As seen from the table, the fifth, fourth, and third order interactions are completely aliased, and some second order interactions are also aliased. In short, a lot of interactions got aliased with the main effects and second order interactions. The percentage contributions for each of the effects are given in the column on the extreme right. We read these percentage contributions in much the same way as in the half fractional experiment.

5.87 Quarter Fractional 2k-2 Design(Contd.)

The ANOVA analysis for the quarter fractional factorial experiment tells us a lot of things. The first thing we note, by looking at the probability value of zero point zero two nine two, is that the model is a significant one. The significant terms that need to be considered for the final model are E, BC, and BE. Let us now understand the ANOVA analysis given in the slide. As one can see, the model "F-Value" is 4.29, which gives a p-value (that is, Probability > F) of 0.0292. This is less than 0.05, making it significant, and implies that there is only a 2.92 per cent chance that a "Model F-Value" this large could occur due to noise. Similarly, the terms E and BC are also significant, as their corresponding p-values are less than 0.05.

5.88 Quarter Fractional 2k-2 Design(Contd.)

The model analysis itself throws up an interesting set of results. The predicted r square value is quite low, at about fifteen point seven nine per cent. The predicted r square metric is used to predict future observations from the model. The adequate precision metric tells us whether a signal can be detected here. The adequate precision value of five point three zero seven, compared to a threshold value of four, indicates that this model can be used to understand variability, and suggested improvements can be made on the basis of the model. On the slide, we have a detailed model analysis report; let us now discuss it. The predicted R-squared value is 0.1579, which is not as close to the adjusted R-squared value of 0.6053 as one would normally expect. This might indicate a large block effect or a possible problem with the model or the data. To check whether the model and the data are sound, one should consider steps such as model reduction, response transformation, and outlier detection. It is also necessary to check the signal to noise ratio; it is desirable for this ratio to be greater than 4. In this case the "Adequate Precision" value, which represents the signal to noise ratio, is 5.307, which means the data has an adequate signal. Hence, this model can be used to navigate the design space.

5.89 3k Factorial Design

In this slide, we will understand the three raised to k factorial design, another variant of the full factorial designs. The two raised to k fractional factorial designs are the most preferred in industrial applications and research. A variant of this design is the three raised to k factorial design, available in full factorial and fractional factorial forms. While the notations used in the two raised to k designs were plus and minus one, as this facilitates the geometric view of the design, in the three raised to k designs we use the notations minus one, zero, and plus one. The three raised to k design is often used by experimenters for estimating curvature. For estimating curvature, the two raised to k designs augmented with center points are often considered an excellent alternative. Another alternative for estimating curvature is the response surface design.

5.90 3k Factorial Design (Contd.)

The simplest form of three raised to k factorial design is the three raised to the power of two factorial design. In one replicate, this experiment has nine runs. The two factors are tested at three levels, zero, minus one, and one; or zero, one, and two. The sum of squares for second order interactions and higher are often partitioned into single degree of freedom components and multiple degree of freedom components. The sums of squares can be determined by two methods. One is the method used for determining the sum of squares for the two raised to k designs and the other is the Latin square method.
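The nine-run layout described above is easy to enumerate. Below is a minimal Python sketch, using the minus one, zero, plus one coding mentioned earlier, that lists the runs of a three raised to two design in one replicate.

```python
# A 3^2 design matrix: two factors at three coded levels gives 9 runs.
from itertools import product

levels = (-1, 0, 1)
design = list(product(levels, repeat=2))
for run, (x1, x2) in enumerate(design, start=1):
    print(f"run {run}: x1={x1:+d}, x2={x2:+d}")
print("total runs:", len(design))   # 9
```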

5.91 Response Surface Designs

Let us now understand response surface designs. In short, these designs help us in estimating the curvature effects in a model. The two raised to k and the three raised to k factorial designs we have studied so far will help us estimate the curvature effects of a model by the addition of a center point. Despite this, the two raised to k models are not considered robust models to study quadratic effects. In order to study quadratic effects, one should use response surface designs. Response surface designs help in finding improved or optimal designs, troubleshooting process issues, and making a product or process robust against external influences.

5.92 Response Surface Designs(Contd.)

We can choose from two types of response surface designs: central composite and Box-Behnken designs. In this example, we will understand the central composite design for three factors; we will cover this in detail over the next couple of slides. Each factor is varied at five levels, which makes a response surface design robust. The five levels are plus and minus one, plus and minus alpha (where alpha is an axial point), and zero as the center point. Use Design Expert to set up a response surface design experiment, and add two blocks to see how blocking helps a response surface design.
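To make the five levels concrete, here is a hand-rolled Python sketch of a three-factor central composite design: plus and minus one cube points, plus and minus alpha axial points, and center points. The rotatable choice of alpha and the number of center points used below are my assumptions, not values taken from the slide.

```python
# A sketch of a three-factor central composite design (CCD).
from itertools import product

k = 3
alpha = (2 ** k) ** 0.25          # ~1.682, the usual rotatable axial distance (assumption)

factorial_pts = list(product((-1.0, 1.0), repeat=k))   # 8 cube points at +/-1
axial_pts = []
for i in range(k):                                     # 2k axial points at +/-alpha
    for sign in (-alpha, alpha):
        pt = [0.0] * k
        pt[i] = sign
        axial_pts.append(tuple(pt))
center_pts = [(0.0, 0.0, 0.0)] * 6                     # assumed number of center points

ccd = factorial_pts + axial_pts + center_pts
print(len(ccd), "runs")                                # 8 + 6 + 6 = 20
```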

5.93 Response Surface Designs(Contd.)

Let us understand the first step. After setting all the factors, and adding the blocks, we will check the experiment setup. The setup snapshot is displayed on the slide.

5.94 Response Surface Designs(Contd.)

All we need to do now is to enter our responses for the level settings as shown on the slide. This is the next step.

5.95 Response Surface Designs(Contd.)

Let us discuss step number three. The block to block variability plot is an interesting plot in a response surface design. It tells us whether blocking has been able to take care of any nuisance factor error. Check the block to block variability: the black boxes indicate data for block 1 and the red boxes indicate data for block 2.

5.96 Response Surface Designs(Contd.)

In the fourth step, check the design evaluation report. Most importantly, observe the lack of fit degrees of freedom. From the table given, we can find that the lack of fit degrees of freedom is five. The aliases are calculated based on the response selection, taking into account any missing data points, if necessary. One needs to watch for aliases among the terms that need to be estimated.

5.97 Response Surface Designs(Contd.)

The results analysis is provided on the slide. The results analysis table does not really reveal a lot about the model, at least on a first reading.

5.98 Response Surface Designs(Contd.)

Further in-depth reading of the results page tells us that the lack of fit is not significant. This is indicated by the high p-value of zero point nine five six seven, which means there is a ninety five point six seven per cent chance that a lack of fit value this large could occur due to noise.

5.99 Response Surface Designs(Contd.)

This slide summarizes all the findings discussed in the previous slides about the lack of fit insignificance and other points. Let us now analyze the report shown on the earlier slide. As one can see, the values of "Prob > F" are greater than 0.05, which means that there are no significant model terms. If there are many insignificant model terms, model reduction will help improve the model. The next part of the analysis shows the "Lack of Fit F-Value" as 0.31. This implies that the lack of fit is not significant relative to the pure error; there is a 95.67% chance that a "Lack of Fit F-Value" this large could occur due to noise. In summary, a non-significant lack of fit is good, as we want the model to fit. Further, one can see that the predicted R-squared value is negative, which implies that the overall mean is a better predictor of the response than the current model.

5.100 Nested Designs

In this slide and the next one, we will have an overview of two special types of designs. First, let us understand the nested design. A nested design is a type of design of experiment where each subject receives one, and only one, treatment condition. A nested design is recommended for studying the effect of sources of variability that repeat themselves over time. Data collection and analysis for nested designs are straightforward. Interaction effects, which would otherwise have to be studied for time-dependent errors, are no longer a concern. We can see a diagrammatic representation of the nested design on the slide.

5.101 Split Plot Designs Introduction

The next design that we will discuss in this slide is the split plot design. Split plot designs are blocked designs, where blocks serve as the experimental units for a subset of the factors. For a typical two level factorial experiment, the factors and levels are set up in two blocks, known as whole plots. The experimental units that are set up within these blocks are known as split plots or subplots. The entire design is randomized twice: once to determine the block level treatments for the whole plots, and again to determine the split plot experimental unit treatments within each block. It differs from a completely randomized design because random errors are present in the split plots as well as the whole plots. Randomization ensures that the split plot errors are independently distributed and mutually independent within the whole plot.

5.102 Taguchi's Designs

Finally, let us move on to discuss Taguchi's designs, conceived and developed by Dr. Genichi Taguchi. Taguchi's designs focus on the robustness of the product; in other words, on designing a product in such a way that it is insensitive to the common causes of variation existing in the process. Taguchi's approach quantifies the effect of deviation in a process as a financial loss with the loss function, L of y equals k multiplied by y minus m the whole square, that is, L(y) = k(y − m)². Here, y is the value of the quality characteristic, m is the target for the quality characteristic, and k is a constant that signifies the financial importance of the quality characteristic.
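The loss function is simple arithmetic, as the small Python sketch below shows. The constant k and the example values are hypothetical and only illustrate how loss grows with deviation from the target.

```python
# A tiny sketch of Taguchi's quadratic loss function, L(y) = k * (y - m)^2.
def taguchi_loss(y: float, m: float, k: float) -> float:
    return k * (y - m) ** 2

# Hypothetical numbers: the further y drifts from the target m, the larger the loss.
print(taguchi_loss(y=10.4, m=10.0, k=50.0))   # 8.0
print(taguchi_loss(y=11.0, m=10.0, k=50.0))   # 50.0
```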

5.103 Taguchi's Designs (Contd.)

The nominal signal to noise ratio is given by the formula, S divided by N nominal equals ten multiplied by the logarithm of the square of y bar divided by the square of s, that is, S/N = 10 log10(ȳ²/s²). The available Taguchi designs are displayed below. They are L four, a geometric design for up to three factors; L eight, a geometric design for up to seven factors; L twelve, a non-geometric design for up to eleven factors; L sixteen, a geometric design for up to fifteen factors; L twenty, a non-geometric design for up to nineteen factors; and L twenty four, a non-geometric design for up to twenty three factors.
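As a quick illustration of the formula, here is a short Python sketch that computes the nominal-the-best signal to noise ratio on hypothetical readings; a larger value indicates less relative variation around the mean.

```python
# Nominal-the-best S/N ratio: 10 * log10(ybar^2 / s^2), on hypothetical data.
import math
import statistics

readings = [10.2, 9.8, 10.1, 10.0, 9.9]           # hypothetical measurements
ybar = statistics.mean(readings)
s = statistics.stdev(readings)                     # sample standard deviation
sn_nominal = 10 * math.log10(ybar ** 2 / s ** 2)
print(round(sn_nominal, 2))                        # larger is better
```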

5.104 Taguchi's L4 Design

Let us understand how to work with Taguchi's L four design. The L four design can be constructed using the DOE tool, provided as a Microsoft Excel spreadsheet in the toolkit. Open the Excel workbook, DOE, and the worksheet L four Taguchi. The instructions for updating the L four design are the same as those for the full factorial and fractional factorial designs: enter data in the yellow background cells, and the effects and calculations will appear in the grey colored cells. Look at the last table. If we look at the cells below the column p, we can see that all of them have the value zero. We can say that, using Taguchi's design here, the factors are seen to have a significant impact on the response variable.

5.105 Taguchi's L4 Design Graphs

In this slide, we can clearly see that the factors one and three have a positive impact on the response variable, while factor two is seen to have a negative effect on the response variable.

5.106 Taguchi's L8 Design

We will now discuss the Taguchi’s L eight design, another variant of Taguchi’s designs. Open the worksheet labelled L eight Taguchi. The instructions to update this experimental setup remain the same. Please note that the Taguchi L eight design works the best when we have seven factors to test. The Taguchi L eight design will test the seven factors at two levels in eight runs. We can see the effects table displayed in the next slide.

5.107 Taguchi's L8 Design(Contd.)

Let us look into the results table for the L eight design. Take a look at the probability (p) values. All the values are zero. This clearly indicates that we have all these factors impacting the response variable. A study of the column titled effects will tell us which of the effects are positive and which of them are negative.

5.108 Taguchi's L8 Design(Contd.)

The graph in this slide shows the seven factors, their low and high level settings, and the effect of each factor on the response variable.

5.109 Taguchi's L8 Design(Contd.)

The multiple graphs that we see on the slide show the interaction effects, if any, between the factors that impact the response variable. Let us take a look at the first graph, which shows the interaction effect between factors one and two. As we can see, the lines do not meet or cross over, signalling that the interaction effects are negligible. The first four graphs show the interaction between factor one and factors two, three, four, and five. The last two graphs show the interaction between factor two and factors three and four.

5.110 Plackett Burman's Design

After working with the Taguchi’s designs and some variants of it, that is, the L four and the L eight designs, let us understand Plackett Burman designs. Resolution designs can be used to investigate k main effects using k plus one runs. The problem with highly fractional two raised to k minus p designs is that they miss a lot of interaction effects due to aliasing. Plackett Burman design allows investigation of k main effects using k+1 runs with the valid runs being multiples of 4. Valid runs for Plackett Burman’s designs are 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52, and so on up to 100. Whenever the value of the run is a power of 2, the resulting design is identical to a fractional factorial design, so Plackett–Burman designs are mostly used when N is a multiple of 4 but not a power of 2, i.e., for runs with value 4, 8, 16, 32, 64, it is best to use a fractional factorial design.

5.111 Plackett Burman's Designs(Contd.)

In this slide, we can find a sample Plackett Burman results analysis. The analysis is done for an eight run design that allows estimation of the main effects for seven factors. The pertinent point to focus on in this table is right next to the text effects. We would find some effects being coloured green and some red. Understanding this is not tough. The green coloured effects are believed to have a positive impact on the response variable; whereas, the red coloured effects have a negative impact on the response variable.

5.112 Quality Function Deployment(House of Quality)

In this slide, we will cover the concepts of Quality Function Deployment and the House of Quality. Quality function deployment (also known as QFD) is a method to capture user demands and transform them into design quality. It helps in guiding the deployment of the functions that form quality. QFD also deploys methods for achieving the design quality in subsystems and component parts, and ultimately in specific elements of the manufacturing process. The QFD model is designed to help product and service planners focus on the characteristics of a new or existing product or service from the viewpoints of market segments, the company, or technology-development needs. QFD helps transform customer needs into engineering characteristics for a product or service, prioritizing each product or service characteristic while setting development targets for the product or service.

5.113 Summary

In this lesson we have learned how to design and interpret the two raised to two design, the general two raised to k design, a single replicate of the two raised to k design, the half fractional two raised to k minus one design, the quarter fractional two raised to k minus two design, the three raised to k design, and response surface designs. In addition, we have also been introduced to nested designs, split plot designs, Taguchi's L4 and L8 designs, and Plackett Burman's design.

5.114 Section V Lesson 4 Brainstorming Solutions Prioritization and Cost Benefit Analysis

In this lesson of the improve stage, we will understand three things namely, brainstorming, solutions prioritization, and cost benefit analysis.

5.115 Agenda

Let us look into the agenda of this lesson. We will start the lesson with brainstorming and then move on to multi-voting. Next, we will understand brainstorming, solutions prioritization, and cost benefit analysis. We will end this lesson by understanding poka-yoke.

5.116 Brainstorming

Let us learn how brainstorming plays a crucial role in the improve phase. The analyze phase gave us inputs on why the input variables were varying. This was validated by conducting additional DOE tests to see if the variability was due to noise or any special cause. The KPOV was delivery hours, and the KPIVs identified (for illustration purposes) were training time, packaging weight, and hold time, which were impacting the delivery hours. The Six Sigma team should now brainstorm for possible solutions that could attack the root cause of the issue. Brainstorming is an open ended activity that helps the team generate multiple solutions for the root cause of the problem. Assume that the root cause of the problem with hold time was that "employees did not have enough on-hand assistance."

5.117 Multi Voting

In this slide, we will cover the multi-voting technique, which is primarily used to help teams narrow down available options. Multi-voting is a voting or brainstorming technique that prioritizes ideas. Its primary goal is to reduce the range of options available, thereby preventing information overload. It encourages each team member to come up with a personal ranking of the options; everyone's rankings are then collated to prioritize the ideas. Multi-voting is also known as N/3 (read as "N by three") voting, where N refers to the total number of ideas. Every team member is given a number of votes equal to approximately one third of the total number of ideas and is then instructed to vote for the most important ideas from their perspective. While using this technique, remember that a team member can assign only one vote per idea. Since there are fewer votes than ideas, the less important ideas naturally receive fewer votes and are 'weeded out' automatically. The reduced list will then contain only the important ideas that the team must deal with.
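The tallying is mechanical, as this small Python sketch shows. The ideas, ballots, and the survival threshold used here are all made up for illustration; only the N/3 rule comes from the slide.

```python
# A toy illustration of N/3 multi-voting with hypothetical ideas and ballots.
from collections import Counter

ideas = [f"idea {i}" for i in range(1, 10)]          # N = 9 ideas (hypothetical)
votes_per_member = max(1, round(len(ideas) / 3))     # roughly N/3 votes each

# Each member casts at most one vote per idea (hypothetical ballots).
ballots = [
    ["idea 2", "idea 5", "idea 7"],
    ["idea 2", "idea 3", "idea 5"],
    ["idea 5", "idea 7", "idea 9"],
]
tally = Counter(vote for ballot in ballots for vote in ballot)
shortlist = [idea for idea, count in tally.most_common() if count >= 2]  # assumed cutoff
print(votes_per_member, shortlist)    # ideas with broad support survive the weeding
```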

5.118 Brainstorming Prioritization and Cost Benefit Analysis

In the improve toolkit, there is a file called the countermeasures sheet. The improve toolkit contains all the tools we can put to use in the improve phase; the reason these tools are separated from the main toolkit file is that most of them are non-statistical in nature. A snapshot of the countermeasures sheet can be seen on the slide. Let us now explain the important columns in this sheet. The first column, as we can see, is titled root cause. Below this column, we enter the root cause of the problem identified from the Fishbone diagram and the RCA exercise. The next column is countermeasures or proposed solutions. This is where we enter how the root cause of the issue will be tackled. Once the countermeasure is updated, enter how feasible the solution is. After that, enter the specific on-ground actions taken to get the solution implemented. Once that is done, enter the effectiveness and the overall rating for the solution.

5.119 Brainstorming Prioritization and Cost Benefit Analysis(Contd.)

Let us understand, with the help of an example, how to update the countermeasures sheet. The root cause here is lack of hands-on assistance. The suggested solution is to update the website with all the latest issues and how to fix them, which would provide the workers and employees with a sense of assistance. The feasibility is ranked four, which means the team thinks implementing the solution is not tough. In terms of specific actions, the team holds the project manager accountable for setting up the website with the necessary documentation. The effectiveness gets a rating of four, and the overall rating for the solution is sixteen, with an estimated value to the company of two thousand dollars. From the countermeasures sheet, it can be seen that solution 2, which is aimed at attacking the same root cause, is probably more effective than solution 1, though solution 1 is more feasible. Note that the ratings should be provided as a team. The graph presented in the countermeasures sheet tells us the priority of the solutions to be implemented. This tool can also be used by the team to record preventive actions, if there are any.

5.120 Brainstorming Prioritization and Cost Benefit Analysis(Contd.)

The Six Sigma team should also update the 'action plan' tool provided with the improve toolkit. The action plan document is a record of the change that is proposed and how the change would be carried out. A snapshot of the tool is shown on the slide. All the rows in this tool are self-explanatory, and any Black Belt should be able to use it with relative ease.

5.121 Brainstorming Prioritization and Cost Benefit Analysis(Contd.)

Let us now look into the concept of cost benefit analysis. Cost benefit analysis is an important analysis the Six Sigma team needs to do, as every solution to be implemented is judged on two main things, namely the cost needed to implement the solution and the benefits the company would get out of implementing it. If the CEO of a company is investing money in a project, he definitely needs to know how much he will get out of the deal, and cost benefit analysis will help him find that. Cost benefit analysis is often expressed through three main metrics: the B-C ratio (benefit to cost ratio), net present value (also called NPV), and internal rate of return (also called IRR).

5.122 Brainstorming Prioritization and Cost Benefit Analysis(Contd.)

Let us understand cost benefit analysis with the help of an example. Three solutions have been thought of by the Six Sigma improvement team. The associated costs and benefits for a six month review range are presented on the slide, along with the B-C ratio. Solution A, as we can see, needs ten thousand dollars to be implemented, against which it gives a total projected financial benefit of twenty thousand dollars. This benefit number is projected or forecasted for a fixed amount of time. Similarly, solution B has costs of twenty five thousand dollars and projected benefits of seventy five thousand dollars, and solution C has five thousand dollars in costs and thirty thousand dollars in projected benefits. The B to C ratio for the solutions, found by dividing the benefits by the costs, is two, three, and six for A, B, and C respectively. Now the question is, out of the three solutions, which one would be preferred by the Sponsor?
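A quick check of these ratios in plain Python, using the figures quoted on the slide:

```python
# Benefit-to-cost ratios for the three candidate solutions.
solutions = {
    "A": {"cost": 10_000, "benefit": 20_000},
    "B": {"cost": 25_000, "benefit": 75_000},
    "C": {"cost": 5_000,  "benefit": 30_000},
}
for name, s in solutions.items():
    print(name, s["benefit"] / s["cost"])   # 2.0, 3.0, 6.0
```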

5.123 Brainstorming Prioritization and Cost Benefit Analysis(Contd.)

Let us understand how to calculate NPV, the net present value. The cost needed to implement the solution is twenty thousand dollars. The rate of discount, often fixed by the business, is ten per cent. The benefit we would get by the end of year one is five thousand dollars, by the end of year two five thousand dollars again, and by the end of year three eleven thousand dollars. To calculate NPV, use the Excel formula NPV as mentioned in the slide, adding a negative sign to the costs. The benefit value after using the NPV formula is sixteen thousand nine hundred thirty two dollars. Subtracting twenty thousand dollars from sixteen thousand nine hundred thirty two dollars, the net present value is negative three thousand and sixty eight dollars. Now, we need to calculate the profitability index, which is given by NPV divided by total cost. In this case, the profitability index is negative three thousand sixty eight dollars divided by twenty thousand dollars, which is negative zero point one five.
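The same arithmetic can be sketched in Python, as below. This mirrors Excel's NPV(rate, values) convention of discounting the first cash flow by one full period; small rounding differences from the figures quoted on the slide are to be expected.

```python
# NPV and profitability index for the example above.
cost = 20_000
rate = 0.10
benefits = [5_000, 5_000, 11_000]            # end of years 1, 2, 3

pv_benefits = sum(cf / (1 + rate) ** t for t, cf in enumerate(benefits, start=1))
npv = pv_benefits - cost                     # negative here, as on the slide
profitability_index = npv / cost             # roughly -0.15
print(round(pv_benefits, 2), round(npv, 2), round(profitability_index, 2))
```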

5.124 Brainstorming Prioritization and Cost Benefit Analysis(Contd.)

Let us now discuss the internal rate of return. The company benchmarks its cost of capital at sixteen per cent. To calculate the internal rate of return, use the Excel formula IRR as mentioned in the slide, including both the costs and the benefits. Using the formula on the costs and benefits mentioned, the IRR is calculated to be two per cent. The IRR is less than the cost of capital; a solution with an IRR lower than the cost of capital will not be chosen due to benefit constraints. The profitability index from the NPV calculation was also negative. The solution definitely cannot be chosen, as over the next three years it will not yield financial benefits.
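For readers without Excel, here is a dependency-free Python sketch of the IRR calculation: find the discount rate at which the NPV of the cash flows is zero, using simple bisection. The cash flows are those from the NPV example above, and the result should come out close to the two per cent quoted on the slide.

```python
# IRR by bisection on the cash flows from the NPV example.
cash_flows = [-20_000, 5_000, 5_000, 11_000]    # year 0 cost, then yearly benefits

def npv(rate, flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

lo, hi = 0.0, 1.0                               # NPV is positive at 0% and negative at 100%
for _ in range(100):
    mid = (lo + hi) / 2
    if npv(mid, cash_flows) > 0:
        lo = mid
    else:
        hi = mid
irr = (lo + hi) / 2
print(f"IRR ~ {irr:.2%}")                       # about 2 per cent
```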

5.125 Poka Yoke

In this slide, we will understand the poka-yoke concept and how it can be used when designing a new product or service for the first time. Poka-yoke is a Japanese term that means "mistake proofing." Initially it was called "baka-yoke," which meant "fool-proofing" or "preventing fools from making mistakes," but people objected to it as it referred to them as fools. It was therefore renamed "poka-yoke," which is about preventing mistakes from happening in the first place. Poka-yoke is any mechanism in a lean manufacturing process that helps an equipment operator avoid mistakes in the process, product, or service. Its primary purpose is to eliminate product defects by preventing, correcting, or drawing attention to human errors as they occur, and in some cases even stopping the machine or process where the mistake is occurring to prevent any hazard. It forces the user to do a task only one way, the right way. Poka-yoke creates less waste and increases productivity. In summary, by implementing a fail-safe environment, less focus is placed on those tasks, workloads on the employees are decreased, and outputs are increased.

5.126 Summary

Here is the summary of what we have learned in this lesson. We started with understanding brainstorming and how to do it. Then we looked into how to use the Multi-voting techniques to narrow down the options and achieve Solutions Prioritization. It was Cost Benefit Analysis that we looked into next, which helps us finalize any solution. And finally, we reviewed the poka yoke technique for any solution that is to be implemented.

5.127 Section V Lesson 5 Piloting Validating and FMEA

Let us move on to the last lesson of the Improve Phase, Piloting Validating and FMEA. In this lesson, we will discuss some piloting techniques and validating techniques to understand the results and improvements and the FMEA or Failure Mode Effects and Analysis.

5.128 Agenda

Let us discuss the agenda of this lesson. In this lesson, we will learn why to pilot the solutions, and what tools are to be used while piloting. We will also understand how to validate or test our solutions using paired t test. After that we will look into the steps that come after the improve phase. And finally, we will discuss how to work with failure mode effects analysis.

5.129 Pilot Solutions

Why should we pilot our solutions? Let us look into that in this slide. This is often an important consideration for any Six Sigma practitioner, and anyone who wishes to do a Six Sigma project has to ensure that the solutions are piloted in the improve phase. Lean Six Sigma Black Belts are trained to implement and work on enterprise wide deployments, and deploying a solution in the improve stage across the entire enterprise may be fraught with risk. The solution is theoretical and has only been tested from a cost benefit angle; evidence needs to be gathered on the on-ground success of the solution. On-ground success or failure can be determined by piloting, that is, deploying the change effort in small teams. The ratio to be followed is ten to forty to fifty: the solution should first be deployed on ten per cent of the entire scope or span of control, then forty per cent, and finally fifty per cent. This allows the Six Sigma team to limit the possible risks of the solution.

5.130 Piloting Tools

Let us now discuss the tools used for piloting. Once the Six Sigma team has identified the first phase pilot study and has outlined the possible risks in order to proof the solutions against failure, piloting should start. The first tool the Six Sigma team will use is known as the risk assessment matrix. The risk assessment matrix has fields for the description of the risk, the business impact rating, and the probability of occurrence. This matrix has to be updated first in the phase 1 pilot and then in the subsequent phases. This sheet is provided in the toolkit under the name 'risk assessment matrix.'

5.131 Piloting Tools (Contd.)

Let us understand PDPC or process decision programme chart. This tool has to be used as a contingency planning tool. This pre-empts possible reasons why a change effort could fail. A snapshot of this tool is shown on the slide.

5.132 Paired t Test

Let us now look into the paired t test, one of the hypothesis tests we covered in section four, Analyse. A paired t test is done here because it is an extremely relevant and important test in the improve phase. The piloting phase must last at least a week, during which the Six Sigma team must understand the possible risks. In the week that follows, the Six Sigma team must collect the post improvement data. Any improvement made must be statistically validated, and the paired t test is an excellent tool for the statistical validation of the data collected. Data collected for 'hold time' before and after improvement on a pilot run is shown on the slide.

5.133 Paired t Test(Contd.)

The paired t test results, produced with Microsoft Excel, are shown on the slide. The critical things to look at are the one-tail and two-tail p-values. In both cases the value of p is less than 0.05: for one-tail it is zero point zero and for two-tail it is zero point zero zero one. Based on this, we can reject the null hypothesis, because when the p-value is less than zero point zero five, the null hypothesis has to be rejected.
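As an alternative to the Excel analysis, the same test can be sketched in Python with scipy. The before and after hold-time samples below are hypothetical; the slide's raw data is not reproduced here.

```python
# A paired t test on hypothetical before/after hold-time data.
from scipy import stats

before = [12.1, 11.8, 13.0, 12.5, 11.9, 12.7, 12.3, 12.8]   # hypothetical hold times
after  = [10.9, 10.7, 11.6, 11.2, 10.8, 11.5, 11.1, 11.4]   # hypothetical hold times

t_stat, p_two_tail = stats.ttest_rel(before, after)          # two-tailed by default
print(t_stat, p_two_tail)     # p < 0.05 -> reject the null of equal means
```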

5.134 Paired t Test Interpretations

Let us now look into the paired t test interpretations. The null should be rejected due to the significant p-value. This means that the before group mean and the after group mean are different. To find out whether the after group mean is less than the before group mean, look at the box plot. If the box plot shows that the after hold time is less than the before hold time, it means that the improvement measure has worked. The solution, if all risks have been identified, can then be deployed across the enterprise.

5.135 Paired t Test(Contd.)

The box plot is plotted to check if the improvement in hold time is for better or worse. As we can see from the plot, the after group hold time is considerably less than before improvement hold time.

5.136 Paired t Test(Contd.)

Let us learn some more points about the paired t test. The paired t test has been done on the KPIV, that is, hold time, as our improvement measure was directed at improving the KPIV. Do a paired t test on the KPOV performance as well, and see whether the improvement in the KPOV is validated. If the test passes, that is, if the null for the KPOV groups can be rejected, the improvement measure has worked on the KPOV as well. It is important to note the following conditions for a paired t test. One, the data must come from a normal distribution. Two, the data must come from related groups and not independent groups. Three, the sample size must be less than thirty.

5.137 Improve Next Steps

Let us look into the next steps in the improve phase. If the pilot run has been successful, that is, if we have managed to statistically validate the change, the following steps are to run the solution through phase 2 and phase 3 of implementation. The pilot run for each phase should be 1 to 2 weeks. To ensure repeatability and reproducibility, do not change the operators or the measurement system. Validate all solution deployments statistically with the help of a paired t test. Test with simple linear or multiple linear regression, and half normal plots, to see whether all relationship conditions are met by the model. If all deployments across phases are statistically validated, conduct an enterprise wide deployment study for one month and re-validate the data using a paired t test.

5.138 Failure Mode Effects Analysis

In this slide, we will understand failure mode effects analysis, popularly known as FMEA. This is a tool used in the Green Belt training program. FMEA is commonly referred to as a pre-emptive tool; we can use it along with the risk assessment matrix to assess possible risks to the process. The FMEA sheet is provided in the improve toolkit. FMEA is also used as a business measure to show improvement. The key metric to be noted in an FMEA matrix is the RPN, or risk priority number. The risk priority number is severity multiplied by occurrence multiplied by detection, where severity shows how severe the failure mode is, occurrence shows how probable the failure mode is to occur, and detection shows how easy it would be for the process team to detect the failure mode.
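The RPN itself is a simple product of the three ratings, as the one-function Python sketch below shows. The ratings used here are hypothetical and not the ones from the slide's worked example.

```python
# Risk priority number: severity x occurrence x detection.
def rpn(severity: int, occurrence: int, detection: int) -> int:
    return severity * occurrence * detection

print(rpn(severity=9, occurrence=8, detection=7))   # 504 for these made-up ratings
```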

5.139 Failure Mode Effects Analysis(Contd.)

We will continue with FMEA in this slide too. The first document that needs to be updated is the FMEA checklist. The FMEA document should only be updated once the checklist questions are answered. Snapshot of the FMEA Checklist is given in the slide.

5.140 Failure Mode Effects Analysis(Contd.)

The FMEA template is attached in the toolkit. Note that a Green Belt will know how to update the template. A Black Belt should ensure that along with the entire Six Sigma team, the Process Expert is also present while the FMEA is being updated. It is not mandatory for the Black Belt to update the FMEA template. But, the Black Belt needs to be present while documenting the FMEA.

5.142 Failure Mode Effects Analysis(Contd.)

A sample part, or failure mode, has been updated in the FMEA template so that we know how to use and review this tool. The failure mode is that part number one hundred three is found to be deformed in a nut bolting process. The deformed part cannot be fitted into car wheels, which is the end use intended by the customer. Improper fitting procedure and bolting time have resulted in this failure mode. The severity, occurrence, and detection rating guidelines are presented in the FMEA template. Updating the individual ratings, the existing pre-improvement RPN was seven hundred. The improvement measures taken are documented in the extreme right columns; these measures would have been implemented in the pilot and enterprise wide deployments. The revised RPN is then calculated and presented as a business measure to the Sponsor of the Six Sigma team. As we can see, all the cells corresponding to the individual columns are updated with the details. Ensure that the Green Belt updates these details for all the failure modes in the process.

5.143 Summary

In this lesson we discussed piloting, validating, and FMEA. By the end of the improve phase, we should have not only validated the improvements, but also measured the revised or new Cp or Cpk value to show the process has indeed improved.

5.144 Improve Activity Summary

Here is the activity summary for the improve phase in chronological order. First, test the model parameters using design of experiments. Next, understand the improvement model using DOE with regression. Third, brainstorm for solutions. Fourth, do a cost benefit analysis using NPV or IRR. Piloting the solutions is the next activity. After that, validate using a paired t test. The seventh step is phase two deployment, followed by another validation using a paired t test. Phase three deployment is the ninth step, and validating it using a paired t test is the tenth. The last step has two options: if no improvements are made, re-engineer using DFSS; if improvements are made, update the FMEA and calculate the new Cp and Cpk.

5.145 Quiz

Now let us attempt the quiz questions. In the next session, we will be discussing the control phase.
