Tuesday, July 12, 2016

Marriage between Lean Manufacturing and Six Sigma

Foreword

I decided to write about this topic when I was asked to explain Lean Manufacturing methods, techniques and philosophy. While thinking about the right approach, I realized it was not possible to talk about Lean Manufacturing without Six Sigma and other problem-solving methodologies.
Therefore, before talking about the merger of Lean Manufacturing and Six Sigma that is taking place nowadays, I would like to explain both concepts separately. Once we have a clear picture of each one, the idea of merging them becomes simple.

Lean Manufacturing

As most of you may already know, the first manufacturer to apply Lean Manufacturing was Toyota, a few years after the end of the Second World War. At that time all manufacturers were taking advantage of the benefits of mass production, mainly economies of scale.
As customer requirements became more and more demanding, a new way of thinking turned out to be essential. Higher final product quality, better productivity figures and lower costs were required, and mass production did not provide solutions to these new requirements.
Lean is based on the reduction of waste. Lean is not just a set of techniques and methods; the philosophy is as important as the techniques in order not to lose what Toyota calls the “true north”.
Much has been written about the seven or eight components of waste (lately one additional type has been added to the list), but not much about the “ideal” process. Everything is intended to move step by step towards this process, which some authors call “one-piece flow”. Let’s imagine a process where customer orders go directly to the order queue, which is not really a queue since takt time (demand) and cycle time (the real manufacturing process) are equal (perfect line balancing), and, without any waste, the final product comes off the production line in just the time required to perform all the operations needed to finish it, that is, without delays, scrap, inventory, etc.
This vision of an ideal process can be considered part of the philosophy of Lean, or “Lean Thinking” as it is sometimes called.
To achieve this ideal state we need to identify and remove every kind of waste, and to do so we also need motivated and empowered people as well as a Continuous Improvement mindset. Continuous Improvement is another important leg of Lean, together with people. I personally like to explain Continuous Improvement using a concept from physics; the idea is not mine, of course… I am talking about ENTROPY.
Entropy means transformation or evolution in Greek. The universe has a natural tendency to evolve from an ordered state of energy to a chaotic one. This evolution is natural, not forced: if you leave systems alone, they will tend to behave this way. So it is observable and demonstrable that the whole universe is moving from an ordered state to a more chaotic one.
Entropy also applies very well to manufacturing processes. Ask any experienced team leader or manager what happens to a process if you leave it alone once it has reached a certain level of performance, and they will answer “it will fall back to a lower performance level on its own”. That’s entropy!
It becomes obvious that Continuous Improvement is not an option; we only have two choices: IMPROVE or DECLINE.
PDCA (Plan, Do, Check, Act) is part of the philosophy, although it could also be considered a technique, since through its application we set real objectives and Continuous Improvement thinking really works. So Continuous Improvement is the philosophy, and the PDCA cycle is the way to achieve it.
And here comes one of the reasons for having Lean metrics, another leg of Lean. We need Lean metrics such as DTD, OEE, FTT, BTS, WIP, etc., not only because we have to measure to be sure that process performance is improving, but also because we will never be working exactly as the ideal one-piece-flow process is supposed to, and therefore we need to know where we are and quantify the progress made.
The other leg we need is how to get closer to the ideal process. We need techniques and methods, Lean Manufacturing itself. They are intended to bring us as close as possible to the ideal process, so we have to consider them a tool box, and each expert has to decide which tool to apply to each process and each waste. These tools are 5S, Kanban, SMED, Poka Yoke, Yamazumi, Value Stream Analysis, etc., everything tied together with PDCA.
Just as a clarification, 5S is the only one that is not optional. We always need to implement 5S as the foundation of Lean Manufacturing.
It is worth remembering that PDCA is a scientific method used to tackle processes and improve their performance, and like any scientific method it needs to be based on data. Together with PDCA, a wide set of statistical tools were used to describe and control processes. It is well known that PDCA and SPC (Statistical Process Control) were widely used by Edwards Deming, father of the PDCA cycle.
So now we have the complete picture of Lean Manufacturing: the philosophy, the methods and techniques, and the scientific tools.
But why does all this sometimes not work?
The real world is complex. Complexity means there are many factors which can affect the output. Systems thinking gives us this point of view. Since the relationship between inputs and process outputs is not obvious, or rather, it is complex, we need a tool to analyze and improve the real world to the extent we need.
Moreover, thinking that only waste can affect process performance is a simplification of reality, as mentioned before. Systems thinking gives us a more comprehensive approach to modelling processes: input settings, noise factors and the transfer function of the process are what give us the final model.
It is true that just by applying Lean Manufacturing we will always obtain big results, but after that we will need to go deeper into the understanding of the whole system to obtain even better results (a 4-sigma process has a defect level of around 32 defects per million, while a 6-sigma process has far fewer and can be considered practically defect-free).
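As a side note, the sigma figures quoted above can be checked with a few lines of code. The sketch below is my own illustration, not part of any Lean or Six Sigma standard text: it converts a sigma level into defects per million using the normal tail, and the 1.5-sigma long-term shift applied to the 6-sigma case is simply the usual Six Sigma convention.

```python
from scipy.stats import norm

def dpmo(z, shift=0.0):
    """Defects per million opportunities for a one-sided limit at z sigmas,
    optionally applying the conventional 1.5-sigma long-term shift."""
    return norm.sf(z - shift) * 1e6

print(f"4-sigma (no shift):  {dpmo(4):.0f} DPMO")            # ~32, the figure quoted above
print(f"6-sigma (1.5 shift): {dpmo(6, shift=1.5):.1f} DPMO")  # ~3.4, the classic benchmark
```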
That is why in the late 90s and at the beginning of the 21st century Six Sigma appeared on the scene. It is a methodology which includes everything. Six Sigma takes the best from Lean Manufacturing, as the first thing to do in a Six Sigma project is to apply Lean tools where needed.

SIX SIGMA

DMAIC (the Six Sigma process: Define, Measure, Analyze, Improve and Control) has a clear connection to PDCA, uses a wide range of statistical tools (to the extent that in most companies Six Sigma Master Black Belts are considered the in-house statistical experts), includes many tools and techniques of Lean Manufacturing, and has a clearly systemic approach.
We could say that Six Sigma is not just another problem-solving method; it is the method. It can be applied to manufacturing processes as well as transactional ones. It is applicable not only to existing processes but also to processes in the design phase, with a slight variation of the method called DFSS (Design For Six Sigma), which applies the DCOV phases (Define, Characterize, Optimize and Validate).
Six Sigma is not an artificial invention. It is the natural result of a history of scientific methods applied to the improvement of industrial process performance.
The first thing a Black Belt has to do is define the issue, map the process (Value Stream Mapping) and then set a baseline performance level and an achievable goal. Nothing new: the Kaizen philosophy and PDCA are based on the same approach.
Just as Lean is based on identifying and removing waste, Six Sigma deals with defects. A defect can be defined as a traditional quality defect or, more generally, as the state of your output when it does not meet customer (internal or external) expectations (the Voice of the Customer, VoC), also called the error state of the output.
Can you imagine a process with no waste and no defects? That would be a process that always exceeds customer expectations and uses the minimum resources to do it. That is what Six Sigma delivers when it is well applied together with Lean Manufacturing.

Not by chance, Six Sigma is lately being called “Lean Six Sigma” worldwide, which is not a different thing; it is just the same thing with its correct name.

New features in Minitab 17

The following lines explain the features that are important for Six Sigma projects and should be included in Black Belt (BB) training material.

Features to be included

1. Tool for parameter estimation
2. Poisson hypothesis testing

1. Parameter estimation



Examples of possible applications:
- Process capability
- In most cases the current approach to process capability is deterministic: we estimate a value of DPU, p, mean, standard deviation, etc., and treat it as the population parameter.
- The right approach should take sampling into account, that is, use the confidence intervals that sampling imposes on the parameters driving the capability estimate. The capability then also becomes an estimate with its own confidence interval.
- MSA
- When a machine replaces people in measuring attribute characteristics (for example, artificial vision devices).
- IN GENERAL: whenever we need to estimate a parameter.

•Path in Minitab 17



•Available for all the main statistical distributions


Process Capability

1. Let’s imagine we are measuring process capability using a binomial model. Our parameter is p.
2. We have followed the usual practical rules to estimate the parameter that will be used as a baseline:
1. Long-term estimation for p: n ≈ 1000
2. The sample shows 10 NOK parts out of 1000, so p = 0,01 -> DPMO = 10000 -> Zlt = 2,33 -> Ppk = 0,78
3. Let’s work out the actual estimation using the new Minitab 17 feature:



Therefore, with a 95% confidence level, p could be anywhere between 0,0048 and 0,0183, and the capability estimate inherits that uncertainty.
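For readers without Minitab at hand, here is a small Python sketch that reproduces this interval. It assumes Minitab is using the exact (Clopper-Pearson) method for a binomial proportion, which is what the numbers above suggest.

```python
from scipy.stats import beta, norm

def clopper_pearson(k, n, conf=0.95):
    """Exact (Clopper-Pearson) two-sided confidence interval for a binomial p."""
    a = 1 - conf
    lo = beta.ppf(a / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - a / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

k, n = 10, 1000                          # 10 NOK parts out of 1000
lo, hi = clopper_pearson(k, n)
print(f"p = {k/n:.4f}, 95% CI = ({lo:.4f}, {hi:.4f})")   # roughly (0.0048, 0.0183)

# Capability carries the same uncertainty: convert the point estimate and the
# worst-case bound into a long-term Z and a Ppk figure.
for p in (k / n, hi):
    z = norm.isf(p)                      # long-term Z for that defect rate
    print(f"p = {p:.4f} -> Z_lt = {z:.2f}, Ppk = {z/3:.2f}")
```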
- Let’s imagine our target, based on customer/business requirements, is FTT ≥ 98,5%, i.e. p < 0,015.
- Following the current (deterministic) guidelines we might think the process already meets the capability target, since the initial estimate was p = 0,01 with n = 1000.
- But in truth there is a real risk of not meeting the target with this sample size.
- The approach should be to determine a suitable sample size (n) for a second iteration using the new Minitab 17 feature.



- In a second step or iteration, if we want to be sure that we are meeting the target, a sample size of n = 3322 with an initial parameter estimate of p = 0,01 ensures that the population p is below 0,014 (0,01 + 0,004) and therefore meets the target.
- Of course, a third step, in which the 3322 parts are actually taken from the process, is needed for a more accurate parameter estimate. In this third iteration we again have to check that the upper limit of the confidence interval meets the target. For instance:
- Third estimate: p’ = 0,009 with n = 3322; then:
- We are now 95% sure that we meet the target, since the confidence interval for p is (0,0061 , 0,0128).
The same applies to any distribution. The message: process capability is also an estimate (it comes from a sample), so we need to find a suitable n for the process-versus-target comparison.
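Again as an illustration only, the following sketch checks both statements with the exact method. The defect counts are my assumption (only the proportions are quoted above): about 33 NOK parts for p ≈ 0,01 and about 30 NOK parts for p’ ≈ 0,009.

```python
from scipy.stats import beta

def clopper_pearson(k, n, conf=0.95):
    """Exact (Clopper-Pearson) confidence interval for a binomial proportion."""
    a = 1 - conf
    lo = beta.ppf(a / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - a / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# Second iteration: n = 3322 with p ~ 0.01 (assume ~33 NOK parts)
print(clopper_pearson(33, 3322))   # upper bound ~0.0138 < 0.014 -> target met

# Third iteration: p' ~ 0.009 (assume ~30 NOK parts out of 3322)
print(clopper_pearson(30, 3322))   # ~ (0.0061, 0.0128), below the 0.015 target
```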

MSA

1. Let’s imagine we have an artificial vision device to measure surface defects on our product.
2. These kinds of machines are calibrated against an expert inspector, who acts as our master.
3. In order to assess Type I and Type II errors, we run an experiment which consists of measuring 177 units/defects with the machine and comparing against the expert. Then we assess α and β.
4. α = assessing NOK when the part is OK = p(NOK | OK)
5. β = assessing OK when the part is NOK = p(OK | NOK)
6. The MSA experiment results in:
7. - 3 parts were assessed NOK when they were actually OK: 3 errors out of 100 OK units, an estimate of alpha of 3% (p = 0,03).
8. - 4 parts were assessed OK when they were actually NOK: 4 errors out of 77 NOK units, an estimate of beta of 5,2% (p = 0,052).
9. This seems to be fine for most characteristics: alpha ≤ 5% and beta ≤ 10% are the most common acceptance criteria.
10. Let’s see what happens when sample size and confidence intervals are taken into account, as shown below.



- The estimate for alpha is:
- 95% CI: (0,006 , 0,085)
- The estimate for beta is:
- 95% CI: (0,014 , 0,128)
Alpha up to 8,5% and beta up to 12,8% would not be acceptable at all.
- Let’s find out what the correct sample size for this calibration/MSA would be:



We will need at least 461 OK units and 154 NOK units (615 units in total) to be sure that the alpha and beta risks are acceptable.
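As before, the confidence intervals above can be reproduced outside Minitab. The sketch below assumes the exact (Clopper-Pearson) interval and uses the 3/100 and 4/77 counts from the experiment.

```python
from scipy.stats import beta

def clopper_pearson(k, n, conf=0.95):
    """Exact (Clopper-Pearson) confidence interval for a binomial proportion."""
    a = 1 - conf
    lo = beta.ppf(a / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - a / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# alpha: 3 false rejections out of 100 OK parts (limit 5%)
# beta:  4 false acceptances out of 77 NOK parts (limit 10%)
for name, k, n, limit in (("alpha", 3, 100, 0.05), ("beta", 4, 77, 0.10)):
    lo, hi = clopper_pearson(k, n)
    verdict = "OK" if hi <= limit else "NOT acceptable (worst case exceeds the limit)"
    print(f"{name}: point = {k/n:.3f}, 95% CI = ({lo:.3f}, {hi:.3f}) -> {verdict}")
```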

2. Hypothesis testing for Poisson.

Examples of possible applications:
- Hypothesis testing in processes to be improved when the natural metric is D/1000 and/or DPU (Poisson) instead of p, FTT and/or R/1000 (binomial).

•Path in Minitab 17



• It is available for 1-sample and 2-sample tests. Power & sample size calculations will also be needed to perform the hypothesis test:


Hypothesis testing for Poisson

1. Reasons why Poisson has to be included in the hypothesis-testing road map:
1. FTT has been replaced by D/1000 as the internal indicator for manufacturing processes. The D/1000 metric is more sensitive to process improvements than FTT.
2. There are processes where FTT / p for defectives simply does not work properly: processes with a high DPU rate (>>1).
1. Example: a process improvement from DPU = 3 to DPU = 2. That is a big difference in the level of defects: we have 33% fewer issues, repair time, etc., compared with the project start. A big improvement!
2. FTT, however, will only change from approx. 5% to approx. 13,5%, less than 9 percentage points of improvement. Is that the true picture? (See the sketch below.)
3. Don’t you think the cost impact, and therefore the level of issues avoided, is better reflected by the 33% than by the 9%?
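The FTT figures above follow from the Poisson model, where FTT = P(0 defects) = e^(−DPU). The sketch below shows that relationship and, with made-up before/after counts, one simple exact way to compare two Poisson rates (the conditional binomial test); Minitab 17's 2-Sample Poisson Rate test is the ready-made alternative.

```python
from math import exp
from scipy.stats import binomtest

# FTT under a Poisson defect model: FTT = P(0 defects) = exp(-DPU)
for dpu in (3, 2):
    print(f"DPU = {dpu}: FTT ~ {exp(-dpu):.1%}")
# DPU=3 -> ~5.0%, DPU=2 -> ~13.5%: only ~8.5 percentage points of FTT gain,
# even though defects per unit dropped by 33%.

# Hypothetical before/after counts. Conditional on the total number of defects,
# their split between the two samples is binomial under H0 (equal rates).
d1, n1 = 300, 100   # 300 defects in 100 units -> DPU = 3
d2, n2 = 200, 100   # 200 defects in 100 units -> DPU = 2
res = binomtest(d1, d1 + d2, p=n1 / (n1 + n2))
print(f"p-value for H0: equal DPU = {res.pvalue:.6f}")
```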

Video: New features in Minitab 17




Complete study about the use of control charts



1. Background



Within manufacturing, the use of control charts is daily business for tracking and controlling special geometry characteristics. Due to cost and measuring time, the most widespread charts are X–Rm and X̄–R. It is usual to make adjustments to the geometry as part of the continuous improvement process.


When an adjustment is made to a piece of equipment or tooling, some parts or sub-assemblies are manufactured and then measured to validate the change. Manufacturing processes, and geometry characteristics in particular, usually behave in a way that can be statistically assimilated to a normal distribution, and can therefore be controlled using control charts of the types mentioned above. This means that the distribution is described by the normal parameters μ and σ, which are estimated by X̄ and s.


Geometry adjustments usually affect the parameter μ, since they are changes in the relative location of different parts and subassemblies. Occasionally, due to maintenance interventions or improvements, variability reductions are also produced, which affect the parameter σ (estimated by s). This study focuses on the most usual type of change: those that affect the location of the mean.


2. Study objective


                The objective of the present study is to develop and deliver a tool for the different departments which are involved in manufacturing and therefore have to validate geometry changes.


By means of the methodology suggested in this study, it can be stated with a confidence level of 99.9% whether the change in the geometry has actually been made.
3. Statistical basics


The aim is to find the sample size needed to state, with a confidence level of 99.9%, that a mean shift has occurred in the manufacturing process.


Producing a mean shift means moving the Gauss bell, which can be parameterized by its mean μ, or, graphically, moving the centre line and control limits of a control chart.
UCL and LCL stand for Upper Control Limit and Lower Control Limit, placed at the average +3,3S (3,3 times the standard deviation) and −3,3S respectively. Although the use of 3S is more common, it does not make a practical difference in terms of sample size; the use of 3 or 3,3 is a decision based on process stability.


Within X–Rm control charts the average value is represented by X̄, since it is the mean of the individual values (n = 1).


When a mean shift is produced, to have enough certainty that the change has been made we need the measurements taken after the adjustment to surpass the old control limits; graphically, the shifted distribution and its new control limits lie completely beyond the old ones.

This can be expressed analytically as LCL1 ≥ UCL0. If we select a sample size that ensures the measurement points fall outside the old control limits when the change has been made, then we can state with a certainty of 99.9% (1−α) that the change has been made.


At the same time, if the points fall inside the old control limits we can conclude that the change has not been made, also with a probability of 99.9% (1−β), because of where we have placed H1 for our hypothesis test. Note that the process under the alternative hypothesis, represented by LCL1–μ1–UCL1, is completely displaced from the current (old) process, LCL0–μ0–UCL0, which represents the null hypothesis.
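A quick way to see where the 99.9% figure comes from, assuming normality and the 3,3S limits described above (a back-of-the-envelope check, not part of the original study):

```python
from scipy.stats import norm

# With limits at +/-3.3 standard errors, a point from the *unchanged* process
# exceeds a given limit with probability ~0.05% (false alarm), while, once
# LCL1 >= UCL0 holds, a point from the *shifted* process falls beyond the old
# UCL0 with probability of at least ~99.95%.
print(f"P(old process point > UCL0)      = {norm.sf(3.3):.5f}")   # ~0.00048
print(f"P(shifted process point > UCL0) >= {norm.cdf(3.3):.5f}")  # ~0.99952
```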


3.1. - Method development



According to the AIAG SPC manual, the control limits for X̄–R and X–Rm charts are obtained from the following equations:

UCL = X̄ + A2·R̄   and   LCL = X̄ − A2·R̄      (X̄–R chart)      (1)
UCL = X̄ + E2·R̄m  and   LCL = X̄ − E2·R̄m     (X–Rm chart)
Within the control charts used we do not normally have σ, but the parameter R total (the total range) is available. R is the difference between the maximum and minimum values within a sample. If the sample is large enough (between 30 and 50 parts) and the normality of the data has been checked beforehand, we can state that s ≈ R total / 4. This relationship is an approximation (a rough estimate), which is why we cannot use the symbol ‘=’; moreover, it depends on the distribution and the sample size. With sample sizes from 30 to 50 the error made is quite acceptable, and it does not improve as the sample size increases, since for n > 150 the ratio R/s becomes much larger than 4.
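A quick Monte Carlo check of that rule of thumb (my own sketch, assuming normally distributed data):

```python
import numpy as np

rng = np.random.default_rng(7)
for n in (30, 40, 50):
    # average range of samples of size n drawn from a standard normal (sigma = 1)
    mean_range = np.mean([np.ptp(rng.normal(0.0, 1.0, n)) for _ in range(20000)])
    print(f"n = {n}: average R / sigma ~ {mean_range:.2f}")   # ~4.1 to 4.5, so R/4 ~ sigma
```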


The SPC manual estimates σ using the expression

σ̂ = R̄ / d2   →   R̄ = d2 · σ̂

And therefore, using s ≈ R total / 4,

R̄ ≈ d2 · (R total / 4)
Equation (1) can then be expressed as

UCL ≈ X̄ + A2 · d2 · (R total / 4)   and   LCL ≈ X̄ − A2 · d2 · (R total / 4)
If we need LCL1 ≥ UCL0 to be true, then (from now on we will use the symbol = instead of ≈ to simplify the notation, and the 3,3S limits described above, i.e. 1,1 times the standard 3-sigma constants):

X̄1 − 1,1 · A2 · d2 · (R total / 4) ≥ X̄0 + 1,1 · A2 · d2 · (R total / 4)
This drives to

ΔX̄ = X̄1 − X̄0 ≥ 2 · 1,1 · A2 · d2 · (R total / 4) = 2,2 · A2 · d2 · (R total / 4)
Also expressed as

A2 · d2 ≤ 2 · ΔX̄ / (1,1 · R total)     or, in terms of s,     A2 · d2 ≤ ΔX̄ / (2,2 · s)
The parameter ΔX̄ corresponds to the adjustment or change that we want to check, so it is a known quantity. R total and s represent the variability of the process and are also known, while A2 and d2 are constants which depend on the sample size n, which is what we want to find out. These values are found in tables. For this study we have used standard tables of control chart constants as the source, although they can easily be found in the statistical literature, occasionally under a different nomenclature.


Therefore, the problem boils down to finding the value of n that satisfies the following inequalities:

A2 · d2 ≤ 2 · ΔX̄ / (1,1 · R total)          (when the total range is used, table 1.2)
A2 · d2 ≤ ΔX̄ / (2,2 · s)                    (when the standard deviation is used, table 2.2)


NOTE: When X–Rm charts are used, we only have to use E2 instead of A2. In that case the comparison can only confirm whether the chart is valid for checking the change, since these charts always work with n = 2 (each individual measurement is compared with the previous one to obtain Rm), so E2 · d2 is fixed at 3.
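The whole procedure can be sketched in a few lines of code. The sketch below is my own illustration of the method described here and in section 4: it assumes the standard relation A2 = 3/(d2·√n), so A2·d2 = 3/√n, and uses the 3,3S limits (factor 1,1) that make the numbers match tables 1.1, 1.2 and 2.2.

```python
from math import sqrt

def lookup_value(delta_xbar, r_total=None, s=None, limits=3.3):
    """Right-hand side of the condition A2*d2 <= delta_Xbar / (2*(limits/3)*sigma).
    Sigma is taken as s, or approximated by R_total/4 when only the total range
    of a 30-50 part sample is available (this is what tables 1.2 / 2.2 tabulate)."""
    sigma = s if s is not None else r_total / 4.0
    return delta_xbar / (2.0 * (limits / 3.0) * sigma)

def subgroup_size(value):
    """Smallest subgroup size n (2..10) whose A2*d2 = 3/sqrt(n) does not exceed
    the looked-up value; None if even n = 10 is not enough (role of table 1.1)."""
    for n in range(2, 11):
        if 3.0 / sqrt(n) <= value:
            return n
    return None

v = lookup_value(delta_xbar=0.5, s=0.2)
print(v, subgroup_size(v))    # ~1.14 -> n = 7 with an Xbar-R chart
print(v >= 3)                 # False -> an X-Rm chart (E2*d2 = 3) is not enough here
```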


            3.2. Tables



By combining the different tables of constants the following one is obtained:


n     A2      d2      E2      A2*d2   E2*d2
2     1,880   1,128   2,659   2,121   3
3     1,023   1,693   1,772   1,732   3 (n=2)
4     0,729   2,059   1,457   1,501   3 (n=2)
5     0,577   2,326   1,290   1,342   3 (n=2)
6     0,483   2,534   1,184   1,224   3 (n=2)
7     0,419   2,704   1,109   1,133   3 (n=2)
8     0,373   2,847   1,054   1,062   3 (n=2)
9     0,337   2,970   1,010   1,001   3 (n=2)
10    0,308   3,078   0,975   0,948   3 (n=2)


Table 1.1
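Table 1.1 can be regenerated from the d2 values alone, assuming the standard 3-sigma relations A2 = 3/(d2·√n) and E2 = 3/d2 (my assumption about how the table of constants was built; the output matches the table above up to rounding in the last digit). It also shows why the E2*d2 column is always 3.

```python
from math import sqrt

# d2 values for n = 2..10 (standard SPC tables); A2 and E2 then follow from
# the usual 3-sigma relations A2 = 3/(d2*sqrt(n)) and E2 = 3/d2.
d2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326, 6: 2.534,
      7: 2.704, 8: 2.847, 9: 2.970, 10: 3.078}

print(" n     A2     d2     E2  A2*d2  E2*d2")
for n, d in d2.items():
    a2, e2 = 3 / (d * sqrt(n)), 3 / d
    print(f"{n:2d}  {a2:.3f}  {d:.3f}  {e2:.3f}  {a2*d:.3f}  {e2*d:.3f}")
# A2*d2 = 3/sqrt(n) shrinks with n, while E2*d2 is always 3 -- which is why the
# last column of table 1.1 reads "3 (n=2)" for every row.
```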


 


From the expressions in the previous point we calculate the following tables; the right-hand side of each inequality gives these values:


 
(rows: mean shift ΔX̄; columns: R total)

ΔX̄ \ R total    0,1       0,5      1        1,5      2        2,5      3
0,1              1,818     0,364    0,182    0,121    0,091    0,073    0,061
0,25             4,545     0,909    0,455    0,303    0,227    0,182    0,152
0,5              9,091     1,818    0,909    0,606    0,455    0,364    0,303
1                18,182    3,636    1,818    1,212    0,909    0,727    0,606
1,5              27,273    5,455    2,727    1,818    1,364    1,091    0,909
2                36,364    7,273    3,636    2,424    1,818    1,455    1,212
2,5              45,455    9,091    4,545    3,030    2,273    1,818    1,515
3                54,545    10,909   5,455    3,636    2,727    2,182    1,818
3,5              63,636    12,727   6,364    4,242    3,182    2,545    2,121
4                72,727    14,545   7,273    4,848    3,636    2,909    2,424
4,5              81,818    16,364   8,182    5,455    4,091    3,273    2,727
5                90,909    18,182   9,091    6,061    4,545    3,636    3,030


Table 1.2



 
(rows: mean shift ΔX̄; columns: standard deviation s)

ΔX̄ \ s    0,05      0,1       0,2       0,3      0,4      0,5      0,6      0,7
0,1        0,909     0,455     0,227     0,152    0,114    0,091    0,076    0,065
0,25       2,273     1,136     0,568     0,379    0,284    0,227    0,189    0,162
0,5        4,545     2,273     1,136     0,758    0,568    0,455    0,379    0,325
1          9,091     4,545     2,273     1,515    1,136    0,909    0,758    0,649
1,5        13,636    6,818     3,409     2,273    1,705    1,364    1,136    0,974
2          18,182    9,091     4,545     3,030    2,273    1,818    1,515    1,299
2,5        22,727    11,364    5,682     3,788    2,841    2,273    1,894    1,623
3          27,273    13,636    6,818     4,545    3,409    2,727    2,273    1,948
3,5        31,818    15,909    7,955     5,303    3,977    3,182    2,652    2,273
4          36,364    18,182    9,091     6,061    4,545    3,636    3,030    2,597
4,5        40,909    20,455    10,227    6,818    5,114    4,091    3,409    2,922
5          45,455    22,727    11,364    7,576    5,682    4,545    3,788    3,247


Table 2.2


4. Method for estimation of sample size using the tables



The results from tables 1.2 and 2.2 are the inputs for table 1.1. Therefore, to check whether the change has been made, we look up the value of ΔX̄; if the exact value is not listed, it is always advisable to use the next lower value. The same applies to the variability parameter, either R (table 1.2) or s (table 2.2), although in this case the most restrictive value for the inference is the next higher one. The resulting value is compared with A2*d2 in the case of X̄–R charts, or with E2*d2 in the case of X–Rm charts. Therefore, for a given variability (s or R) and a given ΔX̄, tables 1.2 and 2.2 give us a number to be looked up in table 1.1, as explained below.


Once we have the value from table 1.2 or 2.2 as explained above, we look for that value in table 1.1 (column A2*d2 for X̄–R charts or E2*d2 for X–Rm), which corresponds to a specific sample size (n) big enough to say that the change has been made with a probability of 99,9%. If the exact value is not found (which is what usually happens), we have to take the value that gives us more certainty, which is always the next lower one; this obviously means a bigger sample size.


Note that the E2*d2 column of table 1.1 is always 3. This is because for X–Rm charts n is always 2 (moving range). Therefore, only the cells of tables 1.2 and 2.2 with values of 3 or higher correspond to combinations of mean shift and variability for which an X–Rm control chart can be used with sufficient reliability to state that the change has effectively been made.


As an example, in a process with a total range of 1, only when the change we want to produce is 2 or bigger could we state, with almost total certainty and using an X–Rm chart, whether the change has been made. As we can see in the R total = 1 column of table 1.2, the first value equal to or bigger than 3 is 3,636, which corresponds to ΔX̄ = 2, as mentioned before.


In other words, we can also use the tables the other way around, to find out what type of chart and what sample size are needed to make an inference or prediction with almost total certainty. For instance, in a process with R = 0,5 and a geometry change of 0,5, we need n = 3; therefore the most advisable option is an X̄–R chart.


We can also state that we should measure 3 parts, sub-assemblies or units per subgroup to be almost sure that the change has been made when our process has a total range of 0,5 and we want to shift the mean by 0,5.
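As a closing check, this last example can be verified with the same logic used throughout the study (again a sketch under the assumptions stated earlier, not Minitab output):

```python
from math import sqrt

value = 2 * 0.5 / (1.1 * 0.5)     # table 1.2 cell for delta_Xbar = 0,5 and R total = 0,5
n = next(n for n in range(2, 11) if 3 / sqrt(n) <= value)   # table 1.1 lookup (A2*d2 = 3/sqrt(n))
print(round(value, 3), n)         # 1.818 -> n = 3, as stated above; and since 1.818 < 3,
                                  # an X-Rm chart would not be reliable enough here
```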