Posted Under: Deming, General Management, Statistical Thinking
One can hardly walk into a bookstore these days without being deluged with Six Sigma volumes of one sort or another. It seems to be an answer for every businessman’s problem, from controlling quality to designing new products to guaranteeing gleeful customers. As is usually the case with these fads, the truth is somewhat less than the claims. Will adopting Six Sigma make a manufacturing company more effective? Is this a direction your organization should take? What about “Lean Six Sigma”? Is that the wave of the future?
Six Sigma attempts to measure process capability; indeed, the Six Sigma metric itself is built on this concept. But a process that is drifting into and out of control has no predictable capability associated with it. Six Sigma’s statistical calculations do not require a statistically stable process: there is no requirement to assure a stable process average or stable process variation, and as a result, sigma estimates vary significantly depending on when the sample used for the calculation was drawn. Six Sigma advocates claim that allowing for a 1.5 sigma shift is an adequate safeguard against this problem, but as Donald Wheeler (the world’s foremost expert on statistical process control) shows in his paper “The Six Sigma Zone,” no such assurance exists.
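A small simulation can illustrate the point about sampling an unstable process. The numbers below (the drift rate, the spec limits, the window sizes) are all made up for illustration; the sketch simply shows that when a process mean drifts, a conventional capability estimate depends heavily on when the sample happens to be drawn.

```python
import random
import statistics

random.seed(42)

# Simulate 500 measurements from a process whose mean slowly drifts upward
# (an out-of-control process in Shewhart's sense).
drifting = [random.gauss(mu=10 + 0.004 * t, sigma=1.0) for t in range(500)]

USL, LSL = 16.0, 4.0  # assumed specification limits, purely illustrative

def cpk(sample):
    """Conventional Cpk estimate computed from a single sample window."""
    m = statistics.mean(sample)
    s = statistics.stdev(sample)
    return min(USL - m, m - LSL) / (3 * s)

early = cpk(drifting[:100])    # sample drawn early in the drift
late = cpk(drifting[-100:])    # sample drawn after the mean has moved

print(f"Cpk from early sample: {early:.2f}")
print(f"Cpk from late sample:  {late:.2f}")
# The two estimates disagree because the process is not stable;
# neither one predicts future performance.
```

Because the process has no single, stable distribution, neither number is a "capability" in any predictive sense; each merely describes the particular window that was sampled.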
In fact, this causes inappropriate action: searching for trends when there are none, and ghost hunts. Finally, the Six Sigma calculation itself divides a measure of the number of defects by a denominator based on a subjective assumption (the number of opportunities over which a defect can occur), where “defects” have been so ill-defined as to produce no meaningful measurement. Moreover, the so-called area of opportunity is essentially infinite: in an unstable process, problems can arise from many locations.
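The arithmetic behind that objection is easy to demonstrate. In this sketch (every figure is invented for illustration), the same defect count produces wildly different defects-per-million-opportunities (DPMO) figures depending only on how many "opportunities" per unit one subjectively decides to count:

```python
# Same defects, same units -- only the subjective opportunity count changes.
defects = 30
units = 1_000

for opportunities_per_unit in (1, 5, 20):
    dpmo = defects / (units * opportunities_per_unit) * 1_000_000
    print(f"{opportunities_per_unit:2d} opportunities/unit -> DPMO = {dpmo:,.0f}")
# 1 opportunity/unit  -> DPMO = 30,000
# 5 opportunities/unit -> DPMO = 6,000
# 20 opportunities/unit -> DPMO = 1,500
```

Nothing about the process changed between the three lines; only the analyst's definition did. A twenty-fold improvement in the reported metric can be had by redefining the denominator.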
As far as I can tell, operational definitions are not used or advocated in any of the Six Sigma literature I could find. Yet, as Deming points out, they are vital. For example, perhaps a restaurant requires that customers’ tables be wiped clean before they are seated; a ‘dirty’ table is therefore a defect. But what does this word ‘dirty’ mean? No loose food lying on the table? No standing water? Clean enough to eat off of without a plate? Clean enough for surgery? What does it mean to say the table must be clean?
It means nothing until you say what it means operationally: in this case, for this purpose. This operational definition is a must. The statistical conversion from a defect rate (assuming a meaningful one can be found) to a probability density function that yields percentage estimates of areas under a curve is too tortuous to discuss here; suffice it to say that it, too, has serious statistical faults. Finally, there is no mention in the Six Sigma literature of some important scientific principles. The elements of prediction are not discussed.
Operational definitions, which are critical to training and measurement, are not discussed anywhere that I could see. Logical thinking is not mentioned. The dangers of copying, and other post hoc fallacies (e.g., confusing correlation with causation), are not discussed. Hypothesis testing is taught as a statistical method with no mention of its serious shortcomings for prediction. In short, Six Sigma lacks the emphasis on scientific and statistical thinking that must be an integral part of the new strategy.