Normal Distribution Curve
Statistical Theory of Lean Six Sigma (6σ) Strategies
Salman Taghizadegan , in Essentials of Lean Six Sigma, 2006
2.1 NORMAL DISTRIBUTION CURVE
The normal distribution is the most important continuous distribution in statistics, and the normal distribution curve plays a key role in statistical methodology and applications. For instance, suppose that on each of six days a sample of 11 parts was collected and measured for a critical dimension related to a shrinkage issue. The number of parts at each dimension is listed in Table 2.1.
Row | Number of parts | Dimensions |
---|---|---|
1 | 1 | 0.620 |
2 | 3 | 0.621 = LSL |
3 | 5 | 0.622 |
4 | 8 | 0.623 |
5 | 10 | 0.624 |
6 | 12 | 0.625 = Mean |
7 | 10 | 0.626 |
8 | 8 | 0.627 |
9 | 5 | 0.628 |
10 | 3 | 0.629 = USL |
11 | 1 | 0.630 |
LSL = 0.621, Mean = 0.625, USL = 0.629
Figure 2.1 illustrates the frequency distribution for the data in Table 2.1, with an upper specification limit (USL) of 0.629, a mean of 0.625, and a lower specification limit (LSL) of 0.621 (tolerance = ±0.004). This means that any data above 0.629 or below 0.621 are assumed to be defects (out of specification). Figure 2.1 indicates that the data (population) are symmetrically distributed. By placing a point at the middle of the top of each column (as shown in Figure 2.2) and connecting the points, we get a bell-shaped curve, as in Figure 2.3, which is also called a normal distribution curve (Figure 2.4). (This is discussed in detail in Section 3.2.) The area under the distribution curve is the probability of variations from the mean of any process.
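The symmetry of Table 2.1 can be checked numerically. Below is a minimal Python sketch (not from the chapter) that expands the frequency table into raw measurements and computes the mean and the fraction of parts inside the specification limits; the variable names are illustrative only.

```python
import numpy as np

# Frequency data from Table 2.1: (number of parts, dimension)
counts = [1, 3, 5, 8, 10, 12, 10, 8, 5, 3, 1]
dims   = [0.620, 0.621, 0.622, 0.623, 0.624,
          0.625, 0.626, 0.627, 0.628, 0.629, 0.630]

# Expand the frequency table into individual measurements
data = np.repeat(dims, counts)

mean = data.mean()
std  = data.std(ddof=1)          # sample standard deviation
print(f"mean = {mean:.4f}, std = {std:.4f}")

# Fraction of parts inside the specification limits (0.621-0.629)
LSL, USL = 0.621, 0.629
inside = np.mean((data >= LSL) & (data <= USL))
print(f"fraction within spec: {inside:.3f}")
```

By symmetry the computed mean is 0.625, matching the value marked in the table.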
URL: https://www.sciencedirect.com/science/article/pii/B9780123705020500047
Control of Quality
Stewart C. Black BSc, MSc, CEng, FIEE, FIMechE , ... S.J. Martin CEng, FIMech, FIProdE , in Principles of Engineering Manufacture (Third Edition), 1996
Exercises – 17
1. Plot a normal distribution curve and use it to estimate the percentage of the total area under the curve lying between the following limits:

2. From the information given in Figure 17.7 determine, for samples of 5 pieces, the values of A0.001, A′0.001, A0.025 and A′0.025.

On a particular control chart it is required to draw control limits for averages which are likely to be exceeded 1 in every 20 times. If samples of 4 are taken, find the constants required to set these limits: (a) assuming the mean range is known; (b) assuming σ is known (i.e. find A′0.05 and A0.05).
3. After a machining operation on the diameter of a component specified as 47.500 ±0.025 mm, a sample of 300 components was inspected, the dimension being measured to the nearest 0.001 mm. The readings have been grouped into discrete classes having equal intervals and the frequencies of occurrence are tabulated below:

Diameter of components (mm) | Frequency |
---|---|
47.480–84 | 8 |
85–89 | 21 |
90–94 | 38 |
95–99 | 54 |
47.500–04 | 66 |
05–09 | 52 |
10–14 | 34 |
15–19 | 20 |
20–24 | 7 |

4. During the grinding of a large batch of components three modifications were made to the operating conditions with the object of improving the surface finish produced. Measurements were made on components processed by the original method and after each modification. The number of components at each surface finish was:

Surface finish (CLA) | Method 1 | Method 2 | Method 3 | Method 4 |
---|---|---|---|---|
1 μ | 90 | 70 | 116 | 110 |
2 μ | 124 | 164 | 190 | 240 |
3 μ | 178 | 94 | 64 | 92 |
4 μ | 240 | 100 | 130 | 100 |
5. (a) To what kind of manufacturing process is statistical quality control specially suited? Describe the steps which should be taken in applying statistical quality control to a typical manufacturing process.

(b) Write down the formula for the standard deviation of a group of parts when: (i) the whole of the parts are measured; (ii) a sample batch is inspected.

(c) Make a drawing of a normal distribution curve and show the percentage of parts included in variations from the average of ±σ, ±2σ, and ±3σ.
6. A certain dimension of a component produced in quantity on an automatic lathe is specified as 84.60 ±0.05 mm. A 5 per cent inspection check resulted in the following variation of the dimensions, measured to the nearest 0.01 mm.

Dimension (mm) | 84.56 | 84.57 | 84.58 | 84.59 | 84.60 | 84.61 | 84.62 | 84.63 | 84.64 |
---|---|---|---|---|---|---|---|---|---|
Frequency | 1 | 8 | 54 | 123 | 248 | 115 | 44 | 6 | 1 |
7. (a) What is the significance of the RPI when considering the suitability of a process to produce to a particular specification?

(b) Give the relationship between σ and the process tolerance for low and high relative precision.

(c) When producing under low RPI conditions, show how the percentage of scrap may be estimated.

(d) Why can the control limits for average be widened when the RPI is known to be high?
8. For a controlled operation on a lathe, a particular dimension of a part is specified as 55 ±0.25 mm. Samples of five components were each measured to the nearest 0.01 mm at 10 equal time intervals and the following readings obtained:

Sample No. | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
---|---|---|---|---|---|---|---|---|---|---|
Dimension (mm) | 55.21 | 55.13 | 55.11 | 54.93 | 54.88 | 55.22 | 55.11 | 55.02 | 55.12 | 55.02 |
 | 54.99 | 55.03 | 55.01 | 55.12 | 55.14 | 55.15 | 55.07 | 55.06 | 55.03 | 54.97 |
 | 55.10 | 54.98 | 54.99 | 55.04 | 55.05 | 54.97 | 54.99 | 54.97 | 54.95 | 55.06 |
 | 55.02 | 54.96 | 54.97 | 54.98 | 55.11 | 54.95 | 54.93 | 54.99 | 54.98 | 54.99 |
 | 54.95 | 55.04 | 54.98 | 54.91 | 54.97 | 55.02 | 55.00 | 54.01 | 54.93 | 55.01 |

From the given data and calculated limits construct control charts for means and ranges of the operation. What do you deduce from your charts?
9. The table gives the number of defectives found in 30 consecutive samples.

10 7 9 12 8 11 10 17 13 6
8 11 8 10 9 6 12 14 7 3
9 11 4 15 11 17 5 9 11 8

(a) Using information from the first 20 samples, draw a control chart for fraction defective.

(b) Plot the remainder of the results and comment upon the quality of work which they indicate.
10. A sampling scheme is operated from the following instructions:

'From incoming batches take samples of 50 and inspect. If the sample contains no more than 3 defectives accept the batch; if it contains more than 3 defectives reject the batch.'

Using the Poisson distribution, plot the operating characteristic for up to 10 per cent defectives in a batch. State the producer's risk of having batches containing 2 per cent defectives rejected, and the consumer's risk of having batches containing 8 per cent defectives accepted.
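The operating characteristic in Exercise 10 can be tabulated quickly. Below is a hedged Python sketch (not part of the original text) that uses the Poisson approximation P(accept) = P(X ≤ 3) with mean m = 50p; the variable names are illustrative.

```python
from scipy.stats import poisson

n, c = 50, 3  # sample size and acceptance number

# Operating characteristic: probability of acceptance vs. lot fraction defective
for pct in range(0, 11):               # 0% to 10% defectives
    p = pct / 100.0
    p_accept = poisson.cdf(c, n * p)   # P(X <= 3) with mean n*p
    print(f"{pct:2d}%  P(accept) = {p_accept:.3f}")

# Producer's risk at 2% defectives: chance a good batch is rejected
print("producer's risk:", 1 - poisson.cdf(c, n * 0.02))
# Consumer's risk at 8% defectives: chance a poor batch is accepted
print("consumer's risk:", poisson.cdf(c, n * 0.08))
```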
11. A sampling inspection plan uses a single sample of size 200 and an acceptance number of 5. It is used for lots which are large in relation to the sample size. Using the Poisson distribution, determine the approximate probabilities of acceptance for lots which are 1, 2, 3 and 5 per cent defective.
12. An acceptance sampling plan uses a single sample of 150 items and an acceptance number of 3. Using the Poisson distribution to determine approximate probabilities of acceptance, construct an operating-characteristic curve.
13. A single-sampling acceptance inspection plan is required for a purchased product. The probability of acceptance must be 0.95 or more if the per cent defective is 0.5 per cent or less. Find three different combinations of sample size and acceptance number to meet this condition. The sample size must not be greater than 400 items. The Poisson distribution may be used as an approximation.
14. The following is a double-sampling procedure: (a) draw a first sample of 200 items, (b) if 1 or fewer defectives are found, accept the lot at once, (c) if 4 or more are found, reject the lot at once, (d) otherwise draw a second sample of 200 items, (e) if the total number of defectives found in the combined samples is 4 or less, accept the lot, and (f) if the total number is 5 or more, reject it.

(a) What will be the probability of acceptance for a lot that is 1 per cent defective?

(b) What will be the average or expected amount of inspection if this plan is applied to a number of 1 per cent defective lots?
15. A single-sample acceptance plan is required to provide the following operating characteristics: (a) if the lot per cent defective is 0.5 per cent or less, the probability of acceptance should be 0.95 or more, and (b) if the lot per cent defective is 3 per cent or greater, the probability of acceptance should be 0.10 or less. Using the Poisson distribution as an approximation, determine suitable values for the sample size and acceptance number.
16. Consider the double sampling plan N1 = 50, N2 = 100, C1 = 2, C2 = 5.

(a) Determine the probability of acceptance for a lot 6 per cent defective on the first and second samples, and hence find the total probability of acceptance for this lot. Also find the probability of rejection on the first and second samples.

(b) Determine the probability of rejection on the first and second samples for lots 3 per cent and 8 per cent defective, then comment on the principle of double sampling.
17. The required quality of a mass-produced assembly is obtained by using the following limits because the permitted extremes of fit are just acceptable: bore 25.020/25.000 mm diameter; shaft 24.992/24.980 mm diameter.

During manufacture it is found that excessive scrap is produced because of the very close limits required, and in order to widen these it is decided to accept a chance of 1 assembly per 1000 falling below the desired standard.

Given that the range for 99.8 per cent good work is ±3.09σ and for 97 per cent good work is ±2.2σ, estimate suitable new limits on a statistical basis, and illustrate both sets of conditions in a conventional diagram.

Why is it desirable to use quality control methods when the new limits are introduced?
URL: https://www.sciencedirect.com/science/article/pii/B9780340631959500481
NUMERICAL CHARACTERISTICS OF RANDOM VARIABLES
V.S. PUGACHEV , in Probability Theory and Mathematical Statistics for Engineers, 1984
3.6.2 Moments
It follows immediately from the symmetry of the normal distribution curve with respect to the point x = a that the expectation of a random variable X distributed according to the normal law is equal to the parameter a in the expression of the normal density: m_x = a. A formal calculation of m_x by formula (3.6) leads to the same result:

$$m_x = \sqrt{\frac{c}{2\pi}} \int_{-\infty}^{\infty} x\, e^{-c(x-a)^2/2}\, dx = a\sqrt{\frac{c}{2\pi}} \int_{-\infty}^{\infty} e^{-ct^2/2}\, dt + \sqrt{\frac{c}{2\pi}} \int_{-\infty}^{\infty} t\, e^{-ct^2/2}\, dt = a,$$

by virtue of formula (3.86) and the equality to zero of the last integral, as an integral of an odd function with limits symmetrical with respect to the origin.
Thus the parameter a in the expression (3.85) of the one-dimensional normal density represents the expectation of the random variable.
The central moments of a normally distributed random variable X are determined according to (3.78) and (3.3) by the formula

$$\mu_k = \sqrt{\frac{c}{2\pi}} \int_{-\infty}^{\infty} (x-a)^k e^{-c(x-a)^2/2}\, dx = \sqrt{\frac{c}{2\pi}} \int_{-\infty}^{\infty} t^k e^{-ct^2/2}\, dt.$$

It is seen from here that all odd central moments of a normally distributed random variable are equal to zero. For central moments of even orders we obtain

$$\mu_{2p} = \sqrt{\frac{c}{2\pi}} \int_{-\infty}^{\infty} t^{2p} e^{-ct^2/2}\, dt. \tag{3.87}$$

Integrating by parts we have

$$\mu_{2p} = \sqrt{\frac{c}{2\pi}}\, \frac{2p-1}{c} \int_{-\infty}^{\infty} t^{2p-2} e^{-ct^2/2}\, dt.$$

Thus we obtain the recursive formula

$$\mu_{2p} = \frac{2p-1}{c}\, \mu_{2p-2}. \tag{3.88}$$
Putting here p = 1 and remembering that all zero-order moments are equal to 1, we find the variance of a normally distributed random variable X:

$$D_x = \mu_2 = \frac{1}{c}.$$

Thus the parameter c in the expression (3.85) of a normal density is inverse to the variance of the random variable, c = D_x^{-1} = μ₂^{-1}.
In order to obtain a general formula for even central moments we rewrite (3.88) in the form

$$\frac{\mu_{2p}}{\mu_{2p-2}} = \frac{2p-1}{c}.$$

Putting here successively p = 2, 3, …, k and taking the products of the left-hand and the right-hand sides of the obtained equalities, we get after cancellations

$$\mu_{2k} = \frac{1\cdot 3\cdot 5 \cdots (2k-1)}{c^{k}} = 1\cdot 3\cdot 5 \cdots (2k-1)\, D_x^{\,k}. \tag{3.89}$$

This formula expresses all even central moments of a normally distributed random variable X in terms of its variance D_x = μ₂. In particular, at k = 2 formula (3.89) determines the fourth-order central moment μ₄ = 3μ₂². Substituting this expression into (3.83) we see that the excess of a normal distribution is equal to zero.
The normal distribution occurs widely in nature. In the majority of practical problems the distribution of a random variable may be considered normal. Therefore the normal distribution is usually taken as the standard for the comparison of distributions. Asymmetry and excess are introduced to characterize the deviation of a distribution from a normal one. Therefore it is expedient to define them in such a way that they equal zero for a normal distribution. This is the reason why the term −3 is introduced in the definition of the excess (3.83).
Taking into account that a = m_x and c = D_x^{-1} = σ_x^{-2}, the expression (3.85) for the one-dimensional normal density is often written in the form

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-m)^2/2\sigma^2}, \tag{3.90}$$

omitting for brevity the subscripts of the expectation m = m_x, the variance D = D_x, and the mean square deviation σ = σ_x.
Formula (3.90) shows that a normal distribution is completely determined by the first- and the second-order moments. Knowing the expectation and the variance of a normally distributed random variable one may find its density.
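As a quick numerical check of formula (3.89), the even central moments of a simulated normal sample can be compared with the double-factorial expression. The following Python sketch is illustrative and not part of Pugachev's text; the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
m, sigma = 2.0, 1.5                    # expectation and mean square deviation
x = rng.normal(m, sigma, size=2_000_000)

D = sigma**2                           # variance D_x = mu_2
for k in (1, 2, 3):                    # even moments mu_2, mu_4, mu_6
    mu_2k = np.mean((x - m)**(2 * k))        # empirical central moment
    dfact = np.prod(np.arange(1, 2 * k, 2))  # 1*3*5*...*(2k-1)
    print(f"mu_{2*k}: empirical {mu_2k:.3f} vs (2k-1)!! * D^k = {dfact * D**k:.3f}")
```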
URL: https://www.sciencedirect.com/science/article/pii/B9780080291482500070
Lateritic and alluvial gold sampling
Eoin H. Macdonald , in Handbook of Gold Exploration and Evaluation, 2007
Chi-square test
The chi-square table is used to test for goodness of fit; the sample distribution may be compared with the normal distribution to determine whether the sample data represent a normal population. The table is also used to test the validity of hypotheses and is based upon the differences between observed frequencies (f₀) and expected (theoretical) frequencies (f_t) as follows:

$$\chi^2 = \sum \frac{(f_0 - f_t)^2}{f_t} \tag{6.11}$$
Use of the chi-square table indicates the range of probability. A small probability value indicates that the differences are unlikely to be accidental or to have evolved through sampling variation. A large probability value indicates that the differences could have arisen by chance or through sampling variation.
The 'standard deviation' (S) is the positive square root of the variance and is the statistic most commonly referred to in alluvial sampling practice:

$$S = \sqrt{\frac{\sum d^2}{N-1}} \tag{6.12}$$

where Σd² denotes the sum of the individual squared deviations from the mean and N is the number of samples. For a frequency distribution in which the grade intervals are of equal size, the deviations may be taken in terms of grade intervals from a selected mid-point of one of the grade intervals. Every value in the distribution affects the standard deviation, which, by its nature, responds both to the varied distribution of gold in the placer and to the errors made in obtaining and analysing the samples.
The value S² is thus not without bias; hence it is often desirable to use the variance. The standard error reduces progressively with increasing numbers of sample analyses, the amount of reduction becoming smaller with each new set of data until it becomes insignificant. At this point no additional amount of drilling will reduce the standard deviation for the particular drilling and sampling techniques used, but it will reduce the standard error of the mean:

$$s_{\bar{x}} = \frac{s}{\sqrt{N}} \tag{6.13}$$

where s_x̄ is the standard error of the mean, s is the standard deviation and N is the total number of analyses in the distribution.
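A hedged Python sketch of the chi-square goodness-of-fit test described above (not from the original text); the bin edges and sample grades are invented for illustration, and the test uses equation 6.11 via scipy.

```python
import numpy as np
from scipy.stats import norm, chisquare

# Hypothetical assay grades; in practice these come from drill samples
rng = np.random.default_rng(1)
grades = rng.normal(0.35, 0.08, size=200)

# Observed frequencies f0 in equal grade intervals
edges = np.linspace(0.1, 0.6, 11)
f0, _ = np.histogram(grades, bins=edges)

# Expected frequencies ft under a normal law fitted to the sample
mu, s = grades.mean(), grades.std(ddof=1)
cdf = norm.cdf(edges, mu, s)
ft = len(grades) * np.diff(cdf)
ft *= f0.sum() / ft.sum()              # rescale so totals match exactly

# chi^2 = sum (f0 - ft)^2 / ft   (equation 6.11)
stat, p = chisquare(f0, ft, ddof=2)    # 2 fitted parameters reduce the dof
print(f"chi-square = {stat:.2f}, probability = {p:.3f}")
```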
URL: https://www.sciencedirect.com/science/article/pii/B9781845691752500064
Optimization techniques in diesel engine system design
Qianfan Xin , in Diesel Engine System Design, 2013
3.4.5 Probabilistic simulation in diesel engine system design
A Monte Carlo simulation is conducted in Tables 3.6 and 3.7 and Figs 3.24–3.26 to investigate the impact of variability on the probability distributions of different engine performance parameters for a heavy-duty diesel engine at the rated power condition. The variability studied consists mainly of the tolerances in engine design and control parameters. There are six cases in the simulation, corresponding to five different ambient conditions (Cases 1–5) and one sensitivity case to analyze the effect of exhaust restriction variation (Case 4S). Case 1 is at sea-level altitude (0 ft) and 77 °F (25 °C) normal ambient in standard laboratory conditions. Case 2 is at sea-level altitude and 100 °F (38 °C) hot ambient for in-vehicle conditions, with the air temperature at the compressor inlet increased by an amount of rise-over-ambient (ROA). Case 3 is at sea-level altitude, 122 °F (50 °C) hot ambient and in-vehicle. Case 4 is at 5500 feet (1676 meters) high altitude, 100 °F (38 °C) hot ambient and in-vehicle. Case 5 is at 10,000 feet (3048 meters) high altitude, 85 °F (29 °C) ambient and in-vehicle. Case 4S is the same as Case 4 except for using a 10% higher exhaust restriction flow coefficient, which simulates a less restrictive aftertreatment system such as a clean DPF after soot regeneration. The standard deviation of the exhaust restriction flow coefficient of Case 4S is the same as that in Case 4. Table 3.6 shows the probability input data used in Case 4. There are in total 17 random input factors, all assumed to follow normal distributions. Basically, the same coefficients of variation (i.e., the ratio of standard deviation to mean of the samples) are used for the other five cases. When the turbine wastegate is fully closed (as in Case 5), the standard deviation of the wastegate opening is assumed to be zero. The engine performance results are obtained with GT-POWER simulation for each sample of the Monte Carlo simulation.
Input factor code | Parameter name | Unit | Type of factor | Baseline mean | Baseline standard deviation | Coefficient of variation | Statistical distribution assumed in the model |
---|---|---|---|---|---|---|---|
X1 | Engine compression ratio | - | Random | 16 | 0.2 | 1.25% | Normal distribution |
X2 | HP turbine wastegate opening | mm | Random | 4.68462 | 0.140539 | 3.00% | Normal distribution |
X3 | EGR valve opening (flow coefficient) | - | Random | 0.128757 | 0.001289 | 1.00% | Normal distribution |
X4 | Fuel mass flow rate | - | Random | Baseline | - | 0.50% | Normal distribution |
X5 | Exhaust restriction flow coefficient | - | Random | 0.39 | 0.02 | 5.13% | Normal distribution |
X6 | HP compressor efficiency multiplier | - | Random | 1 | 0.013 | 1.30% | Normal distribution |
X7 | LP compressor efficiency multiplier | - | Random | 1 | 0.013 | 1.30% | Normal distribution |
X8 | HP turbine efficiency multiplier | - | Random | 0.95 | 0.013 | 1.37% | Normal distribution |
X9 | LP turbine efficiency multiplier | - | Random | 0.95 | 0.013 | 1.37% | Normal distribution |
X10 | Normalized HP turbine area (mass multiplier) | - | Random | 1.1 | 0.01 | 0.91% | Normal distribution |
X11 | Normalized LP turbine area (mass multiplier) | - | Random | 1 | 0.01 | 1.00% | Normal distribution |
X12 | Start-of-combustion timing | degree | Random | −10 | 0.1 | 1.00% | Normal distribution |
X13 | Inter-stage cooler coolant inlet temperature | °F | Random | 147.9 | 2 | 1.35% | Normal distribution |
X14 | EGR cooler coolant inlet temperature | °F | Random | 206.5 | 2 | 0.97% | Normal distribution |
X15 | Engine coolant inlet temperature | °F | Random | 216.8 | 2 | 0.92% | Normal distribution |
X16 | Charge air cooler cooling air inlet temperature | °F | Random | 113 | 2 | 1.77% | Normal distribution |
X17 | LP-stage compressor inlet air temperature (TAMB + ΔTROA) | °F | Random | 115 | 3 | 2.61% | Normal distribution |
Notes:
(1) The coefficient of variation is calculated as the ratio of standard deviation to mean.
(2) X16 and X17 are only applicable for in-vehicle conditions, not the standard lab engine condition at sea level (0 ft altitude), 77 °F ambient.
(3) In the sensitivity analysis on the effect of exhaust restriction variation, the mean value of the exhaust restriction flow coefficient is increased by 10% from the baseline mean of 0.39 to 0.429.
Response parameter | Case 1 | Case 2 | Case 3 | Case 4 | Case 4S | Case 5 | Average of five cases |
---|---|---|---|---|---|---|---|
Engine brake power | 0.58% | 0.58% | 0.59% | 0.57% | 0.57% | 0.58% | 0.58% |
BMEP | 0.58% | 0.58% | 0.59% | 0.57% | 0.57% | 0.58% | 0.58% |
Gross pumping loss PMEP | 1.59% | 1.48% | 1.48% | 1.40% | 1.40% | 1.30% | 1.45% |
360° gross IMEP | 0.57% | 0.55% | 0.55% | 0.54% | 0.54% | 0.51% | 0.55% |
Engine delta P | 1.95% | 1.91% | 1.82% | 1.78% | 1.78% | 1.63% | 1.82% |
BSFC | 0.28% | 0.27% | 0.28% | 0.30% | 0.30% | 0.31% | 0.29% |
Peak cylinder gas pressure (maximum of all cylinders) | 2.60% | 2.29% | 2.38% | 2.28% | 2.28% | 1.94% | 2.30% |
Peak cylinder gas temperature | 0.96% | 0.81% | 0.76% | 0.78% | 0.78% | 0.64% | 0.79% |
EGR rate | 1.23% | 1.16% | 1.08% | 1.10% | 1.10% | 1.02% | 1.12% |
Intake manifold gas temperature | 0.68% | 0.80% | 0.73% | 0.79% | 0.79% | 0.83% | 0.77% |
A/F ratio | 2.11% | 1.78% | 1.65% | 1.64% | 1.64% | 1.25% | 1.69% |
Intake manifold oxygen mass fraction | 0.84% | 0.65% | 0.55% | 0.63% | 0.63% | 0.50% | 0.63% |
Exhaust manifold gas temperature | 1.27% | 1.03% | 0.97% | 0.99% | 0.99% | 0.79% | 1.01% |
HP compressor outlet temperature | 1.30% | 1.18% | 1.10% | 1.12% | 1.12% | 1.21% | 1.18% |
Intake manifold boost pressure | 1.80% | 1.51% | 1.48% | 1.36% | 1.36% | 1.05% | 1.44% |
Exhaust manifold pressure | 1.62% | 1.36% | 1.34% | 1.23% | 1.23% | 1.04% | 1.32% |
Intake manifold mixture volumetric efficiency | 0.07% | 0.09% | 0.10% | 0.10% | 0.10% | 0.10% | 0.09% |
Total exhaust restriction (pressure drop) | 10.95% | 11.26% | 11.25% | 11.41% | 11.41% | 12.08% | 11.39% |
Engine coolant heat rejection | 0.67% | 0.71% | 0.74% | 0.72% | 0.72% | 0.75% | 0.71% |
Total engine coolant plus CAC heat rejection | 0.81% | 0.82% | 0.86% | 0.80% | 0.80% | 0.82% | 0.82% |
EGR cooler heat rejection | 1.22% | 1.17% | 1.14% | 1.14% | 1.14% | 1.22% | 1.18% |
Charge air cooler heat rejection | 2.99% | 2.75% | 2.53% | 2.31% | 2.31% | 2.12% | 2.54% |
HP-stage turbocharger actual speed | 1.47% | 1.05% | 1.08% | 1.16% | 1.16% | 1.23% | 1.20% |
LP-stage turbocharger actual speed | 1.38% | 1.26% | 1.25% | 1.39% | 1.39% | 1.43% | 1.34% |
Notes: The coefficient of variation is computed as the ratio of standard deviation to mean of the samples. The average of five cases includes Cases 1, 2, 3, 4, and 5.
Figure 3.24 shows the probability distributions of some input factors. The oscillating data of the distribution in the figures are the raw data of the Monte Carlo simulation with 1000 random samples. The smooth curves of the distributions are the fitted results by using the normal distribution. The probability distribution of the raw data is obtained by using 100 bins that span over the entire range of the sample values for a given parameter. Fewer bins make the distribution appear less oscillating but may exhibit 'step changes' in the shape of the probability distribution curve because more samples will fall into each bin. In contrast, more bins make the distribution appear more oscillating and may even exhibit zero probability at certain parameter values because some bins may not have samples at all. The degree of the data oscillation is related to the number of bins used for data display, and does not indicate simulation accuracy. It is the number of samples in the Monte Carlo simulation that affects accuracy.
Figure 3.25 shows an example of the response parameters, displayed with both raw data of the Monte Carlo simulation and the fitted normal distribution curves. Figure 3.26 shows the probability distribution curves of all important response parameters in engine performance. Table 3.7 shows the calculated coefficients of variation for the responses of all six cases. A summary of this investigation is given as follows.
1. The variation range of each response parameter can be clearly observed from this study (Table 3.7 and Fig. 3.26). It should be noted that the variation range of each response is governed by the assumptions made about the variation ranges of the input factors shown in Table 3.6.

2. The shapes of the probability distributions of a given response parameter at different ambient conditions are different. This indicates the effect of the shapes of the probability distributions of the input factors (especially the turbine wastegate opening and the EGR valve opening), and the effect of complex nonlinear engine behavior at different ambient conditions.

3. Different engine performance parameters exhibit their extreme values at different ambient conditions. For example, peak cylinder pressure has its worst (highest) value in Case 1 (sea level, 77 °F), while exhaust manifold temperature reaches its worst (highest) value in Case 5 (10,000 feet, 85 °F) even with 4% fueling derating. Different types of engine design constraints or limits are marked on Fig. 3.26 as examples. The probability distributions of the responses can be checked against these design limits to assess the probability of failure and reliability issues.

4. Case 4S illustrates a typical sensitivity analysis on the effect of design or calibration changes. Case 4S has a much lower exhaust restriction pressure drop than Case 4, which gives a higher peak cylinder pressure in Case 4S because of its higher engine air flow rate. The sensitivity of the probability distribution shape of the engine response to the probability distributions of the input factors can be analyzed by this method.

5. This Monte Carlo simulation combines multiple engine operating conditions (i.e., different ambient conditions) on one probability distribution chart so they can be compared conveniently. Similar plots can be constructed by combining different engine speeds or loads on one probability chart.

6. Such a probabilistic analysis in engine system design provides much more information than the traditional deterministic approach for evaluating variability, reliability, and safety margins in design. A minimal sketch of such a sampling loop follows this list.
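To make the procedure concrete, here is a minimal Python sketch of the Monte Carlo loop described above. The engine model is a stand-in (GT-POWER is the actual simulator used in the chapter), and the input list is truncated to three of the 17 factors, using the means and coefficients of variation from Table 3.6; the response function and its coefficients are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1000  # number of Monte Carlo samples, as in the chapter

# Three of the 17 input factors (mean, coefficient of variation) from Table 3.6
factors = {
    "compression_ratio":  (16.0,    0.0125),
    "wastegate_opening":  (4.68462, 0.0300),
    "exhaust_flow_coeff": (0.39,    0.0513),
}

# Draw normally distributed samples: sigma = mean * CoV
samples = {name: rng.normal(mu, mu * cov, N)
           for name, (mu, cov) in factors.items()}

def engine_response(cr, wg, cd):
    """Stand-in for the GT-POWER engine model (illustrative only)."""
    return 180.0 + 2.0 * cr - 5.0 * wg + 60.0 * cd  # e.g., peak pressure, bar

resp = engine_response(samples["compression_ratio"],
                       samples["wastegate_opening"],
                       samples["exhaust_flow_coeff"])

# Coefficient of variation of the response, as reported in Table 3.7
print(f"response CoV = {resp.std(ddof=1) / resp.mean():.2%}")

# Probability distribution of the raw data using 100 bins (as in Fig. 3.24)
hist, edges = np.histogram(resp, bins=100, density=True)
```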
URL: https://www.sciencedirect.com/science/article/pii/B9781845697150500030
A methodology for evaluation of charge air coolers for low pressure EGR systems with respect to corrosion
B. Grünenwald , ... C. Saumweber , in Vehicle Thermal Management Systems Conference and Exhibition (VTMS10), 2011
2.3 Statistical analysis
The analysis procedure for both test bench prototypes and field retrievals plays an important role in establishing a reliable lifetime prediction. Photos and micrographs of corroded surfaces or cross-sectioned cooler positions give an overview of the corrosion situation and mechanisms. In order to quantify corrosion in terms of wall thickness reduction, an automatic measurement procedure is necessary: on the one hand it acquires the number of values essential for statistical analysis, and on the other hand it allows a quantitative differentiation between uniform (surface) and penetrating (pitting and intercrystalline) corrosion components.
Due to tolerances in the material and tube manufacturing processes, the initial wall thickness of the components follows a normal distribution. After corrosion testing this normal distribution curve is shifted towards lower wall thickness values, with an additional spreading effect. According to Fig. 5, this curve transformation can be described with two mathematical steps:
- the difference between the average thickness values of unexposed and corroded tubes, which represents the surface (uniform) corrosion component Ru;
- the difference between the defined minimum thickness values of unexposed and corroded tubes, leading to a total corrosion component R which, reduced by Ru, gives the penetrating corrosion (pitting and intercrystalline) component Rp.
As the basis of this statistical analysis, the BEHR internal standard requires at least 10,000 measured values per tube, distributed over a defined number of cross-sections along the tube length.
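A minimal Python sketch of the two steps above, assuming arrays of wall-thickness measurements for unexposed and corroded tubes. The synthetic data and the "defined minimum" convention (here the 0.1st percentile) are illustrative assumptions, not BEHR's actual definition.

```python
import numpy as np

def corrosion_components(unexposed, corroded, min_quantile=0.001):
    """Split wall-thickness loss into uniform (Ru) and penetrating (Rp) parts."""
    # Step 1: shift of the distribution average -> uniform corrosion Ru
    Ru = np.mean(unexposed) - np.mean(corroded)
    # Step 2: shift of the defined minimum thickness -> total corrosion R
    R = (np.quantile(unexposed, min_quantile)
         - np.quantile(corroded, min_quantile))
    Rp = R - Ru          # penetrating (pitting/intercrystalline) component
    return Ru, Rp

# Example with synthetic normal data (>= 10,000 values per tube)
rng = np.random.default_rng(7)
unexposed = rng.normal(0.40, 0.010, 12_000)   # mm
corroded  = rng.normal(0.36, 0.018, 12_000)   # shifted and spread out
Ru, Rp = corrosion_components(unexposed, corroded)
print(f"Ru = {Ru:.3f} mm, Rp = {Rp:.3f} mm")
```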
URL: https://www.sciencedirect.com/science/article/pii/B9780857091727500063
Six Sigma Improvements in Business and Manufacturing
M. Joseph Gordon Jr., in Six Sigma Quality for Business and Manufacture, 2002
METRIC EXAMPLE
As the process nears Six Sigma control, the process may drift ±1.5 sigma from the targeted process mean value as calculated for the control limits. When this occurs, the tails of the bell-shaped normal distribution will be at the extreme end of one side of the ±3-sigma normal process limits. This means that for Six Sigma control, only about 1.0 part per billion falls outside the Six Sigma limits when the process is centered on the mean. For a drift of 1.5 sigma from the mean, 3.4 parts per million will be outside the Six Sigma limit.
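The 3.4 parts-per-million figure can be reproduced with the standard normal tail probability: with a 1.5σ mean shift, the nearer specification limit sits 6 − 1.5 = 4.5σ away (about 1 ppb per tail, roughly 2 ppb total, when centered). A small Python check, illustrative and not from the text:

```python
from scipy.stats import norm

# Centered six-sigma process: area beyond +/-6 sigma
centered = norm.sf(6) + norm.cdf(-6)
print(f"centered: {centered * 1e9:.1f} parts per billion")   # ~2 ppb total

# Process mean shifted 1.5 sigma toward one limit: dominant tail at 4.5 sigma
shifted = norm.sf(6 - 1.5) + norm.cdf(-6 - 1.5)
print(f"shifted:  {shifted * 1e6:.1f} parts per million")    # ~3.4 ppm
```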
If the process drifts 1.5 sigma from the mean, the control charts monitoring the process will note the shift. It is then up to the Six Sigma team to determine the cause of, or reasons for, the shift. The same tools of analysis (DOE, FMEA, fishbone diagrams, Cpk software) and a review of documented raw material tests, machine maintenance and wear factors, and other recorded information are used to bring the process into, or back toward, Six Sigma control.
Problem-solving skills are now employed to review the data, collect new data, and even perform high/low value analysis or use DOE if the process still cannot reach or maintain continuous Six Sigma process control. In all situations, try the most logical problem-solving approach and analysis first. This may be a review of the FMEA, the process fishbone diagram, or the problem-solution logbook for the machine and process. Then, if the problem still seems unsolvable, a DOE should be considered.
The solution may be hard to find, since the drift may be caused by a variable only slightly out of control. Yet it may be enough to affect the process, and it may not occur continuously enough for a trend to develop and identify it as the primary root cause of the problem.
Ensure all tests, process, maintenance, and other data collected during the analysis are in your system's computer database. This will save the team precious time when it analyzes the data from all the sources in the process as a group.
URL: https://www.sciencedirect.com/science/article/pii/B9780444510471500082
The Mathematics of Failure and Reliability
Milton Ohring , Lucian Kasprzak , in Reliability and Failure of Electronic Materials and Devices (Second Edition), 2015
Exercises
4.1 Calculate the mean and standard deviation for the fuse data listed in Table 4.1. What percentage of the measurements fall within the interval between μ − σ and μ + σ? How does this percentage compare with the predicted area under a normal distribution curve?
4.2 During the cyclic stress or fatigue testing of a large number of solder joints, the following failures were recorded in the indicated intervals.

Number of cycles | Percentage of joints failed during interval |
---|---|
0–50 | 1.21 |
50–100 | 0.12 |
100–150 | 0.13 |
150–200 | 0.62 |
200–250 | 0.36 |
250–300 | 0.95 |
300–350 | 4.51 |
350–400 | 6.73 |
400–450 | 3.24 |
450–500 | 13.2 |
500–600 | 4.65 |
600–700 | 8.58 |
700–800 | 2.24 |

Calculate the failure rates per cycle and plot the results as a function of the number of cycles in order to obtain a "bathtub" curve.
4.3 A failure mechanism that follows the exponential distribution function will not benefit from burn-in. Why?
4.4 "Grandfather's clock … stood ninety years on the floor … but it stopped short, never to go again when the old man died (at exactly 90)." Even more reliable was "The Deacon's Masterpiece," the wonderful one-horse shay that lasted "one hundred years and a day." Each of these fabled marvels of American craftsmanship had a reliability of unity until failure, at which time it became zero. Assuming Weibull statistics can be applied in these cases, calculate the characteristic lifetimes (α) and shape parameters (β) in each case.
4.5 The extreme value distribution (due to Gumbel) has a probability density function given by f(t) = (1/b) exp[(t − τ)/b] exp(−exp[(t − τ)/b]), where b and τ are constants. What are the mathematical forms of F(t) and λ(t)?
4.6 Demonstrate the explicit relationship connecting F(t) and λ(t), namely,

$$F(t) = 1 - \exp\left[-\int_0^t \lambda(t')\, dt'\right].$$
4.7 An LED has an MTTF of 250,000 h.

(a) Based on the exponential distribution, predict the probability of failure within a year.

(b) At what time would half of the devices fail?

(c) Suppose failures are modeled with a lognormal distribution function with a standard deviation σ = 1.5. What is the probability of LED failure in one year?
4.8 Fifteen 75-W light bulbs were tested for 1000 h, and the failure times in hours were 890, 808, 501, 760, 490, 658, 832, 743, 993, 812, 378, 576, 899, 910, and 959.

(a) What is the MTTF?

(b) Assuming constant failure-rate modeling, what is the reliability after 400 and after 800 h?
4.9 The failure fraction of a laser transmitter is 3% over a period of 25 years. Express the hazard rate in FITs.
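For orientation (not part of the exercise set): 1 FIT is one failure per 10⁹ device-hours. A constant-hazard estimate for a problem like 4.9 can be sketched in Python as follows, under the assumption of exponential failure behavior:

```python
import math

hours = 25 * 8760            # 25 years in hours
F = 0.03                     # failure fraction over that period

# Constant hazard rate from the exponential model: F = 1 - exp(-lambda*t)
lam = -math.log(1 - F) / hours                # failures per hour
print(f"hazard rate = {lam * 1e9:.0f} FITs")  # 1 FIT = 1e-9 failures/h
```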
4.10 Certain device failures obey lognormal statistics with σ = 4.5.

(a) What is the maximum value of the normalized failure rate, and at what normalized time does it occur?

(b) How long does it take to reach a maximum failure rate of 10 FITs?
4.11 (a) Provided that lognormal behavior is obeyed, what minimum median lifetime (t50) is required for devices to function 40 years and suffer only 100 FITs if σ = 1.0?

(b) What maximum standard deviation should be targeted if no more than 10 FITs can be tolerated in devices with a median lifetime of 1 × 10⁸ h?
4.12 A reliability engineer using MIL-HDBK-217 calculates a failure rate of 225 FITs for a particular device using exponential distribution function statistics.

(a) What is the CDF for 20 years of operation?

(b) As a result of accelerated testing it was found that wear-out phenomena were governed by Weibull failure functions with parameters α = 5 × 10⁶ h and β = 3. What is the predicted CDF after 20 years?
4.13 A bathtub curve consists of three linear regions as a function of time (t), namely,

1. an infant mortality failure rate that decreases as λ(t) = C1 − C2t;

2. a zero random failure rate;

3. a wear-out region that varies as C3(t − t0).

(a) Sketch the bathtub curve.

(b) Calculate f(t) and F(t) in each of the three regions.
4.14 The Rayleigh distribution is defined by a single parameter k, and its hazard rate increases linearly with time such that λ = kt. From this information calculate f(t) and F(t) for Rayleigh distributions. What is the connection between the Rayleigh and Weibull distribution functions?
4.15 A missile guidance system contains 100 transistors, each of which is 99.9% reliable. The failure of any transistor will cause the guidance system to fail. How reliable is the guidance system? Suppose there are 1000 transistors. What is the system reliability?
4.16 For a particular telephone IC circuit pack, Weibull infant mortality failure rates are characterized by λ(t) = 34,100 t^−0.728 FITs.

(a) If the failure rate at the end of a year is 25 FITs, calculate the percentage of devices failing during the first 6 months.

(b) Suppose the circuit pack contained 250 such components. How many would fail after 6 months?
4.17 A communications system consists of a transmitter, a receiver, and an encoder such that the failure of any of these components will cause failure of the system. If the individual reliabilities are 0.93, 0.99, and 0.95, respectively, what is the system reliability?

(a) If the exponential distribution governs failure in each of the components, what is the system failure rate?
4.18 Consider a control system that contains subsystems having the following reliability characteristics.

Subsystem | Reliability model |
---|---|
Central processing unit | Weibull, β = 1.05, α = 100 h |
I/O card | Exponential, 1/λ0 = 750 h |
Actuator | Lognormal, μ = 6.4, σ = 1.5 |

It is necessary for all subsystems to function for the system to operate. What is the reliability of the system at 300 h? (From Ref. [45].)
4.19 Sketch what happens to the bathtub curve if

(a) component dimensions shrink further;

(b) devices are powered under successively higher current–voltage stressing.
4.20 Consider the bathtub curve for humans. Specify examples of the causes of death that are operative in each of the three time domains.
4.21 A 16-position connector is subjected to five different degradation mechanisms. The reliability for each mechanism is Ri (i = 1−5).

(a) Write an expression for the overall reliability R of the connector.

(b) If the Ri are all equal to Ro, what is the value of R?

(c) What contact reliability is required to produce a connector reliability of 0.9999?
4.22 In a population of photodiodes, 20% display freak (f) failures at an MTTF of 12,000 h, while the main (m) population has an MTTF of 28,000 h.

(a) Sketch the probability density function, f(t), for this population if both subpopulations are normally distributed.

(b) Suppose σ(f) > 2σ(m). What is the ratio of the peak values of f(t), i.e., f_f(t)/f_m(t)?
4.23 GaAs transistors were tested at 180, 196, and 210 °C, and the times to failure in hours were:

180 °C: 250, 420, 1000, 1300, 1300, 1300, 1300, 1500, 1500, 1600

196 °C: 190, 200, 330, 330, 400, 400, 420, 600, 800

210 °C: 105, 110, 120, 130, 210, 230, 290, 300, 320, 400

(a) What are the MTTFs at each temperature if the data follow lognormal statistics?

(b) What is the activation energy for failure?

(c) What is the mean life at 110 °C?
4.24 While testing a 10,000 FIT device, 23 failures were observed within the first hour of burn-in. What was the total number of devices in the burn-in lot?
4.25 The overall reaction rate that leads to failure of a component is the result of two mechanisms having different activation energies, E1 and E2. In failure mode A these mechanisms occur in series (sequentially), while in failure mode B they add in parallel. Sketch the overall reaction rate versus temperature in the form of an Arrhenius plot for both failure modes. What are the reliability implications of the thermal dependencies of these two failure modes with respect to predicting low-temperature failure?
URL: https://www.sciencedirect.com/science/article/pii/B9780120885749000045
Quality
John R. Wagner Jr., ... Harold F. Giles Jr., in Extrusion (Second Edition), 2014
26.2 Process Capability
Process capability is a measure of the inherent process performance. It is defined by sigma (σ), the standard deviation. Different σ levels are used to determine process capability, depending on the customer's needs and specifications. The data included in different standard deviation ranges are as follows:
- ±1σ includes 68.2% of the total area under a normal distribution curve. If the process is run at ±1σ capability, 317,300 parts out of every million fall outside the specification limits.
- ±2σ includes 95.45% of the total area under the normal distribution curve, with 45,500 parts out of a million falling outside the control limits.
- ±3σ includes 99.73% of the total area under the normal distribution curve, or virtually the entire area. At ±3σ, there are still 2700 defective parts out of each million produced.
- ±6σ includes 99.9999998% of the area under the normal distribution curve, and 0.002 parts per million are expected to be defective.
In a 6σ process, statisticians allow for a 1.5σ shift. This adjustment results in 3.4 defects per million parts produced [3].
As an example, assume that a sheet product is being shipped to customer RSQ, who requires the impact strength to be 13 ± 3 ft-lbs at a ±3σ level. To supply samples to RSQ, some sheet is produced and impact properties are measured. Thirty-seven data points are gathered and plotted to give a normal distribution, as shown in Figure 26.3. Based on the data, can your company supply product to RSQ that meets the customer requirements 100% of the time? The average impact value is 13 with a standard deviation of 1.25. At 3σ, the data are anticipated to range over 13 ± 3(1.25), giving a range of 9.25–16.75. Without process improvements to lower the standard deviation, it is impossible to satisfy RSQ's requirement of 13 ± 3 ft-lbs at 3σ 100% of the time. If the order is accepted based on the current operation, product that is outside the specification limits will be produced and sent to the customer.
Process capability measures the process repeatability relative to the customer specifications. Figure 26.4 shows two normal distribution curves defining product property profiles with specification limits. Process A is capable of producing a product that meets the customer's specifications 100% of the time, whereas process B is an incapable process.
Process capability is measured through a capability index, Cpk, defined by Eqns (26.4) and (26.5), where USL is the upper specification limit and LSL is the lower specification limit. A Cpk value <1 indicates that the process is not capable.

$$C_{pk} = \frac{USL - \bar{x}}{3\sigma} \tag{26.4}$$

$$C_{pk} = \frac{\bar{x} - LSL}{3\sigma} \tag{26.5}$$

(The reported Cpk is the smaller of the two values.)
Cpk values of 1.33 and 2 indicate that six parts out of every 100,000 and two parts out of 1,000,000,000, respectively, are defective or outside the allowable specification spread. With a Cpk value of 1.33, 99.994% of parts are within specification.
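A small Python check of the RSQ example using these equations (illustrative; the numbers come from the text above):

```python
mean, sigma = 13.0, 1.25       # impact strength data from the RSQ example
USL, LSL = 16.0, 10.0          # customer specification: 13 +/- 3 ft-lbs

cpk_upper = (USL - mean) / (3 * sigma)   # equation (26.4)
cpk_lower = (mean - LSL) / (3 * sigma)   # equation (26.5)
cpk = min(cpk_upper, cpk_lower)
print(f"Cpk = {cpk:.2f}")      # 0.80 < 1: the process is not capable
```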
The final concept in process capability is the drive for continuous process improvements and zero defects. Zero defects is a quality system goal to remove all defects from the product [4].
Review Questions
1. What are some of the functions of the Quality Assurance Department?

2. What is Cpk? What does it measure? How is it used?

3. Using control charts, define five situations in which a process is out of control and explain how each is recognizable on a control chart.

4. What are some possible procedures for checking incoming raw materials?

5. What are the purposes of using control charts, and how can they improve quality and productivity?
URL: https://www.sciencedirect.com/science/article/pii/B9781437734812000260
Mathematical Concepts of Lean Six Sigma (6σ) Engineering Strategies
Salman Taghizadegan , in Essentials of Lean Six Sigma, 2006
3.2 THE NORMAL DISTRIBUTION
Normal distributions are probability curves that share the same symmetric shape. They are symmetric, with data more concentrated in the center than in the tails. The term bell-shaped curve is often used to describe the normal distribution. The area under the curve is unity. The height of a normal distribution can be expressed mathematically in terms of two parameters: the mean (μ) and the standard deviation (σ). The mean is a measure of center or location of the average, and the standard deviation is a measure of spread. The mean can be any value from minus infinity to plus infinity (between ±∞), and the standard deviation must be positive. Thus, the total probability under f(x) (see Equation 3.9) is equal to one. Suppose that x has a continuous distribution. Then for any given value of x, the density function must meet the following criteria: f(x) ≥ 0, and the total area under the curve must equal 1.
Since the normal distribution curve (symmetric about the mean) approaches the x-axis only at infinity (as shown in Figure 3.2), the area under the curve and above the x-axis is one. This can be calculated by integrating the probability density (Equation 3.10) on a continuous interval from minus infinity to plus infinity:

$$\int_{-\infty}^{\infty} f(x)\, dx = 1 \tag{3.9}$$

where the height of a normal curve (the normal density function) for random variable x is defined as

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2/2\sigma^2} \tag{3.10}$$

where f(x) is the height of a normal distribution curve [f(x) ≥ 0]; μ is the mean; π is the constant 3.14159; e is the base of natural logarithms, equal to 2.718282; σ is the standard deviation of the population; and z = (x − μ)/σ (this will be discussed in the z-distribution, Section 3.3). The normal distribution curve for nσ (n-sigma) is shown in Figure 3.3.
When n = 3, for statistical quality control purposes, USL is equal to the mean (μ) plus three times the standard deviation (μ + 3σ), and LSL is equal to the mean minus three times the standard deviation (μ − 3σ).

For any upper specification limit (USL) and lower specification limit (LSL), the probability between the limits (Equation 3.11) can be mathematically expressed as

$$A(x) = \int_{LSL}^{USL} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2/2\sigma^2}\, dx \tag{3.11}$$

Thus, the probability (by definition, probability is the area A(x) under the normal distribution curve) will be equal to 0.9973 when the process is centered on the target. That is, 99.73% of the data (Table 2.3) will fall within μ ± 3σ. This is also called 3σ capability, as shown in Equation 3.12:

$$A(x) = \int_{\mu-3\sigma}^{\mu+3\sigma} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2/2\sigma^2}\, dx = 0.9973 \tag{3.12}$$
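The 0.9973 figure can be verified with the standard normal CDF; a one-line Python check (illustrative, not part of the chapter):

```python
from scipy.stats import norm

# P(mu - 3*sigma <= x <= mu + 3*sigma) for any normal distribution
p = norm.cdf(3) - norm.cdf(-3)
print(f"{p:.4f}")   # 0.9973, i.e., 3-sigma capability
```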
Now, by introducing the random variable z (the standard z-transform, or standardized normal deviate), the new probability density function f(z) is obtained, as described in the following section.
URL: https://www.sciencedirect.com/science/article/pii/B9780123705020500059
Source: https://www.sciencedirect.com/topics/engineering/normal-distribution-curve