Design for Excellence in Electronics Manufacturing. Cheryl Tulkoff


The defect fraction is 20 / 1,000 = 0.02, or 2.0% defective; 0.020 × 1,000,000 = 20,000 DPPM.

      2.5.2.1 Basic Statistics Assumptions and Caveats

      Statistics are used to describe samples of populations. If the population is small, every member can be tested and statistics are not needed. Consider ahead of time whether the tool or technique being applied to a data set is appropriate. Many statistical tools are valid only if the data is random or independent. In most cases, samples must be chosen randomly – every member must have an equal chance of being selected. When testing to failure, if the testing is ended before all items have failed, the sample is not random. In this case, the data is censored: Type I censoring is time‐based, and Type II censoring is failure‐based. Censored data can be analyzed using special techniques. Testing a small lot of parts is typically not effective for detecting issues occurring at a rate below 5% of the population, and it is simply not practical to detect all quality and reliability issues by testing alone. This is especially true when the primary objective is the lowest possible cost.
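      As a rough sketch of that sample‐size limitation, a simple binomial calculation shows how unlikely a small sample is to catch a low‐rate defect. The 5% defect rate and the lot sizes below are assumed purely for illustration:

```python
# Probability that a small sample detects at least one defective unit,
# assuming defects occur independently at a fixed rate (binomial model).
# The sample sizes and the 5% defect rate are illustrative assumptions.

def detection_probability(defect_rate: float, sample_size: int) -> float:
    """P(at least one defect found) = 1 - P(no defects in the sample)."""
    return 1.0 - (1.0 - defect_rate) ** sample_size

for n in (10, 30, 59):
    p = detection_probability(0.05, n)
    print(f"sample of {n:3d} parts -> {p:.1%} chance of catching a 5% defect rate")
```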

      2.5.2.2 Variation Statistics

      Variation statistics that most are familiar with are those for central tendency, which include the mean, median, and mode. Dispersion/spread statistics include range, variance, standard deviation, skewness, and kurtosis. The cumulative distribution function provides the probability that a measured value falls between negative infinity and a specified value. The reliability function provides the probability that an item will survive for a given interval. The hazard function, also called the instantaneous failure rate, provides the conditional probability of failure in an interval given that no failure has occurred up to a certain value (time); a higher hazard rate means a greater probability of impending failure.
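      These functions are closely related, and the relationships can be sketched numerically. The exponential life model and the failure rate used below are assumptions chosen only to illustrate how the CDF, reliability, and hazard functions connect:

```python
import math

# Relationship between the CDF F(t), reliability R(t) = 1 - F(t), and the
# hazard (instantaneous failure rate) h(t) = f(t) / R(t), illustrated with
# an exponential life distribution. The rate lam is an assumed value.

lam = 1.0 / 1000.0           # assumed constant failure rate: 1 per 1,000 hours

def cdf(t):                  # F(t): probability of failure by time t
    return 1.0 - math.exp(-lam * t)

def reliability(t):          # R(t): probability of surviving past time t
    return math.exp(-lam * t)

def hazard(t):               # h(t) = f(t) / R(t); constant for the exponential
    pdf = lam * math.exp(-lam * t)
    return pdf / reliability(t)

for t in (100, 500, 1000):
    print(f"t={t:5d} h  F={cdf(t):.3f}  R={reliability(t):.3f}  h(t)={hazard(t):.4f}")
```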

      2.5.2.3 Statistical Distributions Used in Reliability

       Discrete Distributions

      The binomial distribution is used when there are only two possible outcomes: success/fail, go/no go, good/bad. Random experiments with success or failure outcomes are often referred to as Bernoulli trials. For the analysis to be valid, the trials must be independent, and the probability of success must remain constant from trial to trial. The distribution is very useful in reliability and quality work and can model the probability of getting X good items in a sample of Y items. The Poisson distribution is used to model events that occur at a constant average rate when only one of the two outcomes can be counted. It models the number of failures in a given time and describes the distribution of isolated events in a large population. As an example, if the average number of defective units per year per 10,000 units is 0.05, you can evaluate the probabilities of zero, one, or two or more faults occurring in a 5‐, 10‐, or 15‐year period.
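      A minimal sketch of that Poisson calculation, using the 0.05 faults‐per‐year rate from the example above, might look like this:

```python
import math

# Poisson probabilities for the example in the text: an average of 0.05
# faults per year, evaluated over 5-, 10-, and 15-year periods.
# P(k faults) = exp(-m) * m**k / k!, where m = rate * time.

rate_per_year = 0.05

for years in (5, 10, 15):
    m = rate_per_year * years          # expected number of faults in the period
    p0 = math.exp(-m)                  # probability of zero faults
    p1 = math.exp(-m) * m              # probability of exactly one fault
    p2_plus = 1.0 - p0 - p1            # probability of two or more faults
    print(f"{years:2d} years: P(0)={p0:.3f}  P(1)={p1:.3f}  P(>=2)={p2_plus:.3f}")
```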

       Continuous Distributions

      Commonly used continuous distributions include normal (Gaussian), log‐normal, exponential, gamma, chi‐squared (χ²), t, F, and extreme value. The normal distribution is most frequently (and often incorrectly) used. It's selected because the math is easiest and it's the default for spreadsheet tools. The normal distribution is typically good for quality control activities like SPC. Reliability applications include calculating the lifetimes of items subject to wearout and the sizes of machined parts. The log‐normal distribution is usually a better fit for reliability data; simply transform the data by taking a logarithm. It's also a good distribution to use for wearout items. The exponential distribution is extremely important in reliability work; it applies well to life‐cycle statistics and assumes a constant hazard or failure rate. The gamma distribution is used in situations where partial failures can exist, such as when multiple subsystems must fail before the full system fails.
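      As a sketch of the log‐transform approach, the failure times below are hypothetical values used only to show how ordinary normal statistics apply once the data has been transformed:

```python
import math
import statistics

# Fitting a log-normal life model by log-transforming the data and then
# applying ordinary normal statistics. The failure times are hypothetical
# values used only to illustrate the transform.

failure_hours = [410, 520, 730, 980, 1200, 1650, 2100, 3050]

log_times = [math.log(t) for t in failure_hours]
mu = statistics.mean(log_times)        # mean of ln(t)
sigma = statistics.stdev(log_times)    # standard deviation of ln(t)

def fraction_failed_by(t_hours: float) -> float:
    """Log-normal CDF via the normal CDF of the log-transformed time."""
    z = (math.log(t_hours) - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(f"median life estimate: {math.exp(mu):.0f} hours")
print(f"fraction failed by 1,000 hours: {fraction_failed_by(1000):.1%}")
```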

      The χ², t, and F sampling distributions are used for statistical testing, fit, and confidence and are used to make decisions, not to model physical characteristics. The Weibull distribution is the most widely used distribution in reliability applications, especially at the end of the infant mortality period. Adjusting the distribution shape parameter makes it fit many different life distributions. Finally, the extreme value distribution is used when the concern is with extreme values that can lead to failure. It is less concerned with the bulk of a population.
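      A short sketch can show how the Weibull shape parameter changes the hazard behavior: a shape below 1 gives a decreasing hazard (infant mortality), a shape of 1 gives a constant hazard (exponential), and a shape above 1 gives an increasing, wearout‐type hazard. The characteristic life and shape values below are assumptions chosen for illustration:

```python
import math

# Weibull hazard and reliability as a function of the shape parameter beta.
# The scale eta = 1,000 hours and the beta values are assumed for illustration.

eta = 1000.0   # characteristic life (hours), assumed

def weibull_hazard(t: float, beta: float) -> float:
    """h(t) = (beta / eta) * (t / eta)**(beta - 1)"""
    return (beta / eta) * (t / eta) ** (beta - 1.0)

def weibull_reliability(t: float, beta: float) -> float:
    """R(t) = exp(-(t / eta)**beta)"""
    return math.exp(-((t / eta) ** beta))

for beta in (0.5, 1.0, 3.0):
    h_early = weibull_hazard(100, beta)
    h_late = weibull_hazard(900, beta)
    print(f"beta={beta}: h(100 h)={h_early:.5f}  h(900 h)={h_late:.5f}  "
          f"R(1000 h)={weibull_reliability(1000, beta):.3f}")
```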

      

      Collecting and analyzing data on product performance, customer satisfaction, warranty, and field repairs is an essential element of a reliability program. Problems do not go away on their own if ignored. So, how can you know what to fix? How do you know if you are working on the most critical problems? How can you avoid the cost of building products with recurring problems that will have to be repaired later? That's where reliability analysis and prediction methods come in. Methods of reliability analysis and prediction include Pareto analysis, MTBF (BOM‐based, parts count), reliability growth modeling (design‐build‐test‐fix), CUSUM (cumulative sum charts) and trend charting, block diagrams, and most recently, automated design analysis (ADA) reliability physics predictions using software. A few of these methods are highlighted next.

       Pareto Chart

      The Pareto chart is a standby classic. It is a histogram that rank orders categories from largest to smallest value and provides rapid visual identification of the issues causing the most problems.
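      A minimal sketch of the ranking behind a Pareto chart, using hypothetical defect categories and counts, might look like this:

```python
# Building the data behind a Pareto chart: rank defect categories from the
# largest count to the smallest and track the cumulative percentage.
# The categories and counts are hypothetical examples.

defect_counts = {
    "solder bridging": 46,
    "missing component": 21,
    "tombstoning": 12,
    "wrong polarity": 8,
    "other": 5,
}

total = sum(defect_counts.values())
cumulative = 0
for category, count in sorted(defect_counts.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{category:18s} {count:3d}  cumulative {cumulative / total:6.1%}")
```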

       MTBF

      MIL‐HDBK‐217, “Military Handbook for Reliability Prediction of Electronic Equipment,” paved the way for MTBF calculations. The handbook was based on statistical and actuarial research work done by the Reliability Analysis Center at the Rome Laboratory at Griffiss AFB, Rome, NY, and became required due to Department of Defense backing. Widely used as a purchase requirement for all military electronic applications, it spawned several civilian versions. It contains “statistical” constant (random) failure rate models for electronic parts like ICs, transistors, diodes, resistors, capacitors, relays, switches, connectors, etc. The failure rate models were based on statistical projections of the “best available” historical field failure rate data for each component type. Many assumptions were used, including the belief that past data can be used to predict future performance. It is also known as the parts counting technique. To perform an MTBF calculation, select the failure rate for each part in the design from the historical data and apply a thermal stress factor intended to account for diffusion, a solid‐state failure mechanism that plagued early electronics (1960s–1980s). Then, calculate R(t):

      (2.1)  R(t) = e^(−λt)  and  Rₜ = R₁ × R₂ × R₃ × … × Rₙ
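      A numerical sketch of the parts‐count approach follows; the part list and failure rates are invented for illustration and are not values taken from MIL‐HDBK‐217:

```python
import math

# Parts-count style calculation: sum the (assumed constant) failure rates of
# the individual parts, take MTBF as the reciprocal, and evaluate the series
# reliability R(t) = exp(-lambda_total * t) = R1 * R2 * ... * Rn.
# The failure rates below (failures per million hours) are invented examples.

part_failure_rates_fpmh = {
    "microcontroller": 0.12,
    "voltage regulator": 0.08,
    "connector": 0.05,
    "capacitors (total)": 0.20,
}

lambda_total = sum(part_failure_rates_fpmh.values()) / 1e6   # failures per hour
mtbf_hours = 1.0 / lambda_total

t = 8760.0                                                   # one year of operation
reliability_one_year = math.exp(-lambda_total * t)

print(f"lambda_total = {lambda_total:.3e} failures/hour")
print(f"MTBF         = {mtbf_hours:,.0f} hours")
print(f"R(1 year)    = {reliability_one_year:.4f}")
```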

      MTBF was intended to enable reliability comparison of different design alternatives, foster the use of the most reliable components, drive reliability design improvement and determine expected failure rates for military logistics planning (a key driver). However, MIL‐HDBK‐217 was eventually proven to be both inaccurate and misleading. It frequently caused design resources to be expended on non‐value‐added efforts. Key disadvantages of the method included:

       Assumption that past performance can be used to predict future performance
