Design for Excellence in Electronics Manufacturing. Cheryl Tulkoff

is more robust than one mil of paper. This measurement is often defaulted to time zero, which can be either immediately after manufacturing or when the product first arrives at the customer. Reliability is the measure of a product's ability to perform a required function under stated conditions for an expected duration.

       Myth 5: Reliability is all predictive statistics. Companies that produce some of the most reliable products in the world spend a relatively insignificant percentage of their product development effort performing predictive statistical assessments. For example, many original equipment manufacturers (OEMs) in telecommunications, military, avionics, and industrial controls require a mean time between failures (MTBF) number from their suppliers. MTBF, sometimes referred to as average lifetime, defines the time by which the probability of failure reaches 63%. The base process of calculating MTBF involves applying a constant failure rate to each part and summing the rates across all parts in the design. While there have been numerous claims over the years of improving this number by applying additional failure rates or modifying factors to account for temperature, humidity, printed circuit boards, solder joints, etc., there are several flaws in this approach. The first is misunderstanding what the number means. The average engineer often expects a product with an MTBF of 10 years to operate reliably for a minimum of 10 years; in practice, such a product will likely fail well before 10 years. Second, the primary approach for increasing MTBF is to reduce parts count. This can be detrimental if the parts removed are critical for functions, such as filtering or timing, that won't affect product performance under test but will influence product reliability in the field.

       Unlike many other elements of the design and development process, reliability requires thinking about failure. For example, successful reliability testing requires failure, unlike most other forms of testing, where the goal is to pass.
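      The 63% figure follows from the constant‐failure‐rate (exponential) model that underlies the parts‐count calculation: the probability that a unit has failed by time t is 1 − e^(−t/MTBF), which at t = MTBF is 1 − e^(−1) ≈ 63%. A minimal sketch of both points follows; the part names and failure rates are hypothetical, chosen only for illustration.

```python
import math

# Hypothetical per-part constant failure rates (failures per hour).
# These values are illustrative placeholders, not handbook data.
part_failure_rates = {
    "microcontroller": 2.0e-7,
    "dc_dc_converter": 5.0e-7,
    "connector": 1.0e-7,
}

# Parts-count method: the system failure rate is the sum of part rates.
system_rate = sum(part_failure_rates.values())
mtbf_hours = 1.0 / system_rate

def prob_failed_by(t_hours: float) -> float:
    """Cumulative failure probability under the exponential model."""
    return 1.0 - math.exp(-system_rate * t_hours)

print(f"System MTBF: {mtbf_hours:,.0f} hours")
print(f"Probability of failure by t = MTBF: {prob_failed_by(mtbf_hours):.1%}")
# Prints ~63.2%: most units fail before the 'average lifetime'.
```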

      

       Best Practices

      There are no universal best practices. Every company must choose the appropriate set of practices and implement a program that optimizes the return on investment in reliability activities. Reliability is all about cost‐benefit trade‐offs. Since reliability activities are not a direct revenue generator, they are strongly cost driven. By increasing the efficiency of reliability activities, companies can achieve lower risk at the same cost, and addressing reliability during the design phase is the most efficient way to improve the cost‐benefit ratio. Industry rules of thumb indicate the following relative costs of catching an issue at each stage (Ireson and Coombs 1989):

       Issue caught during design: 1× cost

       Issue caught during engineering: 10× cost

       Issue caught during production: 100× cost
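      As a worked illustration (the dollar figures are hypothetical): a grounding flaw that takes $1,000 of engineering time to correct during design review would cost on the order of $10,000 to correct after prototypes are built and tested, and on the order of $100,000 once it reaches production, where rework, scrap, field returns, and warranty claims multiply the expense.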

       Additional Economic Drivers

       Use environment and design life

       Manufacturing volume

       Product complexity

       Margin and profit requirements

       Schedule and delivery needs

       Field performance expectations and warranty budget

      2.2.1 Best‐in‐Class Reliability Program Practices

       Establish a reliability goal and use it to determine reliability budgeting.

       Quantify the use environment. Use industry standards and guidelines when aspects of the use environment are common. Use actual measurements when aspects of the use environment are unique or there is a strong relationship with the end customer. Don't mistake test specifications for the actual use environment. Clearly define the median and realistic worst‐case conditions through close cooperation between marketing, sales, and the reliability team.
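      As a brief sketch of quantifying a use environment from measured data: the temperature readings below and the choice of the 95th percentile as the realistic worst case are illustrative assumptions, not a standard.

```python
import statistics

# Hypothetical field temperatures (deg C) logged by instrumented units.
field_temps_c = [21, 24, 26, 23, 35, 41, 28, 30, 55, 33, 27, 38, 45, 29, 31]

median_temp = statistics.median(field_temps_c)

# Treat a high quantile (here the 95th percentile) as the realistic
# worst case rather than the single most extreme reading.
worst_case = statistics.quantiles(field_temps_c, n=100)[94]

print(f"Median use condition: {median_temp:.0f} deg C")
print(f"Realistic worst case (95th percentile): {worst_case:.0f} deg C")
```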

       Perform assessments appropriate for the product and end‐user. These assessments require an understanding of material‐degradation behavior, obtained either by test to failure or from supplier‐provided data. The recommended assessments, with an acceleration‐factor sketch for the ALT item following this list, include:

       – Thermal stress

       – Margin or safety‐factor demonstration (stress analysis that includes step‐stress tests, e.g. HALT, to define design margins)

       – Electrical stress (circuit analysis, component derating, electromagnetic interference [EMI])

       – Mechanical stress (finite element analysis)

       – Applicable product characterization tests (not necessarily verification and validation tests)

       – Life‐prediction validation (accelerated life test [ALT])

       – Mechanical loading (vibration, mechanical shock)

       – Contaminant testing
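      As the promised sketch of the life‐prediction (ALT) item: under the widely used Arrhenius model for temperature‐driven degradation, test time at elevated temperature maps to field time through an acceleration factor. The 40 °C use temperature, 85 °C test temperature, and 0.7 eV activation energy below are illustrative assumptions; real activation energies are failure‐mechanism specific and come from test‐to‐failure or supplier data, as noted above.

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(t_use_c: float, t_test_c: float, ea_ev: float) -> float:
    """Arrhenius acceleration factor between use and test temperatures."""
    t_use_k = t_use_c + 273.15
    t_test_k = t_test_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_test_k))

af = arrhenius_af(t_use_c=40.0, t_test_c=85.0, ea_ev=0.7)
print(f"Acceleration factor: {af:.0f}")
# With these inputs, 1,000 test hours at 85 deg C represent roughly
# af * 1,000 hours (about 26,000 hours) of 40 deg C field operation.
```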

       Perform design review based on failure mode (DRBFM, Toyota methodology). This readily identifies CTQ (critical to quality) parameters and tolerances and allows for the development of comprehensive control plans.

       Perform Design for Manufacturability (DfM) and Design for Reliability (DfR) and involve the actual manufacturers in the DfM process.

       Perform root cause analysis (RCA) on test failures and field returns to initiate a full feedback loop.

      Best‐in‐class companies have a strong understanding of critical components. Component engineering typically starts the process by qualifying suppliers and their parts and allows bill of materials (BOM) development only from an approved vendor list (AVL). Most small to mid‐size (and even large) companies do not have the resources to assess every part and every part supplier; those that are best in class focus resources on the components critical to the design. Component engineering, often in partnership with design engineers, also performs tasks to ensure the success of critical components, including design of experiments, test to failure, modeling, and supplier assessments. Typical critical‐component drivers are listed below, followed by a simple prioritization sketch:

       Complexity of the component

       Number of components within the circuit

       Past experiences with component

       Sensitivity of the circuit to component performance

       Potential wearout during the desired lifetime

       Industry‐wide experiences

       Custom design

       Single supplier source
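      One way to act on these drivers is a simple weighted scoring pass over the BOM that ranks components for deeper assessment. The driver weights, 0–5 ratings, and part names below are hypothetical illustrations, not an industry standard.

```python
# Hypothetical weights reflecting how strongly each driver (from the
# list above) indicates criticality; all values are illustrative.
DRIVER_WEIGHTS = {
    "complexity": 2.0,
    "count_in_circuit": 1.0,
    "past_experience": 1.5,
    "circuit_sensitivity": 2.0,
    "wearout_risk": 2.5,
    "industry_experience": 1.0,
    "custom_design": 1.5,
    "single_source": 1.5,
}

def criticality_score(ratings: dict) -> float:
    """Weighted sum of 0-5 driver ratings; higher means more critical."""
    return sum(DRIVER_WEIGHTS[d] * r for d, r in ratings.items())

bom_ratings = {
    "FPGA U12": {"complexity": 5, "count_in_circuit": 1, "past_experience": 2,
                 "circuit_sensitivity": 4, "wearout_risk": 2,
                 "industry_experience": 3, "custom_design": 0, "single_source": 5},
    "Electrolytic cap C33": {"complexity": 1, "count_in_circuit": 4,
                             "past_experience": 4, "circuit_sensitivity": 3,
                             "wearout_risk": 5, "industry_experience": 4,
                             "custom_design": 0, "single_source": 1},
}

# Rank components so engineering effort goes to the highest scores first.
for part in sorted(bom_ratings, key=lambda p: -criticality_score(bom_ratings[p])):
    print(f"{part}: {criticality_score(bom_ratings[part]):.1f}")
```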

      From the component perspective, reliability assurance requires identifying reliability‐critical components and developing internal knowledge of their margins and wearout behaviors. RCA requires true identification of the drivers of field issues, combined with an aggressive feedback loop to the reliability and engineering teams and to suppliers.

      Best‐in‐class companies also provide strong financial motivation for suppliers to perform well by creating supply‐chain agreements with financial incentives and penalties tied to field reliability. These practices allow companies to implement aggressive development cycles, respond proactively to change, and optimize field performance.

      Establishing a successful, comprehensive reliability program requires planning and commitment. Requisite priorities include:

      1 Focus: Reliability must be the goal of the entire organization and must be addressed early in the product development cycle. Separate reliability from regulatory‐required verification and validation activities and the mindset that accompanies them.

      2 Dedicated
