Design for Excellence in Electronics Manufacturing. Cheryl Tulkoff

against a software development validation plan that complies with the requirements of ANSI/IEEE 829, "Standard for Software Test Documentation." Customers may request the use of an alternative, similar format pending approval by the product team. The plan should be in an electronic format (spreadsheet or database) that allows it to serve as a living document for program tracking and management purposes. The purpose of the software plan is to facilitate an effective test-planning process and to communicate to the entire product, systems engineering, and software team the scope, approach, resources, and schedule of the development validation activities.

      The plan also identifies the input/output (I/O) items and features to be tested, the testing tasks to be performed, and the person responsible for each task. Since it is impossible to test every aspect and I/O configuration of complex software, the plan should also identify any items or features scheduled for limited testing or excluded from testing, along with the related rationale and risk. Finally, the test plan should also identify and lead to the creation of feature-specific test procedure scripts as needed.
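      As an illustration only, one row of such a living plan might be modeled as shown in the sketch below; the field names are assumptions for clarity, not terms defined by ANSI/IEEE 829.

```python
from dataclasses import dataclass
from enum import Enum


class Coverage(Enum):
    FULL = "full"
    LIMITED = "limited"      # limited testing planned; rationale/risk required
    EXCLUDED = "excluded"    # not tested; rationale/risk required


@dataclass
class TestPlanEntry:
    """One row of the living test-plan spreadsheet/database (illustrative)."""
    feature: str              # item or feature to be tested
    io_signals: list[str]     # I/O covered by this entry
    test_task: str            # testing task to be performed
    owner: str                # person responsible for the task
    coverage: Coverage = Coverage.FULL
    rationale_and_risk: str = ""   # filled in when coverage is LIMITED or EXCLUDED
    procedure_script: str = ""     # ID of the feature-specific test procedure script
```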

       Software Incident Reporting and Tracking

      The number of software discrepancy issues for complex programs typically ranges from several hundred to more than a thousand. To monitor and verify issue resolution and maintain traceability of all incidents, the software design organization should develop a closed-loop database for reporting software incidents and tracking corrective actions that complies with ANSI/IEEE 829 or a similar format.

      The database/spreadsheet should be able to generate status reports by sorting and filtering on any category field and content. It should also generate monthly histogram (bar) charts and cumulative (mountain) charts that track issue identification and corrective-action performance, showing the number of (a counting sketch follows the list):

       New incidents opened

       Total open incidents

       Newly closed incidents

       Total closed incidents
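
      A minimal sketch of how these four monthly counts could be derived from an incident list is shown below; the record fields and month labels are illustrative assumptions, not a prescribed schema.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import date


@dataclass
class Incident:
    opened: date
    closed: date | None = None   # None while the incident is still open


def month_key(d: date) -> str:
    return f"{d.year}-{d.month:02d}"


def monthly_metrics(incidents: list[Incident], months: list[str]) -> dict[str, dict[str, int]]:
    """Per-month counts feeding the histogram and cumulative (mountain) charts."""
    opened = Counter(month_key(i.opened) for i in incidents)
    closed = Counter(month_key(i.closed) for i in incidents if i.closed)
    metrics, total_opened, total_closed = {}, 0, 0
    for m in months:                     # months in chronological order, e.g. "2023-04"
        total_opened += opened[m]
        total_closed += closed[m]
        metrics[m] = {
            "new_incidents_opened": opened[m],
            "newly_closed_incidents": closed[m],
            "total_closed_incidents": total_closed,
            "total_open_incidents": total_opened - total_closed,
        }
    return metrics
```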

       Success Testing Procedures

      Success testing (a.k.a. test-to-pass procedures) is the basic preliminary testing designed to verify conformance to requirements. By themselves, these compliance-verification tests do not provide total functionality/software validation because they do not identify the true limits and capabilities of the product or identify many types of faults. (Validation procedures for these types of issues are covered in the section "Fault Tolerance and Robustness Testing.") However, just as one would not test a new vehicle at top speed until the basic steering, braking, and acceleration functions have been validated, these preliminary tests are vital first steps in the total validation process. The following procedures are recommended for incorporation in the success-testing portion of the software validation plan.

       Functionality Requirement Verification via Dynamic Black Box Testing

      Dynamic black box testing is an empirical, observational evaluation of the inputs and outputs of a device that contains the embedded software or runs a transferable software program. The testing is performed with the device functioning in real time (i.e., dynamic mode) while connected to either a system simulator or a test bench. The evaluation is based solely on observation of the outputs that result from input stimulation and combinations of I/O states; it is performed without knowledge of how the device works or how it processes the I/O internally. It is a basic confirmation of individual CTS functional requirements.
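      A dynamic black box check might look like the following sketch. The device interface (dut.set_input, dut.read_output), signal names, and threshold values are hypothetical placeholders, not requirements from any particular CTS.

```python
# Hypothetical host-side black box check: inputs are stimulated and outputs
# observed, with no knowledge of or access to the device internals.
def test_overvoltage_warning(dut):
    dut.set_input("supply_voltage_mV", 16500)   # stimulate an abnormal input
    dut.wait_ms(100)                            # allow the device to respond
    assert dut.read_output("overvoltage_flag") == 1

    dut.set_input("supply_voltage_mV", 12000)   # return to a nominal input
    dut.wait_ms(100)
    assert dut.read_output("overvoltage_flag") == 0
```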

       User Interface Testing

      User interface testing applies only to the functions of devices that interact with system users/operators, which may take the form of control panels, displays, alerts, and instrumentation. The objective of user interface testing is to confirm that the user interface functions per CTS requirements. Note that not all aspects of user interface validation may be possible at the supplier; a system simulation bench and/or hardware-in-the-loop testing may be required for some features or aspects. However, the supplier is required to validate as much of the user interface requirements as possible within the constraints of its systems authority and testing resources.

       Software Operational Stability Verification via Dynamic White Box Testing

      This procedure applies to devices with embedded software. White box testing, sometimes known as X-ray or software structural testing, is a detailed evaluation of how well the software operates internally during actual operation, under either real-time or single-instruction-stepping conditions. Examples of internal software items and functions to be evaluated are (a host-side sketch follows the list):

       Memory management functions and memory integrity

       Interrupt operation and prioritization performance

       Worst‐case stack depth penetration

       Power on initialization, shutdown, and reset performance

       Critical code timing performance

       I/O handling and data conversion

       Calibration variables functionality
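
      As one illustration, a worst-case stack depth check might be driven from the host over a debug/instrumentation interface, as sketched below. The command names, stack size, and headroom margin are illustrative assumptions, not values from any particular device.

```python
# Hypothetical white box check of worst-case stack depth via a debug interface.
STACK_SIZE_BYTES = 4096
REQUIRED_HEADROOM = 0.25       # require at least 25% of the stack to remain unused


def run_worst_case_scenario(debug):
    # Placeholder: drive the deepest call path and maximum interrupt nesting
    # (the actual scenario is product-specific).
    debug.command("start_worst_case_profile")
    debug.wait_ms(5000)


def test_worst_case_stack_depth(debug):
    debug.command("fill_stack_pattern")              # paint unused stack with a known pattern
    run_worst_case_scenario(debug)
    used = debug.read_u32("stack_high_water_bytes")  # bytes of painted stack overwritten
    assert used <= STACK_SIZE_BYTES * (1 - REQUIRED_HEADROOM)
```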

       Software Self‐Diagnostics and Trouble Code Function Verification

      In addition to the performance evaluations of the device's functional requirements, the self-diagnostic and trouble-code-logging features of the device should also be validated. Discrepancies, faults, and false or overly sensitive diagnostic trouble code triggers are a cause of unnecessary warranty service and costly "no trouble found" (NTF) warranty events.
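      A sketch of such a check is shown below: it confirms that a trouble code is neither missed for a genuine fault nor falsely triggered by a borderline but legal condition. The signal names, thresholds, and maturation times are illustrative assumptions.

```python
# Hypothetical diagnostic trouble code (DTC) verification.
def test_undervoltage_dtc_thresholds(dut):
    dut.clear_trouble_codes()

    dut.set_input("supply_voltage_mV", 9200)    # borderline but legal condition
    dut.wait_ms(2000)                           # longer than the DTC maturation time
    assert "DTC_UNDERVOLTAGE" not in dut.read_trouble_codes()   # no false trigger

    dut.set_input("supply_voltage_mV", 8500)    # genuine fault condition
    dut.wait_ms(2000)
    assert "DTC_UNDERVOLTAGE" in dut.read_trouble_codes()       # code is logged as required
```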

       Assembly, Service, and Telemetric Interface Feature Verification

      All assembly, service, and telemetric interface features should also be validated. Interface faults, initial and service programming discrepancies, calibration discrepancies, and interactive diagnostics issues are a cause of costly launch problems and NTF warranty events.

       Fault Tolerance and Robustness Testing

      Fault tolerance testing should demonstrate that the software program can tolerate abnormal or disrupted inputs, power-feed abnormalities, and stressful environmental conditions without unanticipated or undesired behavior, such as the following (a sketch of one such check appears after the list):

       A fault in one circuit that affects or disrupts any other circuits or the entire system

       A minor disruption that results in the failure of the component to provide useful services

       Permanent crashing of the component

       Lock up or hanging of the component in a busy‐loop or anticipation‐loop waiting for an expected response or input
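
      A minimal robustness sketch is shown below: an abnormal input and a brief power-feed disturbance must not crash or hang the device. The interface calls, message content, and timeouts are illustrative assumptions.

```python
# Hypothetical fault tolerance check for abnormal input and power disturbance.
def test_tolerates_corrupted_message_and_power_dropout(dut):
    dut.send_message(b"\xff" * 64)              # malformed/abnormal input
    dut.wait_ms(500)
    assert dut.is_responding(timeout_ms=200)    # no lock-up or busy-loop hang
    assert dut.read_output("system_state") == "OPERATIONAL"

    dut.cycle_power(dropout_ms=50)              # brief power-feed abnormality
    assert dut.is_responding(timeout_ms=1000)   # device recovers and restarts cleanly
```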

      Robustness evaluations are in addition to system‐level functionality testing that confirms that devices perform their intended function as specified. Robustness evaluations are intended to confirm that the device also doesn't do what it isn't supposed to do, such as shut down or lock up when confronted
