Case Studies in Maintenance and Reliability: A Wealth of Best Practices. V. Narayan


      7.6 Valve Gland Packing Renewal

      During a shutdown, various types of gate valves need to be repacked (all old packing rings removed from the stuffing box and renewed). The total number of valves to be repacked is approximately 40, in sizes of 4, 6, and 8 inches. Estimate the man-hours required per valve of each size. See results in Figure 7.7.

[Figure 7.7]

      This kind of benchmarking proved quite simple to carry out. It proved useful in checking contract prices and in preparing estimates prior to inviting competitive bids.
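A norms-based estimate of this kind reduces to multiplying a per-valve norm by the valve count for each size. The sketch below illustrates the arithmetic only; the man-hour norms are invented placeholders, not the values from Figure 7.7.

```python
# Hypothetical man-hours per valve, keyed by valve size in inches.
# These norms are illustrative only; the book's results are in Figure 7.7.
NORMS_MH = {4: 3.0, 6: 4.5, 8: 6.0}

def estimate_job(valve_counts):
    """Return total estimated man-hours for a dict of {size_inches: count}."""
    return sum(NORMS_MH[size] * count for size, count in valve_counts.items())

# Roughly 40 valves split across the three sizes, as in the exercise.
job = {4: 15, 6: 15, 8: 10}
print(estimate_job(job))  # 172.5 man-hours with these placeholder norms
```

An estimate built this way is easy to compare against contractor bids: any quote far above the norms-based figure warrants scrutiny.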

      7.6 Lessons Learned

      1. Although competitive bidding is a safeguard against overpricing, it fails when contractors form alliances.

      2. All norms should be reviewed regularly and updated if necessary.

      3. An outside pair of eyes can reveal weaknesses in your systems, which you yourself are too close to observe.

      4. Benchmarking is a powerful tool for assessing comparative performance.

      7.7 Principles

      Without in-house capability for making realistic estimates, there is no way of knowing whether you get value for money from your contractors.

      Externally enforced maintenance cost reductions can hurt the long-term viability of the company, cutting away some flesh and bone along with the fat. Internal audits of current practices can help identify outdated procedures that add cost without adding value. Some of these practices may have started as well-intentioned streamlining exercises to improve the efficiency of repetitive work. Periodic audits demonstrate that controls are constantly reviewed, and thus minimize external pressure.

       Benchmarking

       Benchmarking is about being humble enough to admit that someone else is better at something than you; and wise enough to try to learn how to match and even surpass them at it.

       American Productivity and Quality Center.

      Author: Jim Wardhaugh

       Location: 2.3.3 Corporate Technical Headquarters

      8.1 Background

      Our little group was providing a benchmarking and consultancy service to our own facilities and to a few others with whom we had technical support agreements. These sites were scattered around the world and operated in different geographical areas, under different government regulatory regimes. They were of different ages and sizes; they used different feedstocks to make different portfolios of products. Our task was to scrutinize data from these locations, identify those whose performance could be improved, and arrange to help those who needed it.

      8.2 Company Performance Analysis Methodology

      We had a systematic methodology for capturing performance data from the sites. There were structured questionnaires asking for relevant data. These were backed up by copious notes explaining in detail the methodology, terminology, and definitions. Some returns were required every quarter while the rest were required annually. Each client location would then send the requested data, which was checked rigorously for any apparent errors. The data was used by a number of different groups in the head office, each looking at different aspects of performance. Our group looked at aspects of maintenance performance.

      We did not want to ask a site for data that it was already sending to the head office in any report. So we took great pains to extract data from a variety of sources. In this way, the input effort by the sites was minimized and little additional information was needed from them.

      When satisfied that all the data looked sensible, we massaged it to identify the performance of each site (or a facility on that site) in a number of ways. The main performance features published for each site were:

      For each of the major plants on site [e.g., Crude Distillation Unit (CDU), Catalytic Cracker (CCU), Hydro-cracker (HCU), Reformer (PFU), Thermal Cracker (TCU/VBU)]:

      •Downtime averaged over the turnaround cycle (whether 3, 4, or 5 years), which smoothed out the effect of major turnarounds (also called shutdowns).

      For the whole site:

      •Maintenance cost, averaged over the turnaround cycle, as a percentage of replacement value

      •Maintenance cost, averaged over the turnaround cycle, in US$/bbl.

      •Maintenance man-hours per unit of complexity.

      This information was published annually in a number of forms; the two most common, which provided comparisons with peers, were:

      •A straightforward bar chart showing a ranking from best to worst (see an example in Figure 8.1).

      •A radar diagram which sites found useful because it could show at a glance a number of aspects (see idealized version in Figure 8.2). Comparisons could then be made against the performance of the best (see Figure 8.3).

      On each spoke of the diagram, the length of the spoke represents the actual value for each facility. The shaded polygon shows the data points for the best performers; these are the values of the item in the ranked order, one-third of the way from the best to the worst performer.

      Comparisons were made against two yardsticks:

      •The average performance of the group of plants or refineries

      •The performance of the plant or refinery one-third of the way down the ranking order.
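The two yardsticks above are simple to compute once a metric has been ranked. The sketch below is a minimal illustration, assuming lower values are better (as for downtime or cost); the data is invented.

```python
def yardsticks(values):
    """Return (group average, top-third value) for one metric.

    The top-third value is the performer one-third of the way down
    the ranking from best to worst, as used for the shaded polygon
    on the radar diagrams.
    """
    ranked = sorted(values)                # best (lowest) first
    average = sum(ranked) / len(ranked)
    top_third = ranked[len(ranked) // 3]   # one-third down the ranked order
    return average, top_third

# Hypothetical downtime percentages for nine facilities.
downtime_pct = [2.1, 2.8, 3.0, 3.5, 4.2, 5.0, 6.3, 7.1, 9.4]
avg, third = yardsticks(downtime_pct)
print(avg, third)  # prints roughly 4.82 and 3.5
```

Using the top-third performer rather than the single best avoids setting a target distorted by one outlier site.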

      Because the facilities were of different sizes and complexities, we had to normalize the data, using a number of normalizing factors. For example, when measuring maintenance costs, we used divisors such as asset replacement value and intake barrels of feedstock.

      These divisors gave different answers and thus somewhat different rankings. Not surprisingly, those deemed to be top performers liked the divisor we used, while those deemed poor were highly vexed. Yet whatever the divisor used, with few exceptions, those in the top-performing bunch stayed at the top and those in the bottom bunch stayed at the bottom; only minor changes in position were identified. Those in the middle of the performance band could show significant movement, however. Normalizing methods are discussed in Appendix 8-B.
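The effect of the choice of divisor can be seen in a small sketch. The sites and figures below are invented for illustration; the point is only that cost per unit of replacement value and cost per intake barrel can rank the same sites differently.

```python
# Invented data: annual maintenance cost, asset replacement value,
# and feedstock intake for three hypothetical sites (arbitrary units).
sites = {
    "A": {"cost": 40.0, "replacement_value": 2000.0, "intake_bbl": 55.0},
    "B": {"cost": 90.0, "replacement_value": 3000.0, "intake_bbl": 80.0},
    "C": {"cost": 30.0, "replacement_value": 1800.0, "intake_bbl": 20.0},
}

def ranking(divisor):
    """Rank sites best-first by maintenance cost divided by the chosen divisor."""
    normalized = {name: d["cost"] / d[divisor] for name, d in sites.items()}
    return sorted(normalized, key=normalized.get)

print(ranking("replacement_value"))  # ['C', 'A', 'B']
print(ranking("intake_bbl"))         # ['A', 'B', 'C']
```

Here site C looks best against replacement value but worst against intake, which is exactly the kind of divisor-driven reshuffle that vexed the sites in the middle and lower bands.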

[Figure 8.1] [Figures 8.2 and 8.3]
