
Safety Moment #12: Process Safety Emergency Response

Process Safety Moment for Emergency Response – the Two Second Rule

Safety Moments

Safety Moments are short stories or anecdotes that illustrate some aspect of personal or process safety. Generally they feature a near miss or foolish event that had the potential to be much worse than it was. They can be used to start a meeting or a task assignment.

We publish a wide range of Safety Moments. Many of the Safety Moments include a video that is generally posted to YouTube. Browse through them and select one that has the most relevance to your audience. This Safety Moment — the Two Second Rule — illustrates the importance of having an emergency response program as part of your overall Process Safety Management (PSM) system.

The Two Second Rule

Emergency Response is a key part of any Process Safety Management (PSM) program. No matter how effective preventive measures may be, there is always a chance that something will go badly awry, in which case an emergency response program is needed.

The Two Second Rule is taught in driver’s education. The basic idea is simple: stay two seconds behind the vehicle that you are following in order to ensure sufficient stopping distance. In other words, in an emergency situation you know what to do and you have sufficient time to take the appropriate action.

The concept of a “two second rule” can also be applied to process safety management programs. In this Safety Moment the idea put forward is that, within a short period of time, everyone at the facility should know what to do if a catastrophe is looming.

History of Risk and Safety

Information to do with the History of Risk and Safety in the process industries is available both as an ebook and a video.

Behavior-Based Safety

Information to do with Behavior Based Safety is available here.

Inherent Safety

Information to do with the topic of Inherent Safety is available here.

 

The Purloined Letter, VW Diesels and Incident Investigation

The content of this post is now located here.

 

Process Risk and Reliability Management: Welcome


Welcome to our blog Process Risk and Reliability Management, part of The PSM Report group of blogs.

The posts here are based on the book Process Risk and Reliability Management, published by Elsevier. Details to do with the book are provided here; the Table of Contents (.pdf format) is here; and purchasing information is here.

The posts that we have published so far are shown below, organized by chapters of the book.

Chapter 1 — Risk Management

Chapter 2 — Compliance and Standards

Chapter 3 — Culture and Participation

Chapter 4 — Technical Information

Chapter 5 — Hazard Identification

Chapter 6 — Operating Procedures

Chapter 7 — Training and Competence

Chapter 8 — Prestartup Reviews

Chapter 9 — Asset Integrity

Chapter 10 — Management of Change

Chapter 11 — Incident Investigation and Root Cause Analysis

Chapter 12 — Emergency Management

Chapter 13 — Audits and Assessments

Chapter 14 — Consequence Analysis

Chapter 15 — Frequency Analysis

Chapter 16 — Reliability, Availability and Maintainability

Chapter 17 — Managing a Risk Program

Chapter 18 — Project Management

Chapter 19 — Contractors

Chapter 20 — The Risk Management Professional

Prestartup Reviews

Prestartup (Safety) Reviews

In this post we examine the important, yet frequently misunderstood, topic of Prestartup Reviews in a nine-minute video.

Ian Sutton

The video is organized into the following sections.

  • Introduction
  • What the Review Is Not
  • Regulations
  • Types of Review
  • Restart Reviews
  • Organizational Responsibility
  • Elements of Process Safety Management

Contractors in the Process Industries

Suddenly, a heated exchange took place between the king and the moat contractor (Larson)

The material in this post is extracted from Chapter 19 of the book Process Risk and Reliability Management and from Chapter 6 of the 2nd edition of the book Offshore Safety Management.

Contractors play a vital and increasingly important role in the design, construction, operation and maintenance of process and energy facilities, as can be seen from the chart shown below. Over the last twenty years or so, the number of contractor work hours has increased by about a factor of fifteen, whereas the number of hours worked by employees of the host companies has barely doubled. (The offshore oil and gas industry is particularly reliant on contractors.)

[Chart: growth in contractor work hours compared with host-company employee work hours]

We have prepared three videos to do with contractors in the process industries. They are:

  1. Contractors and Operators
  2. Regulations and Standards
  3. Management of Contractors

The first video — Contractors and Operators — lists the different types of contractor and contract company, describes some of the issues and difficulties that occur at the operator/contractor interface, and highlights some of the legal issues that need to be considered. Strategies for contractor selection are described, and references to the Center for Offshore Safety Contractor-Operators templates are provided.

The second video — Regulations and Standards — reviews regulations to do with contractors from OSHA (the Occupational Safety & Health Administration) and BSEE (the Bureau of Safety and Environmental Enforcement), along with the industry standard API Recommended Practice 76. These regulations and standards provide a basis for developing contractor management systems, regardless of location or industry.

The third video — Management of Contractors — discusses how an operating company can develop bridging documents with the hundreds of contractor companies with which it works. The development of maps using a Safety Management System, such as shown in the drawing below, is one way of doing this efficiently and with minimal redundancy.

Bridging Document

Risk Perception


In earlier posts we discussed the concepts of Perfect Safety and Acceptable Risk. This post concludes this short sequence with some thoughts to do with the subjective nature of risk. The material below has been adapted from the book Process Risk and Reliability Management.

The discussion would seem to have little relevance to the work of those working in the process industries. But such a conclusion would be misleading, as can be seen from a recent LinkedIn post Perception trumps truth. The premise of the post, and of many of the comments on it, is that the environmental consequences of fracking are not as bad as many of its opponents proclaim, hence we can go ahead as long as we put in effective environmental controls. But some environmentalists have a world view that states that, since fracking inevitably causes at least some environmental damage, this activity should be banned altogether.

The two parties have different belief systems and are therefore talking entirely at cross purposes. Indeed, as Oscar Wilde (1854-1900) observed, “A truth ceases to be a truth as soon as two people perceive it.”


Oscar Wilde

The above observation regarding different truths applies to hazards analysis and risk management work. Each person participating in a hazards analysis has his or her own opinions, memories, attitudes and overall ‘world view’. Most people are — in the strict sense of the word — prejudiced; that is, they pre-judge situations rather than trying to analyze the facts rationally and logically. People jump to preconceived conclusions, and those conclusions will often differ from those of other people who are looking at the same information with their own world view. With regard to risk management, even highly-trained, seasoned experts — who generally regard themselves as being governed only by the facts — will reach different conclusions when presented with the same data. Indeed, Slovic (1992) states that there is no such thing as ‘real risk’ or ‘objective risk’. His point is that if risk can never be measured objectively then objective risk does not exist at all.

In his book Bad Science the author Ben Goldacre (Goldacre 2008) has a chapter entitled “Why Clever People Believe Stupid Things”. Many of his insights can be applied to the risk assessment of process facilities. He arrives at the following conclusions:

  1. We see patterns where there is only random noise.
  2. We see causal relationships where there are none.
  3. We overvalue confirmatory information for any given hypothesis.
  4. We seek out confirmatory information for any given hypothesis.
  5. Our assessment of the quality of new evidence is biased by our previous beliefs.

The subjective component of risk becomes even more pronounced when the perceptions of non-specialists, particularly members of the public, are considered. Hence successful risk management involves understanding the opinions, emotions, hopes and fears of many people, including managers, workers and members of the public.

Some of the factors that affect risk perception are discussed below.

Degree of Control

Voluntary risks are accepted more readily than those that are imposed. For example, someone who believes that the presence of a chemical facility in his community poses an unacceptable risk to himself and his family may willingly go rock-climbing on weekends because he feels that he has some control over the risk associated with the latter activity, whereas he has no control at all over the chemical facility, or over the mysterious odors it produces. Hence rock climbers will quickly point to evidence that their sport is safer than, say, driving to the mountains. But their response misses the point — they are going to climb rocks anyway; they then assemble the evidence to justify what they are doing (see point #4 above).

Similarly, most people feel safer when driving a car rather than riding as a passenger, even though half of them must be wrong. The feeling of being in control is one of the reasons that people accept highway fatalities more readily than the same number of fatalities in airplane crashes.

The desire for control also means that most people generally resist risks that they feel they are being forced to accept; they will magnify the perceived risk associated with tasks that are forced upon them.

Familiarity with the Hazard

Most people understand and accept the possibility of the risks associated with day-to-day living, but they do not understand the risk associated with industrial processes, thus making those risks less acceptable. A cabinet full of household cleaning agents, for example, may actually pose more danger to an individual than the emissions from the factory that makes those chemicals. But the perceived risk is less.

Hazards that are both unfamiliar and mysterious are particularly unacceptable, as can be seen by the deep distrust that the public feels with regard to nuclear power facilities.

Direct Benefit

People are more willing to accept risk if they are direct recipients of the benefits associated with that risk. The reality is that most industrial facilities provide little special benefit to the immediate community apart from offering some job opportunities and an increased local tax base. On the other hand, it is the community that has to bear all of the risk associated with those facilities, thus creating the NIMBY (‘Not in My Backyard’) response.

Personal Impact

The effect of the consequence term will depend to some degree on the persons who are impacted by it. For example, if an office worker suffers a sprained ankle he or she may be able to continue work during the recovery period; an outside operator, however, may not be able to work at his normal job during that time. Or, to take another example, the consequence of a broken finger will be more significant to a concert pianist than to a process engineer.

Natural vs. Man-Made Risks

Natural risks are generally considered to be more acceptable than man-made risks. For example, communities located in areas of high seismic activity understand and accept the risks associated with earthquakes. Similarly people living in hurricane-prone areas regard major storms as being a normal part of life. However, these same people are less likely to understand or accept the risks associated with industrial facilities.

Recency of Events

People tend to attribute a higher level of risk to events that have actually occurred in the recent past. For example, the concerns to do with nuclear power facilities in the 1980s and 90s were very high because the memories of Chernobyl and Three Mile Island were so recent. This concern is easing given that these two events occurred decades ago, and few people have a direct memory of them.

Perception of the Consequence Term

The Risk Equation (1.1) is linear; it gives equal value to changes in the consequence and frequency terms, implying a linear trade-off between the two. For example, according to Equation (1.1), a hazard resulting in one fatality every hundred years has the same risk value as a hazard resulting in ten fatalities every thousand years. In both cases the fatality rate is one per hundred years, or 0.01 fatalities yr⁻¹. But the two risks are not perceived to be the same. In general, people feel that high-consequence events that occur only rarely are less acceptable than more frequent, low-consequence accidents. Hence the second of the two alternatives — the ten-fatality event — is perceived as being the worse of the two.

The same way of looking at risk can be seen in everyday life. In a typical large American city around 500 people die each year in road accidents. Although many efforts are made to reduce this fatality rate, the fact remains that this loss of life is perceived as a necessary component of modern life, and hence there is little outrage on the part of the public. Yet, were an airplane carrying 500 people to crash at that same city’s airport every year, there would be an outcry. The fatality rate is the same in each case, i.e., 500 deaths per city per year; the difference between the two risks is a perception rooted in feelings and values.

To accommodate this difference in perception, Equation (1.1) can be modified to take the form of Equation (1.3).

   Risk_Hazard = Consequence^n × Likelihood     (1.3)

where n > 1

In Equation (1.3) the consequence term is raised to the power n, where n > 1. In other words, high consequence/low frequency accidents are assigned a higher perceived risk value than low consequence/high frequency accidents.

Since the variable ‘n’ represents subjective feelings it is impossible to assign it an objective value. However, if a value of say 1.5 is given to ‘n’ then Equation (1.3) for the two scenarios just discussed — the airplane crash and the highway fatalities — becomes Equations (1.4) and (1.5) respectively.

   Risk_airplane = 500^1.5 × 1 = 11,180     (1.4)

   Risk_auto = 1^1.5 × 500 = 500     (1.5)

The single airplane crash, with its 500 fatalities, therefore carries a perceived risk equivalent to more than 11,000 individual automobile fatalities; in other words, the apparent risk associated with the airplane crash is roughly 22 times greater than that of the dispersed automobile fatalities.
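The arithmetic behind Equations (1.1) and (1.3) to (1.5) is easy to check. The short Python sketch below is our own illustration; the exponent n = 1.5 is simply the assumed value used above, not a recommended figure. It compares the objective, linear risk of Equation (1.1) with the perceived risk of Equation (1.3) for the two scenarios.

    def linear_risk(consequence, likelihood):
        """Equation (1.1): risk as the simple product of consequence and likelihood."""
        return consequence * likelihood

    def perceived_risk(consequence, likelihood, n=1.5):
        """Equation (1.3): the consequence term is raised to an exponent n > 1 to
        reflect the greater aversion to high-consequence, low-frequency events."""
        return consequence ** n * likelihood

    # One airplane crash killing 500 people (once per year) versus
    # 500 separate automobile fatalities per year.
    airplane = perceived_risk(consequence=500, likelihood=1)
    auto = perceived_risk(consequence=1, likelihood=500)

    print(linear_risk(500, 1) == linear_risk(1, 500))   # True: identical objective risk
    print(round(airplane), round(auto))                  # 11180 500
    print(round(airplane / auto, 1))                     # 22.4

The ratio of roughly 22 is, of course, entirely dependent on the assumed value of n; the point is qualitative rather than quantitative.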

In the case of hazards that have very high consequences, such as the meltdown of the core of a nuclear power facility, perceived risk rises very fast as a result of the exponential term in Equation (1.3), thus explaining public fear to do with such facilities. Over the years, managers and engineers in such facilities have reduced the objective risk associated with nuclear power plants to an extremely low value, largely through the extensive use of sophisticated instrumentation systems. However, since the worst-case scenario — core meltdown — remains the same the public remains nervous and antagonistic. In such cases management would be better advised to address the consequence term rather than the likelihood term. With regard to nuclear power, the route to public acceptance is to make the absolute worst-case scenario one of low consequence.

The subjective and emotional nature of risk is summarized by Brander (1995) with reference to the changes in safety standards that were introduced following the Titanic tragedy.

They [scientists and engineers] tend to argue with facts, formulas, simulations, and other kinds of sweet reason. These don’t work well. What does work well are shameless appeals to emotion – like political cartoons. Like baby seals covered in oil. And always, always, casualty lists. Best of all are individual stories of casualties, to make the deaths real. We only learn from blood.

Comprehension Time

When people are informed that a significant new risk has entered their lives it can take time for them to digest that information. For example, when a patient is informed by a doctor that he or she has a serious medical condition, the doctor should not immediately launch into a discussion of possible treatments. He should allow the patient time to absorb the news before moving on to the next step. So it is with industrial risk. If people — particularly members of the public — are informed of a new risk associated with a process facility, then those people need time to grasp and come to terms with what has been said. There is a difference between having an intellectual grasp of a risk and subjectively understanding how things have changed.

Randomness

Human beings tend to create order out of a random series of events. People have to do this in order to make sense of the world in which they live. The catch is that there is a tendency to create order, even when events are statistically independent of one another.

For example, psychologists gave test subjects a set of headphones and then played a series of random beeps. The subjects were told to imagine that each beep corresponded to an automobile going by. They were then asked if the beeps were coming in batches, such as would occur when cars were leaving a red traffic light, or whether the cars were spaced randomly, such as would happen on a freeway. The subjects generally said that the beeps were in groups, even though they were in fact occurring at random.

Therefore it is important for those working in process risk management not to create patterns and order out of randomly occurring events. For example, if two or three near miss incidents can be attributed to a failure of the Management of Change (MOC) system this does not necessarily mean that the MOC system is any more deficient than the other elements of the process safety program.

Regression to the Mean

Related to the above discussion concerning the tendency to create non-existent order out of random events, people will also create causal relationships where there are none, particularly when a system is simply regressing to the mean.

For example, a facility may have suffered from a series of serious incidents. In response to this situation management implements a much more rigorous process safety management program than they had before. The number of incidents then drops. It is natural to explain the improvement as a consequence of the new PSM program. Yet, if the serious events were occurring randomly then it is likely that their frequency would have gone down anyway because systems generally have a tendency to revert to the mean.

Bias toward Positive Evidence / Prior Beliefs

People tend to seek out information that confirms their opinions, and they tend to overvalue that confirmatory information. It is particularly important to recognize this trait when conducting incident investigations. As discussed in Chapter 12, it is vital that the persons leading an investigation listen to what is being said without interjecting their own opinions or prejudices.

We also tend to expose ourselves to situations and people that confirm our existing beliefs. For example, most people will watch TV channels that reinforce their political opinions. This can lead to shock when it turns out in an election that those beliefs did not constitute a majority opinion.

Availability

People tend to notice items that are outstanding or different in some way. For example, someone entering her own house will not consciously register every item of furniture, but she will immediately notice that the television has been stolen or that a saucepan has boiled dry. Similarly, anecdotes and emotional descriptions have a disproportionate impact on people’s perceptions (as illustrated in the discussion to do with the Titanic tragedy provided earlier in this chapter).

Goldacre notes that, as information about the dangers of cigarette smoking became more available, it was the oncologists and chest surgeons who were the first to quit smoking, because they were the ones who saw the damage caused to human lungs by cigarettes.

Acceptable Risk


The topic of “Acceptable Risk” is a difficult one because most process safety professionals aim for an environment of no incidents — they have trouble accepting the fact that some incidents are always going to occur, even though they know that risk can never be zero.

In an earlier post we discussed the concept of Perfect Safety. This post develops some of the ideas presented there. The material has been extracted from the book Process Risk and Reliability Management.

****************************** 

A fundamental aspect of understanding culture is having a clear understanding of what levels of risk are acceptable. Given that risk is basically subjective, it is not possible to define dispassionately what level of risk is acceptable and what is not. After all, if a facility operates for long enough, it is certain – statistically speaking – that there will be an accident. Yet real-world targets are needed to justify investment in PSM, so a target for “acceptable safety” has to be set. This is tricky. Regulatory agencies in particular will never place a numerical value on human life and suffering, because any number that they develop would inevitably generate controversy. Yet working targets have to be provided; otherwise facility personnel do not know what they are shooting for.

The difficulty with attempting to identify an acceptable level of risk is that, as discussed in the sections above, the amount of risk people are willing to accept depends on many hard-to-pin-down factors. Hence no external agency, whether it be a regulatory body, a professional society or the author of a book such as this, can provide an objective value for risk. Yet individuals and organizations are constantly gauging the level of risk that they face in their personal and work lives, and then acting on their assessment of that risk. For example, at a personal level, an individual has to make a judgment as to whether or not it is safe to cross a busy road. In industrial facilities, managers make risk-based decisions regarding issues such as whether to shut down an equipment item for maintenance or to keep it running for another week. Other risk-based decisions made by managers include whether an operator needs additional training, whether to install an additional safety shower in a hazardous area, and whether a full Hazard and Operability Analysis (HAZOP) is needed to review a proposed change. Engineering standards and other professional documents can provide guidance. But, at the end of the day, the manager has a risk-based decision to make, and that decision implies that some estimate of ‘acceptable risk’ has been made.

One company provided the criteria shown in Table 1.8 for its design personnel.

Table 1.8
Example of Risk Thresholds

  Risk category              Fatalities per year (employees and contractors)
  Intolerable risk           > 5 × 10⁻⁴
  High risk                  < 5 × 10⁻⁴ and > 1 × 10⁻⁶
  Broadly tolerable risk     < 1 × 10⁻⁶

Their instructions were that risk must never be in the ‘intolerable’ range. High risk scenarios are ‘tolerable’, but every effort must be made to reduce the risk level, i.e., to the ‘broadly tolerable’ level.
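Threshold criteria such as those in Table 1.8 are simple to encode. The following sketch is a minimal illustration of our own: the function name and category wording are ours, while the numerical thresholds are those of the table. It classifies a calculated individual fatality rate into the three bands.

    def classify_risk(fatalities_per_year):
        """Classify an individual fatality rate against the Table 1.8 thresholds."""
        INTOLERABLE = 5e-4          # 5 x 10^-4 fatalities per year
        BROADLY_TOLERABLE = 1e-6    # 1 x 10^-6 fatalities per year

        if fatalities_per_year > INTOLERABLE:
            return "Intolerable risk"
        if fatalities_per_year > BROADLY_TOLERABLE:
            return "High risk (tolerable, but must be reduced)"
        return "Broadly tolerable risk"

    print(classify_risk(8e-4))   # Intolerable risk
    print(classify_risk(2e-5))   # High risk (tolerable, but must be reduced)
    print(classify_risk(5e-7))   # Broadly tolerable risk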

The Third Law

The Third Law of Thermodynamics states that it is impossible for any system to reduce its entropy to zero in a finite number of operations. By analogy, a safety program can never be brought to a perfectly ordered, zero-entropy state. And this makes intuitive sense: no person is perfect and no organization is perfect. No matter how much time, effort, money and goodwill we spend on improving safety, incidents will occur. Indeed, the data shown in Figure 1.15 suggest that offshore safety trends have reached an asymptote. The data, which were published by the United States Bureau of Safety and Environmental Enforcement (BSEE), show a steady improvement from the mid-1990s to the year 2008, but since then there seems to have been a leveling out. Whether this trend will continue remains to be seen, but it does suggest that some type of limit may have been reached.

Figure 1.15
Offshore Safety Trends (United States)


Looked at in this light, perfect safety can never be achieved. Nevertheless we should strive toward it, because the alternative is to accept that people will be injured — something that none of us wants or accepts, and certainly not something we want to quantify (although a goal of zero incidents over a specified time frame may be achievable).

Perfection as a Slogan

Although perfect safety may not be theoretically achievable, many companies will use slogans such as Accidents Big or Small, Avoid them All. The idea behind such slogans is that the organization should strive for perfect safety, even though it is technically not achievable.

Whether such slogans have a positive effect is debatable. Many people view them as simplistic and as not reflecting the real world of process safety; they seem to over-simplify a discipline that requires dedication, hard work, education, imagination and substantial investment. For example, a large sign at the front gate of a facility showing the number of days since a lost-time injury is not likely to change the behavior of the workers at that facility. Indeed, it may encourage them to cover up events that really should have been reported, or to become cynical about the reporting system.

As Low as Reasonably Practical – ALARP

Some risk analysts use the term ‘As Low as Reasonably Practical (ALARP)’ for setting a value for acceptable risk. The basic idea behind this concept is that risk should be reduced to a level that is as low as possible without requiring ‘excessive’ investment. Boundaries of risk that are ‘definitely acceptable’ or ‘definitely not acceptable’ are established as shown in Figure 1.15, which presents a family of FN curves. Between those boundaries, a balance between risk and benefit must be established: if a facility proposes to take a high level of risk, then the resulting benefit must be very high.

Figure 1.15
Risk Boundaries


Risk matrices (discussed below) can be used to set the boundaries of acceptable and unacceptable risk. The middle squares in such a matrix represent the risk levels that are marginally acceptable.
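As a simple illustration of how a matrix can be used as a screening tool, the sketch below defines a small matrix and looks up the acceptability of a scenario from its consequence and likelihood rankings. The category labels and cell assignments are assumptions made for the example only; they are not taken from any company standard or regulation.

    # Illustrative 3 x 3 risk matrix. The categories and cell assignments below
    # are assumed for this example only.
    RISK_MATRIX = {
        ("High", "High"): "Unacceptable",
        ("High", "Medium"): "Unacceptable",
        ("High", "Low"): "Marginally acceptable",
        ("Medium", "High"): "Unacceptable",
        ("Medium", "Medium"): "Marginally acceptable",
        ("Medium", "Low"): "Acceptable",
        ("Low", "High"): "Marginally acceptable",
        ("Low", "Medium"): "Acceptable",
        ("Low", "Low"): "Acceptable",
    }

    def screen_scenario(consequence, likelihood):
        """Look up the acceptability of a scenario from its consequence and likelihood rankings."""
        return RISK_MATRIX[(consequence, likelihood)]

    print(screen_scenario("Medium", "Medium"))   # Marginally acceptable
    print(screen_scenario("High", "High"))       # Unacceptable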

One panel has developed the following guidance for determining the meaning of the term ‘As Low as Reasonably Practical’.

  • Use of best available technology capable of being installed, operated and maintained in the work environment by the people prepared to work in that environment;
  • Use of the best operations and maintenance management systems relevant to safety;
  • Maintenance of the equipment and management systems to a high standard;
  • Exposure of employees to a low level of risk.

The fundamental difficulty with the concept of ALARP is that the term is inherently circular and self-referential. For example, the phrase ‘best available technology’ used in the list above can be defined as that level of technology which reduces risk to an acceptable level – in other words to the ALARP level. Terms such as ‘best operations’ and ‘high standard’ are equally question-begging.

Another difficulty with the use of ALARP is that the term is defined by those who will not be exposed to the risk, i.e., the managers, consultants and engineers who work safely in offices located a long way from the facility being analyzed. Were the workers at the site allowed to define ALARP, it is more than likely that they would come up with a much lower value.

Realistically, it has to be concluded that the term ‘ALARP’ does not provide much help to risk management professionals and facility managers in defining what levels of risk are acceptable. It may be for this reason that the United Kingdom HSE (Health and Safety Executive) chose in 2006 to reduce its emphasis on ALARP requirements in the Safety Case Regime for offshore facilities. Some major companies have also elected to move away from ALARP toward a continuous risk reduction model (Broadribb 2008).

De Minimis Risk

The notion of de minimis risk is similar to that of ALARP. A risk threshold is deemed to exist for all activities. Any activity whose risk falls below that threshold value can be ignored: no action needs to be taken to manage this de minimis risk. The term is borrowed from common law, where it is used in the expression of the doctrine de minimis non curat lex, or, ‘the law does not concern itself with trifles’. In other words, there is no need to worry about low risk situations. Once more, however, an inherent circularity becomes apparent: for a risk to be de minimis it must be ‘low’, but no prescriptive guidance as to the meaning of the word ‘low’ is provided.

Citations / ‘Case Law’

Citations from regulatory agencies provide some measure of acceptable risk. For example, if an agency fines a company say $50,000 following a fatal accident, then it could be argued that the agency has set $50,000 as the value of a human life. (Naturally, the agency’s authority over what level of fines to set is constrained by many legal, political and precedent-related boundaries outside its control, so the above line of reasoning provides only limited guidance at best.) Even if the magnitude of the penalties is ignored, an agency’s investigative and citation record serves to show which issues are of the greatest concern to it and to the community at large.

RAGAGEP

With regard to acceptable risk in the context of engineering design, a term that is sometimes used is ‘Recognized and Generally Accepted Good Engineering Practice’ (RAGAGEP). The term is described in Chapter 9 — Asset Integrity of Process Risk and Reliability Management.

Indexing Methods

Some companies and industries use indexing methods to evaluate acceptable risk. A facility receives positive and negative scores for design, environmental and operating factors. For example, a pipeline would receive positive points if it were in a remote location or if the fluid inside the pipe were neither toxic nor flammable (Muhlbauer 2003); negative points would be assigned if the pipeline were corroded or if the operators had not had sufficient training. The overall score is then compared to a target value (the acceptable risk level) in order to determine whether the operation, in its current mode, is safe or not.
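A minimal sketch of such an indexing scheme is shown below. The factor names, point values and target score are invented purely for illustration; a real scheme such as Muhlbauer’s uses far more detailed scoring tables.

    # Hypothetical scoring factors for a pipeline segment. Positive points reduce
    # concern, negative points increase it. All names and values are illustrative
    # only and are not taken from any published scheme.
    FACTORS = {
        "remote_location": +10,
        "fluid_not_toxic_or_flammable": +15,
        "corrosion_found": -20,
        "operators_insufficiently_trained": -15,
    }

    TARGET_SCORE = 0   # assumed acceptability threshold for this example

    def index_score(observations):
        """Sum the points for every factor that applies to this segment."""
        return sum(points for name, points in FACTORS.items() if observations.get(name))

    segment = {"remote_location": True, "corrosion_found": True}
    score = index_score(segment)
    print(score, "acceptable" if score >= TARGET_SCORE else "needs further risk reduction")
    # prints: -10 needs further risk reduction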

Although indexing systems are very useful, particularly for comparing alternatives, it has to be recognized that, as with ALARP, a fundamental circularity exists. Not only does an arbitrary target value have to be assigned, but the ranking system itself is built on judgment and experience, and is therefore basically subjective. The biggest benefit of such systems, as with so many other risk-ranking exercises, lies in comparing options: the focus is on relative risk, not on trying to determine absolute values for risk and for threshold values.