

  • Does Your Facility Have the Flu? Use Bayes Rule to Treat the Problem Instead of the Symptom

    Is our industry addressing the problems facing it today? We idealize infinitesimally small event rates for highly catastrophic hazards, yet are we any safer? Have we solved the world’s problems? Layers of protection analysis (LOPA) drives hazardous event rates to 10⁻⁴ per year or less, yet industry is still experiencing several disastrous events per year. If one estimates 3,000 operating units worldwide and industry experiences approximately 3 major incidents per year, the true industry accident rate is a staggering 3 / 3,000 per year (i.e., 10⁻³), all while our LOPA calculations are assuring us we have achieved an event rate of 10⁻⁶. Something is not adding up! Rather than fussing over an unobtainable numbers game, wouldn’t it be wiser to address protection layers that are operating below requirements? We are (hopefully) performing audits and assessments on our protection layers and generating findings. Why are we not focusing our efforts on the results of these findings? Instead we demand more bandages (protection layers) for amputated limbs (LOPA scenarios) rather than upgrading those bandages to tourniquets. Perhaps the dilemma is that we cannot effectively prioritize our corrective actions based on findings. Likely we have too much information and the real problems are lost in the chaos. What if there were a way to decipher the information overload and visualize the impact of our shortcomings? Enter Bayes’ rule, which provides a means to visualize findings through a protection layer health meter approach, prioritize action items, and staunch the bleeding. by Keith Brumbaugh Topics include: Bayes, Bayes’ rule, Bayes’ theorem, LOPA, IPL, SIS, SIF, SIL calculations, systematic failure, human factors, human reliability, operations, maintenance, IEC 61511, ANSI/ISA 61511, hardware reliability, proven in use, confidence interval, credible range, safety lifecycle, functional safety assessment, FSA stage 4, health meter. Click here to view the complete whitepaper
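
    For illustration only (this arithmetic is not taken from the whitepaper), a minimal sketch of the Bayes'-rule update behind such a protection layer health meter, using hypothetical prior and likelihood values:

    ```python
    # Illustrative sketch only: Bayes' rule used to update belief that an IPL is
    # "degraded" after an audit finding. All numbers are hypothetical placeholders.

    prior_degraded = 0.05          # prior belief the IPL operates below requirements
    p_finding_if_degraded = 0.80   # chance an audit flags a finding if truly degraded
    p_finding_if_healthy = 0.10    # chance of a finding even when the IPL is healthy

    # P(degraded | finding) via Bayes' rule
    evidence = (p_finding_if_degraded * prior_degraded
                + p_finding_if_healthy * (1 - prior_degraded))
    posterior_degraded = p_finding_if_degraded * prior_degraded / evidence

    print(f"P(degraded | finding) = {posterior_degraded:.2f}")     # ~0.30
    print(f"Health meter reading  = {1 - posterior_degraded:.0%}") # ~70%
    ```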

  • Designing Operator Tasks to Minimize the Impact of Heuristics and Biases

    Oftentimes when a person is blamed for “not thinking,” the reality is they were thinking, but were not aware of it. This is the theory of System 1 (i.e., Fast) versus System 2 (i.e., Slow) thinking, which explains that we are really two people: our conscious, aware selves (System 2 thinking), and a dominant “fast” subconscious making most of our decisions (System 1 thinking) without our being consciously aware of it in the moment (to the point that some have argued there is no such thing as “free will”). The heuristics (i.e., mental shortcuts) we use to think in System 1 are necessary to make it through a day (it is exhausting to maintain a continuous conscious stream of thought), and they often lead to good outcomes. However, System 1 thinking can make us vulnerable to systematic biases (i.e., mental traps) that arise from the use of those heuristics. It is necessary to be aware of the traps System 1 thinking can create, because often that awareness is our only defense against them. In this respect, “fast thinking” represents one of the fundamental limits to achieving safe operation. In addition to awareness, there is a need, where possible, to design operator tasks and the interfaces they use to minimize the likelihood of systematic bias occurring when thinking in System 1. Lastly, it would be useful to provide designs that could increase the potential for the operator to engage System 2 thinking (consciousness) when required, which is less susceptible to biases. This paper proposes a combined approach: discussing the cognitive psychology behind System 1 and System 2 thinking, the types of heuristics we use, the biases that result, and operator task and interface design that can minimize the likelihood of systematic bias. The paper will incorporate the learnings from 5 years of safety-critical task analysis performed for field and control room tasks. A practical model of operator response to abnormal situations will be described, linking the heuristics used and the potential biases that may occur, as well as design features to minimize the likelihood of those biases occurring. As presented at the 2020 AIChE Spring Meeting & 16th Global Congress on Process Safety. Click here to view the complete whitepaper

  • How Taking Credit for Planned and Unplanned Shutdowns Can Help You Achieve Your SIL Targets

    by Keith Brumbaugh, P.E., CFSE Achieving Safety Integrity Level (SIL) targets can be difficult when proof test intervals approach turnaround intervals of five years or more. However, some process units have planned and predictable unplanned shutdowns multiple times a year. During these shutdowns, it may be possible to document that the safety devices functioned properly. This can be incorporated into SIL verification calculations to show that performance targets can now be met without incorporating expensive fault tolerance, online testing schemes, etc. This can result in considerable cost savings for an operating unit. The problem: If a process plant is following the ANSI/ISA 84.00.01 process safety lifecycle (i.e., ISA 84) or similar, then as part of the allocation of safety functions to protection layers phase, a SIL assessment (e.g., a Layers of Protection Analysis (LOPA)) would be undertaken to assign Safety Integrity Level (SIL) targets to Safety Instrumented Functions (SIFs). A scenario could occur in the design and engineering phase of the ISA 84 safety lifecycle, when performing the SIL verification calculations, in which the team discovers the SIFs do not meet their performance targets. Assuming the calculation was done properly using valid data and assumptions, something would need to change in order to meet or exceed the required performance targets. This issue could occur in a greenfield plant when first designing a SIF, but it is more likely to be discovered during a revalidation cycle of a brownfield plant. Click here to view the complete whitepaper
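
    As a purely illustrative sketch (hypothetical failure rate, the simplified 1oo1 approximation PFDavg ≈ λDU·TI/2, and full proof test coverage assumed), the effect of crediting roughly annual shutdowns as proof tests instead of waiting for a five-year turnaround:

    ```python
    # Illustrative sketch (hypothetical failure rate, simplified 1oo1 approximation):
    # PFDavg ~ lambda_DU * TI / 2 for a single channel proof tested every TI hours.

    lambda_du = 2.0e-6       # dangerous undetected failure rate, per hour (assumed)
    hours_per_year = 8760

    def pfd_avg(test_interval_years):
        return lambda_du * test_interval_years * hours_per_year / 2

    # Proof testing only at a five-year turnaround:
    print(f"TI = 5 yr: PFDavg = {pfd_avg(5):.1e}")   # ~4.4e-2, SIL 1 range

    # Crediting documented, roughly annual shutdown trips as proof tests
    # (assumes the shutdown exercises the full SIF and is documented as a test):
    print(f"TI = 1 yr: PFDavg = {pfd_avg(1):.1e}")   # ~8.8e-3, SIL 2 range
    ```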

  • FGS 1400 MK II - Evolution of the traditional Fire panel

    by Warren Johnson, PE, PMP In 2005, aeSolutions recognized an industry need for fire and gas panels based on a SIL-capable PLC safety control platform. Large industrial clients were looking for a system capable of monitoring and controlling fire system I/O, combustible gas, toxic gas, and oxygen depletion detectors, initiating suppression release, controlling HVAC, and performing process safety shutdowns. To develop the fire and gas system requirements needed by industry, we first needed to understand the regulatory requirements, applicable industry standards, and the types of fire and gas systems currently in use. Here are some of the key regulatory requirements mandated by OSHA: OSHA 1910.155, fire detection (third-party approval by a nationally recognized testing laboratory); OSHA 1910.164, Fire Detection Systems (circuit supervision); and OSHA 1910.165, Employee Alarm Systems (circuit supervision and power supply monitoring). Another key driver is determining which industry standards are applicable. Are the standards mandatory? Many local and state codes reference the International Building Code, which requires the use of NFPA 72 for fire alarm signaling systems. The authority having jurisdiction (AHJ) has the final say in determining the applicable standards that the fire alarm system must meet. Click here to view the complete whitepaper

  • A Database Approach to the Safety Life Cycle

    by Michael D. Scott, Founder, P.E. & Ken O’Malley, Founder, P.E. ABSTRACT A systematic database approach can be used to design, develop, and test a Safety Instrumented System (SIS) using methodologies that comply with the safety lifecycle management requirements specified in ANSI/ISA S84.01. This paper will demonstrate that through a database approach, the design deliverables and system configuration quality are improved and the implementation effort is reduced. Topics Include: ANSI/ISA S84.01, Safety Instrumented Systems, Safety Instrumented Functions, Safety Integrity Levels, Safety Lifecycle. Click here to view the complete whitepaper During the SIL verification process, the type of equipment specified, the voting architecture, diagnostics, and testing parameters are verified by calculation, producing the probability of failure on demand and the spurious trip rate for each SIF. Additionally, the required hardware fault tolerance (HFT) is considered. SIL verification calculation reports are provided for all tools and calculations we perform. A Design Verification Report (DVR) details the calculation parameters, assumptions, limitations, and sources of data for the SIL calculations performed. Recommendations for optimized SIF performance, taking into account both safety integrity and spurious trip evaluation, are also reported in this document. aeSolutions' SIS engineers are trained and experienced in the fundamentals and advanced parameters of SIL verification calculations. Our engineers, many of whom hold CFSE, CFSP, and ISA84 Expert certifications, work with our clients to evaluate SIS options for optimized investment.
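
    For illustration only (simplified textbook formulas, hypothetical failure rates, common cause and diagnostics ignored), a sketch of the kind of trade-off a SIL verification calculation quantifies, comparing 1oo1 and 1oo2 sensor voting for PFDavg and spurious trip rate:

    ```python
    # Illustrative sketch only: simplified 1oo1 vs 1oo2 comparison with
    # hypothetical failure rates; common cause and diagnostics are ignored.

    lambda_du = 1.0e-6     # dangerous undetected failure rate, /hr (assumed)
    lambda_s  = 5.0e-6     # safe (spurious) failure rate, /hr (assumed)
    ti_hours  = 8760       # 1-year proof test interval (assumed)

    pfd_1oo1 = lambda_du * ti_hours / 2
    pfd_1oo2 = (lambda_du * ti_hours) ** 2 / 3     # simplified, no common cause

    str_1oo1 = lambda_s                            # a single channel trip shuts down
    str_1oo2 = 2 * lambda_s                        # either channel can cause a trip

    print(f"1oo1: PFDavg={pfd_1oo1:.1e}, spurious trips={str_1oo1 * 8760:.2f}/yr")
    print(f"1oo2: PFDavg={pfd_1oo2:.1e}, spurious trips={str_1oo2 * 8760:.2f}/yr")
    ```

    The sketch shows the usual tension the recommendations address: added voting improves PFDavg but roughly doubles the spurious trip rate.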

  • Implementing Safety Instrumented BMS: Challenges and Opportunities

    by Michael D. Scott, PE, CFSE, aeSolutions Founder & Brittany Lampson, PhD Implementing a Safety Instrumented Burner Management System (SI‐BMS) can be challenging, costly, and time consuming. Simply identifying design shortfalls/gaps can be costly, and this does not include the costs associated with the capital project to close those gaps. Additionally, when one multiplies the costs by the total number of heaters at different sites, the total costs can escalate quickly. However, a “template” approach to implementing SI‐BMS in a brownfield environment can offer a very cost-effective solution for end users. Creating standard “templates” for all deliverables associated with an SI‐BMS allows each subsequent SI‐BMS to be implemented at a fraction of the cost of the first, because a template approach minimizes the rework associated with creating a new SI‐BMS package. The ultimate goal is to standardize implementation of SI‐BMS in order to reduce engineering effort, create standard products, and ultimately reduce the cost of ownership. Click here to view the complete whitepaper

  • Identifying Required Safety Instrumented Functions for HIGH-TECH & SEMICONDUCTOR MANUFACTURING

    by Michael D. Scott, P.E., aeSolutions founder & Ken O’Malley, P.E., aeSolutions founder This paper will discuss the issues, decisions, and challenges encountered when initially applying the concepts of the Safety Lifecycle per ANSI/ISA S84.01 to the design of a Life Safety System at a state-of-the-art fiber optic manufacturing facility. More specifically, the methodology and procedures used to identify Safety Instrumented Functions (SIFs) and subsequently determine Safety Integrity Levels (SILs) will be discussed in detail. In addition, industry-specific issues associated with the design of Life Safety Systems and the use of mitigation versus prevention techniques (typically encountered in the process industry) will also be discussed. Topics include: ANSI/ISA S84.01, Safety Instrumented Systems, Safety Instrumented Functions, Safety Integrity Levels, Life Safety Systems. Full paper title: IDENTIFYING REQUIRED SAFETY INSTRUMENTED FUNCTIONS FOR LIFE SAFETY SYSTEMS IN THE HIGH-TECH AND SEMICONDUCTOR MANUFACTURING INDUSTRIES. Click here to view the complete whitepaper

  • Can we achieve Safety Integrity Level 3 (SIL 3) without analyzing Human Factors?

    by Keith Brumbaugh, P.E. Many operating units have a common reliability factor that is being overlooked or ignored during the design, engineering, and operation of high integrity Safety Instrumented Functions (SIFs): the human reliability factor. In industry, there is an over-focus on hardware reliability to the nth decimal point when evaluating high integrity SIFs (such as SIL 3), to the detriment of the human factors that could also affect the Independent Protection Layer (IPL). Most major accident hazards arise from human failure, not failure of hardware. If all that were needed to prevent process safety incidents were to improve the hardware reliability of IPLs to some threshold, the frequency of near misses and actual incidents would have tailed off long ago, but it hasn’t. Evaluating the human impact on a Safety Instrumented Function requires performing a Human Factors Analysis. Human performance does not conform to standard methods of statistical uncertainty, but human reliability as a science has established quantitative limits of human performance. How do these limits affect what we can reasonably achieve with our high integrity SIFs? What uncertainty is introduced to our IPLs if we ignore these realities? This paper will examine how we can incorporate quantitative human factors into a SIL analysis. Representative operating units at various stages of maturity in human factors analysis and the IEC/ISA 61511 Safety Lifecycle will be examined. The authors will also share a checklist of the human factor considerations that should be taken into account when designing a SIF or writing a Functional Test Plan. Click here to view the complete whitepaper
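
    A minimal illustration (hypothetical numbers, not from the paper) of why the human contribution can dominate: adding an assumed human error term to a hardware-only PFD from a SIL verification calculation:

    ```python
    # Illustrative sketch only: how a human/systematic contribution can dominate
    # the overall PFD of a nominally SIL 3 SIF. All values are hypothetical.

    pfd_hardware = 5.0e-4          # hardware-only PFDavg from a SIL calc (assumed)
    p_bypass_not_removed = 2.0e-3  # assumed human error probability after maintenance
    p_error_matters = 0.5          # assumed fraction of cases where the error defeats the SIF

    pfd_effective = pfd_hardware + p_bypass_not_removed * p_error_matters
    print(f"Effective PFD = {pfd_effective:.1e}")   # ~1.5e-3, i.e. only SIL 2 range
    ```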

  • Improving Human Factors Review in PHA and LOPA

    Human Reliability practitioners utilize a variety of tools in their work that could improve the facilitation of PHA‐LOPA, specifically for identifying and evaluating scenarios with a significant human factors component. These tools are derived from human factors engineering and cognitive psychology and include: (1) task analysis, (2) procedures and checklists, (3) human error rates, (4) systematic bias, and (5) barrier effectiveness using bow‐tie analysis. Human error is not random, although the absent-minded slips we all experience seem to come out of nowhere. Instead, human error is often predictable based on situations created external or internal to the mind. Human error is part of the human condition (part of being a human) and as such cannot be eliminated completely. For example, a task performed at high frequency (e.g., daily or weekly) develops a highly skilled operator with an expectation of a low error probability for that task. However, as the operator’s skill increases, their reliance on procedures decreases, leaving them open to memory lapses caused by internal or external distractions. The fact that a skilled operator becomes less dependent on procedures is not a conscious decision; it is part of the human condition. Forcing a skilled operator to read the procedure while performing a task they are skilled at is like asking you to think about what your feet are doing as you walk down a flight of stairs: in both cases a loss of adroitness will occur. A large portion of this paper describes, with practical examples, the five tools mentioned above. Task analysis is a talk‐through and walk‐through exercise of a task (typically focusing on one or two critical steps of a procedure) that is used to identify error-likely situations (ELS). Quantitative human error rates can be attached to the ELS depending on whether the error is associated with skill-, rule-, or knowledge-based (SRK) performance. Systematic biases produced by Type 1 (fast) thinking cause judgment and diagnosis errors related to response to abnormal situations. Having a working knowledge of these five tools will improve a PHA‐LOPA facilitator’s awareness and ability to better evaluate human-error-related scenarios and barrier failure. In addition, the facilitator will feel confident about recommending the need for a more detailed follow-up study such as an HRA (Human Reliability Analysis). Click here to view the complete whitepaper Topics include: Human Factors, Human Error, PHA, LOPA, Facilitator, Task Analysis, Bias, Cognitive Psychology
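
    As an illustrative sketch only, one way quantitative error rates could be attached to error-likely situations by SRK performance level; the nominal values below are generic order-of-magnitude screening numbers, not figures from the paper:

    ```python
    # Illustrative sketch only: attaching nominal human error probabilities (HEPs)
    # to error-likely situations by skill/rule/knowledge (SRK) performance level.
    # The rates are generic order-of-magnitude placeholders for screening.

    NOMINAL_HEP = {
        "skill":     1e-3,   # highly practiced action, e.g. a routine valve lineup
        "rule":      1e-2,   # procedure-following under normal conditions
        "knowledge": 1e-1,   # novel diagnosis under stress / abnormal situation
    }

    def scenario_hep(performance_level, stress_multiplier=1.0):
        """Return a screening HEP for a PHA/LOPA discussion, capped at 1.0."""
        return min(NOMINAL_HEP[performance_level] * stress_multiplier, 1.0)

    # Example: a rule-based alarm response with an assumed 5x factor for time pressure
    print(scenario_hep("rule", stress_multiplier=5))   # 0.05
    ```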

  • How Can I Effectively Place My Gas Detectors?

    Several Recognized and Generally Accepted Good Engineering Practices (RAGAGEPs) exist to guide the selection and placement of gas detectors (e.g., ISA-TR84.00.07, NFPA 72, UL-2075). However, no consistent approach is widely used across companies. Historically, gas detection has been selected based on rules of thumb and has depended largely on experience. Over the last several years there has been a growing interest in determining not only the confidence but also the effectiveness of these gas detection systems. In fact, incorrect detector placement far outweighs the probability of failure on demand of the individual system components in limiting the effectiveness of a gas detection system. An effective gas detection system has three elements: (1) a comprehensive Gas Detection Philosophy, (2) appropriate Detector Technology Selection, and (3) correct Detector Placement. The Gas Detection Philosophy clearly specifies the chemicals of concern and the intended purposes, i.e., detection of toxic or combustible levels, voting requirements, alarm rationalization, and control actions. Appropriate Detector Technology Selection includes consideration of the target gas and the required detection concentration levels. The primary approaches for Detector Placement are geographic and scenario-based coverage. Geographic coverage places detectors on a uniform grid, sometimes with areas risk-ranked to reduce the number of detectors required. Scenario-based coverage uses a range of leak models and places gas detectors based on the dispersion modeling results. All three elements of effective gas detection (philosophy, technology, and placement) are interdependent, and understanding their relationships is of paramount importance when designing an effective gas detection system. The intention of this paper is to present the main considerations that design engineers and process safety professionals should address for each gas detection system element in order to obtain the best return on their investment when placing gas detectors. Topics include: Instrumentation, Reduction of Risk, Risk Assessment, Protection, Detection System, Alarms and Operator Interventions, Detector, Gas Detection/Dispersion Prediction. Click here to view the complete whitepaper
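
    For illustration only (toy geometry, an assumed fixed detection radius, and a hypothetical grid pitch), a sketch of how geographic coverage on a uniform grid can be estimated by sampling candidate leak points:

    ```python
    # Illustrative sketch only: geographic coverage estimate for point detectors
    # on a uniform grid, assuming each detector "sees" any release within a fixed
    # effective radius. All dimensions are hypothetical.

    import itertools

    detector_radius = 5.0   # assumed effective detection radius, m
    grid_spacing = 8.0      # assumed detector grid pitch, m
    area_size = 40          # square monitored area, m per side (assumed)

    ticks = [i * grid_spacing for i in range(int(area_size / grid_spacing) + 1)]
    detectors = list(itertools.product(ticks, ticks))

    samples = range(area_size + 1)   # candidate leak points on a 1 m sampling grid
    covered = sum(
        1
        for x in samples for y in samples
        if any((x - dx) ** 2 + (y - dy) ** 2 <= detector_radius ** 2
               for dx, dy in detectors)
    )
    coverage = covered / len(samples) ** 2
    print(f"{len(detectors)} detectors cover about {coverage:.0%} of sampled leak points")
    ```

    A scenario-based study would replace the fixed radius with dispersion modeling results for each leak case, which is where the philosophy, technology, and placement elements tie together.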

  • Understanding Flammable Mist Explosion Hazards

    While there is extensive testing and validation of hazards from flammable vapors, less information is available regarding flammable liquid mists. A method is suggested for reasonably estimating the nature and severity of flammable liquid mist hazards by applying published mist property correlations to model inputs and outputs in dispersion modeling software. Better estimates of these hazards are important for properly evaluating what mitigations will be needed. One common high flash point liquid that can pose a flammable mist hazard is heating oil. Published literature has documented that the lower explosion point (LEP) temperature of a flammable mist can be much lower than the flash point of the vapor-phase material, and the lower flammability limit (LFL) concentration of a flammable mist can be as low as 10% of the material’s vapor-phase LFL. The actual LFL of a flammable mist has been experimentally observed to be a function of droplet size. Since many oils consist of a blend of hydrocarbons with various carbon chain lengths, only a few compounds may be chosen to represent the material in commercially available consequence modeling software. This paper will propose: 1) further guidance on an approach that reasonably approximates the mist properties in the model; and 2) a practical example of modeling the consequences of a mist release. Finally, a case study will be provided in which a range of known real-world preventive and mitigative measures were tabulated, the existing measures were evaluated against them, and upgrades were proposed based on the model observations. Click here to view the complete whitepaper
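
    As a hypothetical sketch only: the roughly 10% lower bound cited above applied as an effective-LFL adjustment for dispersion modeling inputs; the droplet-size interpolation is a placeholder, not a published correlation:

    ```python
    # Illustrative sketch only: screening adjustment of the effective LFL used in
    # dispersion modeling for a flammable mist. The interpolation below is a
    # placeholder; the 10% lower bound reflects the range reported in literature.

    def effective_mist_lfl(vapor_lfl_vol_pct, sauter_mean_diameter_um):
        """Screening estimate of mist LFL as a fraction of the vapor-phase LFL."""
        if sauter_mean_diameter_um <= 10:       # fine mist behaves like vapor
            fraction = 1.0
        elif sauter_mean_diameter_um >= 100:    # coarse droplets: assumed lower bound
            fraction = 0.10
        else:                                   # linear placeholder between the bounds
            fraction = 1.0 - 0.9 * (sauter_mean_diameter_um - 10) / 90
        return vapor_lfl_vol_pct * fraction

    # Example: a heating-oil-like liquid with an assumed vapor-phase LFL of 0.7 vol%
    print(f"{effective_mist_lfl(0.7, 150):.2f} vol%")   # 0.07 vol%
    ```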

  • Lessons Learned on SIL Verification and SIS Conceptual Design

    by Richard E. Hanner & Ravneet Singh There are many critical activities and decisions that take place prior to and during the Safety Integrity Level (SIL) Verification and other conceptual design phases of projects conforming to ISA84 & ISA/IEC 61511. These activities and decisions introduce either opportunities to optimize or obstacles that impede project flow, depending on when and how the decisions are managed. Implementing Safety Instrumented System (SIS) projects that support the long-term viability of the process safety lifecycle requires recognizing SIS engineering as a discipline in its own right, one that receives from, and feeds into, other engineering disciplines. This paper will examine lessons learned within the SIS engineering discipline and between engineering disciplines that help or hinder SIS project execution in achieving the long-term viability of the Safety Lifecycle. Avoiding these pitfalls can allow your projects to achieve the intended risk reduction and conformance to the ISA/IEC 61511 Safety Lifecycle, while avoiding the costs and delays of late-stage design changes. Alternate execution strategies will be explored, as well as the risks of moving forward when limited information is available. Click here to view the complete whitepaper Topics Include: IEC 61511, ISA/IEC 61511, Safety Instrumented Systems (SIS), Independent Protection Layers (IPL), Functional Safety Assessment (FSA), Safety Requirement Specification (SRS), Safety Lifecycle, Functional Safety Management Plan (FSMP), Project Execution Plan (PEP), SIS Front-End Loading (SIS FEL), Layer of Protection Analysis (LOPA), SIL Verification
