by Keith Brumbaugh
Is our industry stuck in the past? The current industry trend is to consider only random hardware failures in safety integrity level (SIL) probability of failure on demand (PFD) calculations. Few appear to update their assumptions as operating experience accumulates. Hardware failure rates are generally treated as fixed in time, assumed to be average point values (rather than distributions), and are either generic in nature or specific to a certain set of hardware and/or conditions that vendors determine by suitable tests or failure mode analysis.
But are random hardware failures the only things that cause a safety instrumented function (SIF) to fail? What if our assumptions are wrong? What if our installations do not match vendor assumptions? What else might we be missing? And how are we addressing systematic failures?
One obvious problem with incorporating systematic failures is their non-random nature. Many functional safety practitioners claim that systematic errors are addressed (i.e., minimized or eliminated) by following all the procedures in the ISA/IEC 61511 standard. Yet even if the standard were strictly adhered to, could anyone realistically claim a 0% chance of a SIF failing due to a human factor? Some will say that systematic errors cannot be predicted, much less modeled. But is that true?
This paper examines factors that tend to be ignored when performing hardware-based reliability calculations; traditional PFD calculations are merely a starting point. It shows how to incorporate systematic errors into a SIF’s real-world model, and how to use Bayes’ theorem to capture data after a SIF has been installed — either through operating experience or industry incidents — and update the function’s predicted performance. This methodology can also be used to justify prior use of existing, non-certified equipment.
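To illustrate the kind of update Bayes’ theorem enables, the sketch below applies a conjugate Beta-Binomial update to a SIF’s PFD estimate. All numbers (the design-stage prior, the demand and failure counts) are hypothetical and not taken from the paper; a real analysis would choose the prior from the SIF’s SIL verification calculation and the counts from plant records.

```python
# Minimal sketch: Bayesian (Beta-Binomial) update of a SIF's
# probability of failure on demand (PFD).
# All parameters below are illustrative assumptions, not paper data.

# Prior belief: a design-stage PFDavg of 0.01 (SIL 2), encoded as a
# Beta(alpha, beta) distribution whose mean is alpha/(alpha + beta)
# with an "equivalent sample size" of 100 demands.
alpha_prior = 1.0   # prior pseudo-count of dangerous failures
beta_prior = 99.0   # prior pseudo-count of successful demands

# Hypothetical operating experience: 50 demands, 0 dangerous failures.
demands = 50
failures = 0

# Conjugate update: posterior is Beta(alpha + f, beta + n - f).
alpha_post = alpha_prior + failures
beta_post = beta_prior + (demands - failures)

pfd_prior = alpha_prior / (alpha_prior + beta_prior)
pfd_post = alpha_post / (alpha_post + beta_post)

print(f"Prior PFD estimate:     {pfd_prior:.4f}")   # 0.0100
print(f"Posterior PFD estimate: {pfd_post:.4f}")    # 0.0067
```

Because the Beta distribution is conjugate to the Binomial likelihood, each new batch of demand data tightens the posterior without re-running the full model — failure-free operating history pulls the PFD estimate below the design-stage point value, while observed failures would push it up.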