Risk Assessment Criteria

How arbitrary is risk assessment?
Copyright, June, 2003
E + C + (M + F + U)/3 = uncommon sense.

One method for calculating the need for equipment maintenance uses a formula that takes into account variables such as how critical a device is to a patient, the location of use, and the potential for failure. These criteria are weighted and combined to produce a numerical value. The following is taken from a hospital policy manual and shows the result of analyzing potential risk when all of the indicators are set to a middle weight.

Equipment Function (non-patient 1, to life support 10): E = 5
Clinical Application (non-patient 1, to death 5): C = 3
Scheduled Maintenance Requirement (not required 1, to monthly 5): M = 3
Likelihood of Equipment Failure (>5 years 1, to <3 months 5): F = 3
Environment Use Classification (non-patient 1, to anesthesia 5): U = 3

Applying the formula: E + C + (M + F + U)/3 = 5 + 3 + (3 + 3 + 3)/3 = 11
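
That arithmetic can be sketched in a few lines of Python (a minimal illustration only; the function is mine, and the threshold of 12 comes from the policy discussed below):

    def risk_score(E, C, M, F, U):
        # Weighted score from the hospital policy formula.
        return E + C + (M + F + U) / 3

    score = risk_score(E=5, C=3, M=3, F=3, U=3)   # the middle-weight settings above
    print(score)                                   # 11.0
    print("Maintenance required" if score > 12 else "Maintenance not required")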

In order for this formula to work, we must be able to distinguish between a three and a four on a scale of five, or ten. What’s so hard about doing that? Nothing, actually; we all draw these kinds of distinctions in everyday life. Most of the time they are easy to make because there is little consequence to our actions (a little mustard, or a little more mustard?). The problem in deciding between “may cause injury” and “may cause death” is that often we can’t really be sure (some injury, or a little more injury?). Yet the premise may still sound reasonable, and seeing a formula we are comforted by the “assurance” of the integrity of numbers. But thinking in these absolute terms also draws us away from the actual issues we are looking at, and introduces a more arbitrary result.

We may know that a device fails often, but we may also know that failures of this device are easy for the user to observe, are always reported, and consequently are never discovered during an inspection. Or we may know that a machine requires lubrication or a filter change, and that we must therefore perform that required maintenance.

The result of applying this formula in this case is 11. According to the accompanying policy, the number 12 is the demarcation point between maintenance required (anything over 12) and not required (anything under 12). So a device must score over the midpoint in order to qualify for maintenance. That may sound reasonable, but why 12? Why not 11, or 13? Because it is based on middle settings, it is still an arbitrary choice. There are other reasonable settings.

The formula gives M, F, and U equal weight. But if periodic lubrication is required, why, in that instance, is maintenance equal in importance to the environment the device is used in, or to the likelihood of failure? The facts, the characteristics of a particular device, should preclude the criteria from being treated as though they were all of relatively equal importance. A device’s criteria should match its characteristics, which rules out using a formula methodology.

A device’s characteristics are its criteria.

Patients can be injured in the most “innocent” of environments, and because of this, we make a business of assessing risk. In effect, we look at the odds of producing an inappropriate treatment using a specific technology in a particular environment. In order to calculate this risk, we must be able to define the risk components, and this is where arbitrary selection often comes into play.

How can you, with any absolute certainty, determine a device’s “critical nature,” or what the “consequence of failure” will be, or what effect location will have on a patient’s treatment, without making arbitrary decisions? Everything’s critical, or nothing’s that critical: it all depends on your point of view.

Technology and manufacturing capabilities have improved to the extent that we no longer regularly inspect each and every pressure transducer, nor do we check catheters, temperature probes, or a host of other disposables, attachments and ancillary devices. These pieces are often essential to the clinical integrity of instrumentation and treatment, yet in some cases we make a decision not to manage them the same way we do other pieces of equipment.

In the work-a-day world of managing maintenance, we trust that certain devices will not fail, or that someone will notice a failure before harm is done. We also track the devices that require periodic calibration, lubrication, or parts replacement, and we know from experience which equipment failures constitute the greatest risk to a patient. Risk criteria can take into account advances made in technology and manufacturing, whether failures will be apparent to a clinician, the specific maintenance a device requires, and the consequence of failure.

The following four criteria reflect these issues and can be used to direct the type of maintenance that will be applied.

1. 100% self testing. From the point of view of some manufacturers, an electronic device that performs a self-test on start-up is sufficient to assure functionality. Each device should be considered on an individual basis, but depending on a self-testing electronic device is no worse than depending on the manufacturing reliability of transducers, catheters, and probes.

2. Failure apparent to the user. A problem that is not evident to the user can stay hidden until someone inspects the device. Sometimes this characteristic is obvious (compare the verification procedure for insufflator inflation pressure to that for oto/ophthalmoscope quality of light) and sometimes it is not, but this critical issue demands resolution: an “Apparent” or “Not Apparent” determination.

3. Calibration, lubrication, and parts replacement. If it’s necessary, it’s necessary.

4. Consequence of failure to the patient. In some cases this is clear, as with an insufflator compared with an oto/ophthalmoscope (great, and slight); in others it is not as clear, as with a transport cardiac monitor (moderate?). Using a scale limited to 1-3, or Little, Moderate, and Great, makes it possible to visualize a comparative process and makes it easier to force a determination of the consequence of a failure.

The following examples demonstrate how the cumulative effect of these four criteria is ascertained when analyzing risk for a specific device (the number of permutations is greater than shown here; one formulation I have worked with contains 15).

100% Self Test   Failure        Cal, Lube, Part Replace   Consequence   Maintenance
YES              APPARENT       NO                        LITTLE        Maint Not Required
(either)         (either)       YES                       (either)      Maint Required
NO               NOT APPARENT   NO                        GREAT         Short Maint Interval
NO               APPARENT       NO                        MODERATE      Resolve *

* Other issues must be considered: staff experience, environment, past maintenance history (maintenance that leads to repair; see the section below), manufacturer’s and ECRI recommendations, etc.
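
The same logic can be sketched as a simple set of rules (a rough reading of the table; the final “Resolve” fallback is my own way of covering the permutations the table does not show):

    def maintenance_decision(self_test, failure_apparent, cal_lube_parts, consequence):
        # self_test:        device performs a 100% self test (True/False)
        # failure_apparent: a failure would be apparent to the user (True/False)
        # cal_lube_parts:   calibration, lubrication, or parts replacement required (True/False)
        # consequence:      "little", "moderate", or "great"
        if cal_lube_parts:
            return "Maintenance required"            # if it's necessary, it's necessary
        if self_test and failure_apparent and consequence == "little":
            return "Maintenance not required"
        if not self_test and not failure_apparent and consequence == "great":
            return "Short maintenance interval"
        return "Resolve"                             # weigh staff experience, environment, history, etc.

    print(maintenance_decision(False, True, False, "moderate"))   # prints "Resolve"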

The chart below is presented only as an aid to understanding the process: for each criterion, the answer on the left points toward less need for maintenance, the answer on the right points toward greater need, and the middle term means the question must be resolved.


100% Self Test      Yes       Resolve      No
Failure Apparent    Yes       Resolve      No
Cal, lube, part     No        Resolve      Yes
Consequence         Little    Mod/Unsure   Great

As you can see, not all of the assessed risks are easy to resolve. There is no panacea, but this system uses fewer of the more arbitrary settings relied on in some formula methodologies (e.g. 1-10 weighted equipment functions).

Maintenance that leads to a repair. When assessments do not fall conveniently into a category (calibration, lubrication, or parts replacement is required; the consequence of failure to the patient is great and will not be apparent to the user; or the device has a 100% self-test feature), we can look at the results of previous maintenance inspections that led to corrective repairs (not repairs per se). Assuming we have “sufficient” data to look at (and that is for each of us to decide), we can determine the number of times a problem was uncovered by a maintenance inspection, as compared with problems identified by a user. With these analyses, we can estimate to what extent these inspections will be useful (we are using past data to predict future events, and these may change, requiring periodic re-analysis).
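
A minimal sketch of that comparison, assuming a work-order history tagged with how each problem was discovered (the records and field names here are invented for illustration):

    # Hypothetical work-order history for one device type.
    work_orders = [
        {"problem": "drift out of spec", "found_by": "inspection"},
        {"problem": "alarm failure",     "found_by": "user"},
        {"problem": "worn pump tubing",  "found_by": "inspection"},
        {"problem": "display fault",     "found_by": "user"},
    ]

    found_by_inspection = sum(1 for wo in work_orders if wo["found_by"] == "inspection")
    total = len(work_orders)
    print(f"{found_by_inspection} of {total} problems uncovered by inspection "
          f"({found_by_inspection / total:.0%})")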

Repairs that lead to maintenance. One method of risk assessment reported by a Clinical Engineering Director in AAMI’s “Biomedical Instrumentation & Technology” (November/December 2000) involves locating failures that might have been prevented by a maintenance inspection. Presumably, this would entail a detailed analysis of repair data to see whether specific maintenance applied on a regular basis (or predictive maintenance relying on an indicator) would have prevented a particular failure, which is not always an easy thing to do (more detail?). This method differs from analyzing repairs that are revealed through periodic maintenance inspections; here you are assessing ALL repairs to see if some form of periodic corrective measure would have prevented the failure necessitating the repair.
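
By contrast, that analysis scans every repair record and asks whether some periodic task would have prevented it; a rough sketch, again with invented records and a reviewer-supplied judgment flag:

    # Hypothetical review of ALL repairs for a device type.
    repairs = [
        {"cause": "worn battery",   "preventable_by_pm": True},
        {"cause": "dropped unit",   "preventable_by_pm": False},
        {"cause": "clogged filter", "preventable_by_pm": True},
    ]

    preventable = [r for r in repairs if r["preventable_by_pm"]]
    print(f"{len(preventable)} of {len(repairs)} repairs might have been prevented "
          "by some form of periodic maintenance")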

One last word concerning repair analysis – generally speaking, it is not unlike (I love double negatives) predicting a stock price based upon past performance. If life were that easy, we’d all be rich!
