Even though we now understand that the root cause of hundreds of thousands of errors each year lies at the “blunt end,” the proximate cause is often an act committed (or neglected, or performed incorrectly) by a provider. Even as we continue to embrace the systems approach as the most useful paradigm, it would be shortsighted not to tackle these human errors simultaneously. After all, even a room filled with flammable gas will not explode unless someone strikes a match.
When analyzing human errors, it is useful to distinguish “slips” from “mistakes.” To do so, one must appreciate the difference between conscious behavior and automatic behavior. Conscious behavior is what we do when we “pay attention” to a task, and is especially important when we are doing something new, like learning to play the piano for the first time. On the other hand, automatic behaviors are the things we do almost unconsciously. While these tasks may have required a lot of thought initially, after a while we do them virtually “in our sleep.” Humans prefer automatic behaviors because they take less energy, have predictable outcomes, and allow us to “multitask”—like drinking coffee while driving a car. Trouble arises, however, when automatic and conscious behaviors compete for our attention. For example, when a doctor tries to write a “routine” prescription while simultaneously pondering the differential diagnosis for a challenging patient, he is at risk both of making errors in the routine, automatic process of writing the prescription (“slips”) and of making errors in the conscious process of determining the diagnosis (“mistakes”).
Now that we’ve distinguished these two types of behavior, let’s turn to slips versus mistakes. Slips are inadvertent, unconscious lapses in the performance of some automatic task: you absently drive to work on Sunday morning because your automatic behavior kicks in and dictates your actions. Slips occur most often when we put an activity on “autopilot” so we can manage new sensory inputs, think through a problem, or deal with emotional upset, fatigue, or stress (a fair description of most healthcare environments). Mistakes, on the other hand, result from incorrect choices. Rather than blundering into them while we are distracted, we usually make mistakes because of insufficient knowledge, lack of experience or training, inadequate information (or inability to interpret available information properly), or applying the wrong set of rules or algorithms to a decision (we’ll delve into this area more deeply when we discuss diagnostic errors in Chapter 6).
When measured on an “errors per action” yardstick, conscious behaviors are more prone to mistakes than automatic behaviors are to slips. However, slips probably represent the greater threat to patient safety because so much of what healthcare providers do is automatic. Doctors and nurses are most likely to slip while doing something they have done correctly a thousand times: asking patients whether they are allergic to any medications before prescribing an antibiotic, remembering to verify a patient’s identity before sending her off to a procedure, or loading a syringe with heparin (and not insulin) before flushing an IV line (the latter two cases are described in Chapters 15 and 4, respectively).
The complexity of healthcare work adds to the risks. Like pilots, soldiers, and others trained to work in high-risk occupations, doctors and nurses are programmed to do many specific tasks, under pressure, with a high degree of accuracy. But unlike those in most other professions, medical jobs typically combine three very different types of tasks: lots of conscious behaviors (complex decisions, judgment calls), many “customer” interactions, and innumerable automatic behaviors. Physician training, in particular, has traditionally emphasized the highly cognitive aspects of clinical practice, focused less on the human interactions, and, until recently, completely ignored the importance and riskiness of automatic behaviors.
With all of this in mind, how then should we respond to the inevitability of slips? Historically, the typical response would have been to reprimand (if not fire) a nurse for giving the wrong medication and to admonish her to “be more careful next time!” Even if the nurse tried to be more careful, she would be just as likely to commit a different error while automatically carrying out a different task in a different setting. As James Reason reminds us, “Errors are largely unintentional. It is very difficult for management to control what people did not intend to do in the first place.”2
And it is not just managers whose instinct is to blame the provider at the sharp end—we providers blame ourselves! When we make a slip—a stupid error in something that we usually do perfectly “in our sleep”—we feel embarrassed. We chastise ourselves harder than any supervisor could, and swear we’ll never make a careless mistake like that again.5 Realistically, though, such promises are almost impossible to keep.
Whatever strategies are employed to prevent slips (they will be discussed throughout the book), a clear lesson is that boring, repetitive tasks can be dangerous and are often better performed by machines. In medicine, these tasks include monitoring a patient’s oxygen level during a long surgery, suturing large wounds, holding surgical retractors steady for long periods, and scanning mountains of data for significant patterns. As anesthesiologist Alan Merry and legal scholar and novelist Alexander McCall Smith have observed:
people have no need to apologize for their failure to achieve machine-like standards in those activities for which machines are better suited. They are good at other things—original thought, for one, empathy and compassion for another … It is true that people are distractible—but in fact this provides a major survival advantage for them. A machine (unless expressly designed to detect such an event) will continue with its repetitive task while the house burns down around it, whereas most humans will notice that something unexpected is going on and will change their activity …6