In late 1999, the National Academy of Medicine (NAM, then called the Institute of Medicine) published To Err Is Human: Building a Safer Health System.1 Although the NAM has published more than 800 reports since To Err, none have been nearly as influential. The reason: extrapolating from data in the Harvard Medical Practice Study,2,3 performed a decade earlier, the authors estimated that 44,000 to 98,000 Americans die each year from medical errors.* More shockingly, they translated these numbers into the now-famous “jumbo jet units,” pointing out that this death toll would be the equivalent of a jumbo jet crashing each and every day in the United States.
Although some critiqued the jumbo jet analogy as hyperbolic, we like it for several reasons. First, it provides a vivid and tangible icon for the magnitude of the problem (obviously, if extended to the rest of the world, the toll would be many times higher). Second, if in fact a jumbo jet were to crash every day, who among us would even consider flying electively? Third, and most importantly, consider for a moment what our society would do—and spend—to fix the problem if there were an aviation disaster every day. The answer, of course, is that there would be no limit to what we would do to fix that problem. Yet prior to To Err Is Human, we were doing next to nothing to make patients safer.
This is not to imply that the millions of committed, hardworking, and well-trained doctors, nurses, pharmacists, therapists, and healthcare administrators wanted to harm patients through medical mistakes. They did not; indeed, errors weigh so heavily on clinicians that Albert Wu has labeled providers who commit an error that causes terrible harm “second victims.”6 Yet we now understand that the problem of medical errors is not fundamentally one of “bad apples” (though there are some), but rather one of competent providers working in a chaotic system that has not prioritized safety. As Wachter and Shojania wrote in the 2004 book Internal Bleeding:
Decades of research, mostly from outside healthcare, has confirmed our own medical experience: Most errors are made by good but fallible people working in dysfunctional systems, which means that making care safer depends on buttressing the system to prevent or catch the inevitable lapses of mortals. This logical approach is common in other complex, high-tech industries, but it has been woefully ignored in medicine. Instead, we have steadfastly clung to the view that an error is a moral failure by an individual, a posture that has left patients feeling angry and ready to blame, and providers feeling guilty and demoralized. Most importantly, it hasn’t done a damn thing to make healthcare safer.7
Try for a moment to think of systems in healthcare that were truly “hardwired” for safety prior to 1999. Can you come up with any? We can think of just one: the double-checking done by nurses before releasing a unit of blood to prevent ABO transfusion errors. Now think about other error-prone areas: preventing harmful drug interactions or giving patients medicines to which they are allergic; ensuring that patients’ preferences regarding resuscitation are respected; guaranteeing that the correct limbs are operated on; making sure primary care doctors have the necessary information after a hospitalization; diagnosing patients with chest pain in the emergency department correctly—none of these were organized in ways that ensured safety.
Interestingly, many of the answers were there for the taking—from industries as diverse as take-out restaurants and nuclear power plants, commercial aviation and automobile manufacturing—and there are now dozens of examples of successes in applying techniques drawn from other fields to healthcare safety and quality (Table P-1).8 Why does healthcare depend so much on the experiences of other industries to guide its improvement efforts? In part, it is because other industries have long recognized the diverse expertise that must be tapped to produce the best possible product at the lowest cost. In healthcare, the absence of any incentive (until recently) to focus on quality and safety, our burgeoning biomedical knowledge base, our siloed approach to training, and, frankly, professional hubris have caused us to look inward, not outward, for answers. The fact that we are now routinely seeking insights from aviation, manufacturing, education, and other industries, and embracing paradigms from engineering, sociology, psychology, and management, may prove to be the most enduring benefit of the patient safety movement.
All of this makes the field of patient safety at once vexing and exciting. To keep patients safe will take a uniquely interdisciplinary effort, one in which doctors, nurses, pharmacists, and administrators forge new types of relationships. It will demand that we look to other industries for good ideas, while recognizing that caring for patients is different enough from other human endeavors that thoughtful adaptation is critical. It will require that we tamp down our traditionally rigid hierarchies, without forgetting the importance of leadership or compromising crucial lines of authority. It will take additional resources, although investments in safety may well pay off in new efficiencies, lower provider turnover, and fewer expensive complications. It will require a thoughtful embrace of this new notion of systems thinking, while recognizing the absolute importance of the well-trained and committed caregiver. Again, from Internal Bleeding:
Although there is much we can learn from industries that have long embraced the systems approach, … medical care is much more complex and customized than flying an Airbus: At 3 a.m., the critically ill patient needs superb and compassionate doctors and nurses more than she needs a better checklist. We take seriously the awesome privileges and responsibilities that society grants us as physicians, and don’t believe for a second that individual excellence and professional passion will become expendable even after our trapeze swings over netting called a “safer system.” In the end, medical errors are a hard enough nut to crack that we need excellent doctors and safer systems.7
The first edition of Understanding Patient Safety was published in 2007, and the second in 2012. In preparing this third edition five years later, we were impressed by the maturation of the safety field. Between the first and second editions, for example, there were fundamental changes in our understanding of safety targets, with a shift to a focus on harm rather than errors. We saw the emergence of checklists as a key tool in safety. New safety-oriented practices, such as rapid response teams and medication reconciliation, became commonplace. The digitization of medical practice was just beginning to gain steam, but most doctors' offices and hospitals remained paper-based.
Between 2012 and today, the most impressive change has been the widespread computerization of the healthcare system. Fueled by $30 billion in incentive payments distributed under the HITECH Act's Meaningful Use program, more than 90% of U.S. hospitals now have electronic health records, as do more than 70% of physician offices (in 2008, these figures were closer to one in ten for both hospitals and offices).9 This means that many error types, particularly those related to handwritten prescriptions or failure to transmit information, have all but disappeared. However, they have been replaced by new classes of electronic health record–associated errors that stem from problems at the human–machine interface.10
In fact, mitigating the impact of unanticipated consequences has proven to be a major theme of the patient safety field. In this edition, we spend considerable time addressing such issues (Table P-2). We have learned that nearly every safety fix has a dark side. This doesn’t mean that we should hesitate before implementing sensible and evidence-based safety improvements—in fact, we’ve seen good evidence over the past few years that our safety efforts are bearing fruit.11 But it does mean that we need to improve our ability to anticipate unanticipated consequences, ensure that we are alert to them after we implement a safety fix, and do what we can to mitigate any harms that emerge.
This book aims to teach the key principles of patient safety to a diverse audience: physicians, nurses, pharmacists, other healthcare providers, quality and safety professionals, risk managers, hospital administrators, and others. It is suitable for all levels of readers: from the senior physician trying to learn this new way of approaching his or her work, to the medical or nursing student, to the risk manager or hospital board member seeking to get more involved in institutional safety efforts. The fact that the same book can speak to all of these groups (whereas few clinical textbooks could) is another mark of the interdisciplinary nature of this field. Although many of the examples and references are from the United States, our travels and studies have convinced us that most of the issues are the same internationally, and that all countries can learn much from each other. We have made every effort, therefore, to make the book relevant to a geographically diverse audience, and have included key references and tools from outside the United States.
Understanding Patient Safety is divided into three main sections. In Section I, we describe the epidemiology of error, distinguish safety from quality, discuss the key mental models that inform our modern understanding of the safety field, and summarize the policy environment for patient safety. In Section II, we review different error types, taking advantage of real cases to describe various kinds of mistakes and safety hazards, introduce new terminology, and discuss what we know about how errors happen and how they can be prevented. Although many prevention strategies will be touched on in Section II, more general issues regarding various strategies (from the perspectives of both individual institutions and broader policy) will be reviewed in Section III. After a concluding chapter, the Appendix includes a wide array of resources, from helpful Web sites to a patient safety glossary. To keep the book a manageable size, our goal is to be more useful and engaging than comprehensive—readers wishing to dig deeper will find relevant references throughout the text.
Some of the material for this book is derived or adapted from other works that we have edited or written. Specifically, some of the case presentations are drawn from Internal Bleeding: The Truth Behind America's Terrifying Epidemic of Medical Mistakes,7 the “Quality Grand Rounds” series in the Annals of Internal Medicine (Appendix I),12 and AHRQ WebM&M.13 Many of the case presentations came from cases we used for the QGR series, and we are grateful to the patients, families, and caregivers who allowed us to use their stories (often agreeing to be interviewed). Of course, all patient and provider names have been changed to protect privacy.
We are indebted to our colleagues at the University of California, San Francisco, particularly Drs. Adrienne Green, Niraj Sehgal, Brad Sharpe, Urmimala Sarkar, and Sumant Ranji, for supporting our work. We are grateful to the Agency for Healthcare Research and Quality for its continued support of the AHRQ Patient Safety Network, now approaching its 20th year in publication.14 Our editorship of this widely used safety resource allows us to keep up with the safety literature each week. We are grateful to Kaveh Shojania, now of the University of Toronto, for his remarkable contributions to the safety field and for authoring the book's glossary. We thank our publisher at McGraw-Hill, Jim Shanahan, as well as our editor, Amanda Fielding. Kiran would like to thank several mentors from her time at Brigham and Women's Hospital in Boston, including Drs. Tejal Gandhi, Allen Kachalia, and Joel Katz. We are also grateful to our partners in life, Katie Hafner for Bob and Manik Suri for Kiran, for their support and encouragement.
Finally, although this is not primarily a book written for patients, it is a book written about patients. As patient safety becomes professionalized (with “patient safety officers”), it will inevitably become jargon-heavy—“We need a root cause analysis!” “What did the Failure Mode and Effects Analysis show?”—and this evolution makes it easy to take our eyes off the ball. We now know that tens of thousands of people in the United States and many times that number around the world die each year because of preventable medical errors. Moreover, every day millions of people check into hospitals or clinics worried that they’ll be killed in the process of receiving chemotherapy, undergoing surgery, or delivering a baby. Our efforts must be focused on preventing these errors, and the associated anxiety that patients feel when they receive medical care in an unsafe, chaotic environment.
Some have argued that medical errors are the dark side of medical progress, an inevitable consequence of the ever-increasing complexity of modern medicine. Perhaps a few errors fit this description, but most do not. We can easily envision a system in which patients benefit from all the modern miracles available to us, and do so in reliable organizations that take advantage of all the necessary tools and systems to “get it right” the vast majority of the time. Looking back at the remarkable progress that has been made in the 17 years since the publication of To Err Is Human, we are confident that we can create such a system. Our hope is that this book makes a small contribution toward achieving that goal.