The fireworks that accompanied the publication of To Err Is Human by the Institute of Medicine in late 1999 generated some magical thinking about how easy it would be to fix the problem of medical errors. A few computer systems here, some standard processes there (double checks, read-backs), and maybe just a sprinkling of culture change—and poof, patients would be safer.
We now know how naive this point of view was. The problem of medical errors is remarkably complex, and the solutions will need to be as varied as the problems. Do we need better information technology? Yes. Improved teamwork? Yes. Stronger rules and regulations? Yes. Checklists, simulation, decision support, forcing functions? Yes, yes, yes, and yes.
Organizationally, we have come to understand that solutions must be both top down and bottom up. Resources need to be made available from senior leadership and boards—for teamwork training, computers, and appropriate staffing. Yet much of the action in patient safety happens on the front line: will a nurse simply work around a hazardous situation, leaving it unaddressed, or take the time and trouble to report it and help fix it? Will residents enthusiastically participate in teamwork training programs and morbidity and mortality (M&M) conferences? Will the senior surgeon welcome—truly welcome—input from the intern or the nurse who sees something that might put a patient at risk?
The analogies from other industries are extraordinarily helpful, but they take us only so far. Computerizing the hospital is far more challenging than computerizing the supermarket. Changing an operating room's culture is many times more complex than creating an environment in a hermetically sealed cockpit that allows two people—with similar training, expertise, and social status—to feel comfortable raising their concerns. Giving a patient a dozen medications safely is much more difficult than avoiding defects as a car slides down an assembly line. And yet there is much we can learn from all these settings, and that learning has truly begun.
And what is the proper role of patients in all of this? It is clear that patients should be involved in their care, and that patient engagement can be an important part of a comprehensive safety strategy. Moreover, being open and honest about our errors with patients and their families is undeniably right, independent of pragmatic considerations regarding whether such disclosures change the risk of a malpractice suit.
But why should a patient have to check into a hospital or enter a clinic and be worried—quite appropriately—that he or she will be injured by the medical system? We should be proud of the progress we have made in the relatively short time since To Err Is Human launched the modern patient safety movement. But we cannot rest until patients can approach the healthcare system free from fear and anxiety that they will be harmed or killed in the process of being helped. We still have much to do before we get there.