Active error (or active failure)—The terms active and latent as applied to errors were coined by James Reason. Active errors occur at the point of contact between a human and some aspect of a larger system (e.g., a human–machine interface). They are generally readily apparent (e.g., pushing an incorrect button, ignoring a warning light) and almost always involve someone at the frontline. Active failures are sometimes referred to as errors at the sharp end, figuratively referring to a scalpel. In other words, errors at the sharp end are noticed first because they are committed by the person closest to the patient. This person may literally be holding a scalpel (e.g., an orthopedist operating on the wrong leg) or figuratively be administering any kind of therapy (e.g., a nurse programming an intravenous pump) or performing any aspect of care. Latent errors (or latent conditions), in contrast, refer to less apparent failures of organization or design that contributed to the occurrence of errors or allowed them to cause harm to patients. To complete the metaphor, latent errors are those at the other end of the scalpel—the blunt end—referring to the many layers of the healthcare system that affect the person “holding” the scalpel.
Adverse drug event (ADE)—An adverse event (i.e., injury resulting from medical care) involving medication use. Examples:
- Anaphylaxis to penicillin
- Major hemorrhage from heparin
- Aminoglycoside-induced renal failure
- Agranulocytosis from chloramphenicol
As with the more general term adverse event, the occurrence of an ADE does not necessarily indicate an error or poor quality of care. ADEs that involve an element of error (of either omission or commission) are often referred to as preventable ADEs. Medication errors that reached the patient but by good fortune did not cause any harm are often called potential ADEs. For instance, a serious allergic reaction to penicillin in a patient with no prior such history is an ADE, but so is the same reaction in a patient who has a known allergy history but receives penicillin due to a prescribing oversight. The former occurrence would count as an adverse drug reaction or nonpreventable ADE, while the latter would represent a preventable ADE. If a patient with a documented serious penicillin allergy received a penicillin-like antibiotic but happened not to react to it, this event would be characterized as a potential ADE.
An ameliorable ADE is one in which the patient experienced harm from a medication that, while not completely preventable, could have been mitigated. For instance, a patient taking a cholesterol-lowering agent (statin) may develop muscle pains and eventually progress to a more serious condition called rhabdomyolysis. Failure to periodically check a blood test that assesses muscle damage or failure to recognize this possible diagnosis in a patient taking statins who subsequently develops rhabdomyolysis would make this event an ameliorable ADE: harm from medical care that could have been lessened with earlier, appropriate management. Again, the initial development of some problem was not preventable, but the eventual harm that occurred need not have been so severe, hence the term ameliorable ADE.
Adverse drug reaction—Adverse effect produced by the use of a medication in the recommended manner—i.e., a drug side effect. These effects range from nuisance effects (e.g., dry mouth with anticholinergic medications) to severe reactions, such as anaphylaxis to penicillin. Adverse drug reactions represent a subset of the broad category of adverse drug events—specifically, they are nonpreventable ADEs.
Adverse event—Any injury caused by medical care. Examples:
- Pneumothorax from central venous catheter placement
- Anaphylaxis to penicillin
- Postoperative wound infection
- Hospital-acquired delirium (or “sundowning”) in elderly patients
Identifying something as an adverse event does not imply “error,” “negligence,” or poor quality care. It simply indicates that an undesirable clinical outcome resulted from some aspect of diagnosis or therapy, not an underlying disease process. Thus, pneumothorax from central venous catheter placement counts as an adverse event regardless of insertion technique. Similarly, a postoperative wound infection counts as an adverse event even if the operation proceeded with optimal adherence to sterile procedures, the patient received appropriate antibiotic prophylaxis in the perioperative setting, and so on. (See also “iatrogenic”).
Adverse events after hospital discharge—Being discharged from the hospital can be dangerous for patients. Nearly 20% of patients experience an adverse event in the first 3 weeks after discharge, including medication errors, healthcare-associated infections, and procedural complications.
Alert fatigue—Computerized warnings and alarms are used to improve safety by alerting clinicians of potentially unsafe situations. However, this proliferation of alerts may have negative implications for patient safety as well.
Anchoring error (or bias)—Refers to the common cognitive trap of allowing first impressions to exert undue influence on the diagnostic process. Clinicians often latch on to features of a patient's presentation that suggest a specific diagnosis. Often, this initial diagnostic impression will prove correct, hence the use of the phrase anchoring heuristic in some contexts, as it can be a useful rule of thumb to “always trust your first impressions.” However, in some cases, subsequent developments in the patient's course will prove inconsistent with the first impression. Anchoring bias refers to the tendency to hold on to the initial diagnosis, even in the face of disconfirming evidence.
APACHE—The Acute Physiology and Chronic Health Evaluation (APACHE) scoring system has been widely used in the United States. APACHE II is the most widely studied version of this instrument (a more recent version, APACHE III, is proprietary, whereas APACHE II is publicly available); it derives a severity score from such factors as underlying disease and chronic health status. Other points are added for 12 physiologic variables (e.g., hematocrit, creatinine, Glasgow Coma Score, mean arterial pressure) measured within 24 hours of admission to the ICU. The APACHE II score has been validated in several studies involving tens of thousands of ICU patients.
Authority gradient—Refers to the balance of decision-making power or the steepness of command hierarchy in a given situation. Members of a crew or organization with a domineering, overbearing, or dictatorial team leader experience a steep authority gradient. Expressing concerns, questioning, or even simply clarifying instructions would require considerable determination on the part of team members who perceive their input as devalued or frankly unwelcome. Most teams require some degree of authority gradient; otherwise roles are blurred and decisions cannot be made in a timely fashion. However, effective team leaders consciously establish a command hierarchy appropriate to the training and experience of team members.
Authority gradients may occur even when the notion of a team is less well defined. For instance, a pharmacist calling a physician to clarify an order may encounter a steep authority gradient, based on the tone of the physician's voice or a lack of openness to input from the pharmacist. A confident, experienced pharmacist may nonetheless continue to raise legitimate concerns about an order, but other pharmacists might not.
Availability bias (or heuristic)—Refers to the tendency to assume, when judging probabilities or predicting outcomes, that the first possibility that comes to mind (i.e., the most cognitively “available” possibility) is also the most likely possibility. For instance, suppose a patient presents with intermittent episodes of very high blood pressure. Because episodic hypertension resembles textbook descriptions of pheochromocytoma, a memorable but uncommon endocrinologic tumor, this diagnosis may immediately come to mind. A clinician who infers from this immediate association that pheochromocytoma is the most likely diagnosis would be exhibiting availability bias. In addition to resemblance to classic descriptions of disease, personal experience can also trigger availability bias, as when the diagnosis underlying a recent patient's presentation immediately comes to mind when any subsequent patient presents with similar symptoms. Particularly memorable cases may similarly exert undue influence in shaping diagnostic impressions.
Bayesian approach—Probabilistic reasoning in which test results (not just laboratory investigations but also history, physical exam, or any aspect of the diagnostic process) are combined with prior beliefs about the probability of a particular disease. One way of recognizing the need for a Bayesian approach is to recognize the difference between the performance of a test in a population and that in an individual. At the population level, we can say that a test has a sensitivity and specificity of, say, 90%—that is, 90% of patients with the condition of interest have a positive result and 90% of patients without the condition have a negative result. In practice, however, a clinician needs to attempt to predict whether an individual patient with a positive or negative result does or does not have the condition of interest. This prediction requires combining the observed test result not just with the known sensitivity and specificity but also with the chance the patient could have had the disease in the first place (based on demographic factors, findings on exam, or general clinical gestalt).
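The reasoning above can be made concrete with a small numerical sketch. The function below applies Bayes' theorem to combine a pretest (prior) probability with a test's sensitivity and specificity; the 10% prior and the 90%/90% test characteristics are illustrative numbers, not properties of any real test.

```python
def posterior_probability(prior, sensitivity, specificity, test_positive=True):
    """Combine a pretest probability with a test result via Bayes' theorem."""
    if test_positive:
        true_pos = sensitivity * prior                  # P(+ result | disease) * P(disease)
        false_pos = (1 - specificity) * (1 - prior)     # P(+ result | no disease) * P(no disease)
        return true_pos / (true_pos + false_pos)
    else:
        false_neg = (1 - sensitivity) * prior           # P(- result | disease) * P(disease)
        true_neg = specificity * (1 - prior)            # P(- result | no disease) * P(no disease)
        return false_neg / (false_neg + true_neg)

# A test with 90% sensitivity and 90% specificity in a patient with only a
# 10% pretest probability yields a post-test probability of just 50%:
print(round(posterior_probability(0.10, 0.90, 0.90), 2))  # 0.5
```

This illustrates why the population-level figure of "90% accurate" cannot be transferred directly to an individual patient: with a low enough prior, half of all positive results are false positives.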
Beers criteria—Beers criteria define medications that generally should be avoided in ambulatory elderly patients, doses or frequencies of administration that should not be exceeded, and medications that should be avoided in older persons known to have any of several common conditions. The criteria were originally developed using a formal consensus process for combining reviews of the evidence with expert input. The criteria for inappropriate use address commonly used categories of medications such as sedative-hypnotics, antidepressants, antipsychotics, antihypertensives, nonsteroidal anti-inflammatory agents, oral hypoglycemics, analgesics, dementia treatments, platelet inhibitors, histamine-2 blockers, antibiotics, decongestants, iron supplements, muscle relaxants, gastrointestinal antispasmodics, and antiemetics. The criteria were intended to guide clinical practice, but also to inform quality assurance review and health services research.
Most would agree that prescriptions for medications deemed inappropriate according to Beers criteria represent poor quality care. Unfortunately, harm does not only occur from receipt of these inappropriately prescribed medications. In one comprehensive national study of medication-related emergency department visits for elderly patients, most problems involved common and important medications not considered inappropriate according to the Beers criteria—principally, oral anticoagulants (e.g., warfarin), antidiabetic agents (e.g., insulin), and antiplatelet agents (aspirin and clopidogrel).
Benchmark—A benchmark in healthcare refers to an attribute or achievement that serves as a standard for other providers or institutions to emulate. Benchmarks differ from other standard of care goals, in that they derive from empiric data—specifically, performance or outcomes data. For example, a statewide survey might produce risk-adjusted 30-day rates for death or other major adverse outcomes. After adjusting for relevant clinical factors, the top 10% of hospitals can be identified in terms of particular outcome measures. These institutions would then provide benchmark data on these outcomes. For instance, one might benchmark “door-to-balloon” time at 90 minutes, based on the observation that the top-performing hospitals all had door-to-balloon times in this range. In regard to infection control, benchmarks would typically be derived from national or regional data on the rates of relevant nosocomial infections. The lowest 10% of these rates might be regarded as benchmarks for other institutions to emulate.
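As a toy illustration of deriving a benchmark from empiric performance data, the snippet below takes the rate achieved by the best-performing decile of institutions from a set of hypothetical risk-adjusted infection rates; all numbers are invented for the example.

```python
# Hypothetical risk-adjusted nosocomial infection rates per 1,000 device-days
# for ten institutions (made-up data; lower is better).
rates = [4.2, 1.1, 3.5, 0.8, 2.9, 5.0, 1.7, 2.2, 3.9, 0.9]

rates_sorted = sorted(rates)
cutoff_index = max(1, len(rates_sorted) // 10)  # best-performing 10%
benchmark = rates_sorted[cutoff_index - 1]      # rate the top decile achieves
print(benchmark)  # 0.8
```

The benchmark here is simply the observed rate of the top-decile performer, which other institutions would then aim to emulate.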
Black Box Warnings—The prominent warning labels (generally printed inside black boxes) on packages for certain prescription medications in the United States. These warnings typically arise from post-market surveillance or post-approval clinical trials that bring to light serious adverse reactions. The U.S. Food and Drug Administration (FDA) subsequently may require a pharmaceutical company to place a black box warning on the labeling or packaging of the drug. Although medications with black box warnings often enjoy widespread use and, with cautious use, typically do not result in harm, these warnings remain important sources of safety information for patients and healthcare providers. They also emphasize the importance of continued, post-market surveillance for adverse drug reactions for all medications, especially relatively new ones.
Blunt end—The blunt end refers to the many layers of the healthcare system not in direct contact with patients, but which influence the personnel and equipment at the sharp end who do contact patients. The blunt end thus consists of those who set policy, manage healthcare institutions, and design medical devices, and other people and forces, which, though removed in time and space from direct patient care, nonetheless affect how care is delivered. Thus, an error programming an intravenous pump would represent a problem at the sharp end, while the institution's decision to use multiple different types of infusion pumps, making programming errors more likely, would represent a problem at the blunt end. The terminology of “sharp” and “blunt” ends corresponds roughly to active failures and latent conditions.
Checklist—Algorithmic listing of actions to be performed in a given clinical setting (e.g., advanced cardiac life support [ACLS] protocols for treating cardiac arrest) to ensure that, no matter how often performed by a given practitioner, no step will be forgotten. An analogy is often made to flight preparation in aviation, as pilots and air traffic controllers follow pretakeoff checklists regardless of how many times they have carried out the tasks involved.
Clinical decision support system (CDSS)—Any system designed to improve clinical decision making related to diagnostic or therapeutic processes of care. Typically a decision support system responds to “triggers” or “flags”—specific diagnoses, laboratory results, medication choices, or complex combinations of such parameters—and provides information or recommendations directly relevant to a specific patient encounter.
CDSSs address activities ranging from the selection of drugs (e.g., the optimal antibiotic choice given specific microbiologic data) or diagnostic tests to detailed support for optimal drug dosing and support for resolving diagnostic dilemmas. Structured antibiotic order forms represent a common example of paper-based CDSSs. Although such systems are still commonly encountered, many people equate CDSSs with computerized systems in which software algorithms generate patient-specific recommendations by matching characteristics, such as age, renal function, or allergy history, with rules in a computerized knowledge base.
The distinction between decision support and simple reminders can be unclear, but usually reminder systems are included as decision support if they involve patient-specific information. For instance, a generic reminder (e.g., “Did you obtain an allergy history?”) would not be considered decision support, but a warning (e.g., “This patient is allergic to codeine.”) that appears at the time of entering an order for codeine would be.
Close call—An event or situation that did not produce patient injury, but only because of chance. This good fortune might reflect robustness of the patient (e.g., a patient with penicillin allergy receives penicillin, but has no reaction) or a fortuitous, timely intervention (e.g., a nurse happens to realize that a physician wrote an order in the wrong chart). Such events have also been termed near miss incidents.
Competency—Having the necessary knowledge or technical skill to perform a given procedure within the bounds of success and failure rates deemed compatible with acceptable care. The medical education literature often refers to core competencies, which include not just technical skills with respect to procedures or medical knowledge but also competencies with respect to communicating with patients, collaborating with other members of the healthcare team, and acting as a manager or agent for change in the health system.
Complexity science (or complexity theory)—Provides an approach to understanding the behavior of systems that exhibit nonlinear dynamics, or the ways in which some adaptive systems produce novel behavior not expected from the properties of their individual components. Such behaviors emerge as a result of interactions between agents at a local level in the complex system and between the system and its environment.
Complexity theory differs importantly from systems thinking in its emphasis on the interaction between local systems and their environment (such as the larger system in which a given hospital or clinic operates). It is often tempting to ignore the larger environment as unchangeable and therefore outside the scope of quality improvement or patient safety activities. According to complexity theory, however, behavior within a hospital or clinic (e.g., noncompliance with a national practice guideline) can often be understood only by identifying interactions between local attributes and environmental factors.
Computerized provider order entry or computerized physician order entry (CPOE)—Refers to a computer-based system of ordering medications and often other tests. Physicians (or other providers) directly enter orders into a computer system that can have varying levels of sophistication. Basic CPOE ensures standardized, legible, complete orders, and thus primarily reduces errors caused by poor handwriting and ambiguous abbreviations.
Almost all CPOE systems offer some additional capabilities, which fall under the general rubric of CDSS. Typical CDSS features involve suggested default values for drug doses, routes of administration, or frequency. More sophisticated CDSSs can perform drug allergy checks (e.g., the user orders ceftriaxone and a warning flashes that the patient has a documented penicillin allergy), drug-laboratory value checks (e.g., initiating an order for gentamicin prompts the system to alert you to the patient's last creatinine), drug–drug interaction checks, and so on. At the highest level of sophistication, CDSS prevents not only errors of commission (e.g., ordering a drug in excessive doses or in the setting of a serious allergy) but also errors of omission. For example, an alert may appear such as, “You have ordered heparin; would you like to order a partial thromboplastin time (PTT) in 6 hours?” Or, even more sophisticated: “The admitting diagnosis is hip fracture; would you like to order heparin for deep vein thrombosis (DVT) prophylaxis?” See also “Clinical decision support system.”
Confirmation bias—Refers to the tendency to focus on evidence that supports a working hypothesis, such as a diagnosis in clinical medicine, rather than to look for evidence that refutes it or provides greater support to an alternative diagnosis. Suppose that a 65-year-old man with a past history of angina presents to the emergency department with acute onset of shortness of breath. The physician immediately considers the possibility of cardiac ischemia, so asks the patient if he has experienced any chest pain. The patient replies affirmatively. Because the physician perceives this answer as confirming his working diagnosis, he does not ask if the chest pain was pleuritic in nature, which would decrease the likelihood of an acute coronary syndrome and increase the likelihood of pulmonary embolism (a reasonable alternative diagnosis for acute shortness of breath accompanied by chest pain). The physician then orders an ECG and cardiac troponin. The ECG shows nonspecific ST changes and the troponin returns slightly elevated.
Of course, ordering an ECG and testing cardiac enzymes is appropriate in the work-up of acute shortness of breath, especially when it is accompanied by chest pain and in a patient with known angina. The problem is that these tests may be misleading, since positive results are consistent not only with acute coronary syndrome but also with pulmonary embolism. To avoid confirmation bias in this case, the physician might have obtained an arterial blood gas or a D-dimer level. Abnormal results for either of these tests would be relatively unlikely to occur in a patient with an acute coronary syndrome (unless complicated by pulmonary edema), but likely to occur with pulmonary embolism. These results could be followed up by more direct testing for pulmonary embolism (e.g., with a helical CT scan of the chest), while normal results would allow the clinician to proceed with greater confidence down the road of investigating and managing cardiac ischemia.
This vignette was presented as if information were sought in sequence. In many cases, especially in acute care medicine, clinicians have the results of numerous tests in hand when they first meet a patient. The results of these tests often do not all suggest the same diagnosis. The appeal of accentuating confirmatory test results and ignoring nonconfirmatory ones is that it minimizes cognitive dissonance.
A related cognitive trap that may accompany confirmation bias and compound the possibility of error is “anchoring bias”—the tendency to stick with one's first impressions, even in the face of significant disconfirming evidence.
Crew resource management (CRM)—Also called crisis resource management in some contexts (e.g., anesthesia), encompasses a range of approaches to training groups to function as teams, rather than as collections of individuals. Originally developed in aviation, CRM emphasizes the role of human factors—the effects of fatigue, expected or predictable perceptual errors (such as misreading monitors or mishearing instructions), as well as the impact of different management styles and organizational cultures in high-stress, high-risk environments. CRM training develops communication skills, fosters a more cohesive environment among team members, and creates an atmosphere in which junior personnel will feel free to speak up when they think that something is amiss. Some CRM programs emphasize education on the settings in which errors occur and the aspects of team decision making conducive to “trapping” errors before they cause harm. Other programs may provide more hands-on training involving simulated crisis scenarios followed by debriefing sessions in which participants assess their own and others’ behavior.
Critical incidents—A term made famous by a classic human factors study by Jeffrey Cooper of “anesthetic mishaps,” though the term had first been coined in the 1950s. Cooper and colleagues brought the technique of critical incident analysis to a wide audience in healthcare but followed the definition of the originator of the technique. They defined critical incidents as occurrences that are “significant or pivotal, in either a desirable or an undesirable way,” though Cooper and colleagues (and most others since) chose to focus on incidents that had potentially undesirable consequences. This concept is best understood in the context of the type of investigation that follows, which is very much in the style of root cause analysis. Thus, significant or pivotal means that there was significant potential for harm (or actual harm), but also that the event has the potential to reveal important hazards in the organization. In many ways, it embodies the expression in quality improvement circles that “every defect is a treasure.” In other words, these incidents, whether near misses or disasters in which significant harm occurred, provide valuable opportunities to learn about individual and organizational factors that can be remedied to prevent similar incidents in the future.
Decision support—Refers to any system for advising or providing guidance about a particular clinical decision at the point of care. For example, a copy of an algorithm for antibiotic selection in patients with community-acquired pneumonia would count as clinical decision support if made available at the point of care. Increasingly, decision support occurs via a computerized clinical information or order entry system. Computerized decision support includes any software employing a knowledge base designed to assist clinicians in decision making at the point of care.
Typically a decision support system responds to “triggers” or “flags”—specific diagnoses, laboratory results, medication choices, or complex combinations of such parameters—and provides information or recommendations directly relevant to a specific patient encounter. For instance, ordering an aminoglycoside for a patient with creatinine above a certain value might trigger a message suggesting a dose adjustment based on the patient's decreased renal function.
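A trigger of this kind can be sketched as a simple rule check against patient data. The drug list, creatinine cutoff, and alert text below are hypothetical placeholders standing in for the knowledge base of a real system.

```python
# Illustrative decision support rule; drug names, the threshold, and the
# message are examples only, not clinical guidance from any real CDSS.
AMINOGLYCOSIDES = {"gentamicin", "tobramycin", "amikacin"}
CREATININE_THRESHOLD = 1.5  # mg/dL; example cutoff only

def check_order(drug, latest_creatinine):
    """Return an alert message if the order trips the renal-dosing rule, else None."""
    if drug.lower() in AMINOGLYCOSIDES and latest_creatinine > CREATININE_THRESHOLD:
        return (f"Alert: {drug} ordered with creatinine "
                f"{latest_creatinine} mg/dL; consider dose adjustment.")
    return None

print(check_order("gentamicin", 2.1))  # alert fires: elevated creatinine
print(check_order("gentamicin", 0.9))  # None: normal renal function, no alert
```

Note that the rule is patient-specific (it combines the ordered drug with this patient's laboratory value), which is what distinguishes decision support from a generic reminder.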
Diagnostic errors—Thousands of patients die every year due to diagnostic errors. While clinicians’ cognitive biases play a role in many diagnostic errors, underlying healthcare system problems also contribute to missed and delayed diagnoses.
Disclosure, error disclosure—Many victims of medical errors never learn of the mistake, because the error is simply not disclosed. Physicians have traditionally shied away from discussing errors with patients, due to fear of precipitating a malpractice lawsuit and embarrassment and discomfort with the disclosure process.
Disruptive and unprofessional behavior—Popular media often depicts physicians as brilliant, intimidating, and condescending in equal measure. This stereotype, though undoubtedly dramatic and even amusing, obscures the fact that disruptive and unprofessional behavior by clinicians poses a definite threat to patient safety.
Duty hours—Long and unpredictable work hours have been a staple of medical training for centuries. In 2003, the Accreditation Council for Graduate Medical Education (ACGME) implemented new rules limiting duty hours for all residents to reduce fatigue. The implementation of resident duty-hour restrictions has been controversial, as evidence regarding its impact on patient safety has been mixed.
Error—An act of commission (doing something wrong) or omission (failing to do the right thing) that leads to an undesirable outcome or significant potential for such an outcome. For instance, ordering a medication for a patient with a documented allergy to that medication would be an act of commission. Failing to prescribe a proven medication with major benefits for an eligible patient (e.g., low-dose unfractionated heparin as venous thromboembolism prophylaxis for a patient after hip replacement surgery) would represent an error of omission.
Errors of omission are more difficult to recognize than errors of commission but likely represent a larger problem. In other words, there are likely many more instances in which the provision of additional diagnostic, therapeutic, or preventive modalities would have improved care than there are instances in which the care provided quite literally should not have been provided. In many ways, this point echoes the generally agreed-upon view in the healthcare quality literature that underuse far exceeds overuse, even though the latter historically received greater attention. (See definition for “Underuse, overuse, and misuse”). In addition to commission versus omission, three other dichotomies commonly appear in the literature on errors: active failures versus latent conditions, errors at the sharp end versus errors at the blunt end, and slips versus mistakes.
Error chain—Error chain generally refers to the series of events that led to a disastrous outcome, typically uncovered by a root cause analysis. Sometimes the chain metaphor carries the added sense of inexorability, as many of the causes are tightly coupled, such that one problem begets the next. A more specific meaning of error chain, especially when used in the phrase “break the error chain,” relates to the common themes or categories of causes that emerge from root cause analyses. These categories go by different names in different settings, but they generally include (1) failure to follow standard operating procedures, (2) poor leadership, (3) breakdowns in communication or teamwork, (4) overlooking or ignoring individual fallibility, and (5) losing track of objectives. Used in this way, “break the error chain” is shorthand for an approach in which team members continually address these links as a crisis or routine situation unfolds. The checklists that are included in teamwork training programs have categories corresponding to these common links in the error chain (e.g., establish a team leader, assign roles and responsibilities, and monitor your teammates).
Evidence-based—Use of the phrase “evidence-based” in connection with an assertion about some aspect of medical care—a recommended treatment, the cause of some condition, or the best way to diagnose it—implies that the assertion reflects the results of medical research, as opposed to, for example, a personal opinion (plausible or widespread as that opinion might be). Given the volume of medical research and the not-infrequent occurrence of conflicting results from different studies addressing the same question, the phrase “reflects the results of medical research” should be clarified as “reflects the preponderance of results from relevant studies of good methodological quality.”
The concept of evidence-based treatments has particular relevance to patient safety, because many recommended methods for measuring and improving safety problems have been drawn from other high-risk industries, without any studies to confirm that these strategies work well in healthcare (or, in many cases, that they work well in the original industry). The lack of evidence supporting widely recommended (sometimes even mandated) patient safety practices contrasts sharply with the rest of clinical medicine. While individual practitioners may employ diagnostic tests or administer treatments of unproven value, professional organizations typically do not endorse such aspects of care until well-designed studies demonstrate that these diagnostic or treatment strategies confer net benefit to patients (i.e., until they become evidence-based). Certainly, diagnostic and therapeutic processes do not become standard of care or in any way mandated until they have undergone rigorous evaluation in well-designed studies.
In patient safety, by contrast, patient safety goals established at state and national levels (sometimes even mandated by regulatory agencies or by law) often reflect ideas that have undergone little or no empiric evaluation. Just as in clinical medicine, promising safety strategies sometimes can turn out to confer no benefit or even create new problems—hence the need for rigorous evaluations of candidate patient safety strategies just as in other areas of medicine. That said, just how high to set the bar for the evidence required to justify actively disseminating patient safety and quality improvement strategies is a subject that has received considerable attention in recent years. Some leading thinkers in patient safety argue that an evidence bar comparable to that used in more traditional clinical medicine would be too high, given the difficulty of studying complex social systems such as hospitals and clinics, and the high costs of studying interventions such as rapid response teams or computerized order entry.
Face validity—The extent to which a technical concept, instrument, or study result is plausible, usually because its findings are consistent with prior assumptions and expectations.
Failure mode—Error analysis may involve retrospective investigations (as in Root Cause Analysis) or prospective attempts to predict “error modes.” Different frameworks exist for predicting possible errors. One commonly used approach is failure mode and effect analysis (FMEA), in which the likelihood of a particular process failure is combined with an estimate of the relative impact of that error to produce a “criticality index.” By combining the probability of failure with the consequences of failure, this index allows for the prioritization of specific processes as quality improvement targets. For instance, an FMEA analysis of the medication dispensing process on a general hospital ward might break down all steps from receipt of orders in the central pharmacy to filling automated dispensing machines by pharmacy technicians. Each step in this process would be assigned a probability of failure and an impact score, so that all steps could be ranked according to the product of these two numbers. Steps ranked at the top (i.e., those with the highest “criticality indices”) would be prioritized for error proofing.
Failure mode and effects analysis (FMEA)—A common process used to prospectively identify error risk within a particular process. FMEA begins with a complete process mapping that identifies all the steps that make up a given process (e.g., programming an infusion pump or preparing an intravenous medication in the pharmacy). With the process mapped out, the FMEA then continues by identifying the ways in which each step can go wrong (i.e., the “failure modes” for each step), the probability that each error will be detected (i.e., so that it can be corrected before causing harm), and the consequences or impact of the error not being detected. The estimates of the likelihood of a particular process failure, the chance of detecting such failure, and its impact are combined numerically to produce a criticality index.
This criticality index provides a rough quantitative estimate of the magnitude of hazard posed by each step in a high-risk process. Assigning a criticality index to each step allows prioritization of targets for improvement. For instance, an FMEA analysis of the medication-dispensing process on a general hospital ward might break down all steps from receipt of orders in the central pharmacy to filling automated dispensing machines by pharmacy technicians. Each step in this process would be assigned a probability of failure and an impact score, so that all steps could be ranked according to the product of these two numbers. Steps ranked at the top (i.e., those with the highest criticality indices) would be prioritized for error proofing.
FMEA makes sense as a general approach and it (or similar prospective error-proofing techniques) has been used in other high-risk industries. However, the reliability of the technique is not clear. Different teams charged with analyzing the same process may identify different steps in the process, assign different risks to the steps, and consequently prioritize different targets for improvement.
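The scoring scheme described above can be sketched in a few lines of code. The step names and the 1–10 scores below are hypothetical, and the convention of multiplying the likelihood, escape-from-detection, and severity scores (sometimes called a risk priority number) is one common choice among several:

```python
from dataclasses import dataclass

@dataclass
class Step:
    """One step in a mapped process, with hypothetical 1-10 FMEA scores:
    higher means more likely to fail, harder to detect, or more severe."""
    name: str
    p_failure: int   # likelihood the step fails
    p_escape: int    # likelihood a failure goes undetected
    severity: int    # impact if the failure reaches the patient

    @property
    def criticality(self) -> int:
        # One common convention multiplies the three scores.
        return self.p_failure * self.p_escape * self.severity

# Hypothetical steps from a medication-dispensing process map.
steps = [
    Step("Order received in central pharmacy", 2, 3, 6),
    Step("Technician fills dispensing machine", 4, 5, 8),
    Step("Nurse retrieves medication from machine", 3, 4, 7),
]

# Rank steps so those with the highest criticality indices
# are prioritized for error proofing.
for step in sorted(steps, key=lambda s: s.criticality, reverse=True):
    print(f"{step.criticality:4d}  {step.name}")
```

Because different teams may assign different scores, the ranking this produces is only as reliable as the estimates fed into it, which is the concern raised above.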
Failure to rescue—Failure to rescue is shorthand for failure to rescue a patient (i.e., prevent a clinically important deterioration, such as death or permanent disability) from a complication of an underlying illness (e.g., cardiac arrest in a patient with acute myocardial infarction) or a complication of medical care (e.g., major hemorrhage after thrombolysis for acute myocardial infarction). Failure to rescue thus provides a measure of the degree to which providers responded to adverse occurrences (e.g., hospital-acquired infections, cardiac arrest or shock) that developed on their watch. It may reflect the quality of monitoring, the effectiveness of actions taken once early complications are recognized, or both.
The technical motivation for using failure to rescue to evaluate the quality of care stems from the concern that some institutions might document adverse occurrences more assiduously than others. Rewarding lower rates of in-hospital complications by themselves may therefore simply reward hospitals with poor documentation. However, if the medical record indicates that a complication has occurred, the response to that complication should provide an indicator of the quality of care that is less susceptible to charting bias.
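As a measure, failure to rescue reduces to a simple proportion: deaths among patients who developed a complication, divided by all patients who developed one. A minimal sketch, using made-up counts:

```python
def failure_to_rescue_rate(deaths_after_complication: int,
                           patients_with_complication: int) -> float:
    """Deaths among patients whose records document a complication,
    divided by all patients who developed a complication."""
    if patients_with_complication == 0:
        raise ValueError("no patients with complications recorded")
    return deaths_after_complication / patients_with_complication

# Hypothetical numbers: 12 deaths among 150 patients who developed
# a hospital-acquired complication.
rate = failure_to_rescue_rate(12, 150)
print(f"{rate:.1%}")  # 8.0%
```

Note that the denominator counts only patients with a documented complication, which is what makes the measure less sensitive to how assiduously complications themselves are charted.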
Forcing function—An aspect of a design that prevents a target action from being performed or allows its performance only if another specific action is performed first. For example, automobiles are now designed so that the driver cannot shift into reverse without first putting his or her foot on the brake pedal. Forcing functions need not involve device design. For instance, one of the first forcing functions identified in healthcare was the removal of concentrated potassium from general hospital wards. This action is intended to prevent the inadvertent preparation of intravenous solutions with concentrated potassium, an error that has produced small but consistent numbers of deaths for many years.
“Five Rights”—The “Five Rights”—administering the Right Medication, in the Right Dose, at the Right Time, by the Right Route, to the Right Patient—are the cornerstone of traditional nursing teaching about safe medication practice.
While the Five Rights represent goals of safe medication administration, they contain no procedural detail, and thus may inadvertently perpetuate the traditional focus on individual performance rather than system improvement. Procedures for ensuring each of the Five Rights must take into account human factor and systems design issues (such as workload, ambient distractions, poor lighting, problems with wristbands, ineffective double-check protocols, etc.) that can threaten or undermine even the most conscientious efforts to comply with the Five Rights. In the end, the Five Rights remain an important goal for safe medication practice, but one that may give the illusion of safety if not supported by strong policies and procedures, a system organized around modern principles of patient safety, and a robust safety culture.
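The checks implied by the Five Rights can be made procedural, as in barcode medication administration systems. The sketch below is a minimal illustration with hypothetical order fields and an assumed 30-minute administration window; a real system would draw on scanned wristbands, an order database, and a formulary:

```python
from datetime import datetime, timedelta

# Hypothetical order record; field names are illustrative only.
order = {"patient_id": "MRN123", "drug": "heparin", "dose": "5000 units",
         "route": "subcutaneous", "due": datetime(2024, 5, 1, 9, 0)}

def five_rights_check(order, scanned_patient, drug, dose, route, now,
                      window=timedelta(minutes=30)):
    """Return the list of violated 'rights'; an empty list means all five pass."""
    problems = []
    if scanned_patient != order["patient_id"]:
        problems.append("right patient")
    if drug != order["drug"]:
        problems.append("right medication")
    if dose != order["dose"]:
        problems.append("right dose")
    if route != order["route"]:
        problems.append("right route")
    if abs(now - order["due"]) > window:
        problems.append("right time")
    return problems
```

The design point is that any non-empty result blocks administration and forces the discrepancy to be resolved, so safety does not rest on individual vigilance alone.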
Handoffs and handovers—Refer to the process by which one healthcare professional updates another on the status of one or more patients for the purpose of taking over their care. Typical examples involve a physician who has been on call overnight telling an incoming physician about patients she has admitted so he can continue with their ongoing management, know what immediate issues to watch out for, and so on. Nurses similarly conduct a handover at the end of their shift, updating their colleagues about the status of the patients under their care and tasks that need to be performed. When the outgoing nurses return for their next duty period, they will in turn receive new updates during the change of shift handover.
Handovers in care have always carried risks: a professional who spent hours assessing and managing a patient, on completion of her work, provides a brief summary of the salient features of the case to an incoming professional who typically has other unfamiliar patients he must get to know. The summary may leave out key details due to oversight, exacerbated by an unstructured process and being rushed to finish work. Even structured, fairly thorough summaries during handovers may fail to capture nuances that could subsequently prove relevant.
Despite these long-recognized problems, handovers received relatively little attention until recent years, when they became more frequent. For instance, with reductions in duty hours for physician trainees, more handovers must occur in any given 24-hour period. And, with shorter lengths of stay in hospitals and other occupancy issues, patients more often move from one ward to another or from one institution to another (e.g., from an acute care hospital to a rehabilitation facility or skilled nursing facility).
Due to the increasing recognition of hazards associated with these transitions in care, the term “handovers” is often used to refer to the information transfer that occurs from one clinical setting to another (e.g., from hospital to nursing home), not just from one professional to another.
Healthcare-associated infections—Although clinicians long accepted these infections as an inevitable hazard of hospitalization, recent efforts demonstrate that relatively simple measures can prevent the majority of healthcare-associated infections. As a result, hospitals are under intense pressure to reduce the burden of these infections.
Health literacy—Individuals’ ability to find, process, and comprehend the basic health information necessary to act on medical instructions and make decisions about their health. Numerous studies have documented the degree to which patients frequently do not understand basic information or instructions related to general aspects of their medical care, their medications, and procedures they will undergo. The limited ability to comprehend medical instructions or information in some cases reflects obvious language barriers (e.g., reviewing medication instructions in English with a patient who speaks very little English), but the scope of the problem reflects broader issues related to levels of education, cross-cultural differences, and overuse of technical terminology by clinicians.
Heuristic—Loosely defined or informal rules often arrived at through experience or trial and error that influence assessments and decisions (e.g., gastrointestinal complaints that wake patients up at night are unlikely to be benign in nature). Heuristics provide cognitive shortcuts in the face of complex situations, and thus serve an important purpose. Unfortunately, they can also turn out to be wrong, with frequently used heuristics often forming the basis for the many cognitive biases, such as anchoring bias, availability bias, confirmation bias, and others, that have received attention in the literature on diagnostic errors and medical decision making.
The Health Insurance Portability and Accountability Act (HIPAA)—HIPAA, passed by the U.S. Congress in 1996, was intended to increase privacy and security of patient information during electronic transmission or communication of “protected health information” (PHI) among providers or between providers and payers or other entities.
PHI includes all medical records and other individually identifiable health information. “Individually identifiable information” includes data explicitly linked to a patient, as well as health information with data items that carry a reasonable potential for allowing individual identification.
HIPAA also requires providers to offer patients certain rights with respect to their information, including the right to access and copy their records and the right to request amendments to the information contained in their records.
Administrative protections specified by HIPAA to promote the above regulations and rights include requirements for a Privacy Officer and staff training regarding the protection of patients’ information.
High reliability organizations (HROs)—HROs are organizations or systems that operate in hazardous conditions but have fewer than their fair share of adverse events. Commonly discussed examples include air traffic control systems, nuclear power plants, and naval aircraft carriers. It is worth noting that, in the safety literature, organizations labeled as HROs are ones that operate with nearly failure-free performance records, not simply better than average ones. This shift in meaning is understandable given that the failure rates in these other industries are much lower than rates of errors and adverse events in healthcare. This comparison glosses over the difference in significance of a “failure” in the nuclear power industry compared with one in healthcare. The point remains, however, that some organizations achieve consistently safe and effective performance records despite unpredictable operating environments or intrinsically hazardous endeavors. Detailed case studies of specific HROs have identified some common features, which have been offered as models for other organizations to achieve substantial improvements in their safety records. These features include:
Preoccupation with failure—the acknowledgment of the high-risk, error-prone nature of an organization's activities and the determination to achieve consistently safe operations.
Commitment to resilience—the development of capacities to detect unexpected threats and contain them before they cause harm, or bounce back when they do.
Sensitivity to operations—an attentiveness to the issues facing workers at the frontline. This feature comes into play when conducting analyses of specific events (e.g., frontline workers play a crucial role in root cause analysis by bringing up unrecognized latent threats in current operating procedures), but also in connection with organizational decision making.
Decentralized decision making—management units at the frontline are given some autonomy in identifying and responding to threats, rather than adopting a rigid top-down approach.
A culture of safety, in which individuals feel comfortable drawing attention to potential hazards or actual failures without fear of censure from management.
Hindsight bias—In a very general sense, hindsight bias relates to the common expression, “hindsight is 20/20.” This expression captures the tendency for people to regard past events as expected or obvious, even when, in real time, the events perplexed those involved. More formally, one might say that after learning the outcome of a series of events—whether the outcome of the World Series or the steps leading to a war—people tend to exaggerate the extent to which they had foreseen the likelihood of its occurrence.
In the context of safety analysis, hindsight bias refers to the tendency to judge the events leading up to an accident as errors because the bad outcome is known. The more severe the outcome, the more likely that decisions leading up to this outcome will be judged as errors. Judging the antecedent decisions as errors implies that the outcome was preventable. In legal circles, one might use the phrase “but for,” as in “but for these errors in judgment, this terrible outcome would not have occurred.” Such judgments return us to the concept of “hindsight is 20/20.” Those reviewing events after the fact see the outcome as more foreseeable and therefore more preventable than they would have appreciated in real time.
Human factors (or human factors engineering)—Refers to the study of human abilities and characteristics as they affect the design and smooth operation of equipment, systems, and jobs. The field concerns itself with the strengths and weaknesses of human physical and mental abilities and how these affect the design of systems. Human factors analysis does not necessarily involve designing new devices or redesigning existing ones. For instance, the now generally accepted recommendation that hospitals standardize equipment such as ventilators, programmable IV pumps, and defibrillators (i.e., each hospital picks a single type, so that different floors do not have different defibrillators) is an example of a very basic application of a heuristic from human factors that equipment be standardized within a system wherever possible.
Iatrogenic—An adverse effect of medical care, rather than of the underlying disease (literally “brought forth by a healer,” from Greek iatros, for healer, and gennan, to bring forth); equivalent to adverse event.
Incident reporting—Refers to the identification of occurrences that could have led, or did lead, to an undesirable outcome. Reports usually come from personnel directly involved in the incident or events leading up to it (e.g., the nurse, pharmacist, or physician caring for a patient when a medication error occurred) rather than, say, floor managers. From the perspective of those collecting the data, incident reporting counts as a passive form of surveillance, relying on those involved in target incidents to provide the desired information. Compared with medical record review and direct observation (active methods), incident reporting captures only a fraction of incidents, but has the advantages of relatively low cost and the involvement of frontline personnel in the process of identifying important problems for the organization.
Informed consent—Refers to the process whereby a physician informs a patient about the risks and benefits of a proposed therapy or test. Informed consent aims to provide sufficient information about the proposed course and any reasonable alternatives that the patient can exercise autonomy in deciding whether to proceed.
Legislation governing the requirements of, and conditions under which, consent must be obtained varies by jurisdiction. Most general guidelines require patients to be informed of the nature of their condition, the proposed procedure, the purpose of the procedure, the risks and benefits of the proposed treatments, the probability of the anticipated risks and benefits, alternatives to the treatment and their associated risks and benefits, and the risks and benefits of not receiving the treatment or procedure.
Although the goals of informed consent are irrefutable, consent is often obtained in a haphazard, pro forma fashion, with patients having little true understanding of procedures to which they have consented. Evidence suggests that asking patients to restate the essence of the informed consent improves the quality of these discussions and makes it more likely that the consent is truly informed.
Just culture—The phrase “Just Culture” was popularized in the patient safety lexicon by David Marx, who outlined principles for achieving a culture in which frontline personnel feel comfortable disclosing errors—including their own—while maintaining professional accountability.
Traditionally, healthcare's culture has held individuals accountable for all errors or mishaps that befall patients under their care. By contrast, a Just Culture recognizes that individual practitioners should not be held accountable for system failings over which they have no control. It also recognizes that many individual or “active” errors are the result of predictable interactions between human operators and the systems in which they work. However, in contrast to a culture that touts “no blame” as its governing principle, a Just Culture does not tolerate conscious disregard of clear risks to patients or gross misconduct (e.g., falsifying a record, performing professional duties while intoxicated).
In summary, a Just Culture recognizes that competent professionals make mistakes and acknowledges that even competent professionals will develop unhealthy norms (shortcuts, “routine rule violations”), but has zero tolerance for reckless behavior.
Latent error (or latent condition)—The terms active and latent as applied to errors were coined by James Reason. Latent errors (or latent conditions) refer to less apparent failures of organization or design that contributed to the occurrence of errors or allowed them to cause harm to patients. For instance, whereas the active failure in a particular adverse event may have been a mistake in programming an intravenous pump, a latent error might be that the institution uses multiple different types of infusion pumps, making programming errors more likely. Thus, latent errors are quite literally “accidents waiting to happen.” Latent errors are sometimes referred to as errors at the blunt end, referring to the many layers of the healthcare system that affect the person “holding” the scalpel. Active failures, in contrast, are sometimes referred to as errors at the sharp end, or the personnel and parts of the healthcare system in direct contact with patients.
Learning curve—The acquisition of any new skill is associated with the potential for lower-than-expected success rates or higher-than-expected complication rates. This phenomenon is often known as a learning curve. In some cases, this learning curve can be quantified in terms of the number of procedures that must be performed before an operator can replicate the outcomes of more experienced operators or centers. While learning curves are almost inevitable when new procedures emerge or new providers are in training, minimizing their impact is a patient safety imperative. One option is to perform initial operations or procedures under the supervision of more experienced operators. Surgical and procedural simulators may play an increasingly important role in decreasing the impact of learning curves on patients, by allowing acquisition of relevant skills in laboratory settings.
Magnet hospital status—Refers to a designation by the Magnet Hospital Recognition Program administered by the American Nurses Credentialing Center. The program has its genesis in a 1983 study conducted by the American Academy of Nursing that sought to identify hospitals that retained nurses for longer than average periods of time. The study identified institutional characteristics correlated with high retention rates, an important finding in light of a major nursing shortage at the time. These findings provided the basis for the concept of magnet hospital and led 10 years later to the formal Magnet Program.
Without taking anything away from the particular hospitals that have achieved magnet status, the program has its critics. Regardless of the particulars of the Magnet Recognition Program and the lack of persuasive evidence linking magnet status to quality, to many the term magnet hospital connotes a hospital that delivers superior patient care and, partly on this basis, attracts and retains high-quality nurses.
Medical Emergency Team—The concept of medical emergency teams (also known as rapid response teams) is that of a cardiac arrest team with more liberal calling criteria. Instead of just frank respiratory or cardiac arrest, medical emergency teams respond to a wide range of worrisome, acute changes in patients’ clinical status, such as low blood pressure, difficulty breathing, or altered mental status. In addition to less stringent calling criteria, the concept of medical emergency teams de-emphasizes the traditional hierarchy in patient care in that anyone can initiate the call. Nurses, junior medical staff, or others involved in the care of patients can call for the assistance of the medical emergency team whenever they are worried about a patient's condition, without having to wait for more senior personnel to assess the patient and approve the decision to call for help.
Medication reconciliation—Patients admitted to a hospital commonly receive new medications or have changes made to their existing medications. As a result, the new medication regimen prescribed at the time of discharge may inadvertently omit needed medications that patients have been receiving for some time. Alternatively, new medications may unintentionally duplicate existing medications. Such unintended inconsistencies in medication regimens may occur at any point of transition in care (e.g., transfer from an intensive care unit [ICU] to a general ward), not just hospital admission or discharge. Medication reconciliation refers to the process of avoiding such inadvertent inconsistencies across transitions in care by reviewing the patient's complete medication regimen at the time of admission, transfer, and discharge and comparing it with the regimen being considered for the new setting of care.
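The comparison at the heart of medication reconciliation can be sketched as set operations over medication lists. The drug names and class map below are hypothetical, and a real system would also compare dose, route, and frequency, drawing on a curated drug database:

```python
from collections import Counter

# Hypothetical medication lists for one patient at discharge.
home_regimen = {"metformin", "lisinopril", "atorvastatin"}
discharge_regimen = {"metformin", "atorvastatin", "simvastatin", "apixaban"}

# Hypothetical drug-class map, used to flag therapeutic duplication.
drug_class = {"atorvastatin": "statin", "simvastatin": "statin",
              "metformin": "biguanide", "lisinopril": "ACE inhibitor",
              "apixaban": "anticoagulant"}

# Drugs present at home but absent from the new regimen: possible
# unintended omissions that a clinician must confirm or correct.
omissions = sorted(home_regimen - discharge_regimen)

# New drugs: confirm each addition is intentional.
additions = sorted(discharge_regimen - home_regimen)

# More than one drug from the same class: possible unintended duplication.
class_counts = Counter(drug_class[d] for d in discharge_regimen)
duplications = sorted(c for c, n in class_counts.items() if n > 1)

print("Possible unintended omissions:", omissions)   # ['lisinopril']
print("New drugs to confirm:", additions)            # ['apixaban', 'simvastatin']
print("Possible class duplication:", duplications)   # ['statin']
```

The output is a worklist for a clinician, not a set of automatic changes: each flagged discrepancy is reviewed and either documented as intentional or corrected.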
Mental model—Mental models are psychological representations of real, hypothetical, or imaginary situations. Scottish psychologist Kenneth Craik (1943) first proposed mental models as the basis for anticipating events and explaining events (i.e., for reasoning). Though easiest to conceptualize in terms of mental pictures of objects (e.g., a DNA double helix or the inside of an internal combustion engine), mental models can also include “scripts” or processes and other properties beyond images. Mental models create differing expectations, which suggest different courses of action. For instance, when you walk into a fast-food restaurant, you are invoking a different mental model than when you enter a fancy restaurant. Based on this model, you automatically go to place your order at the counter, rather than sitting at a booth and expecting a waiter to take your order.
Metacognition—Metacognition refers to thinking about thinking—that is, reflecting on the thought processes that led to a particular diagnosis or decision to consider whether biases or cognitive shortcuts may have had a detrimental effect. In some ways, metacognition amounts to playing devil's advocate with oneself when it comes to working diagnoses and important therapeutic decisions. However, the devil is often in the details—one must become familiar with the variety of specific biases that commonly affect medical reasoning. For instance, when discharging a patient with atypical chest pain from the emergency department, you might step back and consider how much the discharge diagnosis of musculoskeletal pain reflects the sign-out as a “soft rule out” you received from a colleague on the night shift. Or, you might mull over the degree to which your reaction to and assessment of a particular patient stemmed from his having been labeled a “frequent flyer.” Another cognitive bias is that clinicians tend to assign more importance to pieces of information that required personal effort to obtain.
Mistakes—In some contexts, errors are dichotomized as slips or mistakes, based on the cognitive psychology of task-oriented behavior. Mistakes reflect failures during attentional behaviors—behavior that requires conscious thought, analysis, and planning, as in active problem solving. Rather than lapses in concentration (as with slips), mistakes typically involve insufficient knowledge, failure to correctly interpret available information, or application of the wrong cognitive heuristic or rule. Thus, choosing the wrong diagnostic test or ordering a suboptimal medication for a given condition represents a mistake. Mistakes often reflect lack of experience or insufficient training. Reducing the likelihood of mistakes typically requires more training, supervision, or occasionally disciplinary action (in the case of negligence).
Unfortunately, healthcare has typically responded to all errors as if they were mistakes, with remedial education and/or added layers of supervision. In fact, most errors are slips: failures of schematic behavior that occur due to fatigue, stress, or emotional distractions, and that are prevented through sharply different mechanisms.
Near miss—An event or situation that did not produce patient injury, but only because of chance. This good fortune might reflect robustness of the patient (e.g., a patient with penicillin allergy receives penicillin, but has no reaction) or a fortuitous, timely intervention (e.g., a nurse happens to realize that a physician wrote an order in the wrong chart). This definition is identical to that for close call.
“Never Events” list—Nickname for a list launched and managed by the National Quality Forum, initially intended to be “things that should never happen in healthcare.” The list, whose real name is the “Serious Reportable Events” list, has expanded over time to include adverse events that are unambiguous, serious, and to a reasonable degree preventable (Appendix VI). While most are rare, never events may be devastating to patients and indicate serious underlying organizational safety problems.
Normal accident theory—Though less often cited than high reliability theory in the healthcare literature, normal accident theory has played a prominent role in the study of complex organizations. In contrast to the optimism of high reliability theory, normal accident theory suggests that, at least in some settings, major accidents become inevitable and, thus, in a sense, “normal.”
Safety expert Charles Perrow proposed two factors that create an environment in which a major accident becomes increasingly likely over time: complexity and tight coupling. The degree of complexity envisioned by Perrow occurs when no single operator can immediately foresee the consequences of a given action in the system. Tight coupling occurs when processes are intrinsically time dependent—once a process has been set in motion, it must be completed within a certain period of time. Importantly, normal accident theory contends that accidents become inevitable in complex, tightly coupled systems regardless of steps taken to increase safety. In fact, these steps sometimes increase the risk for future accidents through unintended collateral effects and general increases in system complexity.
Even if one does not believe the central contention of normal accident theory—that the potential for catastrophe emerges as an intrinsic property of certain complex systems—analyses informed by this theory's perspective have offered some fascinating insights into possible failure modes for high-risk organizations, including hospitals.
Normalization of deviance—The term “normalization of deviance” was coined by Diane Vaughan in her book The Challenger Launch Decision, in which she analyzes the interactions between various cultural forces within NASA that contributed to the Challenger disaster. Vaughan used this expression to describe the gradual shift in what is regarded as normal after repeated exposures to “deviant behavior” (behavior straying from correct [or safe] operating procedure). Corners get cut, safety checks bypassed, and alarms ignored or turned off, and these behaviors become normal—not just common but also stripped of their significance as warnings of impending danger. In their 2002 Annals of Internal Medicine discussion of a catastrophic error in healthcare, Chassin and Becher coined the phrase “a culture of low expectations.” When a system routinely produces errors (paperwork in the wrong chart, major miscommunications between different members of a given healthcare team, patients in the dark about important aspects of the care), providers in the system become inured to malfunction. In such a system, what should be regarded as a major warning of impending danger is instead treated as normal operating procedure.
Onion—The onion model illustrates the multiple levels or layers of protection (as in the layers of an onion) in a complex, high-risk system such as any healthcare setting. These layers include external regulations (e.g., related to staffing levels or required organizational practices, such as medication reconciliation), organizational features such as a just culture, equipment and technology (e.g., computerized order entry), and education and training of personnel.
Patient safety—Fundamentally, patient safety refers to freedom from accidental or preventable injuries produced by medical care. Thus, practices or interventions that improve patient safety are those that reduce the occurrence of preventable adverse events.
Patient safety in ambulatory care—The vast majority of healthcare takes place in the outpatient, or ambulatory, setting, and a growing body of research has identified and characterized factors that influence safety in office practice, the types of errors commonly encountered in ambulatory care, and potential strategies for improving ambulatory safety.
Pay for performance (“P4P”)—Refers to the general strategy of promoting quality improvement by rewarding providers (meaning individual clinicians or, more commonly, clinics or hospitals) who meet certain performance expectations with respect to healthcare quality or efficiency.
Performance can be defined in terms of patient outcomes but is more commonly defined in terms of processes of care (e.g., the percentage of eligible diabetics referred for annual retinal examinations, the percentage of children who received immunizations appropriate for their age, the percentage of patients admitted to the hospital with pneumonia who receive antibiotics within six hours). P4P initiatives reflect the efforts of purchasers of healthcare—from the federal government to private insurers—to use their purchasing power to encourage providers to develop whatever specific quality improvement initiatives are required to achieve the specified targets. Thus, rather than committing to a specific quality improvement strategy, such as a new information system or a disease management program, which may have variable success in different institutions, P4P creates a climate in which provider groups will be strongly incentivized to find whatever solutions will work for them.
Physician work hours and patient safety—Long and unpredictable work hours have been a staple of medical training for centuries. However, little attention was paid to the patient safety effects of fatigue among residents until March 1984, when Libby Zion died due to a medication-prescribing error while under the care of residents in the midst of a 36-hour shift. In 2003, the Accreditation Council for Graduate Medical Education (ACGME) implemented new rules limiting work hours for all residents, with the key components being that residents should work no more than 80 hours per week or 24 consecutive hours on duty, should not be “on-call” more than every third night, and should have one day off per week.
Plan–do–study–act (PDSA)—Refers to the cycle of activities advocated for achieving process or system improvement. The cycle was first proposed by Walter Shewhart, one of the pioneers of statistical process control (see “run charts”), and popularized by his student, quality expert W. Edwards Deming. The PDSA cycle represents one of the cornerstones of continuous quality improvement (CQI). The components of the cycle are briefly described as follows:
Plan: Analyze the problem you intend to improve and devise a plan to correct the problem.
Do: Carry out the plan (preferably as a pilot project to avoid major investments of time or money in unsuccessful efforts).
Study: Did the planned action succeed in solving the problem? If not, what went wrong? If partial success was achieved, how could the plan be refined?
Act: Adopt the change piloted above as is, abandon it as a complete failure, or modify it and run through the cycle again.
Regardless of which action is taken, the PDSA cycle continues, with either the same problem or a new one.
Potential ADE—A potential ADE is a medication error or other drug-related mishap that reached the patient but happened not to produce harm (e.g., a penicillin-allergic patient receives penicillin but happens not to have an adverse reaction). In some studies, potential ADEs refer to errors or other problems that, if not intercepted, would be expected to cause harm. Thus, in some studies, if a physician ordered penicillin for a patient with a documented serious penicillin allergy, the order would be characterized as a potential ADE, on the grounds that administration of the drug would carry a substantial risk of harm to the patient.
Production pressure—Represents the pressure to put quantity of output—for a product or a service—ahead of safety. This pressure is seen in its starkest form in the line speed of factory assembly lines, famously depicted by Charlie Chaplin in Modern Times, in which he is swept along the rapidly moving conveyor belt and into the giant gears of the factory.
In healthcare, production pressure refers to delivery of services—the pressure to run hospitals at 100% capacity, with each bed filled with the sickest possible patients who are discharged at the first sign that they are stable, or the pressure to leave no operating room unused and to keep moving through the schedule for each room as fast as possible. In a survey of anesthesiologists, half of respondents stated that they had witnessed at least one case in which production pressure resulted in what they regarded as unsafe care. Examples included elective surgery in patients without adequate preoperative evaluation and proceeding with surgery despite significant contraindications.
Production pressure produces an organizational culture in which frontline personnel (and often managers) are reluctant to suggest any course of action that compromises productivity, even temporarily. For instance, in the survey of anesthesiologists, respondents reported pressure by surgeons to avoid delaying cases through additional patient evaluation or canceling cases, even when patients had clear contraindications to surgery.
Rapid Response Team (RRT)—The concept of RRTs (also known as Medical Emergency Teams) is that of a Code Blue team with more liberal calling criteria. Instead of just frank respiratory or cardiac arrest, RRTs respond to a wide range of worrisome, acute changes in patients’ clinical status, such as low blood pressure, difficulty breathing, or altered mental status. In addition to less stringent calling criteria, RRTs (now sometimes called “Rapid Response Systems,” to highlight the importance of the activation criteria as well as the response) de-emphasize the traditional hierarchy in patient care in that anyone can initiate the call. Nurses, junior medical staff, or others involved in the care of patients (and, in some hospitals, patients or family members) can call for the assistance of the RRT whenever they are worried about a patient's condition, without having to wait for more senior personnel to assess the patient and approve the decision to call for help.
Read-backs—When information is conveyed verbally, miscommunication may occur in a variety of ways, especially when transmission may not occur clearly (e.g., by telephone or radio, or if communication occurs under stress). For names and numbers, the problem often is confusing the sound of one letter or number with another. To address this possibility, the military, civil aviation, and many high-risk industries use protocols for mandatory read-backs, in which the listener repeats the key information, so that the transmitter can confirm its correctness.
Because mistaken substitution or reversal of alphanumeric information is such a potential hazard, read-back protocols typically include the use of phonetic alphabets, such as the NATO system (“Alpha–Bravo–Charlie–Delta–Echo … X-ray–Yankee–Zulu”) now familiar to many. In healthcare, traditionally, read-back has been mandatory only in the context of checking to ensure accurate identification of recipients of blood transfusions. However, there are many other circumstances in which healthcare teams could benefit from following such protocols, for example, when communicating key lab results or patient orders over the phone, and even when exchanging information in person (e.g., handoffs).
Red rules—Rules that must be followed to the letter. In the language of nonhealthcare industries, red rules “stop the line.” In other words, any deviation from a red rule will bring work to a halt until compliance is achieved. Red rules, in addition to relating to important and risky processes, must also be simple and easy to remember.
An example of a red rule in healthcare might be the following: “No hospitalized patient can undergo a test of any kind, receive a medication or blood product, or undergo a procedure if they are not wearing an identification bracelet.” The implication of designating this a red rule is that the moment a patient is identified as not meeting this condition, all activity must cease in order to verify the patient's identity and supply an identification band.
Healthcare organizations already have numerous rules and policies that call for strict adherence. The reason that some organizations are using red rules is that, unlike many standard rules, red rules will always be supported by the entire organization. In other words, when someone at the frontline calls for work to cease on the basis of a red rule, top management must always support this decision. Thus, when properly implemented, red rules should foster a culture of safety, as frontline workers will know that they can stop the line when they notice potential hazards, even when doing so may result in considerable inconvenience or prove time consuming and costly for their immediate supervisors or the organization as a whole.
Root cause analysis (RCA)—A structured process for identifying the causal or contributing factors underlying adverse events or other critical incidents. The key advantage of RCA over traditional clinical case reviews is that it follows a predefined protocol for identifying specific contributing factors in various causal categories (e.g., personnel, training, equipment, protocols, scheduling) rather than attributing the incident to the first error one finds or to preconceived notions investigators might have about the case.
Rule of thumb—See “Heuristic.” Loosely defined or informal rule often arrived at through experience or trial and error (e.g., gastrointestinal complaints that wake patients up at night are unlikely to be benign). Heuristics provide cognitive shortcuts in the face of complex situations, and thus serve an important purpose. Unfortunately, they can also turn out to be wrong.
The phrase “rule of thumb” probably has its origin with trades such as carpentry in which skilled workers could use the length of their thumb (roughly one inch from knuckle to tip) rather than more precise measuring instruments and still produce excellent results. In other words, they measured not using a “rule of wood” (old-fashioned way of saying ruler), but by a “rule of thumb.”
Run charts—A type of statistical process control or quality control graph in which some observation (e.g., manufacturing defects or adverse outcomes) is plotted over time to see if there are “runs” of points above or below a center line, usually representing the average or median. In addition to the number of runs, the length of the runs conveys important information. For run charts with more than 20 useful observations, a run of 8 or more dots would count as a “shift” in the process of interest, suggesting some nonrandom variation. Other key tests applied to run charts include tests for “trends” (sequences of successive increases or decreases in the observation of interest) and “zigzags” (alternation in the direction—up or down—of the lines joining pairs of dots). If a nonrandom change for the better, or shift, occurs, it suggests that an intervention has succeeded. The expression “moving the dots” refers to this type of shift.
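As an illustrative sketch (not part of the original glossary), the shift rule described above can be expressed in a few lines of Python. The function name and the sample counts are hypothetical; the rule follows the common convention that points falling exactly on the median center line neither extend nor break a run.

```python
from statistics import median

def detect_shift(observations, run_length=8):
    """Flag a run-chart 'shift': run_length or more consecutive points
    strictly on one side of the median center line. Points that fall
    exactly on the median are skipped (they neither break nor extend
    a run)."""
    center = median(observations)
    run, side = 0, 0
    for x in observations:
        s = (x > center) - (x < center)  # +1 above, -1 below, 0 on the line
        if s == 0:
            continue                     # points on the center line are skipped
        run = run + 1 if s == side else 1
        side = s
        if run >= run_length:
            return True
    return False

# Hypothetical monthly adverse-event counts: a sustained drop after an
# intervention produces a run of 8 or more points above the median,
# which the rule flags as nonrandom variation.
counts = [12, 14, 11, 13, 15, 12, 14, 13, 6, 5, 7, 6, 5, 4, 6, 5, 7, 6, 5, 4]
shifted = detect_shift(counts)  # True: the first 8 points all sit above the median
```

In practice, run-chart software applies additional tests (trends, total number of runs) alongside this shift test; the sketch covers only the shift rule described above.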
Safety culture—Safety culture refers to a commitment to safety that permeates all levels of an organization, from frontline personnel to executive management. More specifically, “safety culture” calls up a number of features identified in studies of high reliability organizations, organizations outside of healthcare with exemplary performance with respect to safety, including:
Acknowledgment of the high-risk, error-prone nature of an organization's activities
A blame-free environment where individuals are able to report errors or close calls without fear of reprimand or punishment
An expectation of collaboration across ranks to seek solutions to vulnerabilities
A willingness on the part of the organization to direct resources to addressing safety concerns
Sentinel event—An adverse event in which death or serious harm to a patient has occurred; usually used to refer to events that are not at all expected or acceptable—for example, an operation on the wrong patient or body part. The choice of the word sentinel reflects the egregiousness of the injury (e.g., amputation of the wrong leg) and the likelihood that investigation of such events will reveal serious problems in current policies or procedures.
Sensemaking—A term from organizational theory that refers to the processes by which an organization takes in information to make sense of its environment, to generate knowledge, and to make decisions. It is the organizational equivalent of what individuals do when they process information, interpret events in their environments, and make decisions based on these activities. More technically, organizational sensemaking constructs the shared meanings that define the organization's purpose and frame the perception of problems or opportunities that the organization needs to work on.
Sharp end—The sharp end refers to the personnel or parts of the healthcare system in direct contact with patients. Personnel operating at the sharp end may literally be holding a scalpel (e.g., an orthopedist who operates on the wrong leg) or figuratively be administering any kind of therapy (e.g., a nurse programming an intravenous pump) or performing any aspect of care. To complete the metaphor, the blunt end refers to the many layers of the healthcare system that affect the scalpels, pills, and medical devices, or the personnel wielding, administering, and operating them. Thus, an error in programming an intravenous pump would represent a problem at the sharp end, while the institution's decision to use multiple types of infusion pumps (making programming errors more likely) would represent a problem at the blunt end. The terminology of “sharp” and “blunt” ends corresponds roughly to active failures and latent conditions.
Signouts and signovers—The term “signout” refers to the transmission of information about the patient, typically when responsibility for care passes from one clinician or team to another (e.g., at shift change). Handoffs and signouts have been linked to adverse clinical events in settings ranging from the emergency department to the intensive care unit.
Situational awareness—Situational awareness refers to the degree to which one's perception of a situation matches reality. In the context of crisis management, where the phrase is most often used, situational awareness includes awareness of fatigue and stress among team members (including oneself), environmental threats to safety, appropriate immediate goals, and the deteriorating status of the crisis (or patient). Failure to maintain situational awareness can result in various problems that compound the crisis. For instance, during a resuscitation, an individual or entire team may focus on a particular task, such as a difficult central line insertion or a particular medication to administer. Fixation on this problem can result in loss of situational awareness to the point that steps are not taken to address immediately life-threatening problems such as respiratory failure or a pulseless rhythm. In this context, maintaining situational awareness might be seen as equivalent to keeping the big picture in mind. Alternatively, in assigning tasks in a crisis, the leader may ignore signals from a team member, which may result in escalating anxiety for the team member, failure to perform the assigned task, or further patient deterioration.
Six sigma—Six sigma refers loosely to striving for near perfection in the performance of a process or production of a product. The name derives from the Greek letter sigma, often used to refer to the standard deviation of a normal distribution. About 95% of a normally distributed population falls within 2 standard deviations of the average (or “2 sigma”). This leaves roughly 5% of observations as “abnormal” or “unacceptable.” Six Sigma targets a defect rate of only 3.4 per million opportunities, a performance level named for specification limits set 6 standard deviations from the process average.
When it comes to industrial performance, having 5% of a product fall outside the desired specifications would represent an unacceptably high defect rate. What company could stay in business if 5% of its product did not perform well? For example, would we tolerate a pharmaceutical company that produced pills containing incorrect dosages 5% of the time? Certainly not. But when it comes to clinical performance—the number of patients who receive a proven medication, the number of patients who develop complications from a procedure—we routinely accept failure or defect rates in the 2% to 5% range, performance that falls orders of magnitude short of the Six Sigma standard.
Not every process in healthcare requires such near-perfect performance. In fact, one of the lessons of Reason's Swiss cheese model is the extent to which low overall error rates are possible even when individual components have many “holes.” However, many high-stakes processes are far less forgiving, since a single “defect” can lead to catastrophe (e.g., wrong-site surgery, accidental administration of concentrated potassium).
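To make the sigma arithmetic concrete, the following sketch (an addition, not part of the glossary) computes normal-distribution tail fractions with Python's standard library. The widely quoted figure of 3.4 defects per million corresponds to the one-sided tail at 4.5 sigma, because the Six Sigma convention allows for a 1.5-sigma drift in the process mean over time.

```python
from math import erfc, sqrt

def fraction_outside(k):
    """Two-sided tail: fraction of a normal distribution falling
    beyond +/- k standard deviations from the mean."""
    return erfc(k / sqrt(2))

def upper_tail(z):
    """One-sided tail: fraction falling more than z standard
    deviations above the mean."""
    return 0.5 * erfc(z / sqrt(2))

# Roughly 5% of observations fall outside 2 sigma...
two_sigma_defects = fraction_outside(2)       # about 0.0455
# ...while the conventional Six Sigma target of 3.4 defects per million
# is the one-sided tail at 4.5 sigma (6 sigma minus the customary
# 1.5-sigma allowance for drift in the process mean).
six_sigma_dpmo = upper_tail(4.5) * 1_000_000  # about 3.4
```

Note that without the 1.5-sigma allowance, the tail beyond a full 6 standard deviations would be roughly one in a billion, far smaller than 3.4 per million.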
Slips (or lapses)—Errors can be dichotomized as slips or mistakes, based on the cognitive psychology of task-oriented behavior. Slips refer to failures of schematic (automatic) behaviors, typically due to lapses in attention or concentration (e.g., overlooking a step in a routine task because of a memory lapse, or an experienced surgeon nicking an adjacent organ during an operation because of a momentary lapse in concentration).
Slips occur in the face of competing sensory or emotional distractions, fatigue, and stress. Reducing the risk of slips requires attention to the design of protocols, devices, and work environments—using checklists so key steps will not be omitted, reducing fatigue among personnel (or shifting high-risk work away from personnel who have been working extended hours), removing unnecessary variation in the design of key devices, eliminating distractions (e.g., phones and pagers) from areas where work requires intense concentration, and other redesign strategies. Slips can be contrasted with mistakes, which are failures that occur in attentional behavior such as active problem solving.
Standard of care—What the average, prudent clinician would be expected to do under certain circumstances. The standard of care may vary by community (e.g., due to resource constraints). When the term is used in the clinical setting, the standard of care is generally felt not to vary by specialty or level of training. In other words, the standard of care for a condition may well be defined in terms of the standard expected of a specialist, in which case a generalist (or trainee) would be expected to deliver the same care or make a timely referral to the appropriate specialist (or supervisor, in the case of a trainee). Standard of care is also a term of art in malpractice law, and its definition varies from jurisdiction to jurisdiction. When used in this legal sense, often the standard of care is specific to a given specialty; it is often defined as the care expected of a reasonable practitioner with similar training practicing in the same location under the same circumstances.
Structure–process–outcome triad (“Donabedian Triad”)—Most definitions of quality emphasize favorable patient outcomes as the gold standard for assessing quality. In practice, however, one would like to detect quality problems without waiting for poor outcomes to develop in sufficient numbers that deviations from expected rates of morbidity and mortality can be detected. Donabedian first proposed that quality could be measured using aspects of care with proven relationships to desirable patient outcomes. For instance, if proven diagnostic and therapeutic strategies are monitored, quality problems can be detected long before demonstrably poor outcomes occur.
Aspects of care with proven connections to patient outcomes fall into two general categories: process and structure. Processes encompass all that is done to patients in terms of diagnosis, treatment, monitoring, and counseling. Cardiovascular care provides classic examples of the use of process measures to assess quality. Given the known benefits of aspirin and beta-blockers for patients with myocardial infarction, the quality of care for patients with myocardial infarction can be measured in terms of the rates at which eligible patients receive these proven therapies. The percentage of eligible women who undergo mammography at appropriate intervals would provide a process-based measure for quality of preventive care for women.
Structure refers to the setting in which care occurs and the capacity of that setting to produce quality. Traditional examples of structural measures related to quality include credentials, patient volume, and academic affiliation. More recent structural measures include the adoption of organizational models for inpatient care (e.g., closed ICUs and dedicated stroke units) and possibly the presence of sophisticated clinical information systems. Cardiovascular care provides another classic example of structural measures of quality. Numerous studies have shown that institutions that perform more cardiac surgeries and invasive cardiology procedures achieve better outcomes than institutions that see fewer patients. Given these data, patient volume represents a structural measure of quality of care for patients undergoing cardiac procedures.
Swiss cheese model—James Reason developed the “Swiss cheese model” (Figure 2-1) to illustrate how analyses of major accidents and catastrophic systems failures tend to reveal multiple, smaller failures leading up to the actual hazard.
In the model, each slice of cheese represents a safety barrier or precaution relevant to a particular hazard. For example, if the hazard were wrong-site surgery, slices of the cheese might include conventions for identifying sidedness on radiology tests, a protocol for signing the correct site when the surgeon and patient first meet, and a second protocol for reviewing the medical record and checking the previously marked site in the operating room. Many more layers exist. The point is that no single barrier is foolproof. They each have “holes,” hence the Swiss cheese. For some serious events (e.g., operating on the wrong site or wrong person), the holes will align only infrequently, but even these rare cases of harm (errors making it “through the cheese”) are unacceptable.
While the model may convey the impression that the slices of cheese and the location of their respective holes are independent, this may not be the case. For instance, in an emergency situation, all three of the surgical identification safety checks mentioned above may fail or be bypassed. The surgeon may meet the patient for the first time in the operating room. A hurried x-ray technologist might mislabel a film (or simply hang it backward, which an equally hurried surgeon may fail to notice); “signing the site” may not take place at all (e.g., if the patient is unconscious) or, if it does, may be rushed and offer no real protection. In the technical parlance of accident analysis, the different barriers may have a common failure mode, in which several protections are lost at once (i.e., the holes in several layers of the cheese line up).
In healthcare, such failure modes, in which slices of the cheese line up more often than one would expect if the location of their holes were independent of each other (and certainly more often than wings fall off airplanes), occur distressingly commonly. In fact, many of the systems problems discussed by Reason and others—poorly designed work schedules, lack of teamwork, and variations in the design of important equipment between and even within institutions—are sufficiently common that many of the slices of cheese already have their holes aligned. In such cases, one slice of cheese may be all that is left between the patient and significant hazard.
Systems approach—Medicine has traditionally treated quality problems and errors as failings on the part of individual providers, perhaps reflecting inadequate knowledge or skill levels. The systems approach, by contrast, takes the view that most errors reflect predictable human failings in the context of poorly designed systems (e.g., expected lapses in human vigilance in the face of long work hours or predictable mistakes on the part of relatively inexperienced personnel faced with cognitively complex situations). Rather than focusing corrective efforts on reprimanding individuals or pursuing remedial education, the systems approach seeks to identify situations or factors likely to give rise to human error and implement systems changes that will reduce their occurrence or minimize their impact on patients. This view holds that efforts to catch human errors before they occur or block them from causing harm will ultimately be more fruitful than ones that seek to somehow create flawless providers.
This systems focus includes paying attention to human factors engineering (or ergonomics), including the design of protocols, schedules, and other factors that are routinely addressed in other high-risk industries but have traditionally been ignored in medicine.
“Time outs”—Refer to planned periods of quiet and/or interdisciplinary discussion focused on ensuring that key procedural details have been addressed. For instance, protocols for ensuring correct site surgery often recommend a time out to confirm the identification of the patient, the surgical procedure, site, and other key aspects, often stating them aloud for double-checking by other team members. In addition to avoiding major misidentification errors involving the patient or surgical site, such a time out ensures that all team members share the same “game plan,” so to speak. Taking the time to focus on listening and communicating the plans as a team can rectify miscommunications and misunderstandings before a procedure gets underway.
Teamwork training—Providing safe healthcare depends on highly trained individuals with disparate roles and responsibilities acting together in the best interests of the patient. The need for improved teamwork has led to the application of teamwork training principles, originally developed in aviation, to a variety of healthcare settings.
Triggers—Refer to signals for detecting likely adverse events. Triggers alert providers involved in patient safety activities to probable adverse events so they can review the medical record to determine if an actual or potential adverse event has occurred. For instance, if a hospitalized patient received naloxone (a drug used to reverse the effects of narcotics), the patient probably received an excessive dose of morphine or some other opiate. In the emergency department, the use of naloxone would more likely represent treatment of a self-inflicted opiate overdose, so the trigger would have little value in that setting. But, among patients already admitted to the hospital, a pharmacy could use the administration of naloxone as a “trigger” to investigate possible ADEs.
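The naloxone example can be sketched as a simple filter over medication administration records. This is a hypothetical illustration only; the record fields (“drug,” “setting”) and the function name are invented for the sketch and are not drawn from any real pharmacy system.

```python
def naloxone_triggers(med_records):
    """Flag inpatient naloxone administrations for chart review.
    Emergency-department use is excluded because it more often
    reflects a self-inflicted overdose than an inpatient dosing
    error, so the trigger has little value in that setting."""
    return [
        r for r in med_records
        if r["drug"] == "naloxone" and r["setting"] == "inpatient"
    ]

# Hypothetical medication administration records
records = [
    {"patient": "A", "drug": "naloxone", "setting": "inpatient"},
    {"patient": "B", "drug": "naloxone", "setting": "emergency"},
    {"patient": "C", "drug": "morphine", "setting": "inpatient"},
]
flagged = naloxone_triggers(records)  # only patient A is flagged for review
```

A real trigger tool would draw on many such rules (abrupt medication stops, reversal agents, abnormal laboratory values), each feeding a queue for medical record review.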
In cases in which the trigger correctly identified an adverse event, causative factors can be identified and, over time, interventions developed to reduce the frequency of particularly common causes of adverse events. The traditional use of triggers has been to efficiently identify adverse events after the fact. However, using triggers in real time has tremendous potential as a patient safety tool. In a study of real-time triggers in a single community hospital, for example, more than 1000 triggers were generated in six months, and approximately 25% led to physician action and would not have been recognized without the trigger.
As with any alert or alarm system, the threshold for generating triggers has to balance true and false positives. The system will lose its value if too many triggers prove to be false alarms. This concern is less relevant when triggers are used as chart review tools. In such cases, the tolerance of false alarms depends only on the availability of sufficient resources for medical record review. Reviewing four false alarms for every true adverse event might be quite reasonable in the context of an institutional safety program, but frontline providers would balk at (and eventually ignore) a trigger system that generated four false alarms for every true one.
Underuse, overuse, and misuse—For any process of care, quality problems can arise in one of three ways: underuse, overuse, and misuse.
Underuse refers to the failure to provide a healthcare service when it would have produced a favorable outcome for a patient. Standard examples include failures to provide appropriate preventive services to eligible patients (e.g., Pap smears, flu shots for elderly patients, screening for hypertension) and proven medications for chronic illnesses (steroid inhalers for asthmatics; aspirin, beta-blockers, and lipid-lowering agents for patients who have suffered a recent myocardial infarction).
Overuse refers to providing a process of care in circumstances where the potential for harm exceeds the potential for benefit. Prescribing an antibiotic for a viral infection such as a cold, for which antibiotics are ineffective, constitutes overuse. The potential for harm includes adverse reactions to the antibiotics and increases in antibiotic resistance among bacteria in the community. Overuse can also apply to diagnostic tests and surgical procedures.
Misuse occurs when an appropriate process of care has been selected but a preventable complication occurs and the patient does not receive the full potential benefit of the service. Avoidable complications of surgery or medication use are misuse problems. A patient who suffers a rash after receiving penicillin for strep throat, despite having a known allergy to that antibiotic, is an example of misuse. A patient who develops a pneumothorax after an inexperienced operator attempted to insert a subclavian line would represent another example of misuse.
Voluntary patient safety event reporting—See incident reporting. Patient safety event reporting systems are ubiquitous in hospitals and are a mainstay of efforts to detect safety and quality problems. However, while event reports may highlight specific safety concerns, they do not provide insights into the epidemiology of safety problems.
Workaround—From the perspective of frontline personnel trying to accomplish their work, the design of equipment or the policies governing work tasks can seem counterproductive. When frontline personnel adopt consistent patterns of bypassing safety features of medical equipment, these patterns and actions are referred to as workarounds. Although workarounds “fix the problem,” the system remains unaltered and thus continues to present potential safety hazards for future patients.
From a definitional point of view, it does not matter if frontline users are justified in working around a given policy or equipment design feature. What does matter is that the motivation for a workaround lies in getting work done, not laziness or whim. Thus, the appropriate response by managers to the existence of a workaround should not consist of reflexively reminding staff about the policy and restating the importance of following it. Rather, workarounds should trigger assessment of workflow and the various competing demands for the time of frontline personnel. In busy clinical areas where efficiency is paramount, managers can expect workarounds to arise whenever policies create added tasks for frontline personnel, especially when the extra work is perceived to be out of proportion to the importance of the safety goal.
Wrong-site, wrong-procedure, and wrong-patient surgery—Few medical errors are as terrifying as those that involve patients who have undergone surgery on the wrong body part, undergone the incorrect procedure, or had a procedure intended for another patient. These “wrong-site, wrong-procedure, wrong-patient errors” (WSPEs) are rightly termed never events.
Reprinted with permission from AHRQ Patient Safety Network: Shojania KG, Wachter RM, Hartman EE. AHRQ Patient Safety Network Glossary. Available at: https://psnet.ahrq.gov/glossary.