Several infants at a hospital received epinephrine in error and suffered serious medical consequences. An analysis revealed that several pharmacists had made the same mistake; the problem was caused by the identical appearance of vitamin E and epinephrine bottles in the pharmacy. This was a system error.
An epidemic of unexpected deaths on the cardiac ward was investigated. The times of the deaths were correlated with personnel schedules, leading to the conclusion that one nurse was responsible. It turned out that she was administering lethal doses of digoxin to patients. This was not a system error.
Traditionally, quality assurance has focused on individual caregivers and institutions in a “bad apple” approach that relies heavily on sanctions. More recently, quality has been viewed through the lens of the continuous quality improvement (CQI) model that seeks to enhance the clinical performance of all systems of care, not just the outliers with poor quality of care. The move to a CQI model has required development of more formalized standards of care that can be used as benchmarks for measuring quality, and more systematic collection of data to measure overall performance and not just performance in isolated cases (Table 10-2).
Traditional Quality Assurance: Licensure, Accreditation, and Peer Review
Traditionally, the health care system has placed great reliance on educational institutions and licensing and accrediting agencies to ensure the competence of individuals and institutions in health care. Health care professionals undergo rigorous training and pass special licensing examinations intended to ensure that caregivers have at least a basic level of knowledge and competence. However, clinicians may have been competent practitioners at the time they took their examinations, but their skills may have lapsed, or they may have developed impairment from alcohol or drug use, depression, or other conditions (Leape & Fromson, 2006).
Many organizations that confer specialty board certification require physicians to pass examinations on a periodic basis and perform systematic quality reviews of their own clinical practices to maintain active specialty certification. However, while some hospitals may require active specialty certification for a physician to be granted privileges to practice in the hospital, certification is not required for medical licensure.
The traditional approach to quality assurance has also relied heavily on peer pressure within hospitals, HMOs, and the medical community at large. Peer review, which has been part of medicine for decades, is the evaluation by health care practitioners of the appropriateness and quality of services performed by other practitioners, usually in the same specialty. Medicare anointed the Joint Commission on Accreditation of Hospitals (now named the Joint Commission) with the authority to terminate hospitals from the Medicare program if quality of care was found to be deficient. The Joint Commission requires hospital medical staff to set up peer review committees for the purpose of maintaining quality of care.
The Joint Commission uses criteria of structure, process, and outcome to assess quality of care. Structural criteria include such factors as whether the emergency department defibrillator works properly. Process criteria include whether medical records are completed in a timely manner, or if the credentials committee keeps minutes of its meetings. Outcomes include such measures as mortality rates for surgical procedures, proportions of deaths that are preventable, and rates of adverse drug reactions and wound infections. Medicare also contracts with quality improvement organizations (QIOs) to promote better quality of care among physicians caring for Medicare beneficiaries.
Angela Lopez, age 57, suffered from metastatic ovarian cancer but was feeling well and prayed she would live 9 months more. Her son was the first family member ever to attend college, and she hoped to see him graduate. It was decided to infuse chemotherapy directly into her peritoneal cavity. As the solution poured into her abdomen, she felt increasing pressure. She asked the nurse to stop the fluid. The nurse called the physician, who said not to worry. Two hours later, Ms. Lopez became short of breath and demanded that the fluid be stopped. The nurse again called the physician, but an hour later Ms. Lopez died. Her abdomen was tense with fluid, which pushed on her lungs and stopped circulation through her inferior vena cava. The quality assurance committee reviewed the case as a preventable death and criticized the physician for giving too much fluid and failing to respond adequately to the nurse’s call. The physician replied that he was not at fault; the nurse had not told him how sick the patient was. The case was closed.
The traditional quality assurance strategies of licensing and peer review have not been effective tools for improving quality. Peer review often adheres to the theory of bad apples, attempting to discipline physicians (to remove them from the apple barrel) for mistakes rather than to improve their practice through education. The physician who caused Ms. Lopez’s preventable death responded to peer criticism by blaming the nurse rather than learning from the mistake. With the hundreds of decisions physicians make each day, often in time-constrained situations, serious errors are relatively common in medical practice. One-third of physicians surveyed in 2009 did not agree with disclosing serious errors to patients, and 20% had not disclosed errors to their patients (Iezzoni et al., 2012).
Even if sanctions against the truly bad apples had more teeth, these measures would not solve the quality problem. Removing the incontrovertibly bad apples from the barrel does not address all the quality problems that emanate from competent caregivers who are not performing optimally. Health care systems do need to forcefully sanction caregivers who, despite efforts at remediation, cannot operate at a basic standard of acceptable practice. But measures are also needed to “shift the curve” of overall clinical practice to a higher level of quality, not just to trim off the poor-quality outliers.
Peer reviewers frequently disagree as to whether the quality of care in particular cases is adequate or not (Laffel & Berwick, 1993). Because of these limitations, efforts are underway to formalize standards of care using clinical practice guidelines and to move from individual case review to more systematic monitoring of overall practice patterns (Table 10-3).
Table 10-3. Proposals for improving quality

Identifying and sanctioning “bad apples”
Clinical practice guidelines
Measuring practice patterns
Continuous quality improvement
Computerized information systems
Public reporting of quality
Pay for reporting
Pay for performance
Financially neutral clinical decision making
Clinical Practice Guidelines
Dr. Benjamin Waters was frustrated by patients who came in with urinary incontinence. He never learned about the problem in medical school, so he simply referred these patients to a urologist. In his managed care plan, Dr. Waters was known to overrefer, so he felt stuck. He could not handle the problem, yet he did not want to refer patients elsewhere. He solved his dilemma by prescribing incontinence pads and diapers, but did not feel good about it.
Dr. Denise Drier learned about urinary incontinence in family medicine residency but did not feel secure about caring for the problem. On the web, she found “Urinary Incontinence in Adults: Clinical Practice Guideline Update.” She studied the material and applied it to her incontinence patients. After a few successes, she and the patients were feeling better about themselves.
For many conditions, there is a better and a worse way to make a diagnosis and prescribe treatment. Physicians may not be aware of the better way because of gaps in training, limited experience, or insufficient time or motivation to learn new techniques. For these problems, clinical practice guidelines can be helpful in improving quality of care. In 1989, Congress established the Agency for Health Care Policy and Research, now called the Agency for Healthcare Research and Quality (AHRQ), to support and disseminate evidence-based practice guidelines, among other tasks. Practice guidelines make specific recommendations to physicians on how to treat clinical conditions such as diabetes, osteoporosis, urinary incontinence, or cataracts.
More than 2,700 guidelines exist, written by dozens of organizations including specialty societies and commercial companies. The U.S. Preventive Services Task Force and other respected professional organizations issue widely accepted guidelines based on a rigorous and objective review of scientific evidence. However, many guidelines are unreliable and tainted by monetary interests (Graham et al., 2011). Nearly 90% of clinical practice guideline authors in one survey had ties to the pharmaceutical industry, a bias often not disclosed to readers of the guidelines (Shaneyfelt & Centor, 2009). For example, 8 of the 15 members of the panel recommending new cholesterol guidelines in 2013—which increased the number of people who would be taking statin drugs—had industry ties (Ioannidis, 2014). Moreover, clinical practice guidelines developed based on research on a narrowly defined population, such as nonelderly patients with a single chronic condition, may not be applicable to different patient populations, such as elderly patients with multiple diseases (Boyd et al., 2005).
Practice guidelines are not appropriate for many clinical situations. Uncertainty pervades clinical medicine, and practice guidelines are applicable only for those cases in which we enjoy “islands of knowledge in our seas of ignorance.” Practice guidelines can assist but not replace clinical judgment in the quest for high-quality care.
Pedro Urrutia, age 59, noticed mild nocturia and urinary frequency. His friend had prostate cancer, and he became concerned. The urologist said that his prostate was only slightly enlarged, his prostate-specific antigen (blood test) was normal, and surgery was not needed. Mr. Urrutia wanted surgery and found another urologist to do it.
At age 82, James Chin noted nocturia and urinary hesitancy. He had two glasses of wine on his wife’s birthday and later that night was unable to urinate. He went to the emergency department, was found to have a large prostate without nodules, and was catheterized. The urologist strongly recommended a transurethral resection of the prostate. Mr. Chin refused, thinking that the urinary retention was caused by the alcohol. Five years later, he was in good health with his prostate intact.
Even when guidelines have been defined, patient preferences vary markedly. Some, like Mr. Urrutia, want prostate surgery, even though it is not needed; others, like Mr. Chin, have strong indications for surgery but do not want it. Practice guidelines must take into account not only scientific data, but also patient preferences (O’Connor et al., 2007).
Do practice guidelines in themselves improve quality of care? The evidence is murky (Djulbegovic & Guyatt, 2014). However, guidelines can be an important foundation for more comprehensive quality improvement strategies, such as computer systems to remind physicians when patients are in need of certain services according to a guideline (e.g., a reminder system about women due for a mammogram) or having trusted colleagues (“opinion leaders”) or visiting experts (“academic detailing”) conduct small group sessions with clinicians to review and reinforce practice guidelines (Bodenheimer & Grumbach, 2007).
Measuring Practice Patterns
A central tenet of the CQI approach is the need to systematically monitor how well individual caregivers, institutions, and organizations are performing. Two types of indicators used to evaluate clinical performance are process measures and outcome measures. Process of care refers to the types of services delivered by caregivers. Examples are prescribing aspirin to patients with coronary heart disease, or turning immobile patients in hospital beds on a regular schedule to prevent bed sores. Outcomes—e.g., death, symptoms, mental health, physical functioning, laboratory studies, and health status—are the gold standard for measuring quality. However, outcomes (particularly those dealing with quality of life) may be difficult to measure. More easily counted outcomes such as mortality may be rare events, and therefore uninformative for evaluating quality of care for many conditions that are not immediately life-threatening. Also, outcomes may be heavily influenced by the underlying severity of illness and related patient characteristics, and not just by the quality of health care that patients received (King et al., 2016). When measuring patient outcomes, it is necessary to “risk adjust” these outcome measurements for differences in the underlying characteristics of different groups of patients. Because of these challenges in using outcomes as measures to monitor quality of care, process measures are more commonly used. For process measures to be valid indicators of quality, there must be solid research demonstrating that the processes do in fact influence patient outcomes.
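The logic of risk adjustment described above can be sketched in a few lines of code. The following is a minimal illustration of indirect standardization, one common approach: each patient contributes an "expected" probability of death based on severity of illness, and a caregiver's observed deaths are compared with that expectation. All rates and patient data here are hypothetical, invented purely for illustration.

```python
# Minimal sketch of risk-adjusting an outcome measure by indirect
# standardization. Baseline rates and patient data are hypothetical.

def expected_deaths(patients, baseline_rates):
    """Sum each patient's predicted mortality given their risk category."""
    return sum(baseline_rates[p["risk"]] for p in patients)

def risk_adjusted_ratio(patients, baseline_rates):
    """Observed-to-expected (O/E) mortality ratio.

    A ratio above 1.0 suggests worse-than-expected outcomes for this
    caregiver's mix of patients; below 1.0, better than expected.
    """
    observed = sum(p["died"] for p in patients)
    expected = expected_deaths(patients, baseline_rates)
    return observed / expected

# Hypothetical baseline mortality rates by severity of illness
baseline = {"low": 0.01, "medium": 0.05, "high": 0.20}

# One surgeon's patients (hypothetical): mostly high-risk cases
patients = [
    {"risk": "high", "died": 1},
    {"risk": "high", "died": 0},
    {"risk": "medium", "died": 0},
    {"risk": "low", "died": 0},
]

ratio = risk_adjusted_ratio(patients, baseline)
```

The point of the sketch is that a raw mortality rate of 25% (1 death in 4 patients) would look alarming, but against an expected 0.46 deaths for this high-risk case mix, the O/E ratio gives a fairer comparison across caregivers with different patient populations.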
Dr. Susan Cutter felt horrible. It was supposed to have been a routine hysterectomy. Somehow she had inadvertently lacerated the large intestine of the patient, a 45-year-old woman with symptomatic fibroids of the uterus but otherwise in good health prior to surgery. Bacteria from the intestine had leaked into the abdomen, and after a protracted battle in the ICU the patient died of septic shock.
Dr. Cutter met with the Chief of Surgery at her hospital. The Chief reviewed the case with Dr. Cutter, but also pulled out a report showing the statistics on all of Dr. Cutter’s surgical cases over the previous 5 years. The report showed that Dr. Cutter’s mortality and complication rates were among the lowest of surgeons on the hospital’s staff. However, the Chief did note that another surgeon, Dr. Dehisce, had a complication rate that was much higher than that of all the other staff surgeons. The Chief of Surgery asked Dr. Cutter to serve on a departmental committee to review Dr. Dehisce’s cases and to meet with Dr. Dehisce to consider ways to address his poor performance.
The contemporary approach to quality monitoring moves beyond examining a few isolated cases toward measuring processes or outcomes for a large population of patients. For example, a traditional peer review approach is to review every case of a patient who dies during surgery. Reviewing an individual case may help a surgeon and the operating team understand where errors may have occurred—a process known as “root cause” analysis. However, it does not indicate whether the case represented an aberrant bad outcome for a surgeon or team that usually has good surgical outcomes, or whether the case is indicative of more widespread problems. To answer these questions requires examining data on all the patients operated on by the surgeon and the operating team to measure the overall rate of surgical complications, and having some benchmark data that indicate whether this rate is higher than expected for similar types of patients.
Many practice organizations, from small groups of office-based physicians to huge, vertically integrated HMOs are starting to monitor patterns of care and provide feedback on this care to physicians and other staff in these organizations. A typical example of this practice profiling is measuring the rate at which diabetic patients receive recommended services, such as annual eye examinations, periodic testing of HbA1c levels, and evaluation of kidney function. Diabetes process of care profiles demonstrate which clinicians are providing high quality care and which would benefit from improvement advice; and indicate what systematic reforms would improve care, such as health coaching for diabetic patients in poor control (Bodenheimer & Grumbach, 2007).
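Practice profiling of the kind described above amounts to computing, for each clinician, the fraction of eligible patients who received a recommended service. A minimal sketch, using invented clinician names and visit data:

```python
# Minimal practice-profiling sketch: per-clinician rates of a recommended
# service (e.g., HbA1c testing for diabetic patients). Data are hypothetical.
from collections import defaultdict

def profile(visits):
    """visits: list of (clinician, received_service) pairs for eligible patients.

    Returns a dict mapping each clinician to the fraction of their
    eligible patients who received the recommended service.
    """
    counts = defaultdict(lambda: [0, 0])  # clinician -> [received, total]
    for clinician, received in visits:
        counts[clinician][0] += int(received)
        counts[clinician][1] += 1
    return {c: received / total for c, (received, total) in counts.items()}

# Hypothetical data: did each diabetic patient get an HbA1c test this year?
visits = [
    ("Dr. A", True), ("Dr. A", True), ("Dr. A", False),
    ("Dr. B", True), ("Dr. B", False), ("Dr. B", False),
]

rates = profile(visits)
```

In a real profiling system the same tabulation would run over thousands of patients drawn from claims or electronic records, and the resulting rates would be compared against benchmarks before feeding back to clinicians.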
Continuous Quality Improvement
Maximizing excellence for individual health care professionals is only one ingredient in the recipe for high-quality health care. Improving institutions is the other, through CQI techniques. CQI involves the identification of concrete problems and the formation of interdisciplinary teams to gather data and propose and implement solutions to the problems.
In LDS Hospital in Salt Lake City, variation in wound infection rates by different surgeons was related to the timing of the administration of prophylactic antibiotics. Patients who received antibiotics 2 hours before surgery had the lowest infection rates. The surgery department adopted a policy that all patients receive antibiotics precisely 2 hours before surgery; the rate of postoperative wound infections dropped from 1.8% to 0.9%. (Burke, 2001)
Such successes only dot, but do not yet dominate, the health care quality landscape (Solberg, 2007). The Institute for Healthcare Improvement (IHI) has led efforts to spread CQI by sponsoring “collaboratives” that assist institutions and groups of institutions to improve health care outcomes and access while ideally reducing costs. Hundreds of health care organizations have participated in collaboratives concerned with such topics as improving the care of chronic illness, reducing waiting times, improving care at the end of life, and reducing adverse drug events. Collaboratives involve learning sessions during which teams from various institutions meet and discuss the application of a rapid change methodology within institutions. Some of IHI’s successes have taken place in the area of chronic disease, with a variety of institutions—from large integrated delivery systems to tiny rural community health centers—implementing the chronic care model to improve outcomes for conditions such as diabetes, asthma, and congestive heart failure (Bodenheimer et al., 2002). Collaboratives that assist institutions to implement the chronic care model have shown modest improvement in patient outcomes compared with controls (Vargas et al., 2007). In the area of patient safety, in 2004, IHI launched the 100,000 Lives Campaign (www.ihi.org) to reduce mortality rates in hospitals, followed by a 5 Million Lives Campaign between 2006 and 2008; more than 4,000 hospitals in the United States participated in these campaigns. There is evidence that these campaigns have contributed to reductions in hospital mortality (Berwick et al., 2006; Wachter & Pronovost, 2006).
Computerized Information Systems
The advent of computerized information systems has created opportunities to improve care and to monitor the process and outcomes of care for entire populations. Electronic medical records can create lists of patients who are overdue for services needed for preventive care or the management of chronic illness and can generate reminder prompts for physicians and patients (Baron, 2007). Computerized physician order entry (CPOE) systems can alert the physician about inappropriate medication doses or medications to which the patient is known to be allergic (Kaushal et al., 2003). However, studies on the impact of electronic medical records on quality are mixed; without transformation of practice organization, the electronic medical record by itself is of limited utility (McCullough et al., 2013).
Public Reporting of Quality
The CQI approach emphasizes systematic monitoring of care to provide internal feedback to clinicians and health organizations to spur improved processes of care. A different approach to monitoring quality of care is to direct this information to the public. This approach views public release of systematic measurements of quality of care—commonly referred to as health care “report cards”—as a tool to empower health care consumers to select higher-quality caregivers and institutions. The Centers for Medicare and Medicaid Services sponsors the Physician Compare website, which publishes quality-of-care measures for physician groups and will soon include quality data on individual physicians (www.medicare.gov/find-a-doctor/provider-search.aspx). The Healthcare Effectiveness Data and Information Set (HEDIS), developed by the National Committee for Quality Assurance (NCQA), a private organization controlled by large HMOs and large employers, tracks many performance indicators at the level of health insurance plans. Because HEDIS measures apply to plans rather than to individual caregivers, the data are less useful to consumers, physicians, and hospitals and potentially more useful to employers deciding which health plans to offer to their employees. Some states issue report cards on hospital quality, with some also including physician performance. Advocates of this approach argue that armed with this information, patients and health care purchasers will make more informed decisions and preferentially seek out health care organizations with better report card grades.
An important experiment in individual physician report cards was initiated by the New York State Department of Health in 1990. The department released data on risk-adjusted mortality rates for coronary bypass surgery performed at each hospital in the state, and in 1992, mortality rates were also published for each cardiac surgeon. Each year’s list was big news and highly controversial. However, difficulties in measurement were highlighted by the fact that within 1 year, 46% of the surgeons had moved from one-half of the ranked list to the other half.
Several fascinating results came of this project: (1) Patients did not switch from hospitals with high mortality rates to those with lower mortality rates. (2) With the release of each report, one in five bottom-quartile surgeons relocated or ceased practicing. (3) In 4 years, overall risk-adjusted coronary artery bypass mortality dropped by 41% in New York State. Mortality for this operation also dropped in states without report cards, but not as much. (4) Some surgeons, worried about the report cards, may have elected not to operate on the riskiest patients in order to improve their report card ranking. It is possible that the reduction in surgical mortality in part resulted from withholding surgery from the sickest patients. The New York State experiment had less effect on changing the market decisions of patients and purchasers than on motivating quality improvements in hospitals that had poor surgical outcomes (Marshall et al., 2000; Jha & Epstein, 2006). Similarly, public reporting of diabetes measures can stimulate physicians to improve their care (Smith et al., 2012). In general, public reporting is associated with quality improvement but does not necessarily drive patients to higher-quality health care providers (Agency for Healthcare Research and Quality, 2012). Despite resources such as HEDIS report cards on health plan quality, few employers use quality data when selecting health plans for their employees; cost is the driving factor in most employer decisions (Galvin & Delbanco, 2005).
Report cards are based on a philosophy that says “if you can’t count it, you can’t improve it.” Albert Einstein expressed an alternative philosophy that might illuminate the report card enterprise: “Not everything that can be counted counts, and not everything that counts can be counted.” Increasingly, the focus on quality is shifting to a focus on value, with value referring to quality divided by cost. Thus an increase in a quality measure associated with a growth in cost may not improve value, whereas improved quality with a stable or reduced cost increases value (Owens et al., 2011).
In 2003, Medicare initiated public reporting for hospitals, focusing on risk-adjusted quality of care for heart attacks, heart failure, and pneumonia. More recently, surgical care and other measures have been added (www.hospitalcompare.hhs.gov). This Hospital Quality Initiative is voluntary, but nonparticipating hospitals receive a reduction in their Medicare payments. One might say that the program is in essence no-pay for no-reporting. Hospital quality has improved for some measures that are reported (Chassin et al., 2010), but hospitals focus their quality activities on the specific measures prescribed by the program, at times to the detriment of other quality activities (Pham et al., 2006).
In 2007, Medicare began the Physician Quality Reporting System, under which physicians who reported certain quality measures received a small increase in their Medicare fees. Medicare plans to use these data to rate individual physician quality on the Physician Compare website. In 2015, the pay-for-reporting incentive shifted to a penalty such that physicians not reporting their measures are subject to a Medicare fee reduction (Findlay, 2014).
Pay for performance (P4P) goes one step beyond pay for reporting; physicians or hospitals receive more money if their quality measures exceed certain benchmarks or if the measures improve from year to year (Epstein et al., 2004). One of the oldest P4P programs is the Integrated Healthcare Association (IHA) program in California. IHA, representing employers, health plans, health systems and physician groups, launched the program in 2002 with a set of uniform performance measures including clinical care, patient satisfaction, use of information technology, and health care costs. In 2014, seven health plans and nearly 200 physician organizations—involving 35,000 physicians and 9 million patients—participated in the IHA program. From 2004 to 2013, physician organizations received about $500 million in performance-based payments (Integrated Healthcare Association, 2014).
The IHA program is unique for two reasons: All major health plans collaborated in choosing the measures upon which performance bonuses are based, and most physicians in California belong to a large medical group or independent practice association (see Chapter 6). If only one health plan sets up a P4P program with physicians, there may not be enough patients from that health plan to accurately measure the physician’s quality; with all health plans participating, a substantial portion of a physician’s patient panel is included in the measures. If P4P targets individual physicians rather than larger physician organizations, the small numbers of patients may distort the results. The ability of the California experience to aggregate a large number of patients allows for more accurate performance evaluation.
In 2003, Medicare launched the Premier Hospital Quality Incentive Demonstration, a P4P program for 266 hospitals around the country to test the extent to which financial bonuses would improve the quality of care provided to Medicare patients with certain conditions, including acute myocardial infarction, heart failure, and pneumonia. For the first 3 years, Premier hospitals improved quality more than similar non-Premier hospitals, but during the next 3 years, there were no significant differences in improvement. Moreover, during the second 3 years the lowest performing hospitals failed to improve (Ryan et al., 2012). In another P4P program, mandatory for most acute hospitals, Medicare reduces payments for hospitals with excessive rates of readmission.
A P4P program described as “an initiative to improve the quality of primary care that is the boldest such proposal attempted anywhere in the world” was launched in the United Kingdom in 2004 (Roland & Campbell, 2014). This program is described in Chapter 14.
Some authors urge caution, pointing out that P4P programs could encourage physicians and hospitals to avoid high-risk patients in order to keep their performance scores up (McMahon et al., 2007). Another difficulty is that many patients see a large number of physicians in a given year, making it impossible to determine which physician should receive a performance bonus (Pham et al., 2007). Moreover, P4P programs could increase disparities in quality by preferentially rewarding physicians and hospitals caring for higher-income patients and having greater resources available to invest in quality improvement, and penalizing those institutions and physicians attending to more vulnerable populations in resource-poor environments (Casalino et al., 2007).
Financially Neutral Clinical Decision Making
The quest for quality care encompasses a search for a financial structure that does not reward over- or undertreatment and that separates physicians’ personal incomes from their clinical decisions. Balanced incentives (see Chapter 4), combining elements of capitation or salary and fee-for-service, may have the best chance of minimizing the payment–treatment nexus (Robinson, 1999), encouraging physicians to do more of what is truly beneficial for patients while not inducing inappropriate and harmful services. Completely financially neutral decision making will always be an ideal and not a reality.