Determining Fluid and Inotrope Responsiveness
The measurement of SV and CO is fundamental to the hemodynamic management of critically ill patients in the ICU and unstable patients in the operating room. Fluid resuscitation is generally regarded as the first step in the resuscitation of hemodynamically unstable patients. Fundamentally, the only reason to give a patient a fluid challenge is to increase stroke volume (volume responsiveness). If the fluid challenge does not increase stroke volume, volume loading serves the patient no useful benefit and may be harmful. Clinical studies, however, have demonstrated that only about 50% of hemodynamically unstable patients are volume responsive.34 According to the Frank–Starling principle, as preload increases, left ventricular (LV) stroke volume increases until the optimal preload is achieved, at which point the stroke volume remains relatively constant. Once the left ventricle is functioning near the “flat” part of the Frank–Starling curve, fluid loading has little effect on the stroke volume. This implies that the measurement of SV and its change with a preload challenge is essential in all patients undergoing fluid resuscitation. Similarly, the use of an inotropic agent is based on the assumption that these agents will increase CO. CO monitoring is therefore essential when inotropic agents are being used, to allow titration of the drug to the desired effect. Previously, static pressure measurements, namely the pulmonary capillary wedge pressure (PCWP) and the central venous pressure (CVP), were used to guide fluid therapy. However, studies performed over the last 2 decades demonstrate that these techniques are unable to accurately assess volume status or fluid responsiveness.35 Therefore, both fluid challenges and the use of inotropic agents should be based on the response of the SV to these interventions.
While a number of definitions exist, an increase in stroke volume (or cardiac output) of > 10% to 15% has been used to define volume responsiveness. Interestingly, this rather arbitrary definition was based on the precision of the PAC from studies performed in the 1980s (which we now know are incorrect).36 Furthermore, although the volume of the fluid bolus has not been well standardized, a volume of between 500 and 1000 mL (or 10 mL/kg) of crystalloid solution has been most studied.34 As an operational definition we use a > 10% increase in stroke volume following a 500 mL crystalloid bolus (given over 10 minutes) as an indicator of fluid responsiveness. Fluid boluses of greater than 500 mL should be avoided as they may lead to volume overload. More importantly, large fluid boluses may acutely increase cardiac filling pressures. Increased cardiac filling pressures trigger the release of natriuretic peptides. Natriuretic peptides cleave membrane-bound proteoglycans and glycoproteins (most notably syndecan-1 and hyaluronic acid) off the endothelial glycocalyx.37,38 The endothelial glycocalyx plays a major role in regulating endothelial permeability.39 Therefore, excessive volume expansion increases the release of natriuretic peptides, which in turn damage the endothelial glycocalyx; this is followed by a rapid shift of intravascular fluid into the interstitial space, leading to a marked increase in lung water and tissue edema.37,38 In most circumstances, it would therefore be desirable to determine whether the patient will be fluid responsive without actually administering a fluid bolus. In this regard, the passive leg raising (PLR) maneuver has received the most attention. Lifting the legs passively from the horizontal position induces a gravitational transfer of blood from the lower limbs toward the intrathoracic compartment.
Beyond its ease of use, this method has the advantage of reversing its effects once the legs are tilted down (reversal of the effects on cardiac filling pressures). Therefore, PLR may be considered a reversible “autotransfusion.” A recent meta-analysis, which pooled the results of eight studies, confirmed the excellent value of PLR to predict fluid responsiveness in critically ill patients, with a global area under the ROC curve of 0.95.40 It should, however, be noted that intra-abdominal hypertension (an intra-abdominal pressure of > 16 mm Hg) impairs venous return and reduces the ability of PLR to detect fluid responsiveness.41 Similarly, it is likely (although it has not been studied) that patients with very thin legs from loss of muscle mass may have a limited “autotransfusion” following a PLR maneuver (and a false-negative test). When performing the PLR maneuver, it is important that the method be standardized. The lower limbs should be elevated to 45° (automatic bed elevation or wedge pillow) while at the same time moving the patient into the supine position from a 45° semirecumbent position. Starting the PLR maneuver from a fully horizontal position may induce an insufficient venous blood shift to significantly elevate cardiac preload.42 By contrast, starting PLR from a semirecumbent position induces a larger increase in cardiac preload, as it shifts venous blood not only from the legs but also from the abdominal compartment.43 Since the maximal hemodynamic effects of PLR occur within the first minute of leg elevation,44 it is important to assess these effects with a method able to track changes in cardiac output or stroke volume on a real-time basis, that is, pulse contour analysis (calibrated), esophageal Doppler, or bioreactance. While the change in SV may be detected within the first minute of the PLR maneuver with pulse contour analysis, it may take up to 3 minutes for this change to be detected by bioreactance.
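As a minimal illustration of the operational definition above (a > 10% increase in stroke volume following a fluid bolus or PLR maneuver), the classification can be sketched in Python. The function names and the illustrative stroke volume values are ours, not part of any published standard; the 10% threshold is the operational cutoff used in the text.

```python
def percent_change(baseline: float, value: float) -> float:
    """Percent change of a measurement from its baseline value."""
    return (value - baseline) / baseline * 100


def fluid_responsive(sv_baseline: float, sv_after: float,
                     threshold_pct: float = 10.0) -> bool:
    """True if stroke volume rose by more than the threshold (default 10%)
    after a 500 mL crystalloid bolus or a PLR maneuver."""
    return percent_change(sv_baseline, sv_after) > threshold_pct


# Illustrative values: a rise from 60 to 68 mL (~13%) exceeds the 10% cutoff
# and would classify the patient as fluid responsive; 60 to 63 mL (5%) would not.
print(fluid_responsive(60, 68))  # True
print(fluid_responsive(60, 63))  # False
```

Note that the same logic applies whether the preload challenge is a fluid bolus or a PLR maneuver, provided the SV change is tracked with a real-time method as described above.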
Optimizing Cardiac Output in Elective Surgical Patients
A seminal paper published by Shoemaker et al. in 1982 demonstrated that postoperative patients with an oxygen delivery (DO2) < 550 mL/min/m2 and a cardiac index (CI) < 4.5 L/min/m2 were at a significantly greater risk of dying than patients whose DO2 and CI were above these thresholds.45 These authors hypothesized that optimizing postoperative DO2 using the cardiorespiratory pattern of those who survived (DO2 > 550 mL/min/m2 and CI > 4.5 L/min/m2) would improve the outcome of patients undergoing high-risk surgery.46 This study was followed by a pseudorandomized controlled trial in which 100 patients were “randomized” to achieve these postoperative DO2 and VO2 targets (supranormal) or to a control group with standard postoperative hemodynamic goals.46 The mortality of the control group was 48% compared to 13% in the supranormal group (P < 0.03). In 1988 these authors published a now landmark study in which they measured DO2 and VO2 in 100 consecutive patients undergoing high-risk surgical operations.47 They calculated the intraoperative and postoperative oxygen debt (VO2 debt) by subtracting the measured VO2 from the estimated VO2 requirements corrected for temperature and anesthesia. The estimated VO2 during anesthesia was calculated using the following formula: VO2 (anesthesia) = 10 × kg^0.72.47 This unvalidated equation was published in a textbook by Lowe and Ernst.48 Shoemaker and colleagues then correlated the calculated VO2 deficit with the subsequent development of lethal and nonlethal organ failure. In this study the cumulative VO2 deficit averaged 33.5 ± 36.9 L/m2 in nonsurvivors, 26.8 ± 32.1 L/m2 in survivors with organ failure, and 8.0 ± 10.9 L/m2 in survivors without organ failure (P < 0.05). Shoemaker and colleagues noted that the oxygen debt was incurred almost exclusively during the intraoperative period.
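The oxygen-debt arithmetic described above can be sketched as follows. The Lowe–Ernst estimate (10 × kg^0.72, in mL/min) is taken from the text; the measured intraoperative VO2 values and the 30-minute sampling interval are hypothetical illustrations, not data from Shoemaker's study.

```python
def estimated_vo2_anesthesia(weight_kg: float) -> float:
    """Estimated oxygen consumption under anesthesia (mL/min),
    per the unvalidated Lowe-Ernst formula: 10 x kg^0.72."""
    return 10 * weight_kg ** 0.72


def cumulative_o2_debt(estimated_vo2: float, measured_vo2: list,
                       interval_min: float) -> float:
    """Cumulative oxygen debt (mL): the sum of (estimated - measured) VO2
    over equal-length measurement intervals, as in Shoemaker's approach."""
    return sum((estimated_vo2 - m) * interval_min for m in measured_vo2)


est = estimated_vo2_anesthesia(70)   # ~213 mL/min for a 70 kg patient
measured = [180, 170, 175, 185]      # hypothetical intraoperative VO2, mL/min
debt_ml = cumulative_o2_debt(est, measured, interval_min=30)
print(round(est), round(debt_ml))    # estimated VO2 and accumulated debt in mL
```

With these illustrative numbers the measured VO2 runs below the estimate throughout surgery, so a positive debt accumulates; in Shoemaker's data this cumulative deficit (indexed to body surface area) was what correlated with organ failure and death.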
Based on these findings, the authors proposed that the greater the oxygen debt incurred (during surgery), the greater the risk of organ failure and death.47 This observation was supported by experimental models of hemorrhagic shock in which the magnitude of the VO2 deficit was related to the risk of death of the study animals.49 The same year (1988) these authors published the results of two series of patients.50 The first series was again a pseudorandomized study in which a DO2 > 600 mL/min/m2 and a CI > 4.5 L/min/m2 were targeted in the postoperative period, as compared to standard postoperative hemodynamic targets. The reported mortality was 38% in the control group compared to 21% in the protocol group (P < 0.05). In the second series, patients were randomized preoperatively into one of three treatment groups, namely: (i) a central venous pressure (CVP) control group, (ii) a pulmonary artery catheter (PAC) control group, and (iii) a PAC protocol group in which “supranormal” hemodynamic and oxygen transport values were used as the goals of care (DO2 > 600 mL/min/m2 and CI > 4.5 L/min/m2). The reported mortality was 23% in the CVP control group, 33% in the PAC control group, and 4% in the PAC protocol group (P < 0.01). Complications were observed less frequently in patients treated by the protocol in both series. Based on the VO2 debt incurred intraoperatively and the summation of their outcome data, Shoemaker and colleagues recommended that “in the high-risk patient, PA catheterization should be instituted preoperatively and that the important cardiorespiratory values be prophylactically augmented beginning in the preoperative and continued into the intraoperative and immediate postoperative periods.”51
The concept of deliberate perioperative supranormal oxygen delivery was subsequently tested in a true randomized controlled clinical trial by Boyd and colleagues, published in 1993.52 In this study 107 high-risk surgical patients were randomized to a control group or a protocol group in which DO2 was increased to greater than 600 mL/min/m2 by the use of a dopexamine hydrochloride infusion. The mortality was 5.7% in the protocol group compared to 22.2% in the control group (P = 0.01), with half the number of complications in the protocol group as compared to the control group (P = 0.008). Patients enrolled in this RCT were followed for 15 years following randomization to ascertain their length of survival after surgery.53 Remarkably, 20.7% of the goal-directed therapy patients versus 7.5% of the control group were alive at 15 years. The authors concluded that short-term goal-directed therapy in the perioperative period may improve long-term outcome, in part due to its ability to reduce the number of perioperative complications. The study of Boyd and colleagues has been followed by at least 30 RCTs which have studied perioperative hemodynamic optimization in a variety of settings using various goals and techniques of hemodynamic optimization.54,55,56 The initial preemptive hemodynamic studies used the PAC and targeted the Shoemaker “supranormal” goals, while more recent studies have “optimized” CO using esophageal Doppler or dynamic indices of fluid responsiveness. Meta-analyses of these studies have demonstrated that both approaches reduce surgical mortality and morbidity.54,55,56 Furthermore, while mortality was reduced only in the high-risk patients, morbidity was reduced across all risk groups. In addition, these meta-analyses have demonstrated that the PAC has been largely replaced by less invasive hemodynamic monitoring techniques.
The original goal-directed therapy (GDT) study by Shoemaker et al published in 1982 demonstrated the benefit of achieving postoperative supranormal hemodynamic targets.46 Following their 1988 study, in which they demonstrated that the oxygen debt was incurred intraoperatively,47 they recommended preemptive perioperative (preoperative or intraoperative) hemodynamic optimization.50 The studies that followed demonstrated that this approach reduced surgical morbidity and mortality.54,55,56 In their 1988 paper Shoemaker and colleagues were unable to determine “which of these influences are operative” to explain the intraoperative oxygen debt.47 Furthermore, as already mentioned, the VO2 deficit was calculated using a formula that had not been validated.48 At face value it would appear counterintuitive that anesthesia would result in an oxygen debt. General anesthesia and neuromuscular blockade reduce metabolic rate and oxygen consumption while DO2 remains largely unchanged.57,58 Hypothermia occurs frequently during anesthesia, which further reduces metabolic oxygen requirements.59,60 Indeed, in Shoemaker’s pivotal paper VO2 fell during the intraoperative period, reaching a nadir at the end of surgery.47 In this study VO2 increased sharply after surgery, reaching the preoperative VO2 at 1 hour and peaking at 4 hours. It is therefore difficult to understand how anesthesia induces an oxygen debt. This apparent contradiction is best resolved by an analysis of the time course of the mixed venous oxygen saturation (SmvO2) or central venous oxygen saturation (ScvO2) during the perioperative period. SmvO2 (or ScvO2) is a reflection of the balance between DO2 and VO2; in patients who incur an oxygen debt the SmvO2 should fall.
A number of studies have monitored the SmvO2/ScvO2 in the perioperative period.61,62,63,64 These studies have reproducibly demonstrated that the SmvO2/ScvO2 remains stable or increases slightly during anesthesia and surgery but falls sharply in the immediate post-anesthesia period. These data suggest that the oxygen debt is incurred postoperatively, with the withdrawal of anesthesia (and neuromuscular blockade) and with the development of postoperative pain, agitation, shivering, and increased sympathetic tone. Furthermore, those patients with limited cardiac reserve are most likely to have the largest postoperative fall in SmvO2/ScvO2 and to incur the largest oxygen debt. Indeed, these are the patients that have been demonstrated to be at the greatest risk of death and postoperative morbidity.62,64 This would suggest that optimization of CO and DO2 in the immediate postoperative period should be as effective as initiating hemodynamic optimization preoperatively or during the intraoperative period. Meta-analyses have confirmed the benefit of hemodynamic optimization whether initiated pre-, intra-, or postoperatively.54,55
Optimizing Cardiac Output in Trauma Patients
Patients who suffer severe traumatic injuries with blood loss incur an oxygen debt. The early increases in CO and DO2 following resuscitation of trauma patients are considered compensatory, serving to repay the oxygen debt. It has therefore been postulated that hemodynamic optimization following trauma targeting supranormal goals would decrease the incidence of multiple organ failure and death.65,66 Bishop and coworkers pseudo-randomized 115 patients who had suffered severe traumatic injuries to normal or supranormal hemodynamic goals (using a PAC) on admission to the SICU.67 In this study patients in the supranormal group had significantly fewer organ failures and a lower risk of death. Furthermore, the CI and DO2 of the survivors were significantly higher than those of the patients who died. McKinley and colleagues randomized 36 patients following traumatic shock to a protocol that aimed to achieve a DO2 > 600 mL/min/m2 or a protocol that aimed for a DO2 of about 500 mL/min/m2.68 The patients in the 500 mL/min/m2 group received less fluid and blood; however, there was no difference in outcomes between the groups. Velmahos et al randomized 75 severely injured patients to normal or supranormal hemodynamic goals in the pre-ICU period.69 Survival rates were identical between the normal and supranormal groups. However, patients from either group who achieved supranormal values had improved survival rates compared with patients who did not. In this study patients in the supranormal group who could not achieve supranormal values had a higher death rate than similar patients in the control group. These findings support the argument that achieving supranormal values is an indicator of physiologic reserve rather than a useful endpoint of resuscitation.
Patients who have the inherent ability to respond to trauma by increasing their CO and oxygen transport capacity beyond normal levels are more likely to eliminate the existing oxygen deficit, avoid organ failure, and survive. Furthermore, attempts to optimize patients who do not have the necessary physiologic reserve may be detrimental. Taken together, these studies suggest that targeting a supranormal DO2 does not improve the outcome of patients who have suffered traumatic injuries and that the most appropriate hemodynamic goals would be a MAP > 65 mm Hg with a normal cardiac index (> 2.5 L/min/m2).
Optimizing Cardiac Output in Medical Patients
It is widely believed that patients with sepsis, particularly those with an increased lactate concentration, have an oxygen debt (due to increased oxygen demand) and that increasing oxygen delivery will increase oxygen consumption and improve patient outcome.70 This concept was popularized by Edwards et al71 and Astiz et al72 in the late 1980s. There is, however, scant evidence that tissue hypoxia occurs in patients with sepsis. Hotchkiss and Karl, in a seminal review published over 20 years ago, demonstrated that cellular hypoxia and bioenergetic failure do not occur in sepsis.73 Using phosphorus-31 NMR spectroscopy to monitor concentrations of high-energy phosphates, Song et al demonstrated normal concentrations of ATP in the leg muscle of septic rats.74 Jepson et al confirmed these findings.75 Similarly, Solomon et al demonstrated that sepsis-induced myocardial depression was not due to bioenergetic failure.76 Using the hypoxic marker [18F]fluoromisonidazole, Hotchkiss et al were unable to demonstrate evidence of cellular hypoxia in the muscle, heart, lung, and diaphragm of septic rats.77 Additional studies support these findings. In a porcine peritonitis model, Regueira et al demonstrated a significant increase in arterial lactate concentration, yet there was no significant change in hepatic and muscle mitochondrial oxidative function.78
While sepsis is considered to be a hypermetabolic condition, oxygen consumption and energy expenditure are broadly comparable to those of healthy subjects, with energy expenditure decreasing with increasing sepsis severity.79,80,81 Therefore, there is no requirement that oxygen delivery increase with sepsis. Ronco and colleagues determined the critical oxygen delivery threshold for anaerobic metabolism in septic and nonseptic critically ill humans while life support was being discontinued.82 In this study there was no difference in the critical oxygen delivery threshold between septic and nonseptic patients. The critical oxygen delivery threshold was 3.8 ± 1.5 mL/min/kg (266 mL/min in a 70 kg patient); assuming a hemoglobin concentration of 10 g/dL, this translates into a cardiac output of approximately 2 L/min. It is likely that only preterminal, moribund patients with septic shock would have such a low cardiac output.
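The back-calculation above can be made explicit using the standard oxygen delivery relationship, DO2 (mL/min) = CO (L/min) × CaO2 (mL O2/dL) × 10, with CaO2 ≈ 1.34 × Hb × SaO2 (dissolved oxygen ignored). The patient values below (70 kg, Hb 10 g/dL, SaO2 100%) mirror the worked example in the text.

```python
def arterial_o2_content(hb_g_dl: float, sao2: float) -> float:
    """Arterial oxygen content (mL O2/dL), ignoring dissolved oxygen:
    CaO2 ~ 1.34 x Hb x SaO2."""
    return 1.34 * hb_g_dl * sao2


def cardiac_output_for_do2(do2_ml_min: float, hb_g_dl: float,
                           sao2: float) -> float:
    """Cardiac output (L/min) required to deliver a given DO2 (mL/min);
    the factor 10 converts CaO2 from mL/dL to mL/L."""
    return do2_ml_min / (arterial_o2_content(hb_g_dl, sao2) * 10)


critical_do2 = 3.8 * 70  # Ronco's threshold: 3.8 mL/min/kg x 70 kg = 266 mL/min
co = cardiac_output_for_do2(critical_do2, hb_g_dl=10, sao2=1.0)
print(round(co, 2))  # ~2 L/min, matching the estimate in the text
```

This makes the clinical point concrete: a cardiac output of roughly 2 L/min would be required to fall below the critical delivery threshold, a value seen only in preterminal, moribund patients.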
Several studies performed over four decades ago provide strong evidence that the hyperlactacidemia noted during shock states was unlikely to be caused by tissue hypoxia.83,84 It has now been well established that epinephrine released as part of the stress response in patients with shock stimulates Na+ K+ ATPase activity. Increased activity of Na+ K+ ATPase leads to increased lactate production under well-oxygenated conditions in various cells, including erythrocytes, vascular smooth muscle, neurons, glia, and skeletal muscle.85,86 This concept was confirmed by Levy and colleagues, who demonstrated in patients with septic shock that skeletal muscle was the leading source of lactate formation, as a result of exaggerated aerobic glycolysis through Na+ K+ ATPase stimulation.87 Selective inhibition of Na+ K+ ATPase with an ouabain infusion stopped the overproduction of muscle lactate and pyruvate. In summary, these data suggest that oxygen requirements are not increased in patients with sepsis, that an oxygen debt does not exist in patients with sepsis, and that lactate is produced aerobically as part of the stress response. This would suggest that increasing oxygen delivery would not be a useful exercise. In a pivotal study published in 1994, Hayes and colleagues randomized 109 fluid-resuscitated critically ill patients to receive dobutamine titrated to achieve a DO2 > 600 mL/min/m2 or to a control group who received dobutamine only if the CI was < 2.8 L/min/m2.88 During treatment there was no difference between the groups in MAP or VO2, despite a significantly higher CI and DO2 in the treatment group. The in-hospital mortality was significantly higher in the treatment group (54% vs 34% in the control group, P = 0.04).
In a follow-up publication limited to those patients with sepsis, these authors demonstrated that the patients with normal hemodynamics and those who reached the supranormal DO2 goal spontaneously (with fluid alone) had a significantly lower mortality than those in whom the DO2 goals were achieved with dobutamine.89 The findings of this study are in keeping with the studies in trauma patients, which suggest that attempting to drive up oxygen delivery in patients with limited cardiac reserve is not beneficial and may be potentially harmful. This concept is supported by the study by Gattinoni and colleagues.90 These authors randomized 762 critically ill patients to three groups, namely (i) a control group, (ii) a group with a target cardiac index > 4.5 L/min/m2 (supranormal group), and (iii) a group with a target SmvO2 > 70%. In this study there was no difference in outcome between any of the groups.
In conclusion, fluid challenges and the use of inotropic agents should be guided by the change in cardiac output following these interventions. Perioperative optimization of cardiac output with targeted fluid challenges reduces postoperative complications and mortality. Targeting supranormal hemodynamic goals in patients with traumatic injuries and in those with sepsis does not improve outcome and may be harmful. Similarly, in patients with sepsis, attempting to increase oxygen delivery in response to an increased lactate concentration is illogical and potentially harmful.