Imagine a scale with, at one extreme, supervising physicians doing everything while trainees watch and, at the other, trainees doing everything and calling their supervisors only when they are in trouble. Until fairly recently, most medical training systems were tilted far too heavily toward autonomy. Prodded by some widely publicized cases of medical error attributable, at least in part, to inadequate supervision (the death of Libby Zion at New York Hospital in 1986 was the most vivid example2; Table 1-1), the traditional model of medical education—dominated by unfettered resident autonomy—is giving way to something safer. (As a side note, although the Libby Zion case is popularly attributed to long resident hours, the chair of the commission that investigated it, Dr. Bertrand Bell, saw the root cause as inadequate supervision more than sleepy residents.3) We now recognize that “learning from mistakes” is fundamentally unethical when it is built into the system, and that it is unreasonable to assume trainees will even know when they need help, particularly if they are thrust into the clinical arena with little or no practice and supervision.4