Quick Take: Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists

by Constance Wu, Aliya Ramjaun

Originally published by 2 Minute Medicine® (view original article). Reused on AccessMedicine with permission.

Deep learning approaches have been proposed as a way to facilitate the interpretation of chest radiographs in low-resource settings. In this study, investigators developed a convolutional neural network called CheXNeXt and compared its performance in detecting 14 different pathologies with that of 9 radiologists. CheXNeXt was trained and then evaluated on an internal validation set of 420 images spanning the 14 pathologies, including pneumonia, pleural effusion, pulmonary masses, and nodules. The comparison group comprised 6 board-certified radiologists (average experience 12 years, range 4 to 28 years) and 3 senior radiology residents from 3 academic institutions.
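The study's pathology-level comparisons rest on the area under the ROC curve (AUC), which equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. As a minimal illustrative sketch of that metric (not the study's code; the toy scores and labels below are invented for the example):

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formulation.

    Equivalent to the probability that a randomly chosen positive case
    scores higher than a randomly chosen negative case, counting ties
    as half a win.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative case")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: the model ranks both positives above both negatives.
print(auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # → 1.0
```

In practice, each of the 14 pathologies is scored as its own binary task like this, and the per-pathology AUCs (with bootstrap confidence intervals) are what the study compares between CheXNeXt and the radiologists.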
Investigators found that the algorithm had a mean proportion correct across all pathologies of 0.828 (SD 0.12), compared with 0.675 (SD 0.15) for board-certified radiologists and 0.654 (SD 0.16) for residents. The algorithm's area under the curve (AUC) of 0.862 (95% CI 0.825 to 0.895) for atelectasis was significantly higher than the radiologists' AUC of 0.808 (95% CI 0.777 to 0.838). Radiologists had statistically significantly higher AUCs for cardiomegaly (AUC 0.888, 95% CI 0.863 to 0.910), emphysema (AUC 0.911, 95% CI 0.866 to 0.947), and hiatal hernia (AUC 0.985, 95% CI 0.974 to 0.991) than the algorithm (AUC 0.831 for cardiomegaly, 95% CI 0.790 to 0.870; AUC 0.704 for emphysema, 95% CI 0.567 to 0.833; AUC 0.851 for hiatal hernia, 95% CI 0.785 to 0.909). There were no statistically significant differences in AUC between radiologists and CheXNeXt for the other 10 pathologies. The average time to interpret the 420 validation images was 240 minutes for the radiologists versus 1.5 minutes for CheXNeXt. Taken together, these results indicate that deep learning approaches may be useful for identifying pathologies on chest radiographs in places where radiologists are not readily available.

Click to read the study in PLOS Medicine

©2018 2 Minute Medicine, Inc. All rights reserved. No works may be reproduced without expressed written consent from 2 Minute Medicine, Inc. Inquire about licensing here. No article should be construed as medical advice and is not intended as such by the authors or by 2 Minute Medicine, Inc.