Man against machine: diagnostic performance of a deep learning convolutional neural network in comparison to 58 dermatologists [Top 100 journal articles of 2018]
This article is part 6 of a series reviewing selected papers from Altmetric’s list of the top 100 most-discussed journal articles of 2018.
Imprecise knowledge is an important issue in clinical diagnosis, and reasoning with this uncertainty has long been considered a key challenge [1] for artificial intelligence (AI) in medicine.
How far has AI come in meeting this challenge?
A May 2018 paper [2] provides insight into the potential for AI to reduce clinical uncertainty by assessing the melanoma detection performance of a deep learning convolutional neural network (CNN) against a large group of dermatologists. Melanoma is a major public health challenge, with continually rising incidence and mortality rates fueling a heightened commitment to early detection and prevention.
The study found that “the average diagnostic performance of 58 dermatologists was inferior to a deep learning CNN. Therefore, deep learning CNNs seem a promising tool for melanoma detection.”
Author abstract
Background
Deep learning convolutional neural networks (CNN) may facilitate melanoma detection, but data comparing a CNN’s diagnostic performance to larger groups of dermatologists are lacking.
Methods
Google’s Inception v4 CNN architecture was trained and validated using dermoscopic images and corresponding diagnoses. In a comparative cross-sectional reader study a 100-image test-set was used (level-I: dermoscopy only; level-II: dermoscopy plus clinical information and images). Main outcome measures were sensitivity, specificity and area under the curve (AUC) of receiver operating characteristics (ROC) for diagnostic classification (dichotomous) of lesions by the CNN versus an international group of 58 dermatologists during level-I or -II of the reader study. Secondary end points included the dermatologists’ diagnostic performance in their management decisions and differences in the diagnostic performance of dermatologists during level-I and -II of the reader study. Additionally, the CNN’s performance was compared with the top-five algorithms of the 2016 International Symposium on Biomedical Imaging (ISBI) challenge.
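To make the training setup concrete: the sketch below is a rough, hypothetical illustration (not the authors' pipeline) of fine-tuning a pretrained Inception v4 for binary lesion classification, assuming PyTorch, the `timm` model zoo, and a made-up `dermoscopy/train` folder of labelled dermoscopic images.

```python
# Minimal sketch, NOT the study's code: fine-tune a pretrained
# Inception v4 for binary melanoma-vs-benign classification.
import timm
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Inception v4 expects 299x299 inputs; the 0.5 mean/std values follow
# the Inception normalisation convention used by timm's weights.
preprocess = transforms.Compose([
    transforms.Resize((299, 299)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

# Hypothetical dataset layout: dermoscopy/train/{benign,melanoma}/*.jpg
train_set = datasets.ImageFolder("dermoscopy/train", transform=preprocess)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Swap the 1000-class ImageNet head for a 2-class head.
model = timm.create_model("inception_v4", pretrained=True, num_classes=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one epoch shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```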
Results
In level-I dermatologists achieved a mean (±standard deviation) sensitivity and specificity for lesion classification of 86.6% (±9.3%) and 71.3% (±11.2%), respectively. More clinical information (level-II) improved the sensitivity to 88.9% (±9.6%, P = 0.19) and specificity to 75.7% (±11.7%, P < 0.05). The CNN ROC curve revealed a higher specificity of 82.5% when compared with dermatologists in level-I (71.3%, P < 0.01) and level-II (75.7%, P < 0.01) at their sensitivities of 86.6% and 88.9%, respectively. The CNN ROC AUC was greater than the mean ROC area of dermatologists (0.86 versus 0.79, P < 0.01). The CNN scored results close to the top three algorithms of the ISBI 2016 challenge.
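The headline comparison works by reading the CNN's specificity off its ROC curve at the dermatologists' mean sensitivity, alongside the overall ROC AUC. The sketch below shows those two computations with scikit-learn; `y_true` and `y_score` are toy stand-ins for the study's test-set labels and the CNN's melanoma probabilities.

```python
# Illustrative only: ROC AUC, plus specificity at a fixed sensitivity
# (the dermatologists' level-I mean of 86.6%), for toy data.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=100)                 # 1 = melanoma (toy labels)
y_score = y_true * 0.4 + rng.random(100) * 0.6        # toy classifier scores

print("ROC AUC:", roc_auc_score(y_true, y_score))

# Sensitivity is the true-positive rate; specificity is 1 - FPR.
fpr, tpr, thresholds = roc_curve(y_true, y_score)
target_sensitivity = 0.866
idx = np.argmax(tpr >= target_sensitivity)            # first ROC point reaching it
print("Specificity at 86.6% sensitivity:", 1 - fpr[idx])
```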
Conclusions
For the first time we compared a CNN’s diagnostic performance with a large international group of 58 dermatologists, including 30 experts. Most dermatologists were outperformed by the CNN. Irrespective of any physicians’ experience, they may benefit from assistance by a CNN’s image classification.
Header image source: Adapted from Wikimedia Commons, CC BY-SA 4.0.
References:
1. Peek, N., Combi, C., Marin, R., & Bellazzi, R. (2015). Thirty years of artificial intelligence in medicine (AIME) conferences: A review of research themes. Artificial Intelligence in Medicine, 65(1), 61-73.
2. Haenssle, H. A., Fink, C., Schneiderbauer, R., Toberer, F., Buhl, T., Blum, A., … & Uhlmann, L. (2018). Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists. Annals of Oncology, 29(8), 1836-1842.