Abstract
Deep learning has shown promise to augment radiologists and improve the standard of care globally. Two main issues that complicate deploying these systems are patient privacy and scaling to the global population. To deploy a system at scale with minimal computational cost while preserving privacy, we present a web-delivered (but locally run) system for diagnosing chest X-rays. Code is delivered via a URL to a web browser (including on cell phones), but the patient data remains on the user's machine and all processing occurs locally. The system is designed to be used as a reference, where a user can process an image to confirm or aid in their diagnosis. The system contains three main components: out-of-distribution detection, disease prediction, and prediction explanation. The system is open source and freely available here: https://mlmed.org/tools/xray/
Keywords: Chest X-Ray, Radiology, Deep Learning
Summary
The paper proposes a free-to-access, web-based chest X-ray disease prediction system named Chester. It accepts an X-ray image upload, computes the predictions locally (using TensorFlow.js), and displays the probabilities of 14 chest diseases (Atelectasis, Cardiomegaly, Effusion, Infiltration, Mass, Nodule, Pneumonia, Pneumothorax, Consolidation, Edema, Emphysema, Fibrosis, Pleural Thickening, and Hernia). The source code is available at https://github.com/mlmed/dl-web-xray.
My first worry when reading this paper was about the user experience, given the time required to download the models and compute the predictions. But it turns out to be reasonably acceptable:
- Initial loading of the models (12±2s)
- Computing the ALI and DenseNet computation graphs (1.3±0.5s)
- Computing gradients to explain predictions (17±4s)
In my personal experience (on Chrome, Ubuntu 16.04 LTS, Core i7-5500U, 8 GB RAM, SSD, 30 Mbps broadband), these runtime estimates were quite accurate, except for the prediction explanation (the last part), which crashed my browser.
Chester is composed of three main parts:
- Out-of-distribution (OOD) detection: this part provides the ability to reject input images that are considered irrelevant (e.g., non-chest X-ray images, or even cat pictures). It uses ALI (Adversarially Learned Inference) (Dumoulin et al., 2016), a GAN-based model, and scores inputs by reconstruction loss (using L1, L2, and SSIM distances as outlier metrics) instead of by estimating density; see the reconstruction-score sketch after this list. The system is evaluated on its ability to reject samples from datasets such as musculoskeletal radiographs from the MURA dataset (Rajpurkar et al., 2018), natural images from CIFAR-100 (Krizhevsky & Hinton, 2009), and handwritten digits from MNIST (LeCun & Cortes, 1998).
- Disease prediction: this part uses a CheXNet-style DenseNet-121 model (Rajpurkar et al., 2017) to predict the diseases. It is trained on the ChestX-ray8 dataset (Wang et al., 2017) and the pneumonia dataset of Kermany et al. (2018). Its performance (AUC) is around 0.81 averaged across all diseases; see the inference sketch after this list.
- Prediction explanation: the last part uses the basic gradient saliency map approach (Simonyan et al., 2014; Lo & Cohen, 2015): given an input image and the pre-softmax output of the neural network, we can compute the pixel-wise impact on a specific output or over all outputs; see the saliency sketch after this list.
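To make the OOD scoring concrete, here is a minimal sketch of a reconstruction-based outlier score, assuming a hypothetical `ali_reconstruct` function that stands in for ALI's encode-then-decode round trip; the weighting and the threshold are illustrative, not taken from the paper.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def outlier_score(x, ali_reconstruct, w=(1.0, 1.0, 1.0)):
    """Score an image by how poorly the ALI model reconstructs it.

    x: 2D grayscale image as float32 in [0, 1].
    ali_reconstruct: hypothetical stand-in for ALI's decode(encode(x)).
    """
    x_hat = ali_reconstruct(x)
    l1 = np.mean(np.abs(x - x_hat))           # L1 distance
    l2 = np.mean((x - x_hat) ** 2)            # L2 (MSE) distance
    s = 1.0 - ssim(x, x_hat, data_range=1.0)  # SSIM turned into a distance
    return w[0] * l1 + w[1] * l2 + w[2] * s

# An image is rejected as out-of-distribution when its score exceeds a
# threshold calibrated on held-out chest X-rays (value below is illustrative):
# if outlier_score(img, ali_reconstruct) > 0.35: reject(img)
```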
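For the prediction component, a minimal PyTorch sketch could look like the following: a DenseNet-121 whose classifier head produces 14 independent sigmoid probabilities. The weights file name and the exact preprocessing are assumptions, not the authors' code.

```python
import torch
import torchvision

LABELS = ["Atelectasis", "Cardiomegaly", "Effusion", "Infiltration", "Mass",
          "Nodule", "Pneumonia", "Pneumothorax", "Consolidation", "Edema",
          "Emphysema", "Fibrosis", "Pleural Thickening", "Hernia"]

# DenseNet-121 with a 14-way multi-label head, in the style of CheXNet.
model = torchvision.models.densenet121(num_classes=14)
model.load_state_dict(torch.load("chester_densenet121.pt"))  # hypothetical weights file
model.eval()

def predict(img):
    """img: (1, 3, 224, 224) float tensor, normalized like the training data."""
    with torch.no_grad():
        probs = torch.sigmoid(model(img))[0]  # independent per-disease probabilities
    return dict(zip(LABELS, probs.tolist()))
```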
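And for the explanation component, a sketch of the gradient saliency map, continuing with the `model` above: backpropagate one pre-activation output (or the sum of all of them) to the input pixels and visualize the absolute gradient.

```python
def saliency_map(model, img, class_idx=None):
    """Pixel-wise |d output / d input| for one class, or summed over all classes."""
    img = img.clone().requires_grad_(True)
    logits = model(img)  # raw pre-activation outputs, shape (1, 14)
    target = logits[0, class_idx] if class_idx is not None else logits.sum()
    target.backward()
    # Max over the channel dimension gives one heat value per pixel.
    return img.grad.abs().max(dim=1)[0].squeeze(0)
```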
From the engineering point of view, another interesting aspect of this work is the use of ONNX to build a pipeline that converts models developed in other frameworks, such as PyTorch, to TensorFlow.js. In this work, all models are developed in PyTorch and then ported to TensorFlow.js through this pipeline:
PyTorch ➡️ ONNX ➡️ TensorFlow ➡️ TensorFlow.js
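The sketch below shows what such a conversion could look like, assuming the `onnx-tf` backend and the `tensorflowjs_converter` CLI; exact flags and supported formats vary across tool versions, so treat it as a rough outline rather than the authors' exact build step.

```python
import torch
import onnx
from onnx_tf.backend import prepare

# Step 1: PyTorch -> ONNX. A dummy input fixes the graph's input shape.
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "model.onnx")  # `model` from the sketch above

# Step 2: ONNX -> a frozen TensorFlow graph, via the onnx-tf backend.
tf_rep = prepare(onnx.load("model.onnx"))
tf_rep.export_graph("model.pb")

# Step 3: TensorFlow -> TensorFlow.js, via the converter CLI, e.g.:
#   tensorflowjs_converter --input_format=tf_frozen_model \
#       --output_node_names='output' model.pb web_model/
# (flag names and the frozen-model format depend on the tensorflowjs version)
```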
About Paper
- Author(s): Joseph Paul Cohen, Paul Bertin, Vincent Frappier
- Full paper: https://arxiv.org/abs/1901.11210
- Year of publishing: 2019