Weakly supervised explanation generation for computer aided diagnostic systems


Aniket Joshi

Abstract

Computer Aided Diagnosis (CAD) systems are developed to aid doctors and clinicians in diagnosis by interpreting and examining medical images, helping them perform this task more consistently. With the arrival of the data-driven deep learning paradigm and the availability of large amounts of data in the medical domain, CAD systems are being developed to diagnose a wide variety of diseases, ranging from various cancers to heart and brain diseases, Alzheimer's disease, and diabetic retinopathy. These systems are highly competent at the tasks on which they are trained. However, although they perform on par with trained clinicians, they suffer from a key limitation: they are completely black box in nature and are trained only on image-level class labels. This poses a problem in deploying CAD systems as stand-alone solutions for disease diagnosis, because decisions in the medical domain concern a patient's health and must be well reasoned and backed by evidence, sometimes from multiple modalities. Hence, there is a critical need for a CAD system's decisions to be explainable.

Restricting our focus to the image modality alone, one way to design an explainable CAD system would be to train it on both class labels and local annotations and derive explanations in a fully supervised manner. However, obtaining these local annotations is expensive, time-consuming, and infeasible in most circumstances. In this thesis we address this explainability and data-scarcity problem and propose two different approaches towards the development of weakly supervised explainable CAD systems.

The first approach explains the classification decision with heatmaps that highlight the regions of interest in the image which led the model to its prediction. To generate anatomically accurate heatmaps, we train the model on a mixed set of annotations: class labels for the entire training set and rough localizations of suspect regions for a smaller subset of the training images. The proposed approach is illustrated on two disease classification tasks based on disparate image modalities: diabetic macular edema (DME) classification from OCT slices and breast cancer detection from mammographic images. Good classification results are shown on public datasets, supplemented by explanations in the form of suspect regions; these are derived using local annotations for just a third of the images, emphasizing the potential generalisability of the proposed solution.
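As a concrete illustration of the mixed-supervision idea, the sketch below shows one minimal, assumed setup; the abstract does not fix an architecture or loss, so all names here (WeaklySupervisedCAD, mixed_supervision_loss, the tiny backbone, the weighting lam) are illustrative assumptions, not the thesis's actual model. A small PyTorch CNN emits both class logits and a CAM-style heatmap; class labels supervise every image, while rough region masks supervise the heatmap only for the annotated subset.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class WeaklySupervisedCAD(nn.Module):
        # A small CNN classifier that also emits a spatial "evidence" heatmap.
        # The heatmap can be trained against rough region masks when they
        # exist and is left unsupervised otherwise. (Hypothetical sketch,
        # not the architecture used in the thesis.)
        def __init__(self, num_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.heatmap_head = nn.Conv2d(64, 1, kernel_size=1)
            self.classifier = nn.Linear(64, num_classes)

        def forward(self, x):
            f = self.features(x)                           # (B, 64, H/4, W/4)
            heatmap = torch.sigmoid(self.heatmap_head(f))  # suspect regions in [0, 1]
            pooled = F.adaptive_avg_pool2d(f, 1).flatten(1)
            return self.classifier(pooled), heatmap

    def mixed_supervision_loss(logits, heatmap, labels, masks, has_mask, lam=0.5):
        # Class labels supervise every image; rough localizations supervise
        # the heatmap only for the annotated subset flagged by `has_mask`.
        cls_loss = F.cross_entropy(logits, labels)
        if has_mask.any():
            m = F.interpolate(masks, size=heatmap.shape[-2:], mode="nearest")
            loc_loss = F.binary_cross_entropy(heatmap[has_mask], m[has_mask])
        else:
            loc_loss = heatmap.new_zeros(())
        return cls_loss + lam * loc_loss

    # Toy usage: 4 images, only the first 2 carry rough region masks.
    model = WeaklySupervisedCAD()
    x = torch.randn(4, 1, 64, 64)         # e.g. OCT slices or mammogram patches
    labels = torch.randint(0, 2, (4,))
    masks = torch.zeros(4, 1, 64, 64)
    masks[:2, :, 20:40, 20:40] = 1.0      # coarse boxes around suspect regions
    has_mask = torch.tensor([True, True, False, False])
    logits, heatmap = model(x)
    mixed_supervision_loss(logits, heatmap, labels, masks, has_mask).backward()

In this sketch the localization term simply vanishes for images without masks, which is what lets the model consume the full class-labelled set while still exploiting the small annotated subset to keep its heatmaps anatomically plausible.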

Year of completion: November 2021
Advisor: Jayanthi Sivaswamy
