Deep Learning for Assisting Annotation and Studying Relationship between Cancers using Histopathology Whole Slide Images


Ashish Menon

Abstract

Cancer is among the leading causes of death across the globe, and there has been a constant push from the scientific community towards assisting its diagnosis. The last decade in particular has seen widespread use of computer vision and AI in tackling cancer diagnosis using both radiological (non-invasive, e.g., X-ray, CT) and pathological (invasive, e.g., histopathology) modalities. A histopathology Whole Slide Image (WSI) is a digitized image of a tissue sample, characterized by a large size of up to 10^9 pixels at maximum resolution, and is considered the gold standard for cancer diagnosis. Routine diagnosis requires experts called pathologists to analyse slides containing tissue samples under a microscope. The process imposes a high cognitive load, and the diagnosis is prone to inter- and intra-pathologist variability. With the digitization of tissue samples as WSIs, computer-assisted diagnosis can address these issues, especially with the advent of deep learning. This, however, requires models trained on large amounts of annotated data, as well as an understanding of how cancer manifests across organs. In this thesis, we address two major problems with the help of deep learning techniques: (1) assisting whole slide image annotation with an expert in the loop, and (2) understanding the relationship between cancers and bringing to light commonalities in cancer patterns between certain pairs of organs.

A typical slide diagnosis under a microscope involves exhaustively scanning across the slide in search of anomalous/tumorous regions. Owing to the large dimensions of a histopathology WSI, visually searching for clinically significant regions (patches) is a tedious task for a medical expert, and the sequential analysis of several such images further increases the workload, resulting in poor diagnosis. A major impediment to automating this task with a deep learning model is the requirement of large amounts of annotated WSI patches, whose collection is laborious and involves an exhaustive search for anomalous regions. To tackle this issue, the first part of the thesis proposes a novel CNN-based, expert-feedback-driven interactive learning technique. The proposed method acquires labels for the most informative patches in small increments over multiple feedback rounds to maximise throughput: the expert queries a patch of interest from a slide and provides feedback on a set of unlabelled patches chosen from a ranked list using the proposed sampling strategy. The technique is applied in a setting that assumes a large cohort of unannotated slides, almost eliminating the need for annotated data upfront; instead, the model learns through expert involvement. We discuss several strategies for sampling the right set of patches to be labelled by the expert, so as to minimise the expert's feedback effort and maximise throughput. The proposed technique can also annotate multiple slides in parallel using a single slide under review (used to query anomalous patches), which further reduces annotation effort.
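To make the feedback loop concrete, the following is a minimal Python sketch of a single round. The similarity-based ranking, the top-k selection, and all function names are illustrative assumptions for exposition, not the thesis's actual sampling strategy or API.

    # Minimal sketch of one expert-feedback round. Ranking unlabelled
    # patches by cosine similarity to the expert's queried patch is one
    # plausible strategy; the thesis discusses several.
    import numpy as np

    def rank_by_similarity(query_emb, unlabelled_embs):
        """Rank unlabelled patch embeddings (rows) by cosine similarity
        to the queried patch's embedding, most similar first."""
        q = query_emb / np.linalg.norm(query_emb)
        u = unlabelled_embs / np.linalg.norm(unlabelled_embs, axis=1, keepdims=True)
        return np.argsort(-(u @ q))

    def feedback_round(query_emb, unlabelled_embs, expert_label_fn, k=16):
        """One round: show the expert the top-k ranked unlabelled patches
        and collect their labels to grow the training set."""
        picked = rank_by_similarity(query_emb, unlabelled_embs)[:k]
        return picked, [expert_label_fn(i) for i in picked]

Across rounds, the newly labelled patches would be used to fine-tune the CNN encoder, so the ranking (and hence the informativeness of the patches shown to the expert) improves with each round of feedback.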
The Cancer Genome Atlas (TCGA) hosts large repositories of histopathology whole slide images spanning several organs and subtypes; however, little work has analysed all of these organs and subtypes and their similarities. Our work attempts to bridge this gap by training deep learning models to classify cancer vs. normal patches for 11 subtypes spanning 7 organs (9,792 tissue slides), achieving near-perfect classification performance. We then used these models to investigate their performance on the test sets of other organs (cross-organ inference). We found that every model had good cross-organ inference accuracy when tested on breast, colorectal and liver cancers. Further, high accuracy is observed between models trained on cancer subtypes originating from the same organ (kidney and lung). We also validated these results by showing the separability of cancer and normal samples in a high-dimensional feature space. We further hypothesized that the high cross-organ inference accuracies are due to tumor morphologies shared among organs, and validated this hypothesis by showing the overlap in the Gradient-weighted Class Activation Mapping (GradCAM) visualizations and the similarity in the distributions of geometrical features of nuclei within the high-attention regions.
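The cross-organ evaluation protocol can be sketched in a few lines of PyTorch. The pairwise loop is a straightforward reading of the experiment described above, while the interfaces (a dict of trained subtype models and a dict of DataLoaders yielding (patch, label) batches) are assumptions made for this sketch.

    # Sketch of cross-organ inference: evaluate each subtype-specific
    # model on every subtype's held-out test set. Model and loader
    # interfaces are assumed, not taken from the thesis code.
    import torch

    @torch.no_grad()
    def patch_accuracy(model, loader, device="cpu"):
        """Fraction of patches correctly classified as cancer vs. normal."""
        model.eval().to(device)
        correct, total = 0, 0
        for x, y in loader:
            pred = model(x.to(device)).argmax(dim=1).cpu()
            correct += (pred == y).sum().item()
            total += y.numel()
        return correct / total

    def cross_organ_matrix(models, test_loaders):
        """acc[i][j]: accuracy of the model trained on subtype i when run
        on subtype j's test set (the diagonal is in-domain accuracy)."""
        names = sorted(models)
        return names, [[patch_accuracy(models[a], test_loaders[b])
                        for b in names] for a in names]

Off-diagonal entries of the resulting matrix are what the abstract calls cross-organ inference accuracies; the GradCAM overlap and nuclei-feature analyses then probe why certain off-diagonal entries are high.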

Year of completion: June 2022
Advisors: C V Jawahar, Vinod P K

Related Publications


Downloads

thesis