Image Reconstruction


Improving the resolution of tomographic images has been an active area of research in recent years. This is of special interest in nuclear imaging, where image resolution is limited by the permissible dosage. Super-resolution (SR) techniques based on combining a set of spatially shifted low-resolution images have been examined for PET and CT images.

Our research is focused on obtaining high-quality upsampled tomographic images. The upsampling technique we have developed uses samples drawn from a union of rotated lattices; both hexagonal and square lattices have been studied. Such a technique has the benefit that the number of images required to derive the synthetically zoomed output is minimal.
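As a rough illustration of the sampling scheme, the sketch below builds a hexagonal lattice and takes the union of a few rotated copies of it. The rotation angles, spacing and extent are illustrative placeholders, not the parameters used in our reconstruction pipeline.

```python
import numpy as np

def hexagonal_lattice(extent, spacing=1.0):
    """Generate points of a hexagonal lattice covering [-extent, extent]^2."""
    n = int(np.ceil(extent / spacing)) + 1
    pts = []
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            # Hexagonal packing: alternate rows are offset by half a spacing,
            # and the row pitch is spacing * sqrt(3)/2.
            x = (i + 0.5 * (j % 2)) * spacing
            y = j * spacing * np.sqrt(3) / 2
            if abs(x) <= extent and abs(y) <= extent:
                pts.append((x, y))
    return np.array(pts)

def rotate(points, angle_deg):
    """Rotate 2-D points about the origin by angle_deg degrees."""
    t = np.deg2rad(angle_deg)
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return points @ R.T

# Union of rotated copies of the base lattice: each rotation contributes a new
# set of sample locations, so the combined set is denser than any single lattice.
# The angles here are hypothetical, chosen only to illustrate the idea.
base = hexagonal_lattice(extent=8.0, spacing=1.0)
angles = [0.0, 15.0, 30.0]
union = np.vstack([rotate(base, a) for a in angles])
print(base.shape[0], "points per lattice ->", union.shape[0], "points in the union")
```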

Figure: Reconstructed RoI (upscaled by a factor of 4) on a union of rotated hexagonal lattices, and its spectrum.

 

Figure: Reconstructed RoI (upscaled by a factor of 4) on a union of shifted square lattices, and its spectrum.

 


People Involved

  • Neha
  • Kartheek

Brain Image Analysis


The brain is one of the most complex and sophisticated organs in the human body. It is the center of the nervous system and is responsible for motor actions, memory and intelligence in humans. The structure, function and diseases of the human brain are studied by analyzing images obtained in different modalities such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT).


Structural Analysis and Disease Detection in CT and MRI Brain Images

Stroke detection - A stroke is a disease of the vessels that supply blood to the brain. It occurs when a blood vessel either bursts or becomes blocked. Due to the lack of oxygen, nerve cells in the affected brain area are unable to perform their basic functions and die, which can cause sudden death.

Strokes are mainly classified in two categories:

  1. Ischemic stroke or infarct (due to lack of blood supply)
  2. Hemorrhagic stroke (due to rupture of blood vessel)

Computed tomographic (CT) images are widely used in the diagnosis of stroke. We are working on an automated method to detect an abnormality and classify it as acute infarct, chronic infarct or hemorrhage at the slice level in non-contrast CT images. CT imaging is preferred over MRI due to its wider availability, lower cost and shorter scan time.


We have developed a unified method to detect both types of stroke from a given CT volume. The proposed method is based on the observation that the presence of a stroke disturbs the natural contralateral symmetry of a slice. Accordingly, we characterize a stroke as a distortion between the two halves of the brain in terms of tissue density and texture distribution.
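The sketch below illustrates the symmetry idea in its simplest form: given a slice already aligned so that the mid-sagittal plane is the vertical centre line (an assumption for this sketch; the full method estimates the symmetry axis), it compares the two halves using intensity histograms as a proxy for tissue density and a crude gradient statistic as a texture cue. The feature choices are illustrative, not the exact ones used in our pipeline.

```python
import numpy as np

def asymmetry_features(slice_hu, n_bins=64, hu_range=(0, 100)):
    """Compare the two halves of a mid-sagittally aligned CT slice.

    slice_hu : 2-D array of Hounsfield units, assumed already aligned so the
               mid-sagittal plane is the vertical centre line.
    Returns a small feature vector measuring density and texture asymmetry.
    """
    h, w = slice_hu.shape
    left = slice_hu[:, : w // 2].astype(float)
    right = np.fliplr(slice_hu[:, w - w // 2 :]).astype(float)  # mirrored right half

    # Density asymmetry: chi-squared distance between intensity histograms.
    hl, _ = np.histogram(left, bins=n_bins, range=hu_range, density=True)
    hr, _ = np.histogram(right, bins=n_bins, range=hu_range, density=True)
    chi2 = 0.5 * np.sum((hl - hr) ** 2 / (hl + hr + 1e-9))

    # Texture asymmetry: difference of mean local gradient magnitude.
    gl = np.hypot(*np.gradient(left))
    gr = np.hypot(*np.gradient(right))
    texture_diff = abs(gl.mean() - gr.mean())

    return np.array([chi2, texture_diff])
```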

This work was conceived recently and is still in progress, in association with CARE Hospital.

Detection of neurodegenerative diseases - Neurodegenerative diseases result in cognitive impairment that affects memory, attention, language and problem solving. They can be caused by head injury, infection or aging. Early detection of these diseases can aid treatment and hence help reverse the impairment. MR images are recommended for detecting this class of diseases due to their high contrast.

Structural imaging can detect and follow the time course of subtle brain atrophy as a surrogate marker for pathological processes. Our research is directed towards developing MR image analysis algorithms to detect such atrophies. Such detection tools will aid in studying the pathological processes and in detecting pre-demented conditions.
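A minimal sketch of one such surrogate measure, assuming brain (or structure) masks at two time points have already been segmented and co-registered upstream: the measure is simply the percentage change in segmented volume. The synthetic masks in the example are for illustration only.

```python
import numpy as np

def percent_volume_change(mask_baseline, mask_followup, voxel_volume_mm3):
    """Crude atrophy measure: percentage change in segmented volume between
    two co-registered time points. Segmentation is assumed to be done
    upstream; this sketch does not perform it."""
    v0 = mask_baseline.sum() * voxel_volume_mm3
    v1 = mask_followup.sum() * voxel_volume_mm3
    return 100.0 * (v1 - v0) / v0

# Illustration with synthetic masks (not real data).
rng = np.random.default_rng(0)
m0 = rng.random((64, 64, 64)) < 0.30
m1 = m0 & (rng.random((64, 64, 64)) < 0.97)   # simulate roughly 3% tissue loss
print(f"volume change: {percent_volume_change(m0, m1, voxel_volume_mm3=1.0):+.1f}%")
```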


People Involved

  • Sushma
  • Rohit
  • Shashank
  • Mayank
  • Sandeep
  • Saurabh

 

Retinal Image Analysis


Retinal images are widely used for diagnostic purposes by ophthalmologists. They provide vital information about the health of the sensory part of the visual system.

Several diseases that can lead to blindness manifest as artifacts in the retinal image. Automatic segmentation and analysis of retinal images can therefore be used to diagnose these diseases.

Our current work focuses on the following areas:

  • General Segmentation
  • Uni-modal and cross-modal registration
  • Disease Analysis
  • Content Based Image Retrieval (CBIR) of Retinal Images

General Segmentation - Developing techniques for segmenting various structures of interest within the retina, such as the blood vessel tree, the optic disk and the macula.
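As a baseline illustration of vessel segmentation (not necessarily the method we use), the sketch below applies scikit-image's Frangi vesselness filter to the green channel of a colour fundus image and thresholds the response; the threshold value and file name are illustrative.

```python
import numpy as np
from skimage import filters

def segment_vessels(fundus_rgb, threshold=1e-5):
    """Rough vessel segmentation from a colour fundus image.

    Uses the green channel (where vessels have the best contrast) and the
    Frangi vesselness filter followed by a fixed threshold. This is a generic
    baseline, not the specific method developed in our work.
    """
    green = fundus_rgb[..., 1].astype(float) / 255.0
    # Vessels appear dark in the green channel, so detect dark ridges.
    vesselness = filters.frangi(green, black_ridges=True)
    return vesselness > threshold

# Usage (file name is hypothetical):
# from skimage import io
# vessel_mask = segment_vessels(io.imread("fundus.png"))
```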

Uni-modal and cross-modal registration - Developing techniques for retinal image registration in order to combine complementary information from images of the same or different modalities.

Disease Analysis - Developing techniques for identifying, quantifying and tracking signs of different types of diseases.

Some of the projects in disease analysis are:

  • Detection and quantification of lesions that occur at very early stages of Diabetic Retinopathy (DR). Examples of such lesions are microaneurysms and hard exudates. The aim is to detect these from color fundus images as it is of prime importance in developing solutions for screening programmes among large populations.
  • Detection of Capillary non-perfusion (CNP), which occurs in advanced stages of DR. The aim is to detect and quantify the total area covered by these lesions from FFA images.
  • Detection, counting and grading of drusen, which occur due to Age-related Macular Degeneration (AMD).

Our current collaborators are: LVPEI, Aravind Eye Institute, Hyderabad and Aravind Eye Hospital, Madurai.


Content Based Image Retrieval (CBIR) of Retinal Images

Image search through Content Based Image Retrieval (CBIR) is a challenging problem for large databases. It becomes even more complex for medical images, where retrieval is driven by semantics (pathology/anatomy) rather than just visual similarity.

We are currently working on a CBIR solution for retinal images in hospital ophthalmology departments. By applying CBIR to medical image databases, we aim to assist ophthalmologists and students in teaching and computer-based self-training. This is based on the assumption that the visual characteristics of a disease carry diagnostic information and that visually similar images often correspond to the same disease category.
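A minimal sketch of the retrieval loop: extract a descriptor per image, index the database, and answer nearest-neighbour queries. The global colour histogram used here is only a stand-in for the richer, pathology-oriented descriptors a real retinal CBIR system would use, and the class and parameter names are hypothetical.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def colour_histogram(image_rgb, bins=8):
    """Simple global descriptor: joint RGB histogram, L1-normalised."""
    hist, _ = np.histogramdd(
        image_rgb.reshape(-1, 3), bins=(bins, bins, bins), range=[(0, 256)] * 3
    )
    hist = hist.ravel()
    return hist / (hist.sum() + 1e-9)

class RetinalCBIR:
    """Minimal retrieval index: store descriptors, answer k-NN queries."""

    def __init__(self, k=5):
        self.index = NearestNeighbors(n_neighbors=k, metric="euclidean")

    def fit(self, images):
        # Descriptors for the whole database, indexed for fast lookup.
        self.descriptors = np.stack([colour_histogram(im) for im in images])
        self.index.fit(self.descriptors)
        return self

    def query(self, image):
        # Return indices and distances of the most visually similar images.
        d = colour_histogram(image)[None, :]
        distances, indices = self.index.kneighbors(d)
        return indices[0], distances[0]
```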


People Involved

  • Arunava Chakravarty
  • Ujjwal
  • Gopal
  • Akhilesh
  • Sai
  • Yogesh

 

Retinal Image Datasets

Our Datasets

- Capillary Nonperfusion (CNP) Analysis Dataset *
- Age-related Macular Degeneration (AMD) Analysis Dataset *
- Optic Nerve Head (ONH) Segmentation Dataset (Drishti-GS1)

 

* - available on request


 

Other Available Datasets

- Digital Retinal Images for Vessel Extraction (DRIVE)
- STructured Analysis of the Retina (STARE)
- Standard Diabetic Retinopathy Database (DIARETDB0 & DIARETDB1)
- Methods to evaluate segmentation and indexing techniques (MESSIDOR)
- Test suite of 18 challenging pairs of images for testing registration algorithms
- Image Database and Archive (RISA)
- Collection of multispectral images of the fundus


Other Retinal Image Analysis Groups

- Department of Computer Science, University of Bristol
- IMAGERET
- MESSIDOR
- Retinal Image Analysis Group: Columbia University
- Retinopathy image search and analysis (RISA)
- Structured Analysis of the Retina (STARE)
- DRIVE : Digital Retinal Images for Vessel Extraction


Reading References

- Online Retinal Image Analysis Reference


Retinal Image Tools

* Available Soon



Histopathological Image Analysis


Automated analysis of histopathological images has attracted much interest in the recent past. Most of these images are largely characterised by colour, textural and morphological features rather than the gross anatomical structures found in radiology (e.g., neuroimaging, lung CT). A fundamental task in developing CAD tools for histopathological images is the detection and segmentation of structures such as lymphocytes, nuclei and stroma.

With regard to histopathological image analysis, we are currently working on mitosis detection in H&E-stained breast biopsy slides. Mitosis detection and quantification is a key task in the grading of cancer. In designing a pipeline for the detection problem, identification of stroma in the tissue is important for rejecting false candidates.
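One common pre-processing cue for locating stroma-rich regions is colour deconvolution of the H&E stains. The sketch below uses scikit-image's rgb2hed to separate the haematoxylin and eosin channels and flags eosin-dominant pixels; the thresholds are illustrative, and a real pipeline would combine this cue with texture features.

```python
import numpy as np
from skimage.color import rgb2hed

def eosin_dominant_mask(tile_rgb, eosin_thresh=0.05, haem_thresh=0.05):
    """Rough stroma cue for an H&E tile: pixels where the eosin channel is
    strong and the haematoxylin channel is weak.

    rgb2hed performs colour deconvolution into haematoxylin, eosin and DAB
    channels. The thresholds here are illustrative placeholders.
    """
    hed = rgb2hed(tile_rgb)
    haematoxylin, eosin = hed[..., 0], hed[..., 1]
    return (eosin > eosin_thresh) & (haematoxylin < haem_thresh)
```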

 

Figure: H&E-stained tissue and the corresponding stroma segmentation.

 


 

People Involved

  • Anisha

 

Semantic Classification of Boundaries of an RGBD Image


Nishit Soni, Anoop M. Namboodiri, CV Jawahar and Srikumar Ramalingam. Semantic Classification of Boundaries of an RGBD Image. In Proceedings of the British Machine Vision Conference (BMVC 2015), pages 114.1-114.12. BMVA Press, September 2015. [paper] [abstract] [poster] [code] [bibtex]

Download the dataset from here.


Summary

Edges in an image often correspond to depth discontinuities at object boundaries (occluding edges) or to normal discontinuities (convex or concave edges). In addition, there can be planar edges that lie within planar regions; these may result from shadows, reflections, specularities and albedo variations. Figure 2 shows a sample image with edge labels, and Figure 1 shows the Kinect depth map of that image. This paper studies the problem of classifying boundaries from RGBD data. We propose a novel algorithm using random forests for classifying edges into convex, concave and occluding entities. We release a dataset with more than 500 RGBD images with pixel-wise ground-truth labels. Our method produces promising results and achieves an F-score of 0.84.

 

We use both image and depth cues to infer the labels of edge pixels. We start with a set of edge pixels obtained from an edge detection algorithm, and the goal is to assign one of the four labels to each of these pixels. Each edge pixel is uniquely mapped to one of the contour segments, where contour segments are sets of linked edge pixels. We formulate the problem as an optimization on a graph constructed from the contour segments. Unary potentials come from a pixel classifier based on random forests, using a feature vector of simple geometric depth-comparison features, and a simple Potts model provides the pairwise potentials. The individual steps of the algorithm are shown in Figure 3.

Figure 3: The pipeline of our approach. It shows the RGB and depth maps as input (first image set), followed by Pb edge detection (second image). The classification and MRF outputs are shown in the last two images, respectively. Color code: red (occluding), green (planar), blue (convex), yellow (concave).
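A minimal sketch of the unary step under the assumptions stated in the comments: per-pixel depth-comparison features feed a random forest classifier (here scikit-learn's RandomForestClassifier). The offsets and feature set are simplified stand-ins for the richer comparisons used in the paper, and the Potts-model MRF smoothing over contour segments is not shown.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# The four edge classes used throughout the paper.
LABELS = ["occluding", "planar", "convex", "concave"]

def depth_comparison_features(depth, edge_pixels,
                              offsets=((0, 5), (0, -5), (5, 0), (-5, 0))):
    """Simple geometric features for each edge pixel: differences between the
    depth at the pixel and at a few fixed offsets across the edge. The offsets
    are illustrative; the paper uses a richer set of comparisons."""
    h, w = depth.shape
    feats = []
    for (r, c) in edge_pixels:
        row = []
        for dr, dc in offsets:
            rr = int(np.clip(r + dr, 0, h - 1))
            cc = int(np.clip(c + dc, 0, w - 1))
            row.append(depth[rr, cc] - depth[r, c])
        feats.append(row)
    return np.asarray(feats)

def train_unary(depth_maps, edge_pixel_lists, label_lists, n_trees=100):
    """Train the unary (per-pixel) classifier; pairwise Potts smoothing over
    contour segments is applied afterwards and is not part of this sketch."""
    X = np.vstack([depth_comparison_features(d, e)
                   for d, e in zip(depth_maps, edge_pixel_lists)])
    y = np.concatenate(label_lists)
    clf = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    clf.fit(X, y)
    return clf
```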


Experiments and Results

For quantitative evaluation of the method, we have created an annotated dataset of 500 RGBD images of varying complexity, with a train-to-test ratio of 3:2. Our dataset consists of objects such as tables, chairs, cupboard shelves, boxes and household objects, in addition to walls and floors. We also annotated 100 images from the NYU dataset, which include varying scenes from bedrooms, living rooms, kitchens, bathrooms and so on, with different complexities.

We compare our approach with Gupta et al. [1] and show that it provides better results. Our approach labels most pixels with high precision; performance degrades when there is a significant loss in the depth data. We obtain an average F-score of 0.82 on the classification results for our dataset, and the use of smoothness constraints in the MRF raises this to 0.84. The NYU dataset contains complex scenes with glass windows and table heads; on it we achieve an average F-score of 0.74. The quantitative evaluation of our approach, along with the comparison with Gupta et al. [1], is given below.

 

Table 1: Precision, recall and F-measure for each edge type. In each set, the first two rows give the results of our approach and of Gupta et al. [1] on our dataset; the third row gives our results on the NYU dataset.

                               Occluding   Planar   Convex   Concave
Our recall                       0.85       0.92     0.70     0.78
Gupta et al. [1] recall          0.70       0.84     0.52     0.67
Our recall on NYU                0.76       0.85     0.56     0.69
Our precision                    0.86       0.81     0.93     0.89
Gupta et al. [1] precision       0.71       0.75     0.72     0.71
Our precision on NYU             0.79       0.80     0.77     0.71
Our F-measure                    0.86       0.86     0.80     0.83
Gupta et al. [1] F-measure       0.71       0.79     0.61     0.69
Our F-measure on NYU             0.77       0.83     0.65     0.70
Table 2: Precision, recall and F-measure for each edge type without (pixel classification only) and with (final, MRF) pairwise potentials.

                               Occluding   Planar   Convex   Concave
Pixel recall                     0.82       0.87     0.69     0.75
Final recall                     0.85       0.92     0.70     0.78
Pixel precision                  0.84       0.85     0.90     0.86
Final precision                  0.86       0.81     0.93     0.89
Pixel F-measure                  0.83       0.86     0.78     0.80
Final F-measure                  0.86       0.86     0.80     0.83

 

Results on NYU dataset (click to enlarge)
Color code: red (occluding), green (planar), blue (convex), yellow (concave)
Ground truth (top row) and our MRF results (bottom row) for five NYU scenes.

 

Results on our dataset (click to enlarge)
Color code: red (occluding), green (planar), blue (convex), yellow (concave)
Ground truth (top row), our results (middle row) and Gupta et al. [1] (bottom row) for five scenes from our dataset.

 


References 

  1. S. Gupta, P. Arbeláez and J. Malik. Perceptual Organization and Recognition of Indoor Scenes from RGB-D Images. In CVPR, 2013.

Authors

Nishit Soni 1

Anoop M. Namboodiri 1

C. V. Jawahar 1

Srikumar Ramalingam 2

1 International Institute of Information Technology, Hyderabad.
2 Mitsubishi Electric Research Lab (MERL), Cambridge, USA.