Retinal Image Analysis
Retinal images are widely used for diagnostic purposes by ophthalmologists. They provide vital information about the health of the sensory part of the visual system.
Several diseases that can lead to blindness manifest as artifacts in the retinal image. Automatic segmentation and analysis of retinal images can therefore be used to diagnose these diseases.
Our current work focuses on the following areas:
- General Segmentation
- Uni-modal and cross-modal registration
- Disease Analysis
- Content Based Image Retrieval (CBIR) of Retinal Images
General Segmentation - Developing techniques for segmenting various structures of interest within the retina, such as the blood vessel tree, the optic disk and the macula.
Uni-modal and cross-modal registration - Developing techniques for retinal image registration in order to combine the complementary information present in images of the same or different retinal imaging modalities.
Disease Analysis - Developing techniques for identifying, quantifying and tracking signs of different types of diseases.
Some of the projects in disease analysis are:
- Detection and quantification of lesions that occur at very early stages of Diabetic Retinopathy (DR), such as microaneurysms and hard exudates. The aim is to detect these lesions from color fundus images, which is of prime importance for developing screening solutions for large populations (a simplified candidate-detection sketch follows this list).
- Detection of Capillary non-perfusion (CNP), which occurs in advanced stages of DR. The aim is to detect and quantify the total area covered by these lesions from FFA images.
- Detection, counting and grading of drusen, which occur due to Age-related Macular Degeneration (AMD).
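As a rough illustration of the candidate-detection step mentioned in the first item above, the sketch below flags small dark spots on the green channel of a color fundus image using a generic morphological bottom-hat followed by a size filter. This is a textbook-style scheme written with NumPy/SciPy for illustration only, not the group's method; the function name, structuring-element size and thresholds are placeholder assumptions.

```python
# Toy candidate detector for small dark lesions (e.g. microaneurysms) on the
# green channel of a color fundus image. Generic bottom-hat + size filter,
# shown for illustration only; parameters are placeholders.
import numpy as np
from scipy import ndimage as ndi

def microaneurysm_candidates(rgb, struct_size=11, thresh=10):
    """Return a boolean mask of small dark-spot candidates.

    rgb         : HxWx3 uint8 fundus image
    struct_size : structuring-element size; should exceed the lesion diameter
    thresh      : minimum bottom-hat response (grey levels) to keep a pixel
    """
    green = rgb[..., 1].astype(float)            # lesions contrast best in the green channel
    footprint = np.ones((struct_size, struct_size))
    closed = ndi.grey_closing(green, footprint=footprint)
    bottom_hat = closed - green                  # bright where small dark spots were
    mask = bottom_hat > thresh
    # discard large connected components (likely vessel segments or haemorrhages)
    labels, n = ndi.label(mask)
    sizes = ndi.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes <= struct_size ** 2))
    return keep
```

In practice each surviving candidate would then be described by shape and intensity features and passed to a classifier to reject vessel fragments and other false positives.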
Our current collaborators are: LVPEI, Aravind Eye Institute, Hyderabad and Aravind Eye Hospital, Madurai.
Content Based Image Retrieval (CBIR) of Retinal Images
Image search through Content Based Image Retrieval (CBIR) is a challenging problem for large databases. It becomes more complex for medical images, where retrieval is driven by semantics (pathology/anatomy) rather than visual similarity alone.
We are currently working on a CBIR solution for retinal images in the ophthalmology departments of hospitals. By applying CBIR to medical image databases we aim to assist ophthalmologists and students in teaching and computer-based self-training. This is based on the assumption that the visual characteristics of a disease carry diagnostic information, and that visually similar images often correspond to the same disease category.
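A minimal sketch of such a retrieval pipeline is given below: each fundus image is indexed by a global color histogram and the k most similar images are returned under Euclidean distance. The feature choice, file layout and function names are illustrative assumptions (NumPy and Pillow are assumed available); a semantics-aware system would replace the histogram with pathology- and anatomy-specific descriptors.

```python
# Minimal CBIR sketch: index fundus images by a global color histogram
# and retrieve the k most similar images by Euclidean distance.
import numpy as np
from PIL import Image
from pathlib import Path

def color_histogram(path, bins=8):
    """3-D RGB histogram, L1-normalized, flattened to a feature vector."""
    img = np.asarray(Image.open(path).convert("RGB"))
    hist, _ = np.histogramdd(img.reshape(-1, 3),
                             bins=(bins, bins, bins), range=[(0, 256)] * 3)
    hist = hist.ravel()
    return hist / hist.sum()

def build_index(image_dir):
    """Compute a descriptor for every image in the directory (toy file layout)."""
    paths = sorted(Path(image_dir).glob("*.png"))
    feats = np.stack([color_histogram(p) for p in paths])
    return paths, feats

def retrieve(query_path, paths, feats, k=5):
    """Rank the indexed images by distance to the query descriptor."""
    q = color_histogram(query_path)
    dists = np.linalg.norm(feats - q, axis=1)
    order = np.argsort(dists)[:k]
    return [(paths[i], dists[i]) for i in order]
```

Retrieval thus amounts to computing the query descriptor once and ranking the stored descriptors by distance; disease-specific features and better similarity measures plug in at exactly those two points.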
People Involved
- Arunava Chakravarty
- Ujjwal
- Gopal
- Akhilesh
- Sai
- Yogesh
Retinal Image Datasets
Our Datasets
- Capillary Nonperfusion (CNP) Analysis Dataset *
- Age-related Macular Degeneration (AMD) Analysis Dataset *
- Optic Nerve Head (ONH) Segmentation Dataset (Drishti-GS1)
* - available on request
Other Available Datasets
- Digital Retinal Images for Vessel Extraction (DRIVE)
- STructured Analysis of the Retina (STARE)
- Standard Diabetic Retinopathy Database (DIARETDB0 & DIARETDB1)
- Methods to evaluate segmentation and indexing techniques (MESSIDOR)
- Test suite of 18 challenging pairs of images for testing registration algorithms
- Image Database and Archive (RISA)
- Collection of multispectral images of the fundus
Other Retinal Image Analysis Groups
- Department of Computer Science, University of Bristol
- IMAGERET
- MESSIDOR
- Retinal Image Analysis Group: Columbia University
- Retinopathy image search and analysis (RISA)
- Structured Analysis of the Retina (STARE)
- DRIVE : Digital Retinal Images for Vessel Extraction
Reading References
- Online Retinal Image Analysis Reference
Retinal Image Tools
* Available Soon
Events
- Retinopathy Online Challenge (ROC) is underway
- DIARETDB released
- Selected conferences in the field of Medical Imaging
- Medical Image Analysis Journals and Conference Proceedings
- Conferences in Computer Vision, Image Processing and Medical Image Analysis
- Medical Imaging Computing
- European Society of Retina Specialists (EURETINA)
- Digital Healthcare Conference
- The International Diabetes Federation (IDF)

Matching patches of a source image with patches of itself or of a target image is a first step in many operations. Finding the optimal nearest neighbors of each patch using a global search over the image is expensive, so optimality is often sacrificed for speed. In this work, we developed the Mixed-Resolution Patch-Matching (MRPM) algorithm, which uses a pyramid representation to perform a fast global search. We compare mixed-resolution patches at coarser pyramid levels to alleviate the effects of smoothing, and we store more matches at coarser resolutions to ensure wider search ranges and better accuracy at finer levels. Our method achieves near-optimal matches relative to exhaustive search. Its simple structure enables fast parallel implementations on the GPU, yielding up to a 70x speedup over other iterative approaches.
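The MRPM algorithm itself is more involved, but the much-simplified coarse-to-fine sketch below conveys the search structure it builds on: an exhaustive match at the coarsest pyramid level, followed by local refinement of the upscaled matches at each finer level. It is written in plain NumPy (no GPU), keeps only one match per patch, and all function names and parameters are illustrative assumptions rather than the published method.

```python
# Simplified coarse-to-fine patch matching: exhaustive search at the coarsest
# pyramid level, then local refinement of the upscaled matches at each finer
# level. Boundary patches are left unmatched for brevity.
import numpy as np

def downsample(img):
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def patch(img, y, x, r):
    return img[y - r:y + r + 1, x - r:x + r + 1]

def exhaustive_match(src, tgt, r):
    """Brute-force best match in tgt for every interior patch of src."""
    H, W = src.shape
    nn = np.zeros((H, W, 2), dtype=int)
    for y in range(r, H - r):
        for x in range(r, W - r):
            p = patch(src, y, x, r)
            best, best_d = (y, x), np.inf
            for ty in range(r, tgt.shape[0] - r):
                for tx in range(r, tgt.shape[1] - r):
                    d = np.sum((p - patch(tgt, ty, tx, r)) ** 2)
                    if d < best_d:
                        best, best_d = (ty, tx), d
            nn[y, x] = best
    return nn

def refine(src, tgt, coarse_nn, r, win=2):
    """Search a small window around the upscaled match from the coarser level."""
    H, W = src.shape
    nn = np.zeros((H, W, 2), dtype=int)
    for y in range(r, H - r):
        for x in range(r, W - r):
            cy, cx = 2 * coarse_nn[y // 2, x // 2]
            p = patch(src, y, x, r)
            best, best_d = (cy, cx), np.inf
            for ty in range(max(r, cy - win), min(tgt.shape[0] - r, cy + win + 1)):
                for tx in range(max(r, cx - win), min(tgt.shape[1] - r, cx + win + 1)):
                    d = np.sum((p - patch(tgt, ty, tx, r)) ** 2)
                    if d < best_d:
                        best, best_d = (ty, tx), d
            nn[y, x] = best
    return nn

def coarse_to_fine_match(src, tgt, levels=3, r=2):
    srcs, tgts = [src], [tgt]
    for _ in range(levels - 1):
        srcs.append(downsample(srcs[-1]))
        tgts.append(downsample(tgts[-1]))
    nn = exhaustive_match(srcs[-1], tgts[-1], r)        # global search, coarsest level
    for lvl in range(levels - 2, -1, -1):               # refine level by level
        nn = refine(srcs[lvl], tgts[lvl], nn, r)
    return nn
```

The full method differs in important ways (mixed-resolution patch comparisons, multiple stored matches per patch at coarse levels, and a GPU-parallel formulation), but the pyramid-guided narrowing of the search range is the common idea.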
K-Means is a popular clustering algorithm with wide applications in computer vision, data mining, data visualization, etc. Clustering large numbers of high-dimensional vectors is very computation-intensive. In this work, we present the design and implementation of the K-Means clustering algorithm on the modern GPU. A load-balanced multi-node, multi-GPU implementation that can handle up to 6 million 128-dimensional vectors was also developed. Our implementation scales linearly or near-linearly with different problem parameters. We achieve up to a 2x speedup over the best prior GPU implementation of K-Means on a single GPU.
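For reference, the sketch below is a plain NumPy version of Lloyd's iteration. The two steps it alternates, per-point nearest-centroid assignment and per-cluster centroid update, are exactly the parts that a GPU implementation parallelizes over points and clusters; the function name and parameters here are just for illustration, not the implementation described above.

```python
# Plain Lloyd's iteration for K-Means, vectorized with NumPy.
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """X: (n, d) array. Returns (centroids, labels)."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assignment step: squared distances via ||x||^2 - 2 x.c + ||c||^2
        d2 = ((X ** 2).sum(1)[:, None]
              - 2 * X @ centroids.T
              + (centroids ** 2).sum(1)[None, :])
        labels = d2.argmin(axis=1)
        # update step: mean of the points assigned to each cluster
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids, labels

# Example: cluster 100,000 random 128-dimensional vectors into 256 groups.
# centroids, labels = kmeans(np.random.rand(100_000, 128), k=256)
```

The multi-GPU version additionally distributes the points across devices and balances the per-device load, but the assignment/update structure is the same.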
Many image filtering operations provide ample parallelism, but progressive non-linear processing of images is among the hardest to parallelize due to long, sequential and non-linear data dependencies. A typical example of such an operation is error-diffusion dithering, exemplified by the Floyd-Steinberg algorithm. In this work, we present its parallelization on multicore CPUs using a block-based approach and on the GPU using a pixel-based approach. We also develop a hybrid approach in which the CPU and the GPU operate in parallel during the computation. Our implementation can dither an 8K x 8K image on an off-the-shelf laptop with an Nvidia 8600M GPU in about 400 milliseconds, whereas the sequential implementation on its CPU takes about 4 seconds.
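For reference, a plain sequential Floyd-Steinberg pass is sketched below in Python. The serial dependency that makes parallelization hard is visible directly: each pixel's quantization depends on errors diffused from its left and upper neighbors. This is a generic textbook version, not the parallel implementation described above.

```python
# Sequential Floyd-Steinberg error diffusion to 1 bit per pixel.
import numpy as np

def floyd_steinberg(gray):
    """gray: 2-D array with values in [0, 255]. Returns a binary dithered image."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0
            out[y, x] = new
            err = old - new
            # diffuse the quantization error to the unprocessed neighbors
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out
```

The parallel versions proceed along a skewed wavefront so that every pixel (or block) scheduled together already has its left and upper error inputs available.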
We have developed several basic graph algorithms on the CUDA architecture, including BFS, Single-Source Shortest Paths (SSSP), All-Pairs Shortest Paths (APSP) and Minimum Spanning Tree computation, for large graphs consisting of millions of vertices and edges. We show results on random, scale-free and almost-linear graphs. Our approaches are 10-50 times faster than their CPU counterparts on random graphs with an average degree of 6 per vertex.
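As a minimal illustration of the formulation that maps naturally onto the GPU, the sketch below runs a level-synchronous (frontier-based) BFS over a CSR adjacency structure in plain Python; on the GPU, each frontier vertex (or edge) is expanded by its own thread in every iteration. The CSR variable names and the small example graph are assumptions made for the sketch.

```python
# Level-synchronous (frontier-based) BFS over a CSR adjacency structure.
import numpy as np

def bfs_levels(row_ptr, col_idx, source, n_vertices):
    """row_ptr/col_idx: CSR adjacency arrays. Returns the BFS level of each vertex."""
    levels = np.full(n_vertices, -1, dtype=int)
    levels[source] = 0
    frontier = [source]
    depth = 0
    while frontier:
        next_frontier = []
        for u in frontier:                              # GPU: one thread per frontier vertex
            for v in col_idx[row_ptr[u]:row_ptr[u + 1]]:
                if levels[v] == -1:                     # unvisited neighbor
                    levels[v] = depth + 1
                    next_frontier.append(v)
        frontier = next_frontier
        depth += 1
    return levels

# Example: path graph 0-1-2-3 in CSR form.
# row_ptr = np.array([0, 1, 3, 5, 6]); col_idx = np.array([1, 0, 2, 1, 3, 2])
# bfs_levels(row_ptr, col_idx, source=0, n_vertices=4)  ->  [0, 1, 2, 3]
```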
We propose the use of structured lighting patterns, which we refer to as projected texture, for the purpose of object recognition. The depth variations of the object induce deformations in the projected texture, and these deformations encode the shape information. The primary idea is to view the deformation pattern as a characteristic property of the object and to use it directly for classification, instead of trying to recover the shape explicitly. To achieve this we need to use an appropriate projection pattern and derive features that sufficiently characterize the deformations. The required patterns could be quite different depending on the nature of the object's shape and its variation across objects.
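Purely as an illustration of the "describe the deformation, don't reconstruct the shape" idea, the sketch below summarizes each captured image of the projected pattern by block-wise Fourier-magnitude statistics and classifies a test image by nearest neighbor. The descriptor, block size and classifier are stand-ins chosen for brevity; they are not the patterns or features used in the actual work.

```python
# Stand-in deformation descriptor: block-wise Fourier-magnitude statistics of
# the captured pattern image, followed by nearest-neighbor classification.
import numpy as np

def deformation_descriptor(img, block=32):
    """Concatenate a peakiness score of the spectrum of each non-overlapping block."""
    h, w = (np.array(img.shape) // block) * block
    feats = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            spec = np.abs(np.fft.rfft2(img[y:y + block, x:x + block]))
            spec[0, 0] = 0.0                       # drop the DC term
            feats.append(spec.max() / (spec.sum() + 1e-9))
    return np.array(feats)

def nearest_neighbor_classify(train_feats, train_labels, test_img):
    """train_feats: (n, m) descriptors of same-sized training images."""
    q = deformation_descriptor(test_img)
    dists = np.linalg.norm(train_feats - q, axis=1)
    return train_labels[np.argmin(dists)]

# Usage sketch (all images of identical size):
# train_feats = np.stack([deformation_descriptor(im) for im in train_images])
# label = nearest_neighbor_classify(train_feats, train_labels, test_image)
```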


