The conference was a testament to the hard work of students and faculty. ICVGIP 2022 was organized by IIT Gandhinagar in association with the Indian Unit for Pattern Recognition and Artificial Intelligence (IUPRAI), an affiliate of the International Association for Pattern Recognition (IAPR).

ICVGIP is dedicated to fostering the community of computer vision, graphics, and image processing researchers and enthusiasts in India and abroad. We strive to live up to this goal at every edition of this annual conference.

Best Paper Award

Interpreting Intrinsic Image Decomposition using Concept Activations

The Authors: Avani Gupta*, Saurabh Saini, and Prof. P J Narayanan

Research summary:

Evaluation of ill-posed problems like Intrinsic Image Decomposition (IID) is challenging. IID involves decomposing an image into its constituent illumination-invariant Reflectance (R) and albedo-invariant Shading (S) components. Contemporary IID methods use deep learning models and require large datasets for training. The evaluation of IID is carried out on either synthetic ground-truth images or sparsely annotated natural images. A scene can be split into reflectance and shading in multiple, valid ways. Comparison with one specific decomposition in the ground-truth images used by current IID evaluation metrics like LMSE, MSE, DSSIM, WHDR, SAW AP%, etc., is therefore inadequate. Measuring R-S disentanglement directly is a better way to evaluate the quality of IID. Inspired by ML interpretability methods, we propose Concept Sensitivity Metrics (CSM) that directly measure disentanglement using sensitivity to relevant concepts. Activation vectors for the albedo invariance and illumination invariance concepts are used for the IID problem. We evaluate and interpret three recent IID methods on our synthetic benchmark of controlled albedo and illumination invariance sets. We also compare our disentanglement score with existing IID evaluation metrics on both natural and synthetic scenes and report our observations. Our code and data are publicly available for reproducibility.
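To give a flavour of the concept-sensitivity idea the paper builds on, here is a minimal TCAV-style sketch in PyTorch. It is not the authors' CSM implementation: the mean-difference CAV, the linear head standing in for a decoder, and all tensor shapes are illustrative assumptions. The core idea it demonstrates is that sensitivity to a concept can be measured as the directional derivative of a model's output along a concept activation vector.

```python
# Hypothetical TCAV-style concept sensitivity sketch (illustrative names,
# not the authors' code). A concept activation vector (CAV) points from
# "random" activations toward "concept" activations; sensitivity is the
# directional derivative of the model output along that direction.
import torch

def concept_activation_vector(acts_concept, acts_random):
    # Cheap stand-in for the linear probe used in the TCAV literature:
    # the normalised difference of class means.
    cav = acts_concept.mean(dim=0) - acts_random.mean(dim=0)
    return cav / cav.norm()

def concept_sensitivity(head, activations, cav):
    # Gradient of the head's output w.r.t. the activations, projected
    # onto the CAV direction and averaged over the batch.
    activations = activations.clone().requires_grad_(True)
    out = head(activations).sum()
    (grad,) = torch.autograd.grad(out, activations)
    return (grad @ cav).mean().item()

if __name__ == "__main__":
    torch.manual_seed(0)
    dim = 16
    head = torch.nn.Linear(dim, 1)             # stand-in for a decoder head
    acts_concept = torch.randn(32, dim) + 1.0  # e.g. an illumination-variant set
    acts_random = torch.randn(32, dim)
    cav = concept_activation_vector(acts_concept, acts_random)
    print(concept_sensitivity(head, torch.randn(8, dim), cav))
```

A well-disentangled reflectance branch, for instance, should show low sensitivity to an illumination concept and high sensitivity to an albedo concept.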

Best Paper Award (Runner-Up)

Overcoming Label Noise for Source-free Unsupervised Video Domain Adaptation

The Authors: Avijit Dasgupta*, Prof. C V Jawahar, and Karteek Alahari (Inria)


Research summary:

Despite the progress seen in classification methods, current approaches for handling videos with distribution shifts between source and target domains remain source-dependent, as they require access to the source data during the adaptation stage. In this paper, we present a self-training-based source-free video domain adaptation approach (without bells and whistles) to address this challenge by bridging the gap between the source and the target domains. We use the source pre-trained model to generate pseudo-labels for the target domain samples, which are inevitably noisy. We treat the problem of source-free video domain adaptation as learning from noisy labels and argue that the samples with correct pseudo-labels can help in the adaptation stage. To this end, we leverage the cross-entropy loss as an indicator of the correctness of pseudo-labels, and use the resulting small-loss samples from the target domain for fine-tuning the model. Extensive experimental evaluations show that our method, termed CleanAdapt, achieves a ~7% gain over the source-only model and outperforms the state-of-the-art approaches on various open datasets.
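The small-loss selection step at the heart of this summary can be sketched in a few lines of PyTorch. This is not the authors' CleanAdapt code; the function name, the keep-ratio, and the toy data are illustrative assumptions. It shows only the selection rule: pseudo-labelled target samples with the smallest per-sample cross-entropy are treated as correctly labelled and retained for fine-tuning.

```python
# Minimal sketch of small-loss pseudo-label selection (illustrative,
# not the authors' implementation). Samples whose cross-entropy against
# the source model's pseudo-labels is lowest are kept as "clean".
import torch
import torch.nn.functional as F

def select_small_loss(logits, pseudo_labels, keep_ratio=0.5):
    # Per-sample cross-entropy of the target model vs. the pseudo-labels.
    losses = F.cross_entropy(logits, pseudo_labels, reduction="none")
    k = max(1, int(keep_ratio * len(losses)))
    # Indices of the k smallest losses, assumed correctly pseudo-labelled.
    return losses.topk(k, largest=False).indices

if __name__ == "__main__":
    torch.manual_seed(0)
    logits = torch.randn(10, 5)          # target-model outputs (10 clips, 5 classes)
    pseudo = torch.randint(0, 5, (10,))  # source-model pseudo-labels
    clean_idx = select_small_loss(logits, pseudo, keep_ratio=0.3)
    print(clean_idx)  # sample indices to use in the fine-tuning stage
```

The appeal of this rule is its simplicity: no access to source data is needed, only the source pre-trained model's pseudo-labels and the target model's losses.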