Interpretable Representation Learning using Generative Models

Silpa Vadakkeeveetil Sreelatha (Ph.D. Student)

Generative models have seen significant improvements in image synthesis over the last decade with the introduction of generative adversarial networks (GANs), variational autoencoders (VAEs), and diffusion models. Extensive research has demonstrated their utility in applications such as super-resolution image synthesis and text-conditioned image generation, among many others. However, to widen their ability to extrapolate, a critical component of human representational capability, it is necessary to identify the interpretable and disentangled representations concealed within these models. For example, a model that can produce images of animals with diverse backgrounds should capture the animal, the background, and other factors of variation in separate units, regardless of whether the background matches the animal's natural habitat. Learning such representations offers two main advantages: (1) they enable controllable generation, which is useful in applications such as zero-shot tasks and image manipulation; (2) they allow the synthesis of counterfactual images, which can support explainable AI, fairness, and robustness. This PhD project aims to learn interpretable representations in generative models that can be used to improve the robustness, explainability, and fairness of classifiers.
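
To make the idea concrete, below is a minimal sketch of controllable generation via latent traversal, assuming a pretrained generator and an already-discovered interpretable latent direction. The ToyGenerator class and the background_direction vector are purely illustrative placeholders, not components of this project.

# A minimal sketch of controllable generation via latent traversal,
# assuming a pretrained generator G and a learned interpretable direction.
# ToyGenerator and background_direction are hypothetical stand-ins.

import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Stand-in for a pretrained GAN/VAE decoder: latent z -> image."""
    def __init__(self, latent_dim: int = 64, img_pixels: int = 3 * 32 * 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, img_pixels), nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z).view(-1, 3, 32, 32)

latent_dim = 64
G = ToyGenerator(latent_dim).eval()

# In a disentangled model, separate latent units (or directions) correspond
# to separate factors, e.g. "animal" vs. "background".
background_direction = torch.zeros(latent_dim)
background_direction[0] = 1.0  # hypothetical unit controlling the background

z = torch.randn(1, latent_dim)  # a random latent sample
with torch.no_grad():
    # Moving along one direction while freezing the rest should change only
    # the corresponding factor (here, the background), yielding a
    # counterfactual version of the same image.
    original = G(z)
    counterfactual = G(z + 3.0 * background_direction)

print(original.shape, counterfactual.shape)  # torch.Size([1, 3, 32, 32]) each

Counterfactual pairs produced this way can then be fed to a classifier to probe whether its prediction changes with the background, which is one route to the robustness and fairness analyses mentioned above.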

Primary Host: Anjan Dutta (University of Surrey)
Exchange Host: Serge Belongie (University of Copenhagen & Cornell University)
PhD Duration: 01 July 2023 - 31 December 2026
Exchange Duration: 01 April 2025 - 30 September 2025 (ongoing)