Nirlipta Pande
The present state of machine learning is greatly limited compared to natural intelligence in the real world, where feedback is sparse. The ability of animals to learn reusable mechanisms from temporal structure is often engineered out to accommodate independent and identically distributed (i.i.d.) data when designing machine learning algorithms. This doctoral study aims to explore generalization in machine learning by learning reusable causal representations that can be combined to form new meaningful concepts from minimal samples. Taking inspiration from human-centric learning, we intend to learn shared compositional causal representations across modalities. Recent work such as the Platonic Representation Hypothesis suggests the emergence of shared representations across modalities, and seminal work such as Human-Level Concept Learning through Probabilistic Program Induction has contributed to our understanding of learning concepts compositionally from very few instances, similar to how humans learn. We aim to extend these ideas to learning richer shared representations across modalities that can be used to learn new concepts compositionally.