Inductive and Semantic Priors for Categorization in Deep Learning
Tejaswi Kasarla (Ph.D. Student)
An inductive bias of a learning algorithm is a set of assumptions about the target function that holds independently of the training data. Inductive biases play a vital role in the design of machine learning algorithms; consider, for example, inductive biases for image structure (e.g., the convolution operator), symmetries (e.g., rotational equivariance), or relational structure (e.g., graph layers). The goal of this PhD project is to take a critical look at important and long-standing inductive biases such as optimal class separation and class hierarchy information. We have already introduced a closed-form solution for incorporating optimal class separation into deep networks, which generalizes to long-tail classification and open-set recognition. This required disentangling classification and separation in a network: first we separate the class vectors angularly, then we train the network to align inputs with their class vectors. Going forward, we plan to leverage many such inductive and semantic biases to improve the generalization of learned visual representations.
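The two-step idea of separating class vectors first and aligning inputs second can be sketched in a few lines. The sketch below is an illustrative assumption, not the exact formulation from the project: it uses the regular-simplex construction as one closed-form way to obtain maximally separated class vectors (pairwise cosine similarity of -1/(C-1), the optimum for C unit vectors), and a simple cosine-alignment objective; all function names are hypothetical.

```python
import numpy as np

def simplex_prototypes(num_classes: int) -> np.ndarray:
    """Closed-form, maximally separated class vectors: vertices of a
    regular simplex. Every pair meets at the same, maximal angle
    (cosine -1/(C-1)). Illustrative construction, kept in R^C."""
    k = num_classes
    # Shift the standard basis vectors by their centroid, rescale to unit norm.
    return np.sqrt(k / (k - 1)) * (np.eye(k) - np.ones((k, k)) / k)

def alignment_loss(features: np.ndarray, labels: np.ndarray,
                   protos: np.ndarray) -> float:
    """Training objective sketch: push each l2-normalised feature toward
    its fixed class vector, i.e. minimise 1 - cos(feature, prototype)."""
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    cos = np.sum(feats * protos[labels], axis=1)
    return float(np.mean(1.0 - cos))
```

Because the class vectors are fixed in advance, the classifier head needs no learned weights: the network only learns to map inputs onto the pre-separated directions, which is what decouples separation from classification.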
|Primary Host:||Pascal Mettes (University of Amsterdam)|
|Exchange Host:||Rita Cucchiara (Università di Modena e Reggio Emilia)|
|PhD Duration:||01 October 2021 - 01 October 2025|
|Exchange Duration:||01 September 2023 - 30 November 2023; 01 January 2025 - 31 March 2025|