Robustness via Learning to Defer
Rajeev Verma (Ph.D. Student)
Current deep learning methods are known to fail unexpectedly and catastrophically on inputs that are unlike the training data. One way of coping with this brittleness is to incorporate a human into the learning loop. An example is the learning-to-defer framework, in which a rejection model decides whether an input is passed to the predictive model or deferred to a human (or some other backup system). In this project, we propose to investigate the learning-to-defer framework's ability to cope with distribution shift and to perform out-of-distribution (OOD) detection.
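To illustrate the deferral decision described above, here is a minimal sketch of a confidence-threshold rejector in the spirit of Chow's rule. This is only an illustration of the general idea, not the method proposed in the project; the function name, threshold value, and probability inputs are all hypothetical.

```python
def predict_or_defer(probs, threshold=0.8):
    # Illustrative Chow-style rejection rule: accept the model's
    # prediction when the top-class probability clears the threshold,
    # otherwise defer the input to a human (or other backup system).
    decisions = []
    for p in probs:
        confidence = max(p)
        label = p.index(confidence)
        if confidence >= threshold:
            decisions.append(("model", label))
        else:
            decisions.append(("defer", None))
    return decisions

# Hypothetical class-probability vectors for two inputs.
probs = [
    [0.95, 0.03, 0.02],  # confident -> model predicts class 0
    [0.40, 0.35, 0.25],  # uncertain -> deferred to the human
]
decisions = predict_or_defer(probs)
```

A learned rejection model generalizes this fixed threshold: it is trained jointly with the classifier to decide, per input, whether the model or the human expert is likely to be more accurate.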
Primary Advisor: Eric Nalisnick (University of Amsterdam)
Industry Advisor: Volker Fischer (Bosch Center for AI)
PhD Duration: 16 January 2023 - 16 January 2027