Dealing with Domain Shift under Resource Constraints
Niclas Popp (Ph.D. Student)
This PhD project addresses two major challenges that deep learning models for perception face when deployed on edge devices (e.g., in driver assistance/automated driving, mobile robotics, or video surveillance): (i) resources on edge devices are limited, with hard constraints on, e.g., latency and energy consumption; (ii) the model will encounter data unlike the typical data covered by its training distribution, i.e., (continuous and/or sudden) domain shift. Each of these two challenges has been studied extensively in isolation. For instance, (i) has been addressed by approaches such as quantization, pruning, distillation, and neural architecture search. For (ii), test-time adaptation methods have been proposed that deal with domain shifts and drifts at inference time; moreover, domain generalization comprises approaches that aim to train a model such that it generalizes to novel domains and domain shifts at test time. However, there is relatively little work on combining these two desiderata. On the contrary, there is growing evidence that the two properties are at odds with each other: robustness to domain shift benefits greatly from scaling models in terms of capacity and training data, while resource constraints prohibit exactly this scaling. The goal of this PhD project is thus to devise novel approaches for dealing with domain shift under resource constraints.
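To make the notion of test-time adaptation concrete, the toy sketch below illustrates one of its simplest forms: re-estimating a model's input-normalization statistics from an unlabeled test batch, analogous to updating batch-norm statistics at inference time. The synthetic data, the fixed linear decision rule, and the specific shift are all illustrative assumptions, not part of the project itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Source-domain training data: two Gaussian classes in 2D.
X = np.vstack([rng.normal(-1.0, 1.0, size=(n, 2)),
               rng.normal(+1.0, 1.0, size=(n, 2))])
y = np.array([0] * n + [1] * n)

# "Model": normalize with stored statistics, then apply a fixed linear rule.
mu_src, sd_src = X.mean(axis=0), X.std(axis=0)

def predict(X, mu, sd):
    Z = (X - mu) / sd
    return (Z.sum(axis=1) > 0).astype(int)

# Test data under covariate shift: inputs rescaled and translated.
Xt = np.vstack([rng.normal(-1.0, 1.0, size=(n, 2)),
                rng.normal(+1.0, 1.0, size=(n, 2))]) * 2.0 + 3.0
yt = np.array([0] * n + [1] * n)

# Without adaptation: source statistics no longer match the test inputs.
acc_src_stats = (predict(Xt, mu_src, sd_src) == yt).mean()

# Test-time adaptation: re-estimate normalization statistics from the
# unlabeled test batch (no labels, no gradient updates, no retraining).
mu_tt, sd_tt = Xt.mean(axis=0), Xt.std(axis=0)
acc_tt_stats = (predict(Xt, mu_tt, sd_tt) == yt).mean()

print(f"accuracy with source stats:    {acc_src_stats:.2f}")
print(f"accuracy with test-time stats: {acc_tt_stats:.2f}")
```

Because only two running statistics per feature are updated, this kind of adaptation is cheap enough for edge deployment, which is exactly why it is a natural starting point when combining robustness to domain shift with resource constraints.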
Primary Advisor: Matthias Hein (University of Tübingen)
Industry Advisor: Jan Hendrik Metzen (Bosch Center for AI)
PhD Duration: 01 October 2023 - 01 September 2026