Learning from multiple distributions
Maximilian Beck (Ph.D. Student)
Traditional deep learning methods perform well in single-domain settings where a vast amount of data is available, but fail in applications where labeled data is limited and comes from different domains. This is the case, for example, when industrial sensor systems need adjustment to new operating conditions or when intelligent devices must adapt quickly to new users. To address these problems, numerous meta-learning and few-shot learning algorithms have been developed that effectively transfer knowledge from related learning tasks. However, these methods have often focused on narrow task distributions in which the learning tasks are very similar and domain shifts remain small. Generalizing across domains from only a small sample of data remains one of the greatest challenges in deep learning. In this project, we focus on wider task distributions, including the cross-domain setting. We aim to develop novel learning algorithms by combining recent insights into learning dynamics with associative memories, domain adaptation methods, and state-of-the-art neural network architectures.
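To make the few-shot setting concrete, the following is a minimal sketch of a nearest-prototype classifier (in the spirit of prototypical networks, one common few-shot learning approach): each class prototype is the mean of a handful of labeled "support" embeddings, and a query is assigned to the nearest prototype. The class labels and embedding values below are purely hypothetical, and a real system would compute embeddings with a learned network rather than use raw features.

```python
import math

def prototypes(support):
    """Compute the mean embedding per class from a few labeled examples
    (the 'support set' of a few-shot episode)."""
    protos = {}
    for label, embeddings in support.items():
        dim = len(embeddings[0])
        protos[label] = [sum(e[i] for e in embeddings) / len(embeddings)
                         for i in range(dim)]
    return protos

def classify(query, protos):
    """Assign a query embedding to the class with the nearest prototype
    (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(protos, key=lambda label: dist(query, protos[label]))

# Toy 3-shot episode with 2-d embeddings (illustrative values only),
# loosely mimicking a sensor-condition classification task.
support = {
    "normal": [[0.9, 1.1], [1.0, 0.9], [1.1, 1.0]],
    "faulty": [[3.0, 2.9], [2.9, 3.1], [3.1, 3.0]],
}
protos = prototypes(support)
print(classify([1.05, 0.95], protos))  # query lies near the "normal" prototype
```

The appeal of this formulation for the cross-domain setting described above is that only a few labeled examples per class are needed at deployment time; the hard part, which this project targets, is learning embeddings that remain discriminative when the new domain differs substantially from the training domains.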
Primary Host: Sepp Hochreiter (Johannes Kepler University Linz)
Exchange Host: Gerhard Neumann (Karlsruhe Institute of Technology)
PhD Duration: 01 July 2021 - Ongoing
Exchange Duration: 01 February 2024 - 01 August 2024