Learning of hidden principled mechanisms behind data without supervision
Takeru Miyato (Ph.D. Student)
In the past, various neural network models have been developed based on their in-distribution performance on benchmark datasets such as ImageNet. However, in many real applications, the data differs from the dataset on which the model was trained because the mechanisms behind the data have changed. To handle such situations, a model needs to respect the fundamental underlying mechanisms. For example, in computer vision tasks, the underlying mechanism of interest might be the one that governs the interaction between objects and human agents, the process by which objects are created, or the semantic context of the objects. Thanks to compact and versatile knowledge of such mechanisms, we humans can make accurate inferences on unseen data, even across a fairly wide range of tasks and with only a small number of observations. I believe that enabling models to learn, and even autonomously acquire, the simple mechanisms behind a dataset is necessary to move machine learning beyond mere pattern recognition. I aim to develop frameworks toward the goal of automated learning of hidden principled mechanisms behind data.
| Primary Host: | Andreas Geiger (University of Tübingen & Max Planck Institute for Intelligent Systems) |
| Exchange Host: | Max Welling (University of Amsterdam) |
| PhD Duration: | 01 September 2022 - 31 August 2026 |
| Exchange Duration: | 01 September 2024 - 31 August 2025 - Ongoing |