Quantifying uncertainty in deep learning models
Lukas Aichberger (Ph.D. Student)
Predictions made by machine learning models need to be reliable. Knowing whether a prediction is under-confident or falsely over-confident is critical for deploying machine learning, especially in real-world applications. However, learning models from data is inseparably tied to uncertainty, arising both from the intrinsic and irreducible stochastic variability in the samples and from the lack of knowledge about the model parameters that best explain the data. This research therefore aims to develop practical methods to identify sources of uncertainty during both optimization and decision making. The goal is to design interpretable and trustworthy techniques that overcome the inexplicable and overly confident predictions currently common in deep learning models.
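The two sources of uncertainty mentioned above are commonly called aleatoric (variability inherent in the data) and epistemic (ignorance about the model parameters). A standard way to separate them is to average the predictions of several stochastic forward passes (e.g. MC dropout or a deep ensemble) and decompose the total predictive entropy. A minimal sketch with random stand-in softmax outputs, not the author's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 5 stochastic forward passes over a 3-class problem,
# each row is one sampled model's softmax output for a single input.
logits = rng.normal(size=(5, 3))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

def entropy(p, axis=-1):
    """Shannon entropy in nats; clipping avoids log(0)."""
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=axis)

mean_probs = probs.mean(axis=0)      # average over sampled models
total = entropy(mean_probs)          # total predictive uncertainty
aleatoric = entropy(probs).mean()    # expected entropy: data uncertainty
epistemic = total - aleatoric        # mutual information: model uncertainty

print(f"total={total:.3f} aleatoric={aleatoric:.3f} epistemic={epistemic:.3f}")
```

Because entropy is concave, the epistemic term is always non-negative; it vanishes exactly when all sampled models agree, which is why it is a useful signal of what the model does not know.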
Primary Host: Sepp Hochreiter (Johannes Kepler University Linz)
Exchange Host: Yarin Gal (University of Oxford)
PhD Duration: 17 October 2022 - 16 October 2025
Exchange Duration: 14 October 2024 - 13 April 2025 (ongoing)