Trustworthy Machine Learning
Nikola Konstantinov (Ph.D. Student)
Key to the recent success of machine learning algorithms is the availability of large datasets for training models. The scale and variability of the data required, however, often necessitate collecting it from diverse, potentially unreliable sources. Previous work has shown that machine learning models are vulnerable to noise and adversarial perturbations in the training data. Their performance can also suffer from model misspecification, test-time attacks, and failures in the training and test-time environments. The purpose of the proposed PhD project is the design and analysis of algorithms with provable robustness guarantees against such problems.
|Primary Host:||Christoph H. Lampert (IST Austria)|
|Exchange Host:||Nicolò Cesa-Bianchi (Università degli Studi di Milano)|
|PhD Duration:||15 September 2017 - 15 March 2022|
|Exchange Duration:||15 April 2021 - 15 July 2021|