Foundations of Regularization in Deep Learning
Linara Adilova (Ph.D. Student)
Regularization lies at the core of successfully training state-of-the-art deep neural networks. It controls overfitting and enables good generalization even with massively overparametrized models. Regularization influences the training process both implicitly - through the properties of optimizers - and explicitly - through a regularized loss function, dropout, batch normalization, and similar techniques. The goal of this project is to shed more light on the foundations of the regularization techniques employed in deep learning and to formally ground empirical results using insights from regularization theory.
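As a minimal illustration of the explicit regularization mentioned above, the sketch below adds an L2 (weight-decay) penalty to a training loss. The scalar linear model, squared loss, and penalty coefficient `lam` are illustrative assumptions, not part of the project description; the project itself concerns deep networks.

```python
def squared_loss(w, b, xs, ys):
    """Mean squared error of the linear model y = w * x + b."""
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def l2_regularized_loss(w, b, xs, ys, lam=0.1):
    """Squared loss plus an L2 penalty on the weight (bias left unpenalized)."""
    return squared_loss(w, b, xs, ys) + lam * w ** 2

# Even a weight that fits the data exactly now pays a penalty,
# biasing training toward smaller weights:
xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 2.0]
print(squared_loss(1.0, 0.0, xs, ys))           # 0.0 (perfect fit)
print(l2_regularized_loss(1.0, 0.0, xs, ys))    # 0.1 (0.0 + 0.1 * 1.0**2)
```

Minimizing the regularized objective trades data fit against weight magnitude, which is one explicit mechanism by which overfitting is controlled.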
Primary Host: Asja Fischer (Ruhr University Bochum)
Exchange Host: Martin Jaggi (EPFL)
PhD Duration: 01 February 2021 - 31 March 2024
Exchange Duration: 01 June 2022 - 31 December 2022 - Ongoing