Linara Adilova
PhD
Ruhr University Bochum (RUB)
Foundations of Regularization in Deep Learning

Regularization lies at the core of successfully training state-of-the-art deep neural networks. It controls overfitting and enables good generalization even with massively overparametrized models. Regularization influences the training process both implicitly, through the properties of optimizers, and explicitly, through a regularized loss function, dropout, batch normalization, and other techniques. The goal of this project is to shed more light on the foundations of the regularization techniques employed in deep learning and to formally ground empirical results using insights from regularization theory.
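The explicit techniques mentioned above can be illustrated with a minimal sketch. The snippet below (an illustrative example, not part of the project itself; all names and the penalty coefficient are assumptions) shows an L2-regularized loss and inverted dropout in NumPy:

```python
import numpy as np

def l2_regularized_loss(data_loss, weights, lam):
    # Explicit regularization: add an L2 penalty on the weights
    # to the data-fitting term of the loss.
    return data_loss + lam * sum(np.sum(w ** 2) for w in weights)

def dropout(activations, p, rng):
    # Inverted dropout: zero each unit with probability p and
    # rescale survivors so the expected activation is unchanged.
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

rng = np.random.default_rng(0)
weights = [np.array([1.0, -2.0]), np.array([3.0])]
loss = l2_regularized_loss(0.5, weights, lam=0.01)
# 0.5 + 0.01 * (1 + 4 + 9) = 0.64

h = dropout(np.ones(10000), p=0.5, rng=rng)
# roughly half the units are zeroed; the mean stays close to 1
```

In practice such penalties are added directly to the training objective (e.g. weight decay in the optimizer), while dropout is applied only during training and disabled at evaluation time.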

Track:
Academic Track
PhD Duration:
February 1st, 2021 - March 31st, 2024
First Exchange:
June 1st, 2022 - December 31st, 2022