Naima Elosegui Borras
Geometry-aware regularization, representation, and optimization in deep learning are essential for tackling challenges in continual learning and domain generalization, where capturing similarity between tasks or domains is crucial. However, similarity is often loosely defined: how do we precisely measure how close two model representations are? Accurately quantifying this similarity is critical for selecting appropriate regularizers, which influence both parameter modeling and the optimization process.
Optimal transport theory, which leverages the Wasserstein metric, and information geometry, which employs a Riemannian metric (the Fisher information metric), provide principled ways of measuring similarity and applying geometry-aware regularization. Interestingly, geometric arguments are also informative about network dynamics, offering guidance on how to initialize models and how initialization affects training regimes. I focus on analyzing the geometric effects of model, loss, and optimizer choices in problems that require generalization across domains and tasks, mainly with applications to brain data.
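As a minimal illustration of the kind of similarity measure mentioned above (a sketch, not part of the research itself): if two model representations are summarized by Gaussian approximations of their feature distributions, the 2-Wasserstein distance between them has a closed form. The function name and the toy statistics below are hypothetical, chosen only for the example.

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2(mu1, cov1, mu2, cov2):
    """Squared 2-Wasserstein (Bures) distance between two Gaussians.

    W2^2 = ||mu1 - mu2||^2 + Tr(cov1 + cov2 - 2 (cov2^{1/2} cov1 cov2^{1/2})^{1/2})
    """
    s2 = sqrtm(cov2)                         # matrix square root of cov2
    cross = sqrtm(s2 @ cov1 @ s2)            # geometric cross term
    # sqrtm may return a complex array with negligible imaginary part
    mean_term = np.sum((mu1 - mu2) ** 2)
    cov_term = np.trace(cov1 + cov2 - 2 * np.real(cross))
    return float(mean_term + cov_term)

# Hypothetical example: feature statistics from two tasks/domains.
mu_a, cov_a = np.zeros(2), np.eye(2)
mu_b, cov_b = np.ones(2), 2 * np.eye(2)
print(gaussian_w2(mu_a, cov_a, mu_b, cov_b))
```

A small distance suggests the two representations are close in distribution, which can motivate a shared regularizer; identical Gaussians give a distance of exactly zero.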