Learning from distributions with kernelized optimal transport
Dimitri Meunier (Ph.D. Student)
The problem of learning functions over spaces of probability distributions -- or distribution regression -- is gaining significant interest in the machine learning community. A key challenge in this problem is to identify a suitable representation that captures all relevant properties of the underlying functional mapping. A principled approach to distribution regression is provided by kernel mean embeddings, which lift a kernel-induced similarity on the input domain to the level of probability distributions. However, kernel mean embeddings implicitly hinge on the maximum mean discrepancy (MMD), a metric on probabilities that may fail to capture key geometric relations between distributions. In contrast, optimal transport (OT) metrics are potentially more appealing. This PhD project aims to develop theoretically grounded algorithms for distribution regression that exploit tools from optimal transport. These developments can potentially help improve current state-of-the-art meta-learning heuristics, where a central aspect is measuring the similarity between tasks, presented to the learner as empirical distributions.
Primary Host: Arthur Gretton (University College London)
Exchange Host: Gabriel Peyré (CNRS, DMA & École Normale Supérieure)
PhD Duration: 05 October 2021 - 31 August 2025
Exchange Duration: 01 June 2023 - 31 August 2023; 01 June 2024 - 31 August 2024