Invariance and Causality in Machine Learning
Cian Eastwood (PhD Student)
Machine learning (ML) methods have achieved remarkable successes on problems with independent and identically distributed (IID) data. However, real-world data is not IID: environments change, experimental conditions shift, and new measurement devices are used. Current ML methods struggle to transfer or adapt quickly to such out-of-distribution (OOD) data. Causality, in contrast, provides a principled mathematical framework for describing the distributional differences that arise from such system changes. In my PhD studies, I am exploring how best to exploit the invariances observed across multiple environments or experimental conditions by viewing them as imprints of, or clues about, the underlying causal mechanisms. The central hypothesis is that these invariances reveal how the system can change, and thus how best to prepare for future changes. My two main focus areas are causal representation learning (the discovery of high-level, abstract causal variables from low-level observations) and the learning of invariant predictors that enable OOD generalization.
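To make the idea of learning invariant predictors across environments concrete, the sketch below shows one common instantiation of this line of work, an IRM-style (invariant risk minimization) training objective in PyTorch. It is a minimal, illustrative example under assumed inputs (a list of per-environment batches and a binary-classification model), not a description of the specific methods developed in this PhD.

```python
import torch
import torch.nn.functional as F


def irm_penalty(logits, labels):
    """Gradient penalty w.r.t. a fixed 'dummy' classifier scale of 1.0.

    A non-zero gradient means the per-environment risk could still be reduced
    by rescaling the classifier, i.e. the predictor is not yet invariant.
    """
    scale = torch.tensor(1.0, requires_grad=True)
    loss = F.binary_cross_entropy_with_logits(logits * scale, labels)
    (grad,) = torch.autograd.grad(loss, [scale], create_graph=True)
    return grad.pow(2)


def irm_objective(model, env_batches, lam=1.0):
    """Average risk across environments plus the invariance penalty.

    env_batches: assumed to be a list of (x, y) pairs, one batch per
    environment, with y a float tensor of 0/1 labels.
    """
    risk, penalty = 0.0, 0.0
    for x, y in env_batches:
        logits = model(x).squeeze(-1)
        risk += F.binary_cross_entropy_with_logits(logits, y)
        penalty += irm_penalty(logits, y)
    n = len(env_batches)
    return risk / n + lam * (penalty / n)
```

The penalty is large whenever a classifier that is optimal on average remains sub-optimal in some individual environment; minimising both terms therefore pushes the model towards features whose relationship with the target is stable across environments, which is one way of operationalising the invariance-as-clue-to-causal-mechanisms idea described above.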
Primary Host: Chris Williams (University of Edinburgh & The Alan Turing Institute)
Exchange Host: Bernhard Schölkopf (ELLIS Institute Tübingen & Max Planck Institute for Intelligent Systems)
PhD Duration: 01 September 2018 - 30 September 2022
Exchange Duration: 01 April 2021 - 28 February 2022