
Independent causal mechanisms in machine learning

Julius von Kügelgen (Ph.D. Student)

The assumption of independent and identically distributed (i.i.d.) random variables underlying many machine learning algorithms is often violated in practice, due to changes in environment, measurement device, or experimental condition, or to sample selection bias. The perspective of causal modelling offers a principled, mathematical way of reasoning about similarities and differences between distributions arising from such i.i.d. violations. In particular, it views systems as composed of independent modules, or mechanisms, that remain robust, or invariant, across different conditions even when other parts of the system change. In my PhD studies, I explore whether and how switching from the traditional prediction-based paradigm to learning independent causal mechanisms can benefit non-i.i.d. ML tasks such as transfer, meta-, and continual learning. I am also interested in causal representation learning, i.e., learning generative models over a small number of meaningful causal variables from high-dimensional observations; in using counterfactual reasoning to better understand and interpret ML models (explainable AI); and in learning causal relations from heterogeneous data (causal discovery).
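To make the idea of invariant mechanisms concrete, here is a minimal toy sketch (an illustrative assumption of my own, not a method from this PhD project): it simulates a cause-effect pair in two environments where only the cause distribution p(X) shifts, and shows that the causal conditional p(Y | X) estimated by regression stays stable while the marginal of Y changes.

```python
# Toy illustration of the independent-causal-mechanisms view:
# the mechanism p(Y | X) is invariant even when p(X) shifts across environments.
import numpy as np

rng = np.random.default_rng(0)

def causal_mechanism(x, rng):
    """Fixed mechanism p(Y | X): Y = 2*X + 1 + small Gaussian noise."""
    return 2.0 * x + 1.0 + 0.1 * rng.normal(size=x.shape)

def sample_environment(x_mean, n, rng):
    """Only the cause distribution p(X) differs between environments."""
    x = rng.normal(loc=x_mean, scale=1.0, size=n)
    y = causal_mechanism(x, rng)
    return x, y

for env_mean in (0.0, 5.0):  # two environments with shifted p(X)
    x, y = sample_environment(env_mean, 10_000, rng)
    slope, intercept = np.polyfit(x, y, deg=1)  # estimate of E[Y | X]
    print(f"env mean {env_mean}: E[Y]={y.mean():.2f}, "
          f"fitted Y|X: slope={slope:.2f}, intercept={intercept:.2f}")

# The fitted causal conditional (slope ~ 2, intercept ~ 1) is the same in both
# environments, while the marginal E[Y] changes with the shift in p(X).
```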

Primary Host: Bernhard Schölkopf (ELLIS Institute Tübingen & Max Planck Institute for Intelligent Systems)
Exchange Host: Adrian Weller (University of Cambridge & The Alan Turing Institute)
PhD Duration: 01 September 2018 - 28 February 2023
Exchange Duration: 01 September 2018 - 31 August 2019