
Explainable machine learning for causal understanding in neuroscience

Anastasiia Filippova (Ph.D. Student)

Understanding how neural circuits enable behavior is a critical challenge in neuroscience, with important implications for brain-machine interfaces (BMIs), robotics, and neuro-rehabilitation. As our ability to record large-scale neural and behavioral data grows, there is increasing interest in modeling neural dynamics during adaptive behaviors to probe neural representations (Urai et al., Nature Neuroscience 2022). In particular, new non-linear methods that discover neural latent embeddings can reveal underlying correlates of behavior (Schneider et al., Nature 2023), yet we lack a causal, mathematical understanding of these latents, which is required to causally test their role. Moreover, we need such methods to be identifiable and explainable. This PhD project therefore aims to bridge ideas from disentangled representation learning (Whittington et al., ICLR 2023), contrastive learning (Schneider et al., Nature 2023), and recent work on causal component analysis (Liang et al., arXiv 2023) to build new methods for causal discovery in neuroscience.
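The contrastive-learning ingredient mentioned above can be illustrated with a small sketch: an encoder maps high-dimensional neural activity to a low-dimensional latent space and is trained with an InfoNCE objective, so that samples paired by an auxiliary signal (e.g., temporal adjacency or a behavioral label) end up close in the embedding. The snippet below is a minimal illustration of this general idea, not the CEBRA implementation of Schneider et al.; the NeuralEncoder architecture, the synthetic data, and the choice of temporally adjacent positives are assumptions made for the example.

```python
# Minimal InfoNCE-style contrastive sketch (illustrative only; not the
# CEBRA implementation from Schneider et al., Nature 2023).
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralEncoder(nn.Module):
    """Maps high-dimensional neural activity to a low-dimensional latent."""
    def __init__(self, n_neurons: int, latent_dim: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_neurons, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, x):
        # Normalize so similarity behaves like cosine similarity on the latent sphere.
        return F.normalize(self.net(x), dim=-1)

def infonce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE: pull each anchor toward its positive, push it away from negatives."""
    pos_sim = (anchor * positive).sum(-1, keepdim=True) / temperature  # (B, 1)
    neg_sim = anchor @ negatives.T / temperature                       # (B, N_neg)
    logits = torch.cat([pos_sim, neg_sim], dim=1)
    labels = torch.zeros(anchor.size(0), dtype=torch.long)             # positive is index 0
    return F.cross_entropy(logits, labels)

# Synthetic example: 100 neurons; positives are neural samples adjacent in time,
# a common auxiliary signal for defining positive pairs.
n_neurons, batch = 100, 64
activity = torch.randn(1024, n_neurons)
encoder = NeuralEncoder(n_neurons)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

for step in range(200):
    idx = torch.randint(0, 1023, (batch,))
    z_anchor = encoder(activity[idx])
    z_pos = encoder(activity[idx + 1])                           # temporally adjacent -> positive
    z_neg = encoder(activity[torch.randint(0, 1024, (256,))])    # random negatives
    loss = infonce_loss(z_anchor, z_pos, z_neg)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The resulting latents are exactly the kind of learned representation whose causal role and identifiability the project aims to characterize.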

Primary Host: Mackenzie Mathis (EPFL & Harvard University)
Exchange Host: Timothy Behrens (University of Oxford)
PhD Duration: 01 February 2024 - 01 February 2029
Exchange Duration: Ongoing