Enrique Valero
PhD
Bayesian networks for interpretable machine learning

The project revolves around the state-of-the-art topic of explainable artificial intelligence, which aims to make intelligent systems easier to understand. Most of the current literature covers topics related to neural networks and deep learning applied to supervised learning tasks. However, simpler models with good performance are usually ignored, even though explainability remains desirable, and other types of tasks are underrepresented in the literature. My research will therefore focus on using Bayesian networks to improve machine learning explainability, and on increasing the interpretability of these models themselves. Bayesian networks can bring a strong statistical and mathematical foundation to a field that currently relies heavily on heuristics and approximations. Furthermore, these models can perform a wide variety of tasks, such as supervised and unsupervised learning, anomaly detection, and time series analysis.
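As a concrete illustration of the kind of interpretability Bayesian networks offer, the sketch below builds a toy network and answers a diagnostic query, so every probability in the answer can be traced back to an explicit graph structure and conditional probability table. It is a minimal example assuming the pgmpy Python library; the variables (Rain, Sprinkler, WetGrass) and all probabilities are illustrative placeholders, not taken from the project.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Graph structure: Rain and Sprinkler are independent causes of WetGrass.
model = BayesianNetwork([("Rain", "WetGrass"), ("Sprinkler", "WetGrass")])

# Conditional probability tables (illustrative numbers only).
cpd_rain = TabularCPD("Rain", 2, [[0.8], [0.2]])            # P(Rain)
cpd_sprinkler = TabularCPD("Sprinkler", 2, [[0.6], [0.4]])  # P(Sprinkler)
cpd_wet = TabularCPD(
    "WetGrass", 2,
    # P(WetGrass | Rain, Sprinkler), one column per parent configuration
    [[1.00, 0.10, 0.20, 0.01],
     [0.00, 0.90, 0.80, 0.99]],
    evidence=["Rain", "Sprinkler"], evidence_card=[2, 2],
)
model.add_cpds(cpd_rain, cpd_sprinkler, cpd_wet)
assert model.check_model()

# Diagnostic query: having observed wet grass, how likely is rain?
# The result follows directly from Bayes' theorem over the explicit CPTs.
infer = VariableElimination(model)
print(infer.query(["Rain"], evidence={"WetGrass": 1}))
```

Because inference here is exact and the model's parameters are human-readable, explanations do not depend on post-hoc approximations, which is one motivation for studying these models in an explainability context.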

Track:
Academic Track
PhD Duration:
October 1st, 2022 - August 1st, 2025
First Exchange:
September 1st, 2024 - April 1st, 2025