Siyuan Guo
Animate intelligence takes in a multitude of information and learns through non-linear, dependent information flows. However, the current understanding of why machines learn is centered on the assumption of independent and identically distributed (i.i.d.) data. Like most fields of machine learning, causality was developed under the i.i.d. assumption, where it is known to be impossible to uncover a unique causal structure from observational data alone. "Causal de Finetti" (2023) showed that non-i.i.d. data, specifically exchangeable data, allows unique causal structure identification from observational data, and it lays the basis for studying causality in the exchangeable framework. "Do Finetti" (2024) builds on that work and establishes a formal do-calculus framework for exchangeable data. "Out-of-Variable Generalization for Discriminative Models" (2024) studies learning by piecing together finite, marginal information. The quest to understand and enable machine intelligence on non-i.i.d. data has only just started: "Causal de Finetti" and "Do Finetti" showed that causal reasoning is better enabled by non-i.i.d. data, and much remains to be done, from developing algorithms that perform at scale to extending the existing causal framework to exchangeable contexts. Going beyond i.i.d. has been a bottleneck for machine learning, and exchangeable data offers a realistic and achievable next step.
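To make the i.i.d. vs exchangeable distinction concrete, here is a minimal illustrative sketch (not code from any of the papers above; all names are hypothetical). In the spirit of de Finetti's theorem, a latent parameter is drawn once per environment; conditioned on it, samples are i.i.d., but marginally they are exchangeable and dependent, since every sample carries information about the shared latent.

```python
import random

def sample_environment(n_samples, seed=None):
    """Draw an exchangeable sequence of binary samples.

    A latent parameter theta is drawn once per environment; given theta the
    samples are i.i.d. Bernoulli(theta). With theta integrated out, the
    samples are exchangeable (order does not matter) but not independent:
    observing some samples tells you about theta, and hence about the rest.
    """
    rng = random.Random(seed)
    theta = rng.uniform(0.1, 0.9)  # shared latent, one draw per environment
    return [1 if rng.random() < theta else 0 for _ in range(n_samples)]

# Multiple environments, each governed by its own latent parameter.
envs = [sample_environment(5, seed=s) for s in range(3)]
```

Observing several such environments, each with its own latent draw, is what distinguishes the exchangeable setting from a single pooled i.i.d. dataset.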