Generalization out of distribution
Giambattista Parascandolo (Ph.D. Student)
While current machine learning techniques show tremendous success at generalization in the i.i.d. setting when large quantities of data and compute are available, performance consistently drops as soon as models are applied to data out of distribution. Tasks such as transfer learning, meta-learning, domain adaptation, and domain generalization all currently prove very hard to tackle. In my PhD I am analyzing and developing new techniques to improve generalization out of distribution. My work focuses on building abstractions and disentangled representations that allow, for example, for novel compositions of independent mechanisms.
Primary Host: Bernhard Schölkopf (Max Planck Institute for Intelligent Systems)
Exchange Host: Thomas Hofmann (ETH Zürich)
PhD Duration: 01 March 2017 - 01 October 2021
Exchange Duration: 01 February 2020 - 01 May 2021 (ongoing)