
Neural Architecture Search

Binxin Ru (Ph.D. Student)

The success of machine learning algorithms relies heavily on appropriate choices of model architecture and hyperparameters. Designing a model often requires strong expertise, and hyperparameters are traditionally selected via laborious trial and error. This has created strong demand for methods that automatically design and configure machine learning algorithms to perform well on specific tasks (AutoML). Bayesian optimisation (BO), a powerful tool for optimising expensive-to-evaluate black-box problems, is a promising answer to this challenge. Our research therefore focuses on further advancing the efficiency and performance of BO methods while making BO a competitive candidate for a wider variety of AutoML tasks.

On improving BO methods, we addressed the speed bottleneck of information-theoretic strategies, developed an asynchronous batch selection method to enhance the parallelisation of BO, and proposed a novel framework that combines the strengths of multi-armed bandits and BO to handle practical input spaces involving multiple categorical and continuous variables. On adapting BO to new AutoML tasks, we demonstrated the usefulness of our BO methods on a variety of hyperparameter tuning tasks and proposed a highly query-efficient black-box attack method that harnesses BO and Bayesian model selection to find successful adversarial examples as well as the optimal dimension-reduction ratio. Finally, we developed a neat BO-based strategy for neural architecture search (NAS); this strategy exploits the Weisfeiler-Lehman graph kernel in a Gaussian process surrogate to naturally handle the graph representation of architectures while extracting interpretable features responsible for architecture performance, making a first attempt towards interpretable NAS (minimal sketches of a BO loop and of the WL kernel are given below).

For the exchange programme, we would like to build on our current knowledge and experience of BO and NAS to further advance the field of NAS and thus AutoML. Important directions we want to investigate include transfer-learning NAS, which exploits useful knowledge learnt from old tasks to warm-start the search on a new task, and unifying the optimisation of architectures and hyperparameters in one automated framework to provide more complete AutoML solutions.
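To make the BO framework referenced above concrete, the following is a minimal sketch of a Bayesian optimisation loop: a Gaussian process surrogate with an RBF kernel and the expected-improvement acquisition function, minimising a toy stand-in for an expensive black-box such as validation error versus a hyperparameter. All function names and the objective here are illustrative assumptions, not the methods from the thesis.

```python
import numpy as np
from scipy.stats import norm

def objective(x):
    # Stand-in for an expensive black-box, e.g. validation error vs. a hyperparameter.
    return np.sin(3 * x) + 0.1 * x ** 2

def rbf_kernel(a, b, lengthscale=0.3):
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    # Standard GP regression equations; the jitter keeps the inversion stable.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    K_inv = np.linalg.inv(K)
    mu = K_s.T @ K_inv @ y_train
    var = np.diag(rbf_kernel(x_test, x_test) - K_s.T @ K_inv @ K_s)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best_y):
    # EI for minimisation: expected improvement over the incumbent best value.
    z = (best_y - mu) / sigma
    return (best_y - mu) * norm.cdf(z) + sigma * norm.pdf(z)

x_train = np.array([-2.0, 0.0, 2.0])
y_train = objective(x_train)
candidates = np.linspace(-3, 3, 200)

for _ in range(10):
    # Fit the surrogate, then query the point with the highest acquisition value.
    mu, sigma = gp_posterior(x_train, y_train, candidates)
    x_next = candidates[np.argmax(expected_improvement(mu, sigma, y_train.min()))]
    x_train = np.append(x_train, x_next)
    y_train = np.append(y_train, objective(x_next))

print(f"best x = {x_train[y_train.argmin()]:.3f}, best value = {y_train.min():.3f}")
```

The same loop structure applies to hyperparameter tuning: only the objective, the input space, and the surrogate's kernel change.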
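The NAS strategy above relies on a kernel between architecture graphs. Below is a minimal sketch of the Weisfeiler-Lehman (WL) subtree kernel between two labelled graphs of the kind a GP surrogate could use as its covariance over architectures. The toy graphs, operation labels, and function names are assumptions for illustration; this is not the implementation from the work described above.

```python
from collections import Counter

def wl_features(adjacency, labels, iterations=2):
    """Histogram of WL labels accumulated over a few relabelling rounds."""
    features = Counter(labels)
    for _ in range(iterations):
        new_labels = []
        for node, neighbours in enumerate(adjacency):
            # WL refinement step: relabel each node by its own label plus the
            # sorted multiset of its neighbours' labels.
            signature = (labels[node], tuple(sorted(labels[n] for n in neighbours)))
            new_labels.append(signature)
        labels = new_labels
        features.update(labels)
    return features

def wl_kernel(g1, g2, iterations=2):
    """Inner product of the two graphs' WL feature histograms."""
    f1 = wl_features(*g1, iterations)
    f2 = wl_features(*g2, iterations)
    return sum(f1[k] * f2[k] for k in f1.keys() & f2.keys())

# Two tiny "architectures": adjacency lists plus an operation label per node.
g_a = ([[1], [0, 2], [1]], ["conv3x3", "relu", "conv3x3"])
g_b = ([[1], [0, 2], [1]], ["conv3x3", "relu", "maxpool"])
print(wl_kernel(g_a, g_b))  # higher value = more shared substructure
```

Because each WL feature corresponds to a concrete subgraph pattern (e.g. an operation and its neighbourhood), inspecting which features drive the surrogate's predictions is what makes this style of NAS interpretable.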

Primary Host: Michael A. Osborne (University of Oxford)
Exchange Host: Frank Hutter (University of Freiburg)
PhD Duration: 01 October 2017 - 31 December 2021
Exchange Duration: 01 July 2021 - 31 October 2021