Parameter-efficient Finetuning

Lukas Hauzenberger (Ph.D. Student)

Parameter counts of widely used LLMs today such as GPT-4, LLaMA, Megatron, or BLOOM are in the billions or even trillions. As a result, these models can be too slow or too large for real-world tasks where compute power and disk storage are limited, for example in mobile or edge computing, where battery life and storage are constrained. Furthermore, the hardware requirements of these models limit the number of practitioners and institutions that are able to use, analyze, or improve them, thus hurting the democratization of AI. This challenge is not unique to LLMs but is a broader issue in machine learning. Many methods already exist to enhance the efficiency of neural networks in general, and large language models in particular, with respect to training and inference time, model size, the amount of training data, and more. The goal of the PhD project is to make new contributions to this field.
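
As an illustration of what a parameter-efficient finetuning method looks like in practice, the sketch below shows a minimal LoRA-style adapter (Hu et al., 2021), one well-known technique in this family. It is not taken from this project; the class name, rank, and scaling values are illustrative assumptions. The pretrained weights are frozen and only a small low-rank update is trained.

```python
# Minimal LoRA-style sketch: freeze a pretrained linear layer and train only a
# low-rank update B @ A, reducing trainable parameters from d_in*d_out to
# r*(d_in + d_out). Names and hyperparameters here are illustrative.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """A frozen linear layer augmented with a trainable low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # freeze pretrained weights
            p.requires_grad = False
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank               # standard LoRA scaling factor

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the trainable low-rank correction.
        return self.base(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(512, 512), rank=8)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(f"trainable: {trainable} / {total} parameters")  # ~8k of ~270k
```

Only the two low-rank matrices receive gradients, which is what makes methods of this kind attractive when compute and storage are constrained.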

Primary Host: Sepp Hochreiter (Johannes Kepler University Linz)
Exchange Host: Edoardo Maria Ponti (University of Edinburgh & University of Cambridge)
PhD Duration: 01 December 2023 - 30 November 2027
Exchange Duration: Ongoing