Lukas Hauzenberger
PhD
Johannes Kepler University Linz (JKU)
Parameter-efficient Finetuning

Parameter counts of widely used LLMs today, such as GPT-4, LLaMA, Megatron or BLOOM, are in the billions or even trillions. As a result, these models can be too slow or too large for some real-world tasks where compute power and disk storage are limited, for example in mobile or edge computing, where battery life and storage are constrained. Furthermore, the hardware requirements of these models limit the number of practitioners and institutions that are able to use, analyze or improve them, thus hurting the democratization of AI. This challenge is not unique to LLMs, but a broader issue in machine learning. Many methods already exist to enhance the efficiency of neural networks in general, and large language models in particular, with respect to training/inference time, model size, amount of training data, etc. The goal of the PhD project is to make new contributions to this field.
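To illustrate what parameter-efficient finetuning can look like in practice, the sketch below shows a minimal LoRA-style adapter in PyTorch: the pretrained weights are frozen and only a small low-rank update is trained. This is one well-known technique from the field, given here as an illustrative assumption, not as the specific method pursued in this project; the class and parameter names are hypothetical.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update (LoRA-style sketch)."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the pretrained weights
        # Two small matrices whose product forms the low-rank weight update.
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)   # update starts at zero, so behavior is unchanged at init
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * self.lora_b(self.lora_a(x))

# Example: only the rank-8 matrices are trained, not the 4096x4096 base weight.
layer = LoRALinear(nn.Linear(4096, 4096), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")  # 2 * 4096 * 8 = 65,536 vs. ~16.8M frozen
```

Because the trainable parameters are a tiny fraction of the full model, finetuning of this kind can fit on hardware that could not hold full-model gradients, which is one way efficiency methods broaden access to large models.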

Track:
Academic Track
PhD Duration:
December 1st, 2023 - November 30th, 2027