Parham Yazdkhasti

PhD
CISPA Helmholtz Center for Information Security (CISPA)
Scalable and Efficient Optimization Methods for Deep Learning

This PhD project explores new optimization techniques to make deep learning more efficient, scalable, and accessible. A key focus is on distributed and federated learning, where large models are trained across multiple devices or institutions without centralizing data. The research will also address challenges such as reducing the memory footprint of optimizers, designing methods that scale well in parallel and distributed settings, and adapting algorithms to advanced architectures like transformers. Beyond large-scale clusters, the project emphasizes making deep learning training feasible on consumer-grade hardware. By combining theory and practice, the work aims to deliver optimization strategies that reduce resource consumption, improve scalability, and broaden access to state-of-the-art machine learning methods.
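One research direction named above, reducing the memory footprint of optimizers, can be illustrated with a minimal sketch (all names here are hypothetical and not from the project): SGD with momentum whose state buffer is stored in float16 rather than float32, roughly halving optimizer-state memory per parameter.

```python
import numpy as np

class LowMemorySGD:
    """Illustrative sketch, not the project's method: SGD with momentum
    whose velocity buffer is kept in float16 to cut state memory in half
    relative to a float32 buffer."""

    def __init__(self, params, lr=0.01, momentum=0.9):
        self.params = params  # list of np.float32 parameter arrays
        self.lr = lr
        self.momentum = momentum
        # optimizer state kept in half precision to reduce memory footprint
        self.velocity = [np.zeros_like(p, dtype=np.float16) for p in params]

    def step(self, grads):
        for p, g, v in zip(self.params, grads, self.velocity):
            # compute the momentum update in float32 for accuracy,
            # then store the state back in float16
            v32 = self.momentum * v.astype(np.float32) + g
            v[...] = v32.astype(np.float16)
            p -= self.lr * v32
```

The trade-off this sketch makes explicit is the one the project description hints at: lower-precision state saves memory but introduces rounding error in the accumulated momentum, which a practical method must bound or compensate for.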

Track: Academic Track
PhD Duration: January 1st, 2026 - December 31st, 2029