Ali Zindari
This PhD project explores the intersection of optimization theory, learning dynamics, and distributed intelligence. Its goal is to understand how high-dimensional models generalize across tasks, data distributions, and collaborative learning settings. By integrating theoretical analysis with principled algorithmic design, the research will study how optimization processes shape learning behavior and how this understanding can guide the development of scalable, reliable training methods. The outcomes are expected to impact distributed and federated learning, where multiple agents or devices collaborate on training without sharing raw data, enabling cooperative, privacy-preserving learning systems. Ultimately, the project seeks to strengthen the theoretical foundations of modern machine learning while informing the design of robust, general, and efficient learning algorithms.
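The federated setting mentioned above can be illustrated with a minimal federated-averaging (FedAvg-style) sketch: each client runs local updates on its own data, and a server averages the resulting model weights, so raw data never leaves a client. This is a toy example on a least-squares objective, not the project's actual method; all function names, the two-client setup, and the hyperparameters are illustrative assumptions.

```python
import numpy as np

def local_update(weights, data, lr=0.1, steps=5):
    """One client's local gradient descent on a toy least-squares objective."""
    X, y = data
    w = weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of (1/2n) * ||Xw - y||^2
        w -= lr * grad
    return w

def federated_averaging(global_w, client_data, rounds=10):
    """Each round: clients train locally, then the server averages their models.
    Only model weights are exchanged; raw data stays on each client."""
    for _ in range(rounds):
        local_models = [local_update(global_w, d) for d in client_data]
        global_w = np.mean(local_models, axis=0)
    return global_w

# Toy setup (illustrative): two clients, each holding its own private dataset
# drawn from the same underlying linear model w_true.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
clients = []
for _ in range(2):
    X = rng.normal(size=(50, 2))
    y = X @ w_true + 0.01 * rng.normal(size=50)
    clients.append((X, y))

w = federated_averaging(np.zeros(2), clients)  # approaches w_true over rounds
```

In this sketch the server-side step is a plain average of client weights; analyzing how such local-update/aggregation schemes behave under heterogeneous data distributions is exactly the kind of optimization question the project targets.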