Federated Learning (FL) has emerged as a promising approach for training machine learning models on decentralized data sources while preserving users' privacy. Unlike traditional centralized learning, FL allows clients to train models collaboratively without uploading their raw data to a central server, making it particularly suitable for privacy-sensitive applications such as healthcare, finance, and autonomous systems. However, FL faces several challenges, most notably statistical heterogeneity, which arises when data distributions vary across clients. In computer vision applications, factors such as class and domain imbalance are primary sources of this heterogeneity, leading to notable performance degradation and instability during federated training. This Ph.D. project addresses the challenge of statistical heterogeneity in FL, aiming to provide insights into the interplay between FL algorithms, statistical heterogeneity, and specific computer vision tasks, paving the way for future research in this rapidly evolving field.
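The collaborative setup described above can be illustrated with a minimal FedAvg-style sketch. This is a simplified toy example, not the project's actual method: `local_update`, `fedavg_round`, and the synthetic non-IID clients are all illustrative assumptions, using a linear least-squares model in place of a real vision model.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local training: gradient descent on its own
    least-squares loss. Raw data (X, y) never leaves the client."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(w_global, clients):
    """Server step: average the locally updated models,
    weighted by each client's dataset size (FedAvg-style)."""
    n_total = sum(len(y) for _, y in clients)
    return sum((len(y) / n_total) * local_update(w_global, X, y)
               for X, y in clients)

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])

# Simulate statistical heterogeneity: each client draws its
# features from a differently shifted distribution (non-IID).
clients = []
for shift in (-1.0, 0.0, 1.0):
    X = rng.normal(shift, 1.0, size=(50, 2))
    y = X @ w_true + rng.normal(0.0, 0.01, size=50)
    clients.append((X, y))

# Federated training: repeated local updates + server averaging.
w = np.zeros(2)
for _ in range(30):
    w = fedavg_round(w, clients)
```

In this toy setting all clients share the same underlying model, so averaging converges; with genuinely heterogeneous objectives, the client drift induced by multiple local epochs is exactly the instability the project studies.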