Yaohong Yang
Differential privacy (DP) prevents models from memorizing sensitive information by injecting noise into the training process, but this noise typically reduces model accuracy. Careful tuning of the training configuration can limit the accuracy loss, yet such tuning is computationally expensive and demands considerable expertise. This project aims to develop an AI assistant that helps users select suitable models and configurations for training under DP. The assistant will enable less experienced users to train accurate models without excessive computational demands, benefiting the DP research community and helping extend DP to new challenges. By lowering computational requirements while protecting individual privacy, our assistant will reduce the barriers to using DP: it minimizes the repeated training runs normally needed to tune hyperparameters, thereby promoting wider use of DP and narrowing the utility gap between DP and non-DP models.
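To make the mechanism concrete, the sketch below illustrates the standard DP-SGD recipe that the abstract alludes to: each example's gradient is clipped to bound its individual influence, and calibrated Gaussian noise is added to the averaged gradient before the update. This is a minimal NumPy illustration under assumed hyperparameters (`clip_norm`, `noise_multiplier`, `lr` are placeholders), not this project's actual training code or assistant output.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.1, rng=None):
    """One DP-SGD update: clip per-example gradients, average, add noise.

    Illustrative sketch only; hyperparameter values are assumptions, and
    choosing them well is exactly the tuning burden the assistant targets.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    # Clip each example's gradient to norm <= clip_norm, bounding the
    # sensitivity of the averaged gradient to any single example.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    factors = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * factors
    # Average the clipped gradients and add Gaussian noise whose scale is
    # calibrated to the clipping norm; this is what yields the DP guarantee
    # and also what degrades accuracy when poorly configured.
    n = per_example_grads.shape[0]
    noise = rng.normal(0.0, noise_multiplier * clip_norm / n,
                       size=params.shape)
    return params - lr * (clipped.mean(axis=0) + noise)
```

The tension the project addresses is visible in the two knobs: a larger `noise_multiplier` strengthens privacy but injects more noise per step, while a smaller `clip_norm` biases the gradients; balancing them usually requires many repeated training runs.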