Adaptive curricula have played a pivotal role in successfully applying deep reinforcement learning (RL) to the most challenging domains. By presenting RL agents with variations of the external world that best challenge their present capabilities, adaptive curricula can greatly improve both how quickly agents learn and the breadth of behaviors they acquire. My research focuses on effective and theoretically grounded methods for generating adaptive curricula that produce robust agents capable of succeeding across as many variations of the environment or task of interest as possible. By producing increasingly robust agents via adaptive curricula in open-ended environments, whose variations form an unbounded space of challenges, my work seeks to kickstart a co-evolutionary process between agents and environments that leads to generally capable AI.