Much of modern AI focuses on explicit problem formulations, with full autonomy preferred where possible. In this project, we relax this last constraint and aim to develop methods for AI assistants in settings where minimal autonomy may instead be preferred. Such problems arise naturally in critical decision-making and in various design problems. An important characteristic of these problems is that the end goal is often ill-posed and/or evolving, yet the user can recognize good solutions once they see them. To this end, we will build AI assistants that know what they do not know, that minimize the burden they place on users by querying them as efficiently as possible, and that model the user and learn their preferences so as to assist them better.
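To make the aims concrete, the sketch below illustrates one ingredient in miniature: an assistant that maintains a posterior over a user's unknown preference, always asks the pairwise comparison with the highest expected information gain, and stops asking once it is confident enough. The 1-D ideal-point model, logistic choice likelihood, grid posterior, and entropy-based stopping rule are illustrative assumptions only, not the project's actual methods.

```python
# Minimal sketch of preference elicitation via expected information gain.
# All modelling choices here are assumptions made for illustration.
import numpy as np

rng = np.random.default_rng(0)

true_ideal = 0.62                               # simulated user's unknown ideal point
grid = np.linspace(0.0, 1.0, 201)               # hypotheses about the ideal point
posterior = np.full(grid.size, 1.0 / grid.size)

def choice_prob(a, b, ideal, temp=0.05):
    """P(user prefers option a over b) under an assumed logistic choice model."""
    utility_gap = -np.abs(a - ideal) + np.abs(b - ideal)
    return 1.0 / (1.0 + np.exp(-utility_gap / temp))

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_information_gain(a, b, grid, posterior):
    """Expected reduction in posterior entropy from asking 'a or b?'."""
    p_a_given_h = choice_prob(a, b, grid)        # likelihood of answering 'a', per hypothesis
    p_a = np.sum(posterior * p_a_given_h)        # marginal probability of answer 'a'
    eig = entropy(posterior)
    for answer_prob, likelihood in ((p_a, p_a_given_h), (1 - p_a, 1 - p_a_given_h)):
        post = posterior * likelihood
        post /= post.sum()
        eig -= answer_prob * entropy(post)
    return eig

# Candidate queries: pairwise comparisons between design options.
candidates = [(a, b) for a in np.linspace(0, 1, 21) for b in np.linspace(0, 1, 21) if a < b]

for step in range(10):
    if entropy(posterior) < 0.5:                 # confident enough: stop bothering the user
        break
    a, b = max(candidates, key=lambda q: expected_information_gain(*q, grid, posterior))
    prefers_a = rng.random() < choice_prob(a, b, true_ideal)   # simulated user answer
    likelihood = choice_prob(a, b, grid) if prefers_a else 1 - choice_prob(a, b, grid)
    posterior *= likelihood
    posterior /= posterior.sum()
    print(f"query {step}: compare {a:.2f} vs {b:.2f} -> estimate {grid[np.argmax(posterior)]:.2f}")
```

The entropy threshold makes the trade-off explicit: the assistant keeps its uncertainty estimate up to date, asks only while a question is worth its cost to the user, and otherwise acts on its current model of the user's preferences.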