In this project, we will explore new ways to reconstruct and model detailed human motion and action in the context of the physical surroundings in which they occur. We will pursue several research strands in this space. First, we will investigate new types of multi-modal sensing that combine cameras, including body-worn cameras, with alternative on-body sensors. Second, we will develop new algorithmic formulations that enable human motion reconstruction and simulation from these unconventional multi-modal sensor combinations with high accuracy, robustness, and efficiency, and over extended periods of time. Here, the joint modeling of human and scene context will be an important aspect. In this context, we will also explore new ways to combine explicit (e.g., physics-based) representations with neural representations, and investigate how the latest generative modeling approaches can be adapted to our inherently underconstrained problem setting.