Effective and smooth human-AI collaboration in decision-making requires the AI assistant to accurately infer human intentions from observed behaviour. While much of the research in this field emphasizes advanced AI techniques for inverse modelling and reports promising results, it often rests on oversimplified assumptions about human behaviour, which can render the resulting AI assistance ineffective in the real world. In particular, humans are computationally rational, meaning that their behaviour is shaped by latent cognitive bounds that lead to sub-optimal decisions. This PhD research focuses on developing AI assistants that infer human intentions through theory of mind while accounting for human computational rationality, adaptability, and proactive nested reasoning. Our overarching goal is to provide AI assistance that improves both the quality and the efficiency of human sequential decision-making.