Ji Shi
Manipulating objects with an anthropomorphic robotic hand offers greater flexibility, precision, and versatility than a simple gripper. However, high dimensionality, intermittent contact patterns, and safety constraints make controlling such a robot across a wide range of tasks highly challenging. Model-based reinforcement learning is well suited to this problem: it learns a dynamics model of the robot and generalizes it through policy learning or online planning.
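To make the idea concrete, here is a minimal, purely illustrative sketch (not taken from any specific system) of the model-based loop described above: fit a dynamics model from interaction data, then plan actions online against that model. It uses a toy 1-D point mass, a linear least-squares dynamics model, and random-shooting model-predictive control; all names and parameters are my own assumptions for this example.

```python
import numpy as np

# Toy illustration of model-based RL: "learn" a dynamics model from
# random-policy transitions, then plan with random-shooting MPC.
rng = np.random.default_rng(0)

def true_step(s, a):
    # True dynamics (unknown to the planner): s = [position, velocity].
    pos, vel = s
    vel = vel + 0.1 * a
    pos = pos + 0.1 * vel
    return np.array([pos, vel])

# 1) Collect transitions by applying random actions.
X, Y = [], []
s = np.zeros(2)
for _ in range(500):
    a = rng.uniform(-1.0, 1.0)
    s_next = true_step(s, a)
    X.append(np.concatenate([s, [a]]))
    Y.append(s_next)
    s = s_next if abs(s_next[0]) < 5.0 else np.zeros(2)
X, Y = np.array(X), np.array(Y)

# 2) Fit a linear dynamics model s' ~ W^T [s; a] by least squares.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def model_step(s, a):
    # One-step prediction with the learned model.
    return np.concatenate([s, [a]]) @ W

# 3) Random-shooting MPC: sample action sequences, roll each one out
# in the learned model, execute the first action of the cheapest
# sequence. The (assumed) goal is to reach position 1.0.
def plan(s, horizon=10, n_samples=256):
    seqs = rng.uniform(-1.0, 1.0, size=(n_samples, horizon))
    costs = np.zeros(n_samples)
    for i, seq in enumerate(seqs):
        sim = s.copy()
        for a in seq:
            sim = model_step(sim, a)
            costs[i] += (sim[0] - 1.0) ** 2
    return seqs[np.argmin(costs), 0]

s = np.zeros(2)
for _ in range(100):
    s = true_step(s, plan(s))
print(f"final position: {s[0]:.3f}")  # approaches the goal of 1.0
```

The same structure scales conceptually to dexterous hands: the dynamics model becomes a learned neural network over high-dimensional joint and contact states, and the random-shooting planner is replaced by a stronger optimizer or a learned policy.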
Furthermore, modeling need not be limited to low-level dynamic processes; it can be extended to higher, more abstract levels such as robotic skills, object affordances, and causal relationships. This enables a true feedback loop of observing, reasoning, executing, and observing again, ultimately leading the robot to truly understand the physical world.
Moreover, recent advances in large foundation models have inspired the robotics community to reconsider the possibility of a truly generalist robot. Building on this inspiration, we hope to fuse multi-modal information effectively, construct models with properties favourable for dexterous manipulation, and ultimately enable anthropomorphic robotic hands to exhibit more human-like behavior across diverse tasks.