A reachability-expressive motion planning algorithm to enhance human-robot collaboration

February 22, 2022 feature

The robot models the human's belief updates and uses expressive motions to demonstrate its reachable workspace. After this calibration, the human, now holding an accurate estimate of the robot's capabilities, can assign it appropriate roles. Credit: Gao et al.

A team of researchers at the University of California, Los Angeles (UCLA) Center for Vision, Cognition, Learning, and Autonomy (VCLA), led by Prof. Song-Chun Zhu, recently developed an approach that could help align a human user's assessment of what a robot can do with the robot's true capabilities. This approach, presented in a paper published in IEEE Robotics and Automation Letters, is based …
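To make the idea of modeling a human's belief update concrete, here is a minimal illustrative sketch, not the authors' implementation: the human observer is assumed to maintain a discrete belief over the robot's maximum reach, and each expressive motion that demonstrates a reached point rules out hypotheses inconsistent with that observation. All variable names and the simple cutoff likelihood are assumptions made for illustration.

```python
# Illustrative sketch (assumed, not the paper's method): a Bayesian update
# of a human's belief about the robot's maximum reach, driven by observing
# an expressive motion that reaches a point at a given distance.

def update_belief(belief, reached_distance):
    """Update a discrete belief over max-reach hypotheses.

    belief: dict mapping hypothesized max reach (meters) -> probability.
    reached_distance: distance (meters) of a point the robot was seen to reach.
    """
    posterior = {}
    for max_reach, p in belief.items():
        # Likelihood: a robot whose true max reach is `max_reach` can only
        # reach points no farther than `max_reach`.
        likelihood = 1.0 if reached_distance <= max_reach else 0.0
        posterior[max_reach] = p * likelihood
    total = sum(posterior.values())
    if total == 0:
        raise ValueError("observation inconsistent with all hypotheses")
    # Renormalize so the posterior sums to 1.
    return {h: p / total for h, p in posterior.items()}

# Uniform prior over candidate maximum-reach values (meters).
belief = {0.4: 0.25, 0.6: 0.25, 0.8: 0.25, 1.0: 0.25}
# The robot's expressive motion demonstrates a reach of 0.7 m:
belief = update_belief(belief, reached_distance=0.7)
```

After the demonstrated 0.7 m reach, only the 0.8 m and 1.0 m hypotheses survive, each with probability 0.5, which is the sense in which expressive motions "calibrate" the human's capability estimate.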
