Presenter: Shlok Dharmendra Raval
Faculty Sponsor: Donghyun Kim
School: UMass Amherst
Research Area: Computer Science
Session: Poster Session 5, 3:15 PM - 4:00 PM, 165, D6
ABSTRACT
Reinforcement learning has enabled impressive locomotion behaviors in legged robots, yet most controllers remain agnostic to human biomechanical priors that could improve task-specific performance, such as energy efficiency and robustness. This project investigates how embedding biomechanical constraints from human motion into policy design can enhance robot performance on targeted motor tasks. I curate a library of canonical lower-limb motions, including level walking, stair ascent, and sit-to-stand, drawn from existing human biomechanics datasets and in-lab motion capture experiments. For each motion, I extract task-relevant constraints, such as joint range envelopes, center of mass trajectories, and ground reaction force patterns, and translate them into policy structure, state and action shaping, and reward terms for a simulated bipedal robot. Initial policies are obtained using a motion retargeting and imitation learning pipeline that maps recorded human trajectories onto the robot, after which I progressively introduce biomechanics-inspired constraints into the policy architecture, observation and action spaces, and reward design to study their impact on performance. I then train separate reinforcement learning policies for each motion using these constrained formulations and compare them against baseline policies trained without biomechanical priors. Performance is evaluated in physics simulation using metrics such as energy cost of transport, tracking accuracy, and disturbance recovery. I expect that biomechanically informed policies will achieve comparable or improved task success rates while reducing control effort and producing more human-like kinematics, yielding a reusable pipeline for integrating human movement science into robotic control.
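As a minimal sketch of two pieces of the pipeline described above, the snippet below shows (a) how a human-derived joint range envelope might be translated into a reward-shaping penalty, and (b) a standard mechanical cost-of-transport metric of the kind used for evaluation. All function names, envelope bounds, and numeric values are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

def range_envelope_penalty(q, q_min, q_max):
    """Quadratic penalty for joint angles q (rad) that leave the
    human-derived envelope [q_min, q_max]; zero inside the envelope.
    Could be subtracted from the task reward at each control step."""
    below = np.clip(q_min - q, 0.0, None)   # violation below the envelope
    above = np.clip(q - q_max, 0.0, None)   # violation above the envelope
    return float(np.sum(below**2 + above**2))

def cost_of_transport(torques, velocities, dt, mass, g, distance):
    """Mechanical cost of transport: actuator work normalized by
    weight times distance traveled (dimensionless; lower is better).
    torques, velocities: arrays of shape (timesteps, joints)."""
    power = np.sum(np.abs(torques * velocities), axis=1)  # W per step
    work = float(np.sum(power) * dt)                       # J over rollout
    return work / (mass * g * distance)
```

In a constrained policy formulation, a term like `range_envelope_penalty` would appear with a tunable weight alongside imitation and task-tracking rewards, while `cost_of_transport` would be computed offline over evaluation rollouts to compare constrained policies against unconstrained baselines.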