Sim-to-real reinforcement learning (RL) for humanoid robots with high-gear-ratio actuators remains challenging due to complex actuator dynamics and the absence of torque sensors. To address this, we propose a novel RL framework that leverages foot-mounted inertial measurement units (IMUs). Instead of pursuing detailed actuator modeling and system identification, we use the foot-mounted IMU measurements to enhance rapid stabilization capabilities over challenging terrains. In addition, we propose a symmetric data augmentation method dedicated to the proposed observation space and employ random network distillation to enhance bipedal locomotion learning over rough terrain. We validate our approach through hardware experiments on the miniature-sized humanoid EVAL-03 in a variety of environments. The experimental results demonstrate that our method improves rapid stabilization capabilities on non-rigid surfaces and during sudden environmental transitions.
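As a rough illustration of the symmetric data augmentation over an IMU-based observation, the sketch below mirrors a state vector across the sagittal plane. The observation index layout, frame conventions, and sign flips here are assumptions for illustration, not the exact scheme used in the paper.

```python
import numpy as np

# Hypothetical observation layout (indices are assumptions, not from the paper):
#   [0:3]   body acc,       [3:6]   body gyro,
#   [6:9]   left-foot acc,  [9:12]  left-foot gyro,
#   [12:15] right-foot acc, [15:18] right-foot gyro.
ACC_MIRROR = np.array([1.0, -1.0, 1.0])    # reflect linear acceleration across the x-z (sagittal) plane
GYRO_MIRROR = np.array([-1.0, 1.0, -1.0])  # angular velocity is a pseudovector, so x and z flip instead

def mirror_observation(obs: np.ndarray) -> np.ndarray:
    """Left-right mirrored copy of the IMU observation for symmetric data augmentation."""
    m = obs.copy()
    m[0:3] = obs[0:3] * ACC_MIRROR        # body IMU: mirror in place
    m[3:6] = obs[3:6] * GYRO_MIRROR
    m[6:9] = obs[12:15] * ACC_MIRROR      # foot IMUs: swap left and right,
    m[9:12] = obs[15:18] * GYRO_MIRROR    # then mirror each reading
    m[12:15] = obs[6:9] * ACC_MIRROR
    m[15:18] = obs[9:12] * GYRO_MIRROR
    return m
```

In training, a mirrored observation would be paired with a correspondingly mirrored action (left and right joint targets swapped and lateral joints sign-flipped), which is omitted from this sketch.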
Our miniature-sized humanoid EVAL-03 is equipped with an IMU on each of the left and right feet, as well as a body-mounted IMU. To leverage these sensors, our observations include the linear accelerations and angular velocities measured by the foot-mounted IMUs in addition to those of the body-mounted IMU. The action space consists of target joint positions for the low-level PD controller, and we apply a low-pass filter to the target positions to prevent damage to the actuators.
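A minimal sketch of how such an observation could be assembled, and how the target joint positions could be low-pass filtered before the PD controller, is shown below. The joint count, filter coefficient, and the extra proprioceptive terms (joint positions, velocities, previous action) are assumptions for illustration, not specifications from the paper.

```python
import numpy as np

NUM_JOINTS = 12  # assumed joint count for illustration; the real robot may differ

def build_observation(body_imu, left_foot_imu, right_foot_imu,
                      joint_pos, joint_vel, prev_action):
    """Concatenate IMU readings and proprioception into the policy observation.

    Each *_imu argument is assumed to be a dict with 3-axis 'acc' (linear
    acceleration) and 'gyro' (angular velocity) arrays.
    """
    return np.concatenate([
        body_imu["acc"], body_imu["gyro"],
        left_foot_imu["acc"], left_foot_imu["gyro"],
        right_foot_imu["acc"], right_foot_imu["gyro"],
        joint_pos, joint_vel, prev_action,
    ])

class FilteredPDTargets:
    """First-order low-pass filter on policy actions (target joint positions)
    before they are sent to the low-level PD controller."""

    def __init__(self, alpha: float = 0.2, num_joints: int = NUM_JOINTS):
        self.alpha = alpha                  # smoothing factor in (0, 1]; assumed value
        self.filtered = np.zeros(num_joints)

    def step(self, action: np.ndarray) -> np.ndarray:
        # q_target[t] = (1 - alpha) * q_target[t-1] + alpha * action[t]
        self.filtered = (1.0 - self.alpha) * self.filtered + self.alpha * action
        return self.filtered
```

At each control step, the policy output would pass through `FilteredPDTargets.step` before being sent as the PD target, smoothing abrupt changes in the commanded joint positions.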
To evaluate the effectiveness of the proposed method, we compare the following three policies in the hardware experiments:

- w/ Foot-Mounted IMUs
- w/o Foot-Mounted IMUs 1
- w/o Foot-Mounted IMUs 2
@article{katayama2025learning,
title={Learning Bipedal Locomotion on Gear-Driven Humanoid Robot Using Foot-Mounted IMUs},
author={Sotaro Katayama and Yuta Koda and Norio Nagatsuka and Masaya Kinoshita},
journal={arXiv preprint arXiv:2504.00614},
year={2025},
}