Code for the papers:
- Robust Humanoid Walking on Compliant and Uneven Terrain with Deep Reinforcement Learning
  (Rohan P. Singh, Mitsuharu Morisawa, Mehdi Benallegue, Zhaoming Xie, Fumio Kanehiro)
- Learning Bipedal Walking for Humanoids with Current Feedback
  (Rohan P. Singh, Zhaoming Xie, Pierre Gergondet, Fumio Kanehiro)
- Learning Bipedal Walking On Planned Footsteps For Humanoid Robots
  (Rohan P. Singh, Mehdi Benallegue, Mitsuharu Morisawa, Rafael Cisneros, Fumio Kanehiro)
A rough outline of the repository, which may be useful if you want to add your own robot:
```
LearningHumanoidWalking/
├── envs/    <-- Actions and observation space, PD gains, simulation step, control decimation, init, ...
├── tasks/   <-- Reward function, termination conditions, and more...
├── rl/      <-- Code for PPO, actor/critic networks, observation normalization process...
├── models/  <-- MuJoCo model files: XMLs/meshes/textures
└── scripts/ <-- Utility scripts, etc.
```
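To make the split concrete, here is a minimal sketch of the kind of responsibilities an environment under envs/ carries (PD gains, simulation timestep, control decimation). All names, gains, and timings below are illustrative assumptions, not the repository's actual API:

```python
import numpy as np
import mujoco

class MyRobotEnv:
    """Hypothetical skeleton for a new robot environment (names are illustrative)."""

    def __init__(self, xml_path, sim_dt=0.0025, control_dt=0.025):
        self.model = mujoco.MjModel.from_xml_path(xml_path)
        self.data = mujoco.MjData(self.model)
        self.model.opt.timestep = sim_dt
        # Control decimation: the policy acts once per `frame_skip` sim steps.
        self.frame_skip = int(round(control_dt / sim_dt))
        # PD gains, one per actuator; real values are tuned per joint.
        self.kp = np.full(self.model.nu, 100.0)
        self.kd = np.full(self.model.nu, 5.0)

    def step(self, action):
        # Treat the policy action as target joint positions and apply PD
        # torques at the simulation rate. Assumes torque actuators and that
        # the actuated joints are the last `nu` entries of qpos/qvel, as is
        # typical for a floating-base humanoid with one motor per hinge.
        nu = self.model.nu
        for _ in range(self.frame_skip):
            qpos = self.data.qpos[-nu:]
            qvel = self.data.qvel[-nu:]
            self.data.ctrl[:] = self.kp * (action - qpos) - self.kd * qvel
            mujoco.mj_step(self.model, self.data)
        return np.concatenate([self.data.qpos, self.data.qvel])
```

The task-specific pieces (reward terms, termination conditions) would then live under tasks/, keeping the robot model and the task logic separable.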
- Python version: 3.12.4
- pip install:
- mujoco==3.2.2
- ray==2.40.0
- torch==2.5.1
- intel-openmp
- mujoco-python-viewer
- transforms3d
- scipy
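A quick sanity check that the pinned packages import cleanly:

```python
# Versions should match the pins above (e.g. mujoco 3.2.2, ray 2.40.0, torch 2.5.1).
import mujoco
import ray
import torch

print(mujoco.__version__, ray.__version__, torch.__version__)
```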
Environment names supported:

| Task Description | Environment name |
|---|---|
| Basic Standing Task | 'h1' |
| Basic Walking Task | 'jvrc_walk' |
| Stepping Task (using footsteps) | 'jvrc_step' |
```
$ python run_experiment.py train --logdir <path_to_exp_dir> --num_procs <num_of_cpu_procs> --env <name_of_environment>
$ python run_experiment.py eval --logdir <path_to_actor_pt>
```
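Eval mode takes the path to a trained actor checkpoint. If you want to inspect the policy directly, something along these lines may work, assuming training saved the full actor module as a .pt file (the filename here is a guess; check your experiment directory):

```python
import torch

# Assumption: the experiment directory contains a pickled actor module (*.pt).
actor = torch.load("path/to/exp_dir/actor.pt", weights_only=False)
actor.eval()
print(actor)
```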
Alternatively, you can write a rollout script specific to each environment. For example, debug_stepper.py can be used with the 'jvrc_step' environment:
```
$ PYTHONPATH=.:$PYTHONPATH python scripts/debug_stepper.py --path <path_to_exp_dir>
```
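At its core, such a rollout script is just a loop that feeds observations to the trained actor and steps the environment. A rough sketch, assuming a Gym-style env with reset()/step() and an actor mapping observations to actions (neither interface is guaranteed to match this repository's exactly):

```python
import torch

def rollout(env, actor, steps=1000):
    """Roll out a trained actor in an environment for a fixed number of steps."""
    obs = env.reset()
    with torch.no_grad():
        for _ in range(steps):
            action = actor(torch.as_tensor(obs, dtype=torch.float32))
            obs, reward, done, info = env.step(action.numpy())
            if done:
                obs = env.reset()
```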
If you find this work useful in your own research, please cite the following works:
For omnidirectional walking:
```
@inproceedings{singh2024robust,
  title={Robust Humanoid Walking on Compliant and Uneven Terrain with Deep Reinforcement Learning},
  author={Singh, Rohan P and Morisawa, Mitsuharu and Benallegue, Mehdi and Xie, Zhaoming and Kanehiro, Fumio},
  booktitle={2024 IEEE-RAS 23rd International Conference on Humanoid Robots (Humanoids)},
  pages={497--504},
  year={2024},
  organization={IEEE}
}
```
For simulating the "back-EMF" effect and other randomizations:
```
@article{singh2023learning,
  title={Learning Bipedal Walking for Humanoids with Current Feedback},
  author={Singh, Rohan P and Xie, Zhaoming and Gergondet, Pierre and Kanehiro, Fumio},
  journal={IEEE Access},
  volume={11},
  pages={82013--82023},
  year={2023},
  publisher={IEEE}
}
```
For walking on footsteps:
```
@inproceedings{singh2022learning,
  title={Learning Bipedal Walking On Planned Footsteps For Humanoid Robots},
  author={Singh, Rohan P and Benallegue, Mehdi and Morisawa, Mitsuharu and Cisneros, Rafael and Kanehiro, Fumio},
  booktitle={2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids)},
  pages={686--693},
  year={2022},
  organization={IEEE}
}
```
The code in this repository was heavily inspired by apex. Clock-based reward terms and some other ideas were originally proposed by the team from OSU DRL for the Cassie robot, so please also cite the works of Jonah Siekmann, Helei Duan, Jeremy Dao, and others.