This repository contains a complete research and experimentation framework built on ROS 2 and MuJoCo for learning dynamic projectile evasion with the Solo12 quadruped robot.
The key design goal of this project is high-speed reactivity:
- Perception: Utilizing an event-based camera to track fast-moving objects with microsecond latency.
- Agility: Moving beyond static path planning by using Model-Based Reinforcement Learning (DreamerV3).
- Real-Time: Integrating high-frequency motor control (100 Hz) with the neuromorphic vision system.
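To make the 100 Hz requirement concrete, the sketch below shows a generic fixed-rate control loop that sleeps away the leftover time in each cycle. This is purely illustrative: the repository's actual control stack is ROS 2-based (e.g. timer callbacks), and `run_control_loop`/`step_fn` are hypothetical names, not APIs from this codebase.

```python
import time

def run_control_loop(step_fn, rate_hz=100, n_steps=10):
    """Call step_fn at a fixed rate using an absolute deadline per cycle.

    Hypothetical helper for illustration; the real stack would use ROS 2
    timers rather than a hand-rolled loop.
    """
    period = 1.0 / rate_hz
    next_deadline = time.perf_counter()
    for i in range(n_steps):
        step_fn(i)  # e.g. read sensors, run the policy, send motor commands
        next_deadline += period
        sleep_for = next_deadline - time.perf_counter()
        if sleep_for > 0:
            time.sleep(sleep_for)  # absorb jitter so the loop holds rate_hz

ticks = []
run_control_loop(ticks.append, rate_hz=100, n_steps=5)
```

Using an absolute deadline (rather than sleeping a fixed `period` after each step) prevents per-cycle timing error from accumulating, which matters when the control rate must stay locked to the vision pipeline.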
This project assumes familiarity with ROS 2 and Reinforcement Learning.
Before working with this repository, you should:
- Have ROS 2 Humble installed on Ubuntu 22.04
- Have MuJoCo installed for simulation
- Have a Python environment with PyTorch and standard RL libraries
Clone the repository using the following command:

```bash
git clone --recurse-submodules https://github.com/Telios/doge.git
```

Create a new conda environment and install the required packages:

```bash
conda env create -f environment.yml
conda activate doge
```

Install the external libraries:

```bash
./install_external.sh
```

First, calibrate the robot:

```bash
bash scripts/calibrate.sh
```

To run doge on the Solo12, use the following command:

```bash
bash scripts/run_doge.sh
```

Follow the instructions in the terminal to start doge.
- Simulation Success: The DreamerV3 agent successfully learned a robust policy in simulation, capable of consistently dodging incoming projectiles by coordinating body movement and orientation.
doge.mp4
- Sim-to-Real Challenges: While the policy performed well in simulation, the direct transfer to the physical Solo12 robot faced a significant sim-to-real gap. The real-world physics and sensor noise profiles proved distinct enough that the policy requires further refinement.
- Future Work: Promising pathways for bridging this gap include more aggressive domain randomization and more sophisticated reward shaping to encourage stable real-world behavior.
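Domain randomization, mentioned above as a pathway for closing the sim-to-real gap, amounts to resampling physical parameters of the simulator at each training episode so the policy cannot overfit to one physics configuration. The sketch below illustrates the idea with plain Python; the parameter names and ranges are invented for illustration and are not taken from this repository's MuJoCo setup.

```python
import random

# Hypothetical nominal simulation parameters and randomization ranges for a
# quadruped; the names and numbers here are illustrative assumptions only.
RAND_RANGES = {
    "ground_friction": (0.5, 1.2),   # sliding friction coefficient
    "body_mass_kg":    (2.2, 2.9),   # base-link mass
    "motor_kp":        (2.4, 3.6),   # position-gain of the joint controllers
}

def randomize_params(rng):
    """Sample one randomized parameter set, e.g. once per training episode."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in RAND_RANGES.items()}

rng = random.Random(0)
params = randomize_params(rng)  # apply to the simulator before env.reset()
```

In a MuJoCo-backed environment, the sampled values would be written into the model (friction, masses, actuator gains) before each episode, so the learned policy sees a distribution of dynamics rather than a single simulated robot.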


