
DOGE : Dynamic Obstacle Ground-based Evasion

This repository contains a complete research and experimentation framework built on ROS 2 and MuJoCo for learning dynamic projectile evasion with the Solo12 quadruped robot.

The key design goal of this project is high-speed reactivity:

  • Perception: Utilizing an event-based camera to track fast-moving objects with microsecond latency.
  • Agility: Moving beyond static path planning by using Model-Based Reinforcement Learning (DreamerV3).
  • Real-Time: Integrating high-frequency motor control (100 Hz) with the neuromorphic vision system.
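
The three pieces above can be sketched as a single pipeline: an event-camera tracker produces target estimates, and a 100 Hz control loop queries a learned policy against the latest estimate. The sketch below is purely illustrative; `EventTracker` and `policy` are hypothetical stand-ins, not this repository's actual API.

```python
from collections import deque

class EventTracker:
    """Toy stand-in for an event-camera tracker: returns the latest
    (x, y) estimate of a tracked projectile, or None if no update."""
    def __init__(self, events):
        self.events = deque(events)

    def latest(self):
        return self.events.popleft() if self.events else None

def policy(observation):
    # Placeholder policy: step away from the projectile's x position.
    x, y = observation
    return [-1.0 if x > 0 else 1.0, 0.0]

def control_loop(tracker, steps, dt=0.01):  # dt = 0.01 s -> 100 Hz
    actions = []
    for _ in range(steps):
        obs = tracker.latest()
        if obs is not None:
            actions.append(policy(obs))
        # A real loop would pace itself here to hold the dt period.
    return actions

actions = control_loop(EventTracker([(0.5, 1.0), (-0.2, 0.9)]), steps=3)
```

In the actual system the tracker runs asynchronously and the policy is the learned DreamerV3 agent; the point is only that perception updates and motor commands are decoupled so the control loop never blocks on the camera.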

Prerequisites

This project assumes familiarity with ROS 2 and Reinforcement Learning.

Before working with this repository, you should:

  • Have ROS 2 Humble installed on Ubuntu 22.04
  • Have MuJoCo installed for simulation
  • Have a Python environment with PyTorch and standard RL libraries

Installation

Clone the repository using the following command:

git clone --recurse-submodules https://github.com/Telios/doge.git

Create a new conda environment and install the required packages:

conda env create -f environment.yml
conda activate doge

Install the external libraries:

./install_external.sh

Usage

First, calibrate the robot:

bash scripts/calibrate.sh

To run DOGE on the Solo12, use the following command:

bash scripts/run_doge.sh

Follow the instructions in the terminal to start DOGE.

Results & Limitations

[Video: simulation results from the event-based camera]

[Video: simulation results in MuJoCo]

  • Simulation Success: The DreamerV3 agent learned a robust policy in simulation, consistently dodging incoming projectiles by coordinating body movement and orientation (see doge.mp4).
  • Sim-to-Real Challenges: While the policy performed well in simulation, the direct transfer to the physical Solo12 robot faced a significant sim-to-real gap. The real-world physics and sensor noise profiles proved distinct enough that the policy requires further refinement.
  • Future Work: Pathways for bridging this gap include more aggressive domain randomization and more sophisticated reward shaping to encourage stable real-world behavior.
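
The domain-randomization idea mentioned above can be sketched as follows: each training episode samples perturbed physics parameters so the policy cannot overfit to one exact simulator configuration. The parameter names and ranges below are hypothetical illustrations, not values used by this repository.

```python
import random

def sample_physics_params(rng):
    """Sample one episode's worth of randomized simulator parameters.
    Ranges are illustrative guesses, not calibrated values."""
    return {
        "ground_friction": rng.uniform(0.6, 1.2),    # nominal ~0.9
        "body_mass_scale": rng.uniform(0.9, 1.1),    # +/-10% mass error
        "motor_latency_s": rng.uniform(0.0, 0.02),   # up to 20 ms delay
        "sensor_noise_std": rng.uniform(0.0, 0.05),  # additive obs noise
    }

rng = random.Random(0)
params = sample_physics_params(rng)
# A training loop would apply `params` to the MuJoCo model before each
# episode, e.g. by scaling the model's friction and mass fields.
```

Widening these ranges trades some simulation performance for policies that tolerate the real robot's unmodeled friction, latency, and sensor noise.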
