Connect your OpenClaw AI assistant (Clawson) to a Reachy Mini robot. Ultra-responsive voice conversation through OpenAI Realtime API, intelligent responses from OpenClaw, and expressive robot movements.
Works with physical robot OR MuJoCo simulator!
You don't need a physical Reachy Mini robot to use ClawBody!
ClawBody works with the Reachy Mini Simulator, a MuJoCo-based physics simulation
that runs on your computer. Watch Clawson move and express emotions on screen
while you talk to your OpenClaw agent.
```shell
# Install simulator support
pip install "reachy-mini[mujoco]"

# Start the simulator (opens a 3D window)
reachy-mini-daemon --sim

# In another terminal, run ClawBody
clawbody --gradio
```
Mac users: use `mjpython -m reachy_mini.daemon.app.main --sim` instead
ClawBody combines real-time voice conversation, OpenClaw intelligence, and expressive robot motion.
Sub-second latency using OpenAI's Realtime API for natural, responsive conversation.
Full Clawson capabilities (tools, memory, personality) through the OpenClaw gateway.
See through the robot's camera. Ask Clawson what it sees and get visual descriptions.
Audio-driven head wobble, emotions, dances, and natural movements while speaking.
No robot? Run with MuJoCo simulator and watch Clawson move in a 3D window.
Best of both worlds: OpenAI's voice tech + OpenClaw's full AI capabilities.
From speech to response in under a second
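The audio-driven head wobble above can be sketched as a simple loudness-to-angle mapping. This is an illustrative example of the technique, not ClawBody's actual motion code: the function name and scaling are assumptions, and the real implementation drives the Reachy Mini SDK rather than printing angles.

```python
import math

def wobble_angle(samples: list[float], max_degrees: float = 10.0) -> float:
    """Map the loudness of an audio chunk to a head-tilt angle.

    Illustrative sketch only; `samples` are PCM samples in [-1.0, 1.0].
    """
    if not samples:
        return 0.0
    # Root-mean-square amplitude as a simple loudness measure
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    # Scale loudness (0..1) to a tilt angle, capped at max_degrees
    return min(rms, 1.0) * max_degrees

# Louder speech chunks produce a larger wobble
quiet = [0.01 * math.sin(i / 5) for i in range(160)]
loud = [0.8 * math.sin(i / 5) for i in range(160)]
print(wobble_angle(quiet) < wobble_angle(loud))  # True
```

Running this per audio chunk as speech streams in is what makes the head motion track the rhythm of Clawson's voice.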
Choose your setup:
Option A: Physical Reachy Mini robot
Option B: MuJoCo Simulator (free, no hardware needed!)
Get ClawBody running with the simulator
```shell
# Clone ClawBody
git clone https://github.com/tomrikert/clawbody
cd clawbody

# Create a virtual environment
python -m venv .venv
source .venv/bin/activate

# Install ClawBody + simulator
pip install -e .
pip install "reachy-mini[mujoco]"

# Configure (edit with your OpenClaw URL and OpenAI key)
cp .env.example .env
nano .env

# Terminal 1: start the simulator
reachy-mini-daemon --sim

# Terminal 2: run ClawBody
clawbody --gradio
```
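The `.env` step above is where your credentials go. The variable names below are illustrative guesses; check `.env.example` in the repository for the actual keys ClawBody expects.

```shell
# Hypothetical .env contents -- confirm the real variable names in .env.example
OPENAI_API_KEY=sk-your-key-here      # for the OpenAI Realtime voice API
OPENCLAW_URL=http://localhost:8080   # your OpenClaw gateway endpoint
```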