
Mujorax, a JAX-native MuJoCo environment suite for Envrax.



Mujorax is a lightweight, open-source, JAX-native MuJoCo environment suite for single-agent reinforcement learning (RL), built on top of Envrax []. It wraps MuJoCo Playground [] environments with Envrax's JaxEnv so you can use them with envrax.make, envrax.make_vec, and the rest of Envrax's tooling.

It ships with 25 environments from the DM Control Suite. All environment logic follows a stateless, functional design built on the MJX [], JAX [], and Chex [] packages, so it can take full advantage of JAX's efficiency on accelerators.
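The stateless functional design can be illustrated with a small pure-JAX sketch. The `reset` and `step` functions below are illustrative toys, not Mujorax's actual environment logic: the point is that all state is passed in and out explicitly, so the functions are pure and can be jit-compiled and vectorised for free.

```python
import jax
import jax.numpy as jnp

def reset(key: jax.Array) -> jax.Array:
    """Pure function: the initial state depends only on the PRNG key."""
    return jax.random.uniform(key, (2,), minval=-0.05, maxval=0.05)

def step(state: jax.Array, action: jax.Array) -> tuple[jax.Array, jax.Array]:
    """Pure transition: next state and reward, no hidden mutable state."""
    next_state = state + 0.01 * jnp.array([state[1], action])
    reward = 1.0 - jnp.abs(next_state[0])
    return next_state, reward

# Because the functions are pure, batching is a one-liner with vmap,
# and the whole rollout step compiles with jit:
keys = jax.random.split(jax.random.PRNGKey(0), 8)
states = jax.vmap(reset)(keys)                       # (8, 2) batch of states
actions = jnp.zeros(8)
next_states, rewards = jax.jit(jax.vmap(step))(states, actions)
```

This is the same property that lets Envrax-style suites run thousands of environment instances in parallel on a single accelerator.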

Why Mujorax?

Envrax [] provides a JAX-native, Gymnasium-style [] API standard for RL environments, but it doesn't ship with any environments of its own. One of the biggest application areas in RL is robotics, and the gold-standard physics engine for robotics is MuJoCo []. That makes it a perfect fit for one of the first Envrax environment suites!

MuJoCo Playground [] is Google DeepMind's open-source library of MuJoCo environments, built on top of MJX [] (MuJoCo's JAX port that preserves the simulator's full physics fidelity). It already solves the hard parts: research-validated reward and termination logic for DM Control, locomotion, and manipulation environments. The only catch is that its environments expose a Brax-style MjxEnv API, which doesn't quite fit Envrax's API standard.

Rather than reinvent the wheel, Mujorax acts as a thin, type-safe wrapper around the MuJoCo Playground environments: it preserves their benefits while conforming to Envrax's API standard, making it completely plug-and-play with Envrax's toolkit.
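The wrapping idea can be sketched as follows. All class and attribute names here are illustrative stand-ins, not the real Mujorax or Playground API: a Brax-style environment bundles observation, reward, and done flag into one state object, while a Gymnasium-style step returns them as a tuple, so the wrapper only has to translate between the two conventions.

```python
from dataclasses import dataclass
import jax.numpy as jnp

@dataclass
class BraxStyleState:
    """Stand-in for a Playground MjxEnv state: everything lives in one object."""
    obs: jnp.ndarray
    reward: jnp.ndarray
    done: jnp.ndarray

class FakePlaygroundEnv:
    """Stand-in for a MuJoCo Playground environment with a Brax-style API."""
    def reset(self, key) -> BraxStyleState:
        return BraxStyleState(jnp.zeros(3), jnp.float32(0.0), jnp.bool_(False))

    def step(self, state: BraxStyleState, action) -> BraxStyleState:
        return BraxStyleState(state.obs + action, jnp.sum(action), jnp.bool_(False))

class GymStyleWrapper:
    """Thin adapter: Brax-style state objects in, Gymnasium-style tuples out."""
    def __init__(self, env):
        self._env = env

    def reset(self, key):
        state = self._env.reset(key)
        return state.obs, state                          # (observation, env_state)

    def step(self, state, action):
        state = self._env.step(state, action)
        return state.obs, state, state.reward, state.done

env = GymStyleWrapper(FakePlaygroundEnv())
obs, state = env.reset(key=None)
obs, state, reward, done = env.step(state, jnp.ones(3))
```

Because the wrapper carries the environment state through explicitly, it stays a pure function of its inputs, so the type-safety and JAX-transformability of the underlying environments are preserved.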

Acknowledgements

Mujorax wouldn't be possible without these incredible projects:

  • MuJoCo and MJX
  • MuJoCo Playground
  • JAX and Chex
  • Envrax

❤ Thank you to all the developers involved - you guys are awesome! ❤

  • Getting Started


    What are you waiting for?!

    Get Started

  • Open Source, MIT


    Mujorax is licensed under the MIT License.

    License