Introducing *MolmoSpaces*: A large-scale, fully open platform + benchmark for embodied AI research
The next wave of AI will act in the physical world, but building robots that generalize across new environments rather than simply replaying learned behaviors requires far more diverse training data than exists today. That's where MolmoSpaces comes in.
MolmoSpaces brings together 230k+ indoor scenes, 130k+ object models, and 42M annotated robotic grasps into a single open ecosystem built on two foundations:
• Objaverse, one of the largest open collections of 3D objects
• Our THOR family of interactive simulation environments
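If you want to pull down a slice of the assets to poke at, here's a minimal sketch using huggingface_hub against the dataset repo linked at the bottom of this post. The `allow_patterns` value is my assumption about the repo layout, not a documented path:

```python
# A minimal sketch, assuming huggingface_hub is installed (`pip install huggingface_hub`).
# The repo id comes from the Data link below; the allow_patterns value is a
# guess at the repo layout, not a documented path.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="allenai/molmospaces",
    repo_type="dataset",
    allow_patterns=["scenes/*"],  # hypothetical subfolder; adjust to the real layout
)
print("downloaded to:", local_dir)
```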
MolmoSpaces is grounded in physics simulation, with validated physical parameters tuned for realistic robotic manipulation, and includes a trajectory-generation pipeline for reproducible embodied AI demonstrations and imitation learning at scale. All assets, scenes, and tools are open and modular – provided in MJCF with USD conversion for cross-simulator portability – so you can plug in new embodiments, regenerate grasps, and run experiments across MuJoCo, ManiSkill, and NVIDIA Isaac Lab/Sim.
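For example, here's a minimal sketch of loading an MJCF scene in MuJoCo's Python bindings; the scene path is hypothetical, so substitute any scene file from the release:

```python
# A minimal sketch, assuming the `mujoco` Python bindings (`pip install mujoco`).
# The scene path is hypothetical; point it at any MJCF file from the release.
import mujoco

model = mujoco.MjModel.from_xml_path("molmospaces/scenes/example_scene.xml")
data = mujoco.MjData(model)

# Step physics for one second of simulated time.
while data.time < 1.0:
    mujoco.mj_step(model, data)

print(f"simulated {data.time:.2f}s; scene has {model.nbody} bodies")
```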
MolmoSpaces supports teleoperation via mobile platforms like Teledex, so you can collect demonstrations right from your phone. It's compatible with embodiment setups including DROID and CAP, with no extra configuration needed.
We're also releasing *MolmoSpaces-Bench*, a new benchmark for evaluating generalist policies under systematic, controlled variation. Researchers can isolate individual factors – object properties, layouts, task complexity, lighting, dynamics, instruction phrasing, and more – across thousands of realistic scenes.
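To give a feel for what controlled variation means in practice, here's a purely illustrative Python sketch of sweeping factors one at a time; the factor names and the evaluate() stub are hypothetical, not MolmoSpaces-Bench's actual API:

```python
# Purely illustrative: the factor names and evaluate() stub are hypothetical,
# not MolmoSpaces-Bench's actual API.
from itertools import product

factors = {
    "lighting": ["dim", "bright"],
    "layout": ["layout_a", "layout_b"],
    "instruction": ["pick up the mug", "grab the cup"],
}

def evaluate(policy, **condition):
    # Placeholder: configure a scene from `condition`, roll out the policy,
    # and return a success rate.
    return 0.0

# Sweep the full cross-product; grouping results by one factor at a time
# isolates that factor's effect while everything else varies identically.
for values in product(*factors.values()):
    condition = dict(zip(factors, values))
    print(condition, "->", evaluate(policy=None, **condition))
```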
Explore MolmoSpaces today and start building—we can't wait to see what the community does with it:
Blog: https://allenai.org/blog/molmospaces
Demo: https://molmospaces.allen.ai/
Code: https://github.com/allenai/molmospaces
Data: https://huggingface.co/datasets/allenai/molmospaces
Paper: http://allenai.org/papers/molmospaces