ManiSkill

[Figure: sample of environments and robots rendered with ray tracing. Scene datasets sourced from AI2THOR and ReplicaCAD.]

ManiSkill is a powerful, unified framework for robot simulation and training, powered by SAPIEN. The entire stack is as open-source as possible, and ManiSkill v3 is now in beta release. Its features include:

  • GPU-parallelized visual data collection system. On the high end, you can collect RGBD + segmentation data at 20k FPS with a 4090 GPU, 10-100x faster than most other simulators (see the snippet after this list)

  • Example tasks covering a wide range of robot embodiments (quadrupeds, mobile manipulators, single-arm robots) as well as a wide range of tasks (table-top manipulation, locomotion, dexterous manipulation)

  • GPU-parallelized tasks, enabling incredibly fast synthetic data collection in simulation

  • GPU-parallelized heterogeneous simulation, where every parallel environment can have a completely different scene and set of objects

  • Flexible task building API that abstracts away much of the complex GPU memory management code
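
As a minimal sketch of how GPU-parallelized visual data collection can look, the snippet below creates a batched environment and steps it with random actions. It assumes the Gymnasium-style API described in the ManiSkill quickstart; the task ID, observation mode string, and keyword arguments are illustrative and may differ between releases, so consult the quickstart docs for the exact names.

```python
# Sketch: create a GPU-parallelized ManiSkill environment and collect batched
# RGBD observations. Names below (task ID, obs_mode, control_mode) are example
# choices, not a definitive reference.
import gymnasium as gym
import mani_skill.envs  # noqa: F401  (importing registers ManiSkill environments with gymnasium)

env = gym.make(
    "PickCube-v1",                        # example task ID used for illustration
    num_envs=256,                         # number of parallel environments simulated on the GPU
    obs_mode="rgbd",                      # request RGB + depth observations
    control_mode="pd_joint_delta_pos",    # example controller choice
)

obs, _ = env.reset(seed=0)
for _ in range(100):
    # Sample a random batched action; observations, rewards, and done flags
    # come back batched across all parallel environments.
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
env.close()
```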

ManiSkill plans to enable all kinds of workflows, including but not limited to 2D/3D vision-based reinforcement learning, imitation learning, and sense-plan-act. More assets and scenes (e.g. AI2THOR) will be supported, along with other features such as digital twins for evaluating real-world policies. See our roadmap for the features planned before the official v3 release.

NOTE: This project is currently in beta release, so not all features have been added yet and there may be some bugs. If you find any bugs or have feature requests, please post them to our GitHub issues or discuss them on GitHub Discussions. We also have a Discord server through which we make announcements and discuss ManiSkill.