Research · VLA · Open Source
OpenVLA
by Stanford / UC Berkeley / Toyota Research Institute / Google DeepMind
Architecture
VLA
Parameters
7B
Training Data
970K episodes (Open X-Embodiment)
License
MIT
Status
RESEARCH
Open Source
Yes
Robots Supported
Franka Panda, WidowX, Google Robot, Bridge V2 robots
About
Open-source 7B-parameter vision-language-action model trained on 970K real robot episodes from the Open X-Embodiment dataset. OpenVLA outperforms Google's RT-2-X by 16.5% absolute success rate on generalization benchmarks while remaining fully open and reproducible.
Key Differentiator
Fully open-source 7B VLA outperforming RT-2-X by 16.5% on generalization benchmarks
Funding Context
Academic consortium funded by NSF, Toyota Research Institute, and Google DeepMind grants
Milestones
2024-06: OpenVLA paper published with 7B model
2024-07: Model weights and code released under MIT license
2024-10: Community fine-tuning ecosystem emerges
2025-02: OpenVLA v2 with improved sim-to-real transfer
2025-06: Adopted by 20+ research labs worldwide