
OpenVLA

by Stanford / UC Berkeley / TRI / DeepMind
Architecture
VLA
Parameters
7B
Training Data
970K episodes (Open X-Embodiment)
License
MIT
Status
RESEARCH
Open Source
Yes
Robots Supported
Franka Panda, WidowX, Google Robot, Bridge V2 robots
About

Open-source 7B-parameter vision-language-action model trained on 970K real robot episodes from the Open X-Embodiment dataset. OpenVLA outperforms Google's much larger, closed RT-2-X by 16.5% in absolute task success rate on generalization benchmarks, while being fully open and reproducible.
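Like RT-2, OpenVLA emits robot actions as discrete tokens: each action dimension is quantized into 256 uniform bins, and the model predicts one bin index per dimension. A minimal sketch of the de-tokenization step, assuming the paper's defaults of 256 bins over a normalized [-1, 1] range (function and variable names here are illustrative, not from the OpenVLA codebase):

```python
import numpy as np

N_BINS = 256  # OpenVLA discretizes each action dimension into 256 bins

def detokenize_action(bin_indices: np.ndarray,
                      low: float = -1.0,
                      high: float = 1.0) -> np.ndarray:
    """Map discrete bin indices back to continuous actions via bin centers."""
    bin_width = (high - low) / N_BINS
    return low + (bin_indices + 0.5) * bin_width

# A 7-DoF action (xyz delta, rotation delta, gripper) predicted as bin indices:
bins = np.array([128, 0, 255, 64, 192, 128, 255])
action = detokenize_action(bins)  # continuous values in (-1, 1)
```

In practice the per-dimension `low`/`high` bounds come from dataset statistics (OpenVLA un-normalizes per embodiment), so the fixed [-1, 1] range above is a simplification.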

Key Differentiator
Fully open-source 7B VLA outperforming RT-2-X by 16.5% on generalization benchmarks
Funding Context

Academic consortium funded by NSF, Toyota Research Institute, and Google DeepMind grants

Milestones
2024-06: OpenVLA paper published with 7B model
2024-07: Model weights and code released under MIT license
2024-10: Community fine-tuning ecosystem emerges
2025-02: OpenVLA v2 with improved sim-to-real transfer
2025-06: Adopted by 20+ research labs worldwide