The model side of embodied AI is moving at a pace nobody expected. pi0, pi0.5, pi0.6, OpenPI, and dozens of fine-tuned variants are landing weekly. Open-weight VLAs are no longer a research preview. They are a tool you can download and run today.
Hardware has not kept up.
The model-hardware mismatch
Modern VLA architectures assume a certain embodiment. Bimanual arms. Visual input from a head-mounted camera. End-effector control with enough degrees of freedom to handle real objects in real environments.
Most available hardware falls into two categories, and neither works well for VLA iteration.
Category one: the right form factor but too expensive to run unconfident policies on. These are the $80K to $200K+ humanoids from major labs. The hardware is excellent. The problem is that when a crash costs $10K in parts, you only deploy policies you are confident in. You test conservatively. You schedule experiments weeks in advance because the robot is shared across three projects and reserved for an investor demo next Thursday.
Category two: affordable but the wrong shape. Single arms, toy platforms, or configurations that do not match the training distribution of the models you want to run. You can iterate fast, but your results do not transfer to the embodiment the VLA was designed for.
The result: most researchers are training policies on one embodiment and hoping they transfer to another. That is not iteration. That is hope.
Running bad policies is how research works
The instinct at well-funded labs is to only deploy policies they are confident will work. This makes sense when the hardware is expensive and fragile. But it has a cost that is easy to miss.
Running only confident policies means you only learn what you already know works. You confirm existing approaches. You refine things that are already good. You never discover the surprising failure that leads to the next insight.
The breakthroughs come from running the policy you are not sure about and watching exactly how it fails. Is it a perception issue? A timing problem? Does the gripper slip at the same point every time? Does the policy generalize to a different object, or does it fall apart completely?
You cannot answer these questions from simulation. You cannot answer them from reading the loss curve. You can only answer them by deploying the policy on physical hardware and watching.
This requires hardware cheap enough and robust enough that failure is a data point, not a disaster.
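Treating failure as a data point gets easier with even a minimal harness that records how each rollout ended. Here is a hedged sketch in plain Python; the policy names, outcome categories, and interfaces are illustrative, not any specific library's API:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class RolloutResult:
    policy_id: str  # hypothetical identifier for the checkpoint under test
    outcome: str    # e.g. "success", "grip_slip", "timeout", "perception_miss"
    step: int       # step at which the episode ended

def failure_histogram(results: list[RolloutResult]) -> Counter:
    """Count how often each failure mode occurs, ignoring successes.
    If the gripper slips at the same point every time, it shows up here."""
    return Counter(r.outcome for r in results if r.outcome != "success")

# Illustrative log from a batch of cheap, low-stakes rollouts.
log = [
    RolloutResult("pi0-ft-v3", "grip_slip", step=142),
    RolloutResult("pi0-ft-v3", "grip_slip", step=139),
    RolloutResult("pi0-ft-v3", "success", step=310),
    RolloutResult("pi0-ft-v3", "perception_miss", step=12),
]

print(failure_histogram(log))  # Counter({'grip_slip': 2, 'perception_miss': 1})
```

The point is not the code; it is that this kind of log only exists if running a bad policy is cheap enough to do in batches.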
What has been missing
I think the thing that has been missing from humanoid VLA development is simple: a humanoid form factor that one person can iterate on. That is it.
Bimanual, because that is what modern VLAs assume. Desk-scale, because that is where the tasks are and where the failure cost is manageable. Compatible with the tools the community already uses: LeRobot, HuggingFace, standard URDF definitions. Priced so that running a bad policy on it is not a financial event.
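Compatibility with standard URDF definitions matters because a URDF is just XML: you can inspect the kinematic description with nothing but the standard library. A minimal sketch, using a toy robot description (the link and joint names here are hypothetical, not Ronin's actual URDF):

```python
import xml.etree.ElementTree as ET

# A toy URDF fragment standing in for a real robot description.
# Link and joint names are hypothetical, not from any actual platform.
URDF = """
<robot name="example_bimanual">
  <link name="torso"/>
  <link name="left_upper_arm"/>
  <joint name="left_shoulder_pitch" type="revolute">
    <parent link="torso"/>
    <child link="left_upper_arm"/>
    <limit lower="-1.57" upper="1.57" effort="10.0" velocity="2.0"/>
  </joint>
</robot>
"""

def joint_limits(urdf_xml: str) -> dict[str, tuple[float, float]]:
    """Map each non-fixed joint name to its (lower, upper) position limits."""
    root = ET.fromstring(urdf_xml)
    limits = {}
    for joint in root.findall("joint"):
        if joint.get("type") == "fixed":
            continue
        limit = joint.find("limit")
        if limit is not None:
            limits[joint.get("name")] = (
                float(limit.get("lower")),
                float(limit.get("upper")),
            )
    return limits

print(joint_limits(URDF))  # {'left_shoulder_pitch': (-1.57, 1.57)}
```

Because the format is an open standard, the same description feeds simulators, visualizers, and data-collection tooling without translation.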
This is Ronin. The constraint that matters is whether the hardware in front of you is something you can actually try things on. Ronin is the first humanoid form factor I have worked with where the answer is unambiguously yes.
One person, one day, one humanoid form factor. Run the bad policy and find out.
What changes when the hardware matches the model
Policy transfer stops being a research question and starts being a workflow step. You collect data on the same embodiment you deploy on. No domain gap from retargeting. No hoping that the sim-to-real transfer holds up.
The gap between sim and real shrinks, not because the sim got better, but because you can validate in real faster than you can tune the sim. When deploying on hardware takes minutes instead of weeks, the calculus changes. Running the real thing becomes cheaper than making the simulation more accurate.
Researchers who could never justify a humanoid platform can now run humanoid experiments. Not toy experiments on a platform that vaguely resembles a humanoid, but real bimanual manipulation tasks on hardware that matches the embodiment the community is building models for.
The models are ready. The open-source tooling is ready. The missing piece was always a humanoid platform that treats failed policies as information, not liability. That piece now exists.
Ronin ships Q4 2026. Reserve with a fully refundable $200 deposit.
Reserve now