zlacker

[return to "Gemini Robotics On-Device brings AI to local robotic devices"]
1. polski+Ka1[view] [source] 2025-06-24 20:52:54
>>meetpa+(OP)
What is the model architecture? I'm assuming it's far removed from LLMs, but I'm curious to know more. Can anyone provide links that describe VLA architectures?
2. KoolKa+yc1[view] [source] 2025-06-24 21:03:07
>>polski+Ka1
Actually very close to one I'd say.

It's a "visual language action" VLA model "built on the foundations of Gemini 2.0".

As Gemini 2.0 has native language, audio and video support, I suspect it has been adapted to include native "action" data too, perhaps only as a fine-tuned output modality rather than as an input/output modality at the pre-training stage (given its Gemini 2.0 foundation).

Natively multimodal LLMs are basically brains.
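
For intuition, something like the sketch below is the general VLA pattern I have in mind: keep the pretrained multimodal backbone and fine-tune a small head that emits robot actions as a new output modality. This is my own rough assumption, not Google's actual design, and every name in it (VlaPolicy, action_dim, chunk_len) is illustrative.

    import torch
    import torch.nn as nn

    class VlaPolicy(nn.Module):
        """Hypothetical VLA sketch: pretrained VLM backbone + small action head."""
        def __init__(self, backbone: nn.Module, hidden_dim: int,
                     action_dim: int = 7, chunk_len: int = 8):
            super().__init__()
            self.backbone = backbone           # pretrained multimodal LLM, frozen or LoRA-tuned
            self.action_head = nn.Sequential(  # new "action" output modality, trained in fine-tuning
                nn.Linear(hidden_dim, hidden_dim),
                nn.GELU(),
                nn.Linear(hidden_dim, chunk_len * action_dim),
            )
            self.chunk_len, self.action_dim = chunk_len, action_dim

        def forward(self, image_tokens: torch.Tensor, text_tokens: torch.Tensor) -> torch.Tensor:
            # Backbone fuses vision + language; read out the final hidden state.
            h = self.backbone(image_tokens, text_tokens)           # (batch, seq, hidden_dim)
            flat = self.action_head(h[:, -1])                      # (batch, chunk_len * action_dim)
            return flat.view(-1, self.chunk_len, self.action_dim)  # short chunk of robot actions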

3. martyt+Wg1[view] [source] 2025-06-24 21:34:00
>>KoolKa+yc1
OpenVLA is basically a slightly modified, fine-tuned Llama 2. I found the launch/intro talk by the lead author to be quite accessible: https://www.youtube.com/watch?v=-0s0v3q7mBk
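
If it helps, here's roughly how that recipe handles actions, as I understand it: continuous actions are discretized into a few hundred bins per dimension and emitted as ordinary LLM tokens, so the "action head" is just next-token prediction. The bin count and token offset below are placeholders, not the paper's exact values.

    import numpy as np

    N_BINS = 256                   # bins per action dimension (illustrative)
    ACTION_TOKEN_OFFSET = 31744    # hypothetical start of a reserved token range

    def actions_to_tokens(actions, low, high):
        """Map continuous actions in [low, high] to one token id per dimension."""
        norm = np.clip((actions - low) / (high - low), 0.0, 1.0)
        bins = np.round(norm * (N_BINS - 1)).astype(int)
        return ACTION_TOKEN_OFFSET + bins

    def tokens_to_actions(tokens, low, high):
        """Inverse mapping used at inference time to recover a robot command."""
        bins = tokens - ACTION_TOKEN_OFFSET
        return low + (bins / (N_BINS - 1)) * (high - low)

    # Example: a 7-DoF action (end-effector deltas + gripper), normalized per dimension.
    low, high = np.full(7, -1.0), np.full(7, 1.0)
    a = np.array([0.1, -0.3, 0.0, 0.05, 0.0, 0.0, 1.0])
    print(tokens_to_actions(actions_to_tokens(a, low, high), low, high))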
4. KoolKa+2a2[view] [source] 2025-06-25 07:58:06
>>martyt+Wg1
In the paper linked at the bottom of Google's page, this VLA is described as built on the foundations of Gemini 2.0 (hence my quotation marks). They'd be using Gemini 2.0 rather than Llama.

https://arxiv.org/pdf/2503.20020
