1. martyt (OP) 2025-06-24 17:27:22
You can think of these as essentially multi-modal LLMs, which is to say you can have very small/fast ones (SmolVLA, ~0.5B params) that are good at specific tasks, and larger/slower, more general ones (OpenVLA, a fine-tuned Llama 2 7B). So a Raspberry Pi could be used for some very specific tasks, while even the more general ones can run on beefy consumer hardware.
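
For a sense of what running the bigger one looks like, here's a rough sketch using the Hugging Face transformers loading path the OpenVLA repo documents (the openvla/openvla-7b checkpoint, the predict_action call, and the bridge_orig unnorm key are from their README as best I remember, so double-check the details against the repo):

    import torch
    from PIL import Image
    from transformers import AutoModelForVision2Seq, AutoProcessor

    # Load the 7B VLA checkpoint; in bf16 it needs a beefy consumer GPU,
    # and 8-bit/4-bit quantization shrinks it further.
    processor = AutoProcessor.from_pretrained("openvla/openvla-7b", trust_remote_code=True)
    vla = AutoModelForVision2Seq.from_pretrained(
        "openvla/openvla-7b",
        torch_dtype=torch.bfloat16,
        low_cpu_mem_usage=True,
        trust_remote_code=True,
    ).to("cuda:0")

    # Current camera observation plus a natural-language instruction.
    image = Image.open("camera_frame.png")
    prompt = "In: What action should the robot take to pick up the red block?\nOut:"

    inputs = processor(prompt, image).to("cuda:0", dtype=torch.bfloat16)
    # Returns a low-level robot action (end-effector delta pose + gripper command),
    # de-normalized with the statistics of the named training dataset.
    action = vla.predict_action(**inputs, unnorm_key="bridge_orig", do_sample=False)
    print(action)

The smaller SmolVLA checkpoints are meant to be run through the lerobot tooling instead, which is where the "fast enough for cheap hardware" story comes from.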