I call BS on this...it's an LLM...
Guarantees of correctness and safety are obviously of huge concern, hence the main article. But it's absolutely not unreasonable to see these models enabling humanoid robots capable of various day-to-day activities and work.
https://tidybot.cs.princeton.edu/ https://innermonologue.github.io/
https://www.microsoft.com/en-us/research/group/autonomous-sy...
The alignment problem will come up when the robot control system notices that the guy with the stick is interfering with the robot's goals.
Also, it knows when to use a calculator if it has access to one, so it's not a big deal.
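To make that concrete, here's a minimal sketch of the calculator-as-tool pattern. `ask_llm` is a made-up placeholder for whatever model API you actually call, and the action/answer dict format is just a convention picked for this example:

    import ast
    import operator

    # Hypothetical model call: returns either a final answer or a request to use a tool,
    # e.g. {"action": "calculator", "input": "1234 * 5678"} or {"action": "answer", "text": "..."}.
    # Stubbed here; in practice this would hit whatever LLM API you use.
    def ask_llm(prompt: str) -> dict:
        raise NotImplementedError("wire up a real model here")

    # A tiny, safe arithmetic evaluator standing in for "a calculator".
    _OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
            ast.Mult: operator.mul, ast.Div: operator.truediv}

    def calculator(expr: str) -> float:
        def ev(node):
            if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
                return _OPS[type(node.op)](ev(node.left), ev(node.right))
            if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                return node.value
            raise ValueError("unsupported expression")
        return ev(ast.parse(expr, mode="eval").body)

    def answer_with_tools(question: str) -> str:
        step = ask_llm(question)
        if step["action"] == "calculator":
            result = calculator(step["input"])
            # Feed the tool result back so the model can phrase the final answer itself.
            return ask_llm(f"{question}\nCalculator result: {result}\nAnswer the question.")["text"]
        return step["text"]

The point being: the arithmetic is done by ordinary code, and the model only has to decide when to ask for it.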
Would you be "comforted" that this mega-genius is worse at arithmetic than you are and doesn't remember what it did yesterday?
Probably not. You might well be worried that this weird psychopath is going to get a medical license and cut the wrong number of fingers off of a whole bunch of patients.
That can guide me through the process of writing a Navier-Stokes simulation…
In a foreign language…
That can be trivially put into a loop and tasked with acting like an agent (a rough sketch follows below)…
And which is good enough that people are already seriously asking themselves if they need to hire people to do certain tasks…
…
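On the "put into a loop" point, the loop really is about this small. A sketch only: `llm` and `run_tool` are invented placeholders rather than any particular library's API, and the FINAL:/TOOL: prefixes are just a convention chosen for the example:

    # Minimal agent loop: keep feeding the model its own tool results until it declares it is done.
    def llm(messages: list[dict]) -> str:
        raise NotImplementedError("call your model here")

    def run_tool(name: str, arg: str) -> str:
        raise NotImplementedError("dispatch to whatever tools you allow (search, calculator, ...)")

    def agent(task: str, max_steps: int = 10) -> str:
        messages = [{"role": "user", "content": task}]
        for _ in range(max_steps):
            reply = llm(messages)
            messages.append({"role": "assistant", "content": reply})
            if reply.startswith("FINAL:"):            # output convention chosen for this sketch
                return reply.removeprefix("FINAL:").strip()
            if reply.startswith("TOOL:"):             # e.g. "TOOL: calculator 2+2"
                name, _, arg = reply.removeprefix("TOOL:").strip().partition(" ")
                messages.append({"role": "user", "content": f"Tool result: {run_tool(name, arg)}"})
        return "stopped after max_steps without a final answer"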
Why call BS?
It's not perfect, sure, but it's not making a highly regional joke about the Isle of Wight Ferry[0] either.
[0] "What's brown and comes steaming out the back of Cowes?"
How so? If they cannot drive a car?
https://www.psy.ox.ac.uk/news/the-brain-is-a-prediction-mach...
It looks like being LLM-based is helpful for generating control scripts and communicating its reasoning. Text seems to provide useful building blocks for higher-order reasoning and behavior. As with humans!
Another comment already links to demos and papers of LLMs operating robots and agents in 3D environments.
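For the "generating control scripts" idea, the usual shape is roughly the sketch below: prompt the model to emit a plan against a tiny, vetted robot API and execute only whitelisted calls. All names here are invented for illustration; this is not the API of any of the linked projects:

    import re

    ALLOWED = {"move_to", "pick", "place", "say"}   # the entire surface the model may use

    PROMPT = (
        "You control a robot with these functions: move_to(loc), pick(obj), place(loc), say(text).\n"
        "Write one call per line, nothing else, to accomplish: {task}"
    )

    def execute(plan: str, robot) -> None:
        # Expect one call per line, e.g. pick('red_block') or move_to('table').
        for line in plan.splitlines():
            m = re.fullmatch(r"(\w+)\('([^']*)'\)", line.strip())
            if not m or m.group(1) not in ALLOWED:
                continue  # silently skip anything outside the whitelist
            getattr(robot, m.group(1))(m.group(2))

    # plan = your_llm(PROMPT.format(task="put the mug in the sink")); execute(plan, robot)

The plan text doubles as the "inner monologue": you can log it, inspect it, or ask the model to explain it, which is part of why text turns out to be a useful substrate for this kind of control.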