zlacker

[parent] [thread] 9 comments
1. chaxor+(OP)[view] [source] 2023-07-05 18:33:02
It's important to recognize that the model is fully capable of operating in open-world environments, with visual stimuli and motor output, to achieve high-level tasks. This has been demonstrated in proofs of concept several times now with systems such as Voyager. So, while there are certainly some details that are important, many of them are the annoyances that we devs deal with all the time (how to connect the various parts of a system properly, etc.); the fundamental expressive capabilities of these models are not that limited. Certainly limited in some sense (as seen in the several papers applying category-theoretic arguments to transformers), but for many engineering applications in the world, these models are very capable and useful.

Guarantees of correctness and safety are obviously of huge concern, hence the main article. But it's absolutely not unreasonable to see these models enabling humanoid robots capable of various day-to-day activities and work.

replies(3): >>Dennis+dq >>jgalt2+w41 >>hgsgm+4d1
2. Dennis+dq[view] [source] 2023-07-05 20:24:17
>>chaxor+(OP)
To save others the trouble, I googled Voyager, it's pretty interesting. I had no idea an LLM could do this sort of thing:

https://voyager.minedojo.org/

replies(2): >>famous+ex >>yldedl+4X1
◧◩
3. famous+ex[view] [source] [discussion] 2023-07-05 20:58:44
>>Dennis+dq
Other examples (in the real world) you might find interesting:

https://tidybot.cs.princeton.edu/

https://innermonologue.github.io/

https://palm-e.github.io/

https://www.microsoft.com/en-us/research/group/autonomous-sy...

replies(1): >>Animat+0A
◧◩◪
4. Animat+0A[view] [source] [discussion] 2023-07-05 21:11:31
>>famous+ex
> https://palm-e.github.io/

The alignment problem will come up when the robot control system notices that the guy with the stick is interfering with the robot's goals.

replies(1): >>c_cran+IB
◧◩◪◨
5. c_cran+IB[view] [source] [discussion] 2023-07-05 21:18:45
>>Animat+0A
A robot control system without a mechanical override in favor of the stick is a poor one indeed.
6. jgalt2+w41[view] [source] 2023-07-06 00:07:20
>>chaxor+(OP)
> It's important to recognize that the model is fully capable of operating in open world environment

How so, if they cannot drive a car?

replies(1): >>chaxor+Lw9
7. hgsgm+4d1[view] [source] 2023-07-06 01:09:44
>>chaxor+(OP)
I don't understand why Voyager benefits from being an LLM, vs. a "normal" neural net. It's not talking to anyone or learning from text.
replies(1): >>Footke+5w1
◧◩
8. Footke+5w1[view] [source] [discussion] 2023-07-06 03:30:33
>>hgsgm+4d1
> We introduce Voyager, the first LLM-powered embodied lifelong learning agent to drive exploration, master a wide range of skills, and make new discoveries continually without human intervention in Minecraft. Voyager is made possible through three key modules: 1) an automatic curriculum that maximizes exploration; 2) a skill library for storing and retrieving complex behaviors; and 3) a new iterative prompting mechanism that generates executable code for embodied control.

It looks like being LLM-based is helpful for generating control scripts and communicating its reasoning. Text seems to provide useful building blocks for higher-order reasoning and behavior. As with humans!
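To make the third module concrete, here's a minimal sketch of a Voyager-style iterative prompting loop: propose a task, ask the LLM for code, execute it, feed any error back into the prompt, and store verified code in the skill library for reuse. This is a hedged illustration, not the actual Voyager implementation — `query_llm` and `run_in_env` are hypothetical stand-ins for the real model call and the Mineflayer execution environment.

```python
def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call that returns executable code."""
    # The real system would call a large language model here.
    return "def collect_wood(bot):\n    pass\n"

def run_in_env(code: str) -> tuple[bool, str]:
    """Hypothetical stand-in for running generated code in the game
    environment. Returns (success, error_message)."""
    return True, ""

def attempt_task(task: str, skill_library: dict, max_retries: int = 3):
    # Prompt includes known skills so the model can compose prior behaviors.
    prompt = f"Write code to: {task}\nKnown skills: {list(skill_library)}"
    for _ in range(max_retries):
        code = query_llm(prompt)
        ok, error = run_in_env(code)
        if ok:
            skill_library[task] = code  # store the verified skill for reuse
            return code
        # Iterative prompting: append the failure so the next attempt can fix it.
        prompt += f"\nPrevious attempt failed with: {error}\nFix it."
    return None

skills: dict[str, str] = {}
attempt_task("collect wood", skills)
```

The key idea the quote describes is that the loop's output is code rather than low-level actions, which is why an LLM (rather than a generic policy network) is the natural fit.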

◧◩
9. yldedl+4X1[view] [source] [discussion] 2023-07-06 07:24:26
>>Dennis+dq
Voyager is pretty cool, but it's not transferable to the real world at all. The automatic curriculum relies on lots of specific knowledge from people talking about how to get better at Minecraft. The skill library writes programs using the Mineflayer API, which provides primitives for all physics, entities, actions, state etc. A real-life analogue of that would be like solving robotics and perception real quick.
◧◩
10. chaxor+Lw9[view] [source] [discussion] 2023-07-08 05:24:34
>>jgalt2+w41
What evidence do you have that allows you to make the assertion that they 'cannot drive a car'?