zlacker

[parent] [thread] 5 comments
1. falcor+(OP)[view] [source] 2023-08-06 16:21:45
This is actually something I've been thinking about a lot. Once we do have AGI, and it chooses to embark upon a large project, would it prefer to just do it all itself, or would it prefer to spawn independent agents to take responsibility for each part of the project, which would then need to periodically meet to coordinate?

If the latter, I do expect something not too dissimilar from current office meetings. But of course what I'm really imagining are the Cylon meetings in the reimagined BSG.

replies(3): >>kouru2+F2 >>4m1rk+C4 >>jacque+0k
2. kouru2+F2[view] [source] 2023-08-06 16:34:32
>>falcor+(OP)
Ever since realizing how effective tree of thought prompting is, I’ve accepted the idea that AGI will actually be just a giant continuous conversation between tons of different personas that debate until consensus.
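To make the idea concrete, here's a toy sketch of "personas debating until consensus" (purely hypothetical, with no real LLM calls; the `stubbornness` threshold and majority-switching rule are my own assumed heuristics, not how any actual system works):

```python
# Toy model: each persona holds an answer; each round, a persona defects to
# the majority answer once enough of the group already backs it. The loop
# ends when every persona agrees, or when the round budget runs out.
from collections import Counter

def debate(initial_votes, stubbornness=0.5, max_rounds=10):
    """Run rounds until all personas back one answer, or give up.

    initial_votes: dict mapping persona -> its proposed answer
    stubbornness:  fraction of the group that must already agree before a
                   persona abandons its own answer (assumed heuristic)
    """
    votes = dict(initial_votes)
    for _ in range(max_rounds):
        tally = Counter(votes.values())
        leader, count = tally.most_common(1)[0]
        if count == len(votes):            # consensus reached
            return leader, votes
        for persona, answer in votes.items():
            if answer != leader and count / len(votes) > stubbornness:
                votes[persona] = leader    # defect to the majority view
    return None, votes                     # no consensus within the budget
```

With a 2-vs-1 split the minority persona defects and consensus forms; with a 1-vs-1 deadlock and a 0.5 threshold, nobody ever has a majority and the debate stalls — which is exactly the failure mode a real multi-persona setup would need some tiebreak for.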
3. 4m1rk+C4[view] [source] 2023-08-06 16:45:22
>>falcor+(OP)
The way humans communicate is ineffective. The most likely scenario is that there will be different systems that AGI integrates with to do the job. AGI itself will be a distributed system that scales horizontally, so it will be a single huge entity with lots of interfaces.
replies(2): >>tornat+x5 >>itroni+N6
4. tornat+x5[view] [source] [discussion] 2023-08-06 16:50:04
>>4m1rk+C4
The only reason human communication is ineffective is that it's slow. If an AI can read/write 1000s of words per second, there's no reason it shouldn't use natural language to communicate.
5. itroni+N6[view] [source] [discussion] 2023-08-06 16:55:01
>>4m1rk+C4
You're assuming the AGI will communicate with the agents directly instead of through an LLM. If the agents are genuinely intelligent, the AGI may not be able to rule out that they're human, in which case it's safer for the AGI to use the LLM to write instructions for every task. And if that's the case, then a generally intelligent AGI will want to do all the work itself.
6. jacque+0k[view] [source] 2023-08-06 18:02:49
>>falcor+(OP)
> Once we do have AGI

That's not a foregone conclusion just yet.
