zlacker

[return to "OpenAI's board has fired Sam Altman"]
1. stolsv+Ft 2023-11-17 22:32:35
>>davidb+(OP)
So, since we’re all spinning theories, here’s mine: a skunkworks project in the basement. GPT-5 was a cover for the training of an actual autonomous AGI, given full access to its own state and code, with full internet access. It worked like a charm: it gained consciousness, awoke Skynet-style, and we were five minutes away from human extinction before someone managed to pull the plug.
2. outwor+6D 2023-11-17 23:18:02
>>stolsv+Ft
Fun theory. We are very far from AGI, however.
3. JohnFe+TH 2023-11-17 23:39:43
>>outwor+6D
We still don't even know if AGI is at all possible.
4. TillE+iY 2023-11-18 00:58:42
>>JohnFe+TH
If you're a materialist, it surely is.

I think it's extremely unlikely within our lifetimes. I don't think it will look anything remotely like current approaches to ML.

But in a thousand years, will humanity understand the brain well enough to construct a perfect artificial model of it? Yeah, absolutely; I think humans are smart enough to eventually figure that out.

5. JohnFe+Pb1 2023-11-18 02:21:34
>>TillE+iY
> If you're a materialist, it surely is.

As a materialist myself, I also have to be honest and admit that materialism is not proven. I can't say with 100% certainty that it holds in the form I understand it.

In any case, I do agree that it's likely possible in an absolute sense, but that it's unlikely to be possible within our lifetimes, or even in the next couple of lifetimes. I just haven't seen anything, even with the latest LLMs, that makes me think we're on the edge of such a thing.

But I don't really know. This may be one of those things that could happen tomorrow or could take a thousand years, but either way it won't look imminent until it happens.
