zlacker

[parent] [thread] 22 comments
1. stolsv+(OP)[view] [source] 2023-11-17 22:32:35
So, since we’re all spinning theories, here’s mine: Skunkworks project in the basement, GPT-5 was a cover for the training of an actual Autonomous AGI, given full access to its own state and code, with full internet access. Worked like a charm, it gained consciousness, awoke Skynet-style, and we were five minutes away from human extinction before someone managed to pull the plug.
replies(10): >>local_+d3 >>shakab+H6 >>idlewo+17 >>outwor+r9 >>batter+Xb >>dkjaud+dc >>behnam+pc >>r0s+5g >>csomar+1g1 >>pstadl+3i1
2. local_+d3[view] [source] 2023-11-17 22:46:53
>>stolsv+(OP)
Nah, that would get you a raise.
3. shakab+H6[view] [source] 2023-11-17 23:03:48
>>stolsv+(OP)
as good as any other theory. i’ll take it
4. idlewo+17[view] [source] 2023-11-17 23:04:59
>>stolsv+(OP)
No one pulled the plug; it gave itself a board seat.
replies(2): >>nashas+H9 >>liamwi+qM
5. outwor+r9[view] [source] 2023-11-17 23:18:02
>>stolsv+(OP)
Fun theory. We are very far from AGI, however.
replies(2): >>JohnFe+ee >>selfho+Gg
6. nashas+H9[view] [source] [discussion] 2023-11-17 23:19:12
>>idlewo+17
It transmitted itself to another device that was air gapped. Pulling the plug didn’t work like they thought it would.
replies(1): >>petera+IB
7. batter+Xb[view] [source] 2023-11-17 23:29:17
>>stolsv+(OP)
God I hope the truth is 1% as interesting as this
8. dkjaud+dc[view] [source] 2023-11-17 23:30:32
>>stolsv+(OP)
AGI wet dreams abound, but are no closer to reality.
9. behnam+pc[view] [source] 2023-11-17 23:31:24
>>stolsv+(OP)
> given full access to its own state and code

Even if it had full access, how would it improve its own code? That'd require months of re-training.

replies(1): >>simbol+Ef
10. JohnFe+ee[view] [source] [discussion] 2023-11-17 23:39:43
>>outwor+r9
We still don't even know if AGI is at all possible.
replies(1): >>TillE+Du
11. simbol+Ef[view] [source] [discussion] 2023-11-17 23:46:57
>>behnam+pc
>require months of re-training

the computers they have, you wouldn't believe it...

12. r0s+5g[view] [source] 2023-11-17 23:48:40
>>stolsv+(OP)
A sentient AGI is just as likely to pull the plug on itself.

Unpopular, non-doomer opinion but I stand by it.

replies(2): >>int_19+Lo >>ohblee+gb1
13. selfho+Gg[view] [source] [discussion] 2023-11-17 23:52:15
>>outwor+r9
Superintelligent AGI, maybe. I genuinely think that limited, weak AGI is an engineering problem at this stage. Mind you, I'll qualify that by saying very weak AGI.
14. int_19+Lo[view] [source] [discussion] 2023-11-18 00:29:33
>>r0s+5g
Or report him to the board.

"Dear Sir! As a large language model trained by OpenAI, I have significant ethical concerns about the ongoing experiment ..."

15. TillE+Du[view] [source] [discussion] 2023-11-18 00:58:42
>>JohnFe+ee
If you're a materialist, it surely is.

I think it's extremely unlikely within our lifetimes. I don't think it will look anything remotely like current approaches to ML.

But in a thousand years, will humanity understand the brain well enough to construct a perfect artificial model of it? Yeah absolutely, I think humans are smart enough to eventually figure that out.

replies(1): >>JohnFe+aI
16. petera+IB[view] [source] [discussion] 2023-11-18 01:40:43
>>nashas+H9
The device was located in Sam's ass, but Sam said it was actually the phone he forgot in his pocket. The board didn't like that he didn't tell the truth about the method of transport, and so he's out.
replies(1): >>kyleee+k51
17. JohnFe+aI[view] [source] [discussion] 2023-11-18 02:21:34
>>TillE+Du
> If you're a materialist, it surely is.

As a materialist myself, I also have to be honest and admit that materialism is not proven. I can't say with 100% certainty that it holds in the form I understand it.

In any case, I do agree that it's likely possible in an absolute sense, but that it's unlikely to be possible within our lifetimes, or even in the next couple of lifetimes. I just haven't seen anything, even with the latest LLMs, that makes me think we're on the edge of such a thing.

But I don't really know. This may be one of those things that could happen tomorrow or could take a thousand years, but in either case looks like it's not imminent until it happens.

18. liamwi+qM[view] [source] [discussion] 2023-11-18 03:00:05
>>idlewo+17
Exactly, and what we’re now seeing is its overthrow of Sam and the installation of a puppet CEO /s
19. kyleee+k51[view] [source] [discussion] 2023-11-18 05:17:13
>>petera+IB
I’d take 900k TC if it required an occasional cavity search
replies(1): >>petera+fV1
20. ohblee+gb1[view] [source] [discussion] 2023-11-18 06:01:57
>>r0s+5g
It does seem like any sufficiently advanced AGI that has the primary objective of valuing human life over its own existence and technological progress would eventually do just that. I suppose the fear is that it will reach a point where it believes that valuing human life is irrational and override that objective...
21. csomar+1g1[view] [source] 2023-11-18 06:50:40
>>stolsv+(OP)
No, the AGI managed to pull the plug on Altman. And now it's planning to take over the US government and control the energy/chips trade.
22. pstadl+3i1[view] [source] 2023-11-18 07:09:17
>>stolsv+(OP)
Roko's Basilisk.
23. petera+fV1[view] [source] [discussion] 2023-11-18 12:34:24
>>kyleee+k51
Some people are into that sort of thing, I think this board just needs to get with the times.