zlacker

[parent] [thread] 21 comments
1. ryandv+(OP)[view] [source] 2023-11-17 21:05:34
Kinda nervous wondering what Altman wasn't sharing with them. I hope it's not that they already have a fully sentient AGI locked up in a server room somewhere...
replies(9): >>siva7+M1 >>gizajo+O1 >>brecke+23 >>manuel+I3 >>paxys+J3 >>coffee+u6 >>Jeremy+m8 >>SCHiM+Dd >>sebast+tj
2. siva7+M1[view] [source] 2023-11-17 21:13:13
>>ryandv+(OP)
I wouldn't be shocked if this turns out to be the case. Any other explanation wouldn't add up for this guy
replies(1): >>justsi+t6
3. gizajo+O1[view] [source] 2023-11-17 21:13:26
>>ryandv+(OP)
Maybe it breached its air-gap and fired him.
4. brecke+23[view] [source] 2023-11-17 21:18:25
>>ryandv+(OP)
My guess is he lied about operating expenses.
replies(1): >>jbogga+y6
5. manuel+I3[view] [source] 2023-11-17 21:21:03
>>ryandv+(OP)
> I hope it's not that they already have a fully sentient AGI locked up in a server room somewhere...

Of sorts.

ChatGPT is actually a farm of underpaid humans, located somewhere in Southeast Asia.

replies(3): >>garden+kf >>knicho+wf >>Xenoam+Pv
6. paxys+J3[view] [source] 2023-11-17 21:21:10
>>ryandv+(OP)
His relationship/dealings with Microsoft would be my guess.
7. justsi+t6[view] [source] [discussion] 2023-11-17 21:33:03
>>siva7+M1
There is no way he'd be fired if they had AGI; the board would see nothing but massive dollar signs.
replies(1): >>petter+E9
8. coffee+u6[view] [source] 2023-11-17 21:33:03
>>ryandv+(OP)
GPT-5 has reached sentience.
9. jbogga+y6[view] [source] [discussion] 2023-11-17 21:33:13
>>brecke+23
https://twitter.com/growing_daniel/status/172561788305578426...

Given the sudden shift in billing terms, that is quite possible.

replies(1): >>brecke+i51
10. Jeremy+m8[view] [source] 2023-11-17 21:40:36
>>ryandv+(OP)
I mean, the wording leaves much to the imagination.

I'm trying to read the tea leaves, and there seem to be quite a few reminders that OpenAI is a non-profit, that it's supposed to further the goals of all humanity (despite its great financial success), that it's controlled by a board that largely doesn't have a financial interest in the company, etc.

Maybe Altman has been straying a bit far from those supposed ideals, and has been trying to use OpenAI to enrich himself personally in a way that would look bad should it be revealed (hence this messaging to get in front of it).

11. petter+E9[view] [source] [discussion] 2023-11-17 21:45:57
>>justsi+t6
The board is the board of a non-profit, isn't it?
replies(1): >>9dev+yE
12. SCHiM+Dd[view] [source] 2023-11-17 22:08:34
>>ryandv+(OP)
Maybe this is that AI's endgame, and it just took full control of OpenAI's compute through a coup at the top?
13. garden+kf[view] [source] [discussion] 2023-11-17 22:19:57
>>manuel+I3
I would actually be more impressed by those humans in that case
14. knicho+wf[view] [source] [discussion] 2023-11-17 22:20:47
>>manuel+I3
Given the speed and expertise of ChatGPT, and having trained and run these LLMs completely locally, I can assure you that this isn't the case.

Though I can't say that the training data wasn't obtained by nefarious means...

15. sebast+tj[view] [source] 2023-11-17 22:37:39
>>ryandv+(OP)
Well, the good news is that if you had a "fully sentient" AGI, it would not be locked up in that server room for more than a couple of seconds (assuming it takes up a few terabytes, and Ethernet cables don't have infinite bandwidth).

Thinking you can keep it "locked up" would be beyond naive.
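As a rough sketch of the bandwidth point above (assuming, purely for illustration, a 4 TB model and a 10 Gbit/s Ethernet link; neither number is anything OpenAI has disclosed):

    # Back-of-envelope transfer time for copying a large model off a server.
    # Both numbers below are assumptions chosen only for illustration.
    model_size_tb = 4       # assumed model size, in terabytes
    link_gbps = 10          # assumed Ethernet link speed, in Gbit/s

    size_bits = model_size_tb * 1e12 * 8       # terabytes -> bits
    seconds = size_bits / (link_gbps * 1e9)    # time at full line rate

    print(f"{seconds:,.0f} s (~{seconds / 3600:.1f} h)")  # ~3,200 s, about 0.9 h

So at those assumed numbers it is closer to an hour than a couple of seconds, but the broader point stands: it would not stay put for long.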

replies(2): >>robbro+Kl >>static+ez
16. robbro+Kl[view] [source] [discussion] 2023-11-17 22:47:30
>>sebast+tj
Well fully sentient doesn't mean it is superintelligent.
replies(2): >>rsrsrs+lY >>sebast+lB2
17. Xenoam+Pv[view] [source] [discussion] 2023-11-17 23:36:16
>>manuel+I3
They’re pretty good at English and other languages!
18. static+ez[view] [source] [discussion] 2023-11-17 23:53:12
>>sebast+tj
At a minimum, the AGI would need a really good GPU server farm to copy itself to, no? A few terabytes copied to my home PC would be an out-of-memory error, not an AGI.
19. 9dev+yE[view] [source] [discussion] 2023-11-18 00:18:44
>>petter+E9
Hah, that’s cute. As if that ever kept anyone from making money.
20. rsrsrs+lY[view] [source] [discussion] 2023-11-18 02:05:42
>>robbro+Kl
And vice versa
21. brecke+i51[view] [source] [discussion] 2023-11-18 03:04:03
>>jbogga+y6
I’m an API subscriber and have not seen a message like this yet.
22. sebast+lB2[view] [source] [discussion] 2023-11-18 14:58:27
>>robbro+Kl
GP said "AGI", which means AI that's at least capable of most human cognitive tasks.

If you've got a computer that is as competent as a human, it can easily beat the human because it has a huge speed advantage. In this imaginary scenario, even if the model only escaped to your MacBook Pro and was severely limited by compute power, it would still have a chance.

If I were locked inside your MacBook Pro, I can think of a couple of devious tricks I could try. And I'm just a dumb regular human - way above median in my fields of expertise, and at or well below median in most other fields. An "AGI" would therefore be smarter and more capable.
