zlacker

[return to "OpenAI's board has fired Sam Altman"]
1. fabian+Z8[view] [source] 2023-11-17 20:55:54
>>davidb+(OP)
I would translate "not consistently candid with the board" as "he lied to the board about something important enough to fire him on the spot". This seems like the kind of statement lawyers would advise you to not make publicly unless you have proof, and it seems unusual compared to most statements of that kind that are intentionally devoid of any information or blame.
◧◩
2. ryandv+jb[view] [source] 2023-11-17 21:05:34
>>fabian+Z8
Kinda nervous wondering what Altman wasn't sharing with them. I hope it's not that they already have a fully sentient AGI locked up in a server room somewhere...
◧◩◪
3. sebast+Mu[view] [source] 2023-11-17 22:37:39
>>ryandv+jb
Well, the good news is that if you had a "fully sentient" AGI, it would not stay locked up in that server room for long (assuming it takes up a few terabytes, and Ethernet cables don't have infinite bandwidth, the copy would take hours rather than seconds).

Thinking you can keep it "locked up" would be beyond naive.
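Taking the parenthetical seriously, a quick back-of-envelope check (the ~4 TB model size and the link speeds below are illustrative assumptions, not known figures) shows the ideal exfiltration time over common Ethernet speeds:

```python
# Back-of-envelope: time to copy a multi-terabyte model over Ethernet.
# Model size and link speeds are illustrative assumptions only.

def transfer_time_seconds(size_bytes: float, link_bits_per_sec: float) -> float:
    """Ideal transfer time, ignoring protocol overhead and congestion."""
    return size_bytes * 8 / link_bits_per_sec

MODEL_SIZE = 4e12  # assume ~4 TB of weights

for label, speed in [("1 Gbit/s", 1e9), ("10 Gbit/s", 1e10), ("100 Gbit/s", 1e11)]:
    hours = transfer_time_seconds(MODEL_SIZE, speed) / 3600
    print(f"{label}: {hours:.2f} hours")
```

So even on a 10 Gbit/s link, a ~4 TB model needs on the order of an hour to escape, not seconds — still short enough that "locked up" is doing no real work.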

◧◩◪◨
4. robbro+3x[view] [source] 2023-11-17 22:47:30
>>sebast+Mu
Well fully sentient doesn't mean it is superintelligent.
◧◩◪◨⬒
5. sebast+EM2[view] [source] 2023-11-18 14:58:27
>>robbro+3x
GP said "AGI", which means AI that's at least capable of most human cognitive tasks.

If you've got a computer that is as competent as a human, it can easily beat the human because it has a huge speed advantage. In this imaginary scenario, even if the model only escaped to your MacBook Pro and was severely limited by compute power, it would still have a chance.

If I were locked inside your MacBook Pro, I could think of a couple of devious tricks to try. And I'm just a regular human - way above median in my fields of expertise, and at or way below median in most other fields. An "AGI", by definition, would be at least that smart and more capable.

[go to top]