zlacker

[parent] [thread] 15 comments
1. ademeu+(OP)[view] [source] 2023-11-17 22:13:10
Sam implied OpenAI had a major breakthrough a few weeks ago in a panel yesterday:

"Like 4 times now in the history of OpenAI, the most recent time was just in the last couple of weeks, I've gotten to be in the room when we sort of like, pushed the veil of ignorance back and the frontier of discovery forward. And getting to do that is like the professional honor of a lifetime".

https://www.youtube.com/watch?v=ZFFvqRemDv8#t=13m22s

This is going to sound terrible, but I really hope this is a financial or ethical scandal about Sam Altman personally, and that he did something terribly wrong, because the alternative is that this is about how close we are to true AGI.

Superhuman intelligence could be a wonderful thing if done right, but the world is not ready for a fast take-off, and it seems the governance structure of OpenAI certainly wouldn't be ready for it either.

replies(3): >>mi3law+D >>mardif+Y4 >>csomar+pk1
2. mi3law+D[view] [source] 2023-11-17 22:16:38
>>ademeu+(OP)
On the contrary, the video you linked to is likely to be part of the lie that ousted Altman.

He's also said very recently that to get to AGI "we need another breakthrough" (source https://garymarcus.substack.com/p/has-sam-altman-gone-full-g... )

To predicate a company as massive as OpenAI on a premise that you know not to be true seems like a big enough lie.

replies(1): >>ademeu+56
3. mardif+Y4[view] [source] 2023-11-17 22:36:24
>>ademeu+(OP)
Why would they fire him because they are close to AGI? I get that they would go into full panic mode, but firing the CEO wouldn't make sense, since OpenAI has AGI as an objective. The board wasn't exactly unaware of that.
replies(2): >>ademeu+p9 >>kunley+WA1
◧◩
4. ademeu+56[view] [source] [discussion] 2023-11-17 22:40:55
>>mi3law+D
Fair enough, but having worked for an extremely secretive FAANG myself, "we need XYZ" is the kind of thing I'd expect to hear if you have XYZ internally but don't want to reveal it yet. It could basically mean "we need XYZ relative to the previous product", or more specifically "we need a breakthrough other than LLMs, and we recently made a major breakthrough unrelated to LLMs". I'm not saying that's the case, but I don't think the signal-to-noise ratio in his answer is very high.

More importantly, OpenAI's claim (whether you believe it or not) has always been that their structure is optimised towards building AGI, and that everything else including the for-profit part is just a means to that end: https://openai.com/our-structure and https://openai.com/blog/openai-lp

Either the board doesn't actually share that goal, or what you are saying shouldn't matter to them. Sam isn't an engineer; it's not his job to make the breakthrough, only to keep the lights on until the researchers do, if you take their mission literally.

Unless you're arguing that Sam told the board they were closer to AGI than they really are (rather than hiding anything from them) in order to use the not-for-profit part of the structure in a way the board disagreed with, or some other financial shenanigans?

As I said, I hope you're right, because the alternative is a lot scarier.

replies(2): >>mi3law+7n1 >>vvndom+Zz1
◧◩
5. ademeu+p9[view] [source] [discussion] 2023-11-17 22:55:41
>>mardif+Y4
You're right, I was imagining that he decided to hide the breakthrough (or its full extent?) from the board and do things covertly for some reason that could warrant firing him, but that's a pretty unlikely prior: why would he hide it from the board in the first place, given AGI is literally the board's mission? One reason might be that he wants to slow down AGI progress until they've made more progress on safety and decided to hide it for that reason, and the board disagrees, but that sounds too much like a movie script to be real, and very unlikely!

As I said, while I do have a mostly positive opinion of Sam Altman (I disagree with him on certain things, but I trust him a lot more than the vast majority of tech CEOs and politicians, and I'd rather he be in the room when true superhuman intelligence is created than them), I hope this has nothing to do with AGI and that it's "just" a personal scandal.

replies(1): >>static+2J
◧◩◪
6. static+2J[view] [source] [discussion] 2023-11-18 01:59:30
>>ademeu+p9
Altman told people on Reddit that OpenAI had achieved AGI, and then, when they reacted in surprise, said he was "just meming".

https://www.independent.co.uk/tech/chatgpt-ai-agi-sam-altman...

I don't really get "meme" culture but is that really how someone who believed their company is going to create AGI soon would behave? Turning the possibility of the success of their mission into a punchline?

7. csomar+pk1[view] [source] 2023-11-18 06:52:44
>>ademeu+(OP)
No, we are not close to AGI. And AGIs can't leave machines yet, so humans will still be humans. This paranoia about a parroting machine is unwarranted.
replies(1): >>aidama+AC1
◧◩◪
8. mi3law+7n1[view] [source] [discussion] 2023-11-18 07:17:57
>>ademeu+56
I think my point is different than what you're breaking down here.

The only way OpenAI was able to sell MS and others on the 100x capped-profit structure and other BS was the AGI/superintelligence narrative. Sam was that salesman. And Sam does seem to sincerely believe that AGI and superintelligence are realities on OpenAI's path: a perfect salesman.

But then... maybe that AGI conviction was oversold? To a level some would have interpreted as "less than candid". That's my claim.

Speaking as a technologist actually building AGI up from animal-level intelligence by following evolution (and as a result totally discounting superintelligence), I do think Sam's AGI claims veered past the edge of reality into lies.

replies(1): >>dragon+Hp1
◧◩◪◨
9. dragon+Hp1[view] [source] [discussion] 2023-11-18 07:44:38
>>mi3law+7n1
Both factions in this appear publicly to see AGI as imminent, and mishandling its imminence as an existential threat; the dispute appears to be about what to do about that imminence. If they didn't both see it as imminent, the dispute would probably be less intense.

This has something of the character of a doctrinal dispute among true believers in a millennial cult.

replies(1): >>mi3law+kB1
◧◩◪
10. vvndom+Zz1[view] [source] [discussion] 2023-11-18 09:15:02
>>ademeu+56
Sam has been doing a pretty damn obvious charismatic cult leader thingy for quite a while now. The guy is dangerous as fuck and needs to be committed to an institution, not given any more money.
◧◩
11. kunley+WA1[view] [source] [discussion] 2023-11-18 09:22:51
>>mardif+Y4
I think they fired him because they are _not_ close to AGI (no one is), but he lied to potential investors about how close they are.

That goes against the popular sentiment about an upcoming "breakthrough", but it's also the most probable explanation given the characteristics of the approach they took.

◧◩◪◨⬒
12. mi3law+kB1[view] [source] [discussion] 2023-11-18 09:25:41
>>dragon+Hp1
I totally agree.

They must be under so much crazy pressure at OpenAI that it indeed is like a cult. I'm glad to see the snake finally eat itself. Hopefully that'll return some sanity to our field.

◧◩
13. aidama+AC1[view] [source] [discussion] 2023-11-18 09:38:03
>>csomar+pk1
you're right. agi has been here since GPT-3 at the least.

it's honestly sad when people who have clearly not used gpt4 call it a parroting machine. that is incredibly ignorant.

replies(1): >>ric2b+8N1
◧◩◪
14. ric2b+8N1[view] [source] [discussion] 2023-11-18 11:08:20
>>aidama+AC1
Let me know when GPT can even play chess without making invalid moves; then we can talk about how capable it is of logical thinking.
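
For concreteness, here's a minimal sketch of how one could test that, using the python-chess library to check legality; ask_model() is a hypothetical stand-in for whatever LLM API you'd actually call:

    # Minimal sketch: find the first illegal move a model produces.
    # ask_model() is a hypothetical placeholder for an LLM call;
    # legality checking is done with the python-chess library.
    import chess

    def ask_model(san_history: list[str]) -> str:
        """Placeholder: return the model's next move in SAN, e.g. 'Nf3'."""
        raise NotImplementedError

    def first_illegal_move(max_plies: int = 60) -> str | None:
        board = chess.Board()
        history: list[str] = []
        while not board.is_game_over() and len(history) < max_plies:
            reply = ask_model(history).strip()
            try:
                # push_san() raises ValueError on illegal, ambiguous,
                # or unparsable SAN, so it doubles as the validity check.
                board.push_san(reply)
            except ValueError:
                return reply  # the model produced an invalid move
            history.append(reply)
        return None  # every move played was legal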
replies(1): >>kliber+Sz2
◧◩◪◨
15. kliber+Sz2[view] [source] [discussion] 2023-11-18 16:16:05
>>ric2b+8N1
Let me know when you can prove that "logical" and "intelligent" were ever stored on the same shelf, much less that they're meaningfully equivalent. If anything, we know that making a general intelligence (the only natural example of intelligence we have) emulate logic is crazily inefficient and susceptible to biases that are entirely non-existent (save for bugs) in much simpler (and more energy-efficient) implementations of said logic.
replies(1): >>ric2b+H63
◧◩◪◨⬒
16. ric2b+H63[view] [source] [discussion] 2023-11-18 19:05:44
>>kliber+Sz2
An AGI that can't even play a game of chess, a game that children learn to play, without making an invalid move doesn't really sound like an AGI.