zlacker

[return to "OpenAI's board has fired Sam Altman"]
1. johnwh+Uc1[view] [source] 2023-11-18 02:36:00
>>davidb+(OP)
Ilya booted him https://twitter.com/karaswisher/status/1725702501435941294
◧◩
2. dwd+zL1[view] [source] 2023-11-18 07:07:59
>>johnwh+Uc1
Jeremy Howard called ngmi on OpenAI during the Vanishing Gradients podcast yesterday, and Ilya has probably been thinking the same: LLM is a dead-end and not the path to AGI.

https://twitter.com/HamelHusain/status/1725655686913392933

◧◩◪
3. Alchem+c02[view] [source] 2023-11-18 09:20:57
>>dwd+zL1
He's since reversed his call: https://twitter.com/jeremyphoward/status/1725714720400068752
◧◩◪◨
4. croes+Fg2[view] [source] 2023-11-18 11:38:14
>>Alchem+c02
Because of Altman's dismissal?
◧◩◪◨⬒
5. ayewo+ej2[view] [source] 2023-11-18 11:57:07
>>croes+Fg2
Yes, along with the departure of gdb. From jph's view, there was no philosophical alignment at the start of the union between AI Researchers (that skew non-profit) and operators (that skew for-profit) so it was bound to be unstable, until a purging happens as it had now.

> Everything I'd heard about those 3 [Elon Musk, sama and gdb] was that they were brilliant operators and that they did amazing work. But it felt likely to be a huge culture shock on all sides.

> But the company absolutely blossomed nonetheless.

> With the release of Codex, however, we had the first culture clash that was beyond saving: those who really believed in the safety mission were horrified that OAI was releasing a powerful LLM that they weren't 100% sure was safe. The company split, and Anthropic was born.

> My guess is that watching the keynote would have made the mismatch between OpenAI's mission and the reality of its current focus impossible to ignore. I'm sure I wasn't the only one that cringed during it.

> I think the mismatch between mission and reality was impossible to fix.

jph goes on in detail in this Twitter thread: https://twitter.com/jeremyphoward/status/1725714720400068752

◧◩◪◨⬒⬓
6. civili+Iw2[view] [source] 2023-11-18 13:28:53
>>ayewo+ej2
That reeks of bullshit post hoc reasoning to justify a classic power grab. Anthropic released their competitor to GPT as fast as they could and even beat OpenAI to the 100k context club. They didn’t give any more shits about safety than OpenAI did and I bet the same is true about these nonprofit loonies - they just want control over what is shaping up to be one of the most important technological developments of the 21st century.
◧◩◪◨⬒⬓⬔
7. pmoria+wH2[view] [source] 2023-11-18 14:29:57
>>civili+Iw2
> They didn’t give any more shits about safety than OpenAI did

Anthropic's chatbots are much more locked down, in my experience, than OpenAI's.

It's a lot easier to jailbreak ChatGPT, for example, than to do the same on Claude, and Claude has tighter content filters where it'll outright refuse to do/say certain things while ChatGPT will plow on ahead.

[go to top]