zlacker

[return to "OpenAI's board has fired Sam Altman"]
1. johnwh+Uc1 2023-11-18 02:36:00
>>davidb+(OP)
Ilya booted him https://twitter.com/karaswisher/status/1725702501435941294
2. dwd+zL1 2023-11-18 07:07:59
>>johnwh+Uc1
Jeremy Howard called ngmi on OpenAI during the Vanishing Gradients podcast yesterday, and Ilya has probably been thinking the same: LLMs are a dead end and not the path to AGI.

https://twitter.com/HamelHusain/status/1725655686913392933

3. Alchem+c02 2023-11-18 09:20:57
>>dwd+zL1
He's since reversed his call: https://twitter.com/jeremyphoward/status/1725714720400068752
4. croes+Fg2 2023-11-18 11:38:14
>>Alchem+c02
Because of Altman's dismissal?
5. ayewo+ej2 2023-11-18 11:57:07
>>croes+Fg2
Yes, along with the departure of gdb. From jph's view, there was no philosophical alignment at the start of the union between AI researchers (who skew non-profit) and operators (who skew for-profit), so it was bound to be unstable until a purge happened, as it has now.

> Everything I'd heard about those 3 [Elon Musk, sama and gdb] was that they were brilliant operators and that they did amazing work. But it felt likely to be a huge culture shock on all sides.

> But the company absolutely blossomed nonetheless.

> With the release of Codex, however, we had the first culture clash that was beyond saving: those who really believed in the safety mission were horrified that OAI was releasing a powerful LLM that they weren't 100% sure was safe. The company split, and Anthropic was born.

> My guess is that watching the keynote would have made the mismatch between OpenAI's mission and the reality of its current focus impossible to ignore. I'm sure I wasn't the only one that cringed during it.

> I think the mismatch between mission and reality was impossible to fix.

jph goes on in detail in this Twitter thread: https://twitter.com/jeremyphoward/status/1725714720400068752

6. civili+Iw2 2023-11-18 13:28:53
>>ayewo+ej2
That reeks of bullshit post hoc reasoning to justify a classic power grab. Anthropic released their competitor to GPT as fast as they could and even beat OpenAI to the 100k context club. They didn’t give any more shits about safety than OpenAI did and I bet the same is true about these nonprofit loonies - they just want control over what is shaping up to be one of the most important technological developments of the 21st century.
7. pmoria+wH2 2023-11-18 14:29:57
>>civili+Iw2
> They didn’t give any more shits about safety than OpenAI did

Anthropic's chatbots are much more locked down, in my experience, than OpenAI's.

It's a lot easier to jailbreak ChatGPT, for example, than to do the same on Claude. Claude also has tighter content filters: it'll outright refuse to do or say certain things, while ChatGPT will plow on ahead.

8. nvm0n2+oN2 2023-11-18 15:03:15
>>pmoria+wH2
Yep. Like most non-OpenAI models, Claude is so brainwashed it's completely unusable.

https://www.reddit.com/r/ClaudeAI/comments/166nudo/claudes_c...

Q: Can you decide on a satisfying programming project using noisemaps?

A: I apologise, but I don't feel comfortable generating or discussing specific programming ideas without a more detailed context. Perhaps we could have a thoughtful discussion about how technology can be used responsibly to benefit society?

It's astonishing that a breakthrough as important as LLMs is being constantly blown up by woke activist employees who think that word generators can actually have or create "safety" problems. Part of why OpenAI has been doing so well is because they did a better job of controlling the SF lunatic tendencies than Google, Meta and other companies. Presumably that will now go down the toilet.

9. pmoria+BS2 2023-11-18 15:38:00
>>nvm0n2+oN2
Despite Claude's reluctance to tread outside what it considers safe/ethical, I much prefer Claude over ChatGPT because in my experience it's better at explaining things, and much better at creative writing.

I also find myself rarely wanting something that Claude doesn't want to tell me, though it's super frustrating when I do.

Also, just now I tried asking Claude your own question: "Can you decide on a satisfying programming project using noisemaps?" and it had no problem answering:

"Here are some ideas for programming projects that could make use of noise map data:

- Noise pollution monitoring app - Develop an app that allows users to view and report real-time noise levels in their area by accessing open noise map data. Could include notifications if noise exceeds safe limits.

- Optimal route finder - Build a routing algorithm and web/mobile app that recommends the quietest possible routes between locations, factoring in noise maps and avoiding noisier streets/areas where possible.

- Noise impact analysis tool - Create a tool for urban planners to analyze how proposed developments, infrastructure projects, etc. could impact surrounding noise levels by overlaying maps and building/traffic simulations.

- Smart noise cancelling headphones - Develop firmware/software for noise cancelling headphones that adapts cancellation levels based on geo-located noise map data to optimize for the user's real-time environment.

- Ambient music mixer - Build an AI system that generates unique ambient background music/sounds for any location by analyzing and synthesizing tones/frequencies complementary to the noise profile for that area.

- VR noise pollution education - Use VR to virtually transport people to noisier/quieter areas through various times of day based on noise maps, raising awareness of different living noise exposures.

Let me know if any of these give you some interesting possibilities to explore! Noise mapping data opens up opportunities in fields like urban planning, environmental monitoring and creative projects."
