zlacker

[return to "Elon Musk sues Sam Altman, Greg Brockman, and OpenAI [pdf]"]
1. BitWis+3T 2024-03-01 16:30:05
>>modele+(OP)
Wouldn't you have to prove damages in a lawsuit like this? What damages does Musk personally suffer if OpenAI has in fact broken their contract?
2. Kepler+yW 2024-03-01 16:46:58
>>BitWis+3T
A non-profit took his money, decided to go for-profit, and now competes with the AI efforts of his own companies?
3. a_wild+3Z 2024-03-01 16:58:12
>>Kepler+yW
Yeah, OpenAI basically grafted a for-profit entity onto the non-profit to bypass their entire mission. They’re now extremely closed AI, and are valued at $80+ billion.

If I donated millions to them, I’d be furious.

4. api+901 2024-03-01 17:02:55
>>a_wild+3Z
It's almost like the guy behind an obvious grift like Worldcoin doesn't always work in good faith.

What gives me even less sympathy for Altman is that he took OpenAI, whose mission was open AI, and not only turned it closed but then immediately started a world tour weaponizing fear-mongering to convince governments to effectively outlaw actually open AI.

5. Spooky+p11 2024-03-01 17:10:31
>>api+901
Everything around it seems so shady.

The strangest thing to me is that the shadiness seems completely unnecessary, and it really calls for a very critical eye toward anything associated with OpenAI. Google seems like the good guy in AI lol.

6. ethbr1+841 2024-03-01 17:21:15
>>Spooky+p11
Google, the one who haphazardly allows diversity prompt rewriting to be layered on top of their models, with seemingly no internal adversarial testing or public documentation?
7. ben_w+c61 2024-03-01 17:29:24
>>ethbr1+841
"We had a bug" is shooting fish in a barrel when it comes to software.

I was genuinely concerned about their behaviour towards Timnit Gebru, though.

8. concor+YA1 2024-03-01 19:57:12
>>ben_w+c61
It's specifically been trained to be, well, the best term is "woke" (despite the word's vagueness, LLMs mean you can actually have alignment towards very fuzzy ideas). They have started fixing things (e.g. it no longer changes between "would be an immense tragedy" and "that's a complex issue" depending on what ethnicity you talk about when asking whether it would be sad if that ethnicity went extinct), but I suspect they'll still end up a lot more biased than ChatGPT.