zlacker

Elon Musk sues Sam Altman, Greg Brockman, and OpenAI [pdf]
1. BitWis+3T 2024-03-01 16:30:05
>>modele+(OP)
Wouldn't you have to prove damages in a lawsuit like this? What damages does Musk personally suffer if OpenAI has in fact broken their contract?
2. Kepler+yW 2024-03-01 16:46:58
>>BitWis+3T
A non-profit took his money and then decided to go for-profit and compete with the AI efforts of his own companies?
3. a_wild+3Z 2024-03-01 16:58:12
>>Kepler+yW
Yeah, OpenAI basically grafted a for-profit entity onto the non-profit to bypass their entire mission. They’re now extremely closed AI, and are valued at $80+ billion.

If I donated millions to them, I’d be furious.

4. api+901 2024-03-01 17:02:55
>>a_wild+3Z
It's almost like the guy behind an obvious grift like Worldcoin doesn't always work in good faith.

What gives me even less sympathy for Altman is that he took OpenAI, whose mission was open AI, and not only turned it closed but then immediately started a world tour, weaponizing fear-mongering to convince governments to effectively outlaw actually open AI.

5. Spooky+p11 2024-03-01 17:10:31
>>api+901
Everything around it seems so shady.

The strangest thing to me is that the shadiness seems completely unnecessary, and it really demands a critical eye toward anything associated with OpenAI. Google seems like the good guy in AI lol.

6. ethbr1+841 2024-03-01 17:21:15
>>Spooky+p11
Google, the one who haphazardly allows diversity prompt rewriting to be layered on top of their models, with seemingly no internal adversarial testing or public documentation?
7. ben_w+c61 2024-03-01 17:29:24
>>ethbr1+841
"We had a bug" is shooting fish in a barrel, when it comes to software.

I was genuinely concerned about their behaviour towards Timnit Gebru, though.

8. ethbr1+5o1 2024-03-01 18:51:59
>>ben_w+c61
If you build a black box, a bug that seems like it should have been caught in testing comes through, and there's limited documentation that the black box was programmed to behave that way, that makes me nervous.

Granted, stupid fun-sy public-facing image generation project.

But I'm more worried about the lack of transparency around the black box, and about what internal adversarial testing is actually being applied to it.

Google has an absolute right to build a model however they want, but they should also be prepared to proactively document how it functions, what it should and should not be used for, and any guardrails they put around it.

Is there anywhere that says "Given a prompt, Bard will attempt to deliver a racially and sexually diverse result set, and that will take precedence over historical facts"?

By all means, I support them building that model! But that's a pretty big 'if' that should be clearly documented.

9. prepen+st1 2024-03-01 19:16:08
>>ethbr1+5o1
> Google has an absolute right to build a model however they want

I don’t think anyone is arguing Google doesn’t have the right. The argument is that Google is incompetent and stupid for creating and releasing such a poor model.

10. ethbr1+vU1 2024-03-01 22:01:52
>>prepen+st1
I try to call out my intent explicitly, because I hate it when hot-button issues get talked past.

IMHO, there are distinct technical/documentation (does it?) and ethical (should it?) issues here.

Better to keep them separate when discussing.
