
Elon Musk sues Sam Altman, Greg Brockman, and OpenAI [pdf]
1. HarHar+vu1 | 2024-03-01 19:23:01
>>modele+(OP)
Any competent lawyer is going to get Musk on the stand reiterating his opinions about the danger of AI. If the tech really is dangerous, then being more closed is arguably in the public's best interest, and this is certainly the reason OpenAI have previously given.

Not saying I agree that being closed source serves the public good, although one could certainly argue that accelerating bad actors' efforts to catch up would not be a positive.

2. starbu+nw1 | 2024-03-01 19:33:01
>>HarHar+vu1
> If the tech really is dangerous, then being more closed is arguably in the public's best interest

If that were true, they shouldn't have started off that way to begin with. You can't have it both ways: either you are pursuing your goal of being open (as the name implies), or the way you set yourself up was ill-suited all along.

3. HarHar+2N1 | 2024-03-01 21:12:16
>>starbu+nw1
Their position evolved. Many people at the time disagreed that open-sourcing AGI - putting it in the hands of many people - was the best way to mitigate the potential danger. Note that OpenAI's original stance predates their work with transformers, before they had anything that was beginning to look like AI/AGI. It was around the time of GPT-2 that they said "this might be dangerous; we're going to hold it back."

There's nothing wrong with changing your opinion based on fresh information.

4. starbu+0R1 | 2024-03-01 21:38:25
>>HarHar+2N1
> There's nothing wrong with changing your opinion based on fresh information.

I don't really get that twist. What "fresh" information suddenly arrived? The structure they gave themselves was chosen explicitly with the risks of future developments in mind; in fact, as outlined in the complaint, that is precisely why they chose that specific structure. How can the existence of those risks now be called new information? It was the whole premise for creating the organization in that form to begin with!

5. HarHar+Yj2 | 2024-03-02 01:20:09
>>starbu+0R1
The fresh information was seeing who built an AGI, and what it looks like.

When OpenAI was founded, it was expected that AGI would most likely come out of Google, with OpenAI doing the world a favor by replicating this wonderful technology and giving it to the masses. One might have imagined AGI would be some Spock-like, stone-cold superintelligence.

As it turns out, OpenAI themselves were the first to create something AGI-like, so the role they had envisaged for themselves was completely flipped. Not only that, but this AGI wasn't an engineered intelligence but a stochastic parrot: trained on the internet, incredibly toxic, and as much a liability as a powerful tool.

OpenAI's founding mission of AI democracy has turned into one of protecting us from the bullshitting psychopath they themselves created, while at the same time raising the billions of dollars it takes to iterate on something so dumb it needs to be retrained from scratch every time you want to update it.
