zlacker

[return to "Elon Musk sues Sam Altman, Greg Brockman, and OpenAI [pdf]"]
1. HarHar+vu1[view] [source] 2024-03-01 19:23:01
>>modele+(OP)
Any competent lawyer is going to get Musk on the stand reiterating his opinions about the danger of AI. If the tech really is dangerous, then being more closed is arguably in the public's best interest, and this is certainly the reason OpenAI have previously given.

Not saying I agree that being closed source serves the public good, although one could certainly argue that accelerating bad actors' efforts to catch up would not be a positive.

◧◩
2. starbu+nw1[view] [source] 2024-03-01 19:33:01
>>HarHar+vu1
> If the tech really is dangerous then being more closed arguably is in the public's best interest

If that were true, then they shouldn't have started off that way to begin with. You can't have it both ways: either you pursue your goal of being open (as the name implies), or the way you set yourself up was ill-suited all along.

◧◩◪
3. HarHar+2N1[view] [source] 2024-03-01 21:12:16
>>starbu+nw1
Their position evolved. Many people at the time disagreed that open-sourcing AGI - putting it in the hands of many people - was the best way to mitigate the potential danger. Note that OpenAI took this original stance before they started playing with transformers, before they had anything that was beginning to look like AI/AGI. It was around the time of GPT-2 that they said "this might be dangerous, we're going to hold it back".

There's nothing wrong with changing your opinion based on fresh information.

◧◩◪◨
4. pauldd+UB2[view] [source] 2024-03-02 05:07:38
>>HarHar+2N1
> fresh information

What fresh information?

◧◩◪◨⬒
5. HarHar+8f3[view] [source] 2024-03-02 13:38:08
>>pauldd+UB2
They were founded on the premise that some large player (specifically Google) would develop AGI, keep it closed, and maybe not develop it in the best interests (safety) of the public. The founding charter was essentially to try to ensure that AI was developed safely, which at the time they believed would be best done by making it open source and available to everyone (this was contentious from day one anyway - a bit like saying the best defense against bio-hackers is to open source the DNA of Ebola).

What goes unsaid, perhaps, is that back then (before the transformer had even been invented, before AlphaGo), what people might have imagined AGI to look like (some kind of sterile super-intelligence) was very different from the LLM-based "AGI" that eventually emerged.

So, what changed? What was the fresh information that warranted concluding that open source was not the safest approach?

I'd say a few things.

1) As it turned out, OpenAI themselves were the first to develop a fledgling AGI, so they were never in the role they had envisaged: open-sourcing something to counteract an evil closed-source competitor.

2) The LLM-based form of AGI that OpenAI developed was really not what anyone imagined it would be. The danger of what OpenAI developed, so far, isn't some doomsday "AI takes over the world" scenario, but rather that it's inherently a super-toxic chatbot (did you see OpenAI's examples of how it behaved before RLHF?!) that is potentially disruptive and negative to society because of what it is rather than because of its intelligence. The danger (and the remedy) is not, so far, what OpenAI originally thought it would be.

3) OpenAI have been quite open about this in the past: the departure of Musk, their major source of funds, forced OpenAI to change how they were funded. At the same time (around GPT-2), it was becoming evident how extraordinarily expensive this unanticipated path to AGI was going to be to keep developing (Altman has indicated a cost of $100M+ to train GPT-3, maybe including hardware). They were no longer looking for a benefactor like Musk, willing and able to donate a few tens of millions of dollars, but needed a partner able to put billions into the effort, which meant an investor expecting a return, and hence the corporate structure change to accommodate that.
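For a sense of scale, here's a rough back-of-envelope on the raw compute for a single GPT-3 training run. A sketch only: the parameter and token counts are from the GPT-3 paper, while the throughput, utilization, and price figures are my own assumptions, not OpenAI's accounting.

    # Back-of-envelope estimate of GPT-3's raw training compute cost.
    # params/tokens are from the GPT-3 paper; the hardware throughput,
    # utilization, and price below are illustrative assumptions.

    params = 175e9   # GPT-3 parameter count
    tokens = 300e9   # approximate training tokens

    # Common approximation: training FLOPs ~ 6 * parameters * tokens
    train_flops = 6 * params * tokens  # ~3.15e23 FLOPs

    gpu_peak_flops = 125e12   # assumed ~125 TFLOPS (V100-class tensor cores)
    utilization = 0.30        # assumed real-world utilization
    usd_per_gpu_hour = 3.00   # assumed cloud rental price

    gpu_hours = train_flops / (gpu_peak_flops * utilization) / 3600
    compute_cost = gpu_hours * usd_per_gpu_hour

    print(f"{gpu_hours:,.0f} GPU-hours")         # ~2.3 million GPU-hours
    print(f"${compute_cost:,.0f} compute cost")  # single-digit millions of USD

On those assumptions a single run lands in the single-digit millions, so a $100M+ figure presumably folds in hardware purchases, failed and exploratory runs, and staff - consistent with the "maybe including hardware" caveat above.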

◧◩◪◨⬒⬓
6. pauldd+jj3[view] [source] 2024-03-02 14:18:40
>>HarHar+8f3
> some large player (specifically Google) would develop AGI, keep it closed, and maybe not develop it in the best interests (safety) of the public

https://youtu.be/1LVt49l6aP8
