zlacker

[return to "We have reached an agreement in principle for Sam to return to OpenAI as CEO"]
1. shubha+B7[view] [source] 2023-11-22 06:50:16
>>staran+(OP)
At the end of the day, we still don't know exactly what happened and probably never will. However, it seems clear there was a rift between Rapid Commercialization (Team Sam) and Upholding the Original Principles (Team Helen/Ilya). I think the tensions had been brewing for quite a while, as is evident from an article written even before GPT-3 [1].

> Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration

Team Helen acted in panic, but they believed they would win since they were upholding the principles the org was founded on. They never had a chance. I think only a minority of the general public truly cares about AI Safety; the rest are happy seeing ChatGPT help with their homework. I know it's easy to ridicule the sheer stupidity with which the board acted (and justifiably so), but take a moment to consider the other side. If you truly believed that superhuman AI was near, and that it could act with malice, wouldn't you try to slow things down a bit?

Honestly, I can't take the threat seriously myself. But I do want to understand it more deeply than before. Maybe it isn't as devoid of substance as I thought. Hopefully there won't come a day when Team Helen gets to say, "This is exactly what we wanted to prevent."

[1]: https://www.technologyreview.com/2020/02/17/844721/ai-openai...

◧◩
2. arkety+p9[view] [source] 2023-11-22 07:02:03
>>shubha+B7
For all the talk about responsible progress, the irony of their inability to align even their own incentives in this enterprise deserves ridicule. It's a big blow to their credibility and calls into question whatever ethical concerns they hold.
◧◩◪
3. concor+Dm[view] [source] 2023-11-22 08:39:39
>>arkety+p9
Alignment is considered an extremely hard problem for a reason. It's already nigh impossible when you're dealing with humans.

Btw: do you think ridicule would be helpful here?

◧◩◪◨
4. arkety+tn[view] [source] 2023-11-22 08:45:58
>>concor+Dm
I can see how ridicule of this specific instance could be the best medicine for an optimal outcome, even by a utilitarian argument, which, by the way, I generally don't like to make. It is indeed nigh impossible, which is kind of my point. They could have shown more humility. If anything, this whole debacle has been a moral victory for e/acc, seeing how even the brightest minds are at a loss dealing with alignment.
◧◩◪◨⬒
5. Feepin+9r[view] [source] 2023-11-22 09:19:46
>>arkety+tn
I don't understand how the conclusion of this is "so we should proceed with AI" rather than "so we should immediately outlaw all foundation model training". Clearly corporate self-governance has failed completely.