zlacker

[return to "We have reached an agreement in principle for Sam to return to OpenAI as CEO"]
1. shubha+B7[view] [source] 2023-11-22 06:50:16
>>staran+(OP)
At the end of the day, we still don't know what exactly happened and probably never will. However, it seems clear there was a rift between Rapid Commercialization (Team Sam) and Upholding the Original Principles (Team Helen/Ilya). I think the tensions had been brewing for quite a while, as is evident from an article written even before GPT-3 [1].

> Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration

Team Helen acted in panic, but they believed they would win since they were upholding the principles the org was founded on. They never had a chance, though. I think only a minority of the general public truly cares about AI safety; the rest are happy seeing ChatGPT help with their homework. I know it's easy to ridicule the sheer stupidity with which the board acted (and justifiably so), but take a moment to consider the other side. If you truly believed that superhuman AI was near, and that it could act with malice, wouldn't you try to slow things down a bit?

Honestly, I myself can't take the threat seriously. But I do want to understand it more deeply than before. Maybe it isn't as baseless as I thought it to be. Hopefully, there won't be a day when Team Helen gets to say, "This is exactly what we wanted to prevent."

[1]: https://www.technologyreview.com/2020/02/17/844721/ai-openai...

◧◩
2. silenc+59[view] [source] 2023-11-22 07:00:14
>>shubha+B7
Honestly "Safety" is the word in the AI talk that nobody can quantify or qualify in any way when it comes to these conversations.

I've stopped caring about anyone who uses the word "safety". It's a vague, hand-wavy way to paint your opponents as dangerous without any sort of proof or agreed-upon standard for who/what/why makes something a "safety" issue.

◧◩◪
3. fsloth+dc[view] [source] 2023-11-22 07:20:47
>>silenc+59
Exactly this. The ’safety’ people sound like delusional quacks.

The "but they are so smart…" argument is BS. Nobody can be presumed to be exceptionally good outside their own specific niche. Linus Pauling and vitamin C.

Until we have at least a hint of a mechanistic model of an AI-driven extinction event, nobody can be an expert on it, and all talk in that vein is self-important, delusional hogwash.

Nobody is pro-apocalypse! We are drowning in things an AI could really help with.

With the amount of energy needed for any sort of meaningful AI results, you can always pull the plug if stuff gets too weird.

◧◩◪◨
4. JumpCr+pd[view] [source] 2023-11-22 07:30:03
>>fsloth+dc
Now do nuclear.
◧◩◪◨⬒
5. fsloth+ef[view] [source] 2023-11-22 07:43:18
>>JumpCr+pd
War or power production? :)

Those are different things.

Nuclear war is exactly the kind of thing for which we do have excellent expertise. Unlike AI safety, which seems more like a bogus cult atm.

Nuclear power would be the best form of large-scale power production for many situations. And at smaller scales too, in the form of emerging SMRs.

◧◩◪◨⬒⬓
6. JumpCr+7h[view] [source] 2023-11-22 07:57:25
>>fsloth+ef
I suppose the whole regime. I'm not an AI safetyist, mostly because I don't think we're anywhere close to AI. But if you were sitting on the precipice of atomic power, as AI safetyists believe they are, wouldn't caution be prudent?