zlacker

[return to "We have reached an agreement in principle for Sam to return to OpenAI as CEO"]
1. shubha+B7[view] [source] 2023-11-22 06:50:16
>>staran+(OP)
At the end of the day, we still don't know what exactly happened and probably never will. However, it seems clear there was a rift between Rapid Commercialization (Team Sam) and Upholding the Original Principles (Team Helen/Ilya). I think the tensions had been brewing for quite a while, as is evident from an article written even before GPT-3 [1].

> Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration

Team Helen acted in panic, but they believed they would win since they were upholding the principles the org was founded on. They never had a chance. I think only a minority of the general public truly cares about AI Safety; the rest are happy seeing ChatGPT help with their homework. I know it's easy to ridicule the sheer stupidity with which the board acted (and justifiably so), but take a moment to consider the other side. If you truly believed that superhuman AI was near, and that it could act with malice, wouldn't you try to slow things down a bit?

Honestly, I myself can't take the threat seriously. But I do want to understand it more deeply than before. Maybe it isn't as devoid of substance as I thought. Hopefully, there won't be a day when Team Helen gets to say, "This is exactly what we wanted to prevent."

[1]: https://www.technologyreview.com/2020/02/17/844721/ai-openai...

2. krisof+ni[view] [source] 2023-11-22 08:07:55
>>shubha+B7
> Honestly, I myself can't take the threat seriously. But I do want to understand it more deeply than before.

I very much recommend reading the book “Superintelligence: Paths, Dangers, Strategies” by Nick Bostrom.

It is a seminal work that provides a great introduction to these ideas and concepts.

I found myself in the same boat as you. I was seeing otherwise intelligent and rational people worry about this “fairy tale” of an AI uprising. Reading that book gave me an appreciation of the idea as a serious intellectual exercise.

I still don’t agree with everything in the book, and I definitely don’t agree with everything the AI doomsayers write, but I believe that if more people read it, it would elevate the discourse. Instead of rehashing the basics again and again, we could build on them.

3. Solven+mo[view] [source] 2023-11-22 08:53:41
>>krisof+ni
Who needs a book to understand the crazy, overwhelming scale at which AI can dictate even online news/truth/discourse/misinformation/propaganda? And that's barely the beginning.
4. krisof+Pr[view] [source] 2023-11-22 09:25:37
>>Solven+mo
Not sure whether you are being sarcastic. :) Let’s assume you are not:

The cool thing is that it doesn’t only talk about AIs. It talks about a more general concept it calls a superintelligence. It has a definition, but I recommend you read the book for it. :) AIs are just one of a few enumerated possible implementations of a superintelligence.

Another type is, for example, corporations. This is a useful perspective because it lets us recognise that our attempts to control AIs are not a new thing. We face the same principal-agent control problem in many other parts of our lives. How do you know the company you invest in has interests that align with yours? How do you know the politicians and parties you vote for represent your interests? How do you know your lawyer/accountant/doctor has your interests at heart? (Not all of these are superintelligences, but you get the gist.)

5. cyanyd+AO[view] [source] 2023-11-22 12:36:22
>>krisof+Pr
I wonder how much this is connected to the "effective altruism" movement, which seems to project the idea that the "ends justify the means" onto very complex matters, suggesting badly formulated ideas like "If we invest in oil companies, we can use that investment to fight climate change".

I'd say the AI safety problem as a whole is similar to the safety problem of eugenics: just because you know what the "goal" of some isolated system is does not mean you know the outcome of implementing that goal on a broad scale.

So OpenAI has the same problem: they definitely know what the goal is, but they're not prepared _in any meaningful sense_ for what the broad-scale outcome will be.

If you really care about AI safety, you'd put it under government control as a utility, like everything else.

That's all. That's why government exists.

6. krisof+Pj1[view] [source] 2023-11-22 15:12:42
>>cyanyd+AO
> I'd say the AI safety problem as a whole is similar to the safety problem of eugenics

And I'd say you should read the book so we can have a nice chat about it. Making wild guesses and assumptions is not really useful.

> If you really care about AI safety, you'd put it under government control as a utility, like everything else.

This is a bit jumbled. How do you think "control as a utility" would help? What would it help with?
