I highly recommend reading the book “Superintelligence: Paths, Dangers, Strategies” by Nick Bostrom.
It is a seminal work that provides a great introduction to these ideas and concepts.
I found myself in the same boat as you. I was seeing otherwise intelligent and rational people worry about this “fairy tale” of some AI uprising. Reading that book gave me an appreciation of the idea as a serious intellectual exercise.
I still don’t agree with everything in the book, and I definitely don’t agree with everything the AI doomsayers write, but I believe that if more people read it, it would elevate the discourse. Instead of rehashing the basics again and again, we could build on them.
The cool thing is that it doesn’t only talk about AIs. It talks about a more general concept it calls a superintelligence. It has a definition, but I recommend you read the book for it. :) AIs are just one of several possible implementations of a superintelligence that the book enumerates.
Corporations, for example, are another type. This is a useful perspective because it lets us recognise that our attempts to control AIs are not a new thing. We face the same principal-agent control problem in many other parts of our lives. How do you know the company you invest in has interests that align with yours? How do you know the politicians and parties you vote for represent your interests? How do you know your lawyer/accountant/doctor has your interests at heart? (Not all of these are superintelligences, but you get the gist.)
I'd say the AI safety problem as a whole is similar to the safety problem of eugenics: just because you know what the "goal" of some isolated system is, that doesn't mean you know the outcome of implementing that goal on a broad scale.
So OpenAI has the same problem: they definitely know what the goal is, but they're not prepared _in any meaningful sense_ for what the broad-scale outcome is.
If you really care about AI safety, you'd put it under government control as a utility, like everything else.
That's all. That's why government exists.
And I'd say you should read the book so we can have a nice chat about it. Making wild guesses and assumptions isn't really useful.
> If you really care about AI safety, you'd put it under government control as a utility, like everything else.
This is a bit jumbled. How do you think "control as a utility" would help? What would it help with?