zlacker

1. krisof+(OP) 2023-11-22 09:25:37
Not sure if you are being sarcastic or not. :) Let’s assume you are not:

The cool thing is that it doesn’t only talk about AIs. It talks about a more general concept it calls a superintelligence. The book gives a definition, but I recommend you read it for that. :) AIs are just one of a few enumerated possible implementations of a superintelligence.

Another type is, for example, corporations. This is a useful perspective because it lets us recognise that our attempts to control AIs are not a new thing. We have the same principal-agent control problem in many other parts of our lives. How do you know the company you invest in has interests which align with yours? How do you know that the politicians and parties you vote for represent your interests? How do you know your lawyer/accountant/doctor has your interests at heart? (Not all of these are superintelligences, but you get the gist.)

replies(1): >>cyanyd+Lm
2. cyanyd+Lm 2023-11-22 12:36:22
>>krisof+(OP)
I wonder how much this is connected to the "effective altruism" movement, which seems to project the idea that the "ends justify the means" in a very complex manner, suggesting badly formulated ideas like "if we invest in oil companies, we can use that investment to fight climate change".

I'd say the AI safety problem as a whole is similar to the safety problem of eugenics: just because you know what the "goal" of some isolated system is does not mean you know the outcome of implementing that goal on a broad scale.

So OpenAI has the same problem: they definitely know what the goal is, but they're not prepared _in any meaningful sense_ for what the broad-scale outcome will be.

If you really cared about AI safety, you'd put it under government control as a utility, like everything else.

That's all. That's why government exists.

replies(1): >>krisof+0S
3. krisof+0S 2023-11-22 15:12:42
>>cyanyd+Lm
> I'd say the AI safety problem as a whole is similar to the safety problem of eugenics

And I'd say you should read the book so we can have a nice chat about it. Making wild guesses and assumptions is not really useful.

> If you really care about AI safety, you'd be putting it under government control as utility, like everything else.

This is a bit jumbled. How do you think "control as a utility" would help? What would it help with?
