zlacker

[parent] [thread] 10 comments
1. concor+(OP)[view] [source] 2023-11-22 08:24:47
> The whole point of AI safety is keeping it away from those parts of the world.

No, it's to ensure it doesn't kill you and everyone you love.

replies(2): >>hef198+w2 >>vkou+Q5
2. hef198+w2[view] [source] 2023-11-22 08:42:59
>>concor+(OP)
No, we are far, far from Skynet. So far, AI fails at driving a car.

AI is an incredibly powerful tool for spreading propaganda, and that is used by people who want to kill you and your loved ones (usually radicals trying to get into a position of power, who show little regard for normal folks regardless of which "side" they are on). That's the threat, not Skynet...

replies(1): >>concor+qd
3. vkou+Q5[view] [source] 2023-11-22 09:11:58
>>concor+(OP)
My concern isn't some kind of run-away science-fantasy Skynet or gray goo scenario.

My concern is far more banal evil. Organizations with power and wealth using it to further consolidate their power and wealth, at the expense of others.

replies(2): >>Feepin+i6 >>concor+yd
◧◩
4. Feepin+i6[view] [source] [discussion] 2023-11-22 09:16:57
>>vkou+Q5
Yes, well, then your concern is not AI safety.
replies(1): >>vkou+08
◧◩◪
5. vkou+08[view] [source] [discussion] 2023-11-22 09:30:28
>>Feepin+i6
You're wrong. This is exactly AI safety, as we can see from the OpenAI charter:

> Broadly distributed benefits

> We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

Hell, it's the first bullet point on it!

You can't just define AI safety concerns to be 'the set of scenarios depicted in fairy tales', and then dismiss them as 'well, fairy tales aren't real...'

replies(2): >>concor+Yc >>Feepin+sf
◧◩◪◨
6. concor+Yc[view] [source] [discussion] 2023-11-22 10:11:55
>>vkou+08
The many different definitions of "AI safety" are ridiculous.
◧◩
7. concor+qd[view] [source] [discussion] 2023-11-22 10:14:25
>>hef198+w2
How far we are from Skynet is a matter of much debate, but the median guess among experts was a mere 40 years to human-level AI last I checked, which was admittedly a few years back.

Is that "far, far" in your view?

replies(1): >>hef198+2e
◧◩
8. concor+yd[view] [source] [discussion] 2023-11-22 10:16:01
>>vkou+Q5
That's AI Ethics.
◧◩◪
9. hef198+2e[view] [source] [discussion] 2023-11-22 10:21:18
>>concor+qd
Because we've been 20 years away from fusion, and 2 years away from Level 5 FSD, for decades.

So far, "AI" writes better than some / most humans making stuff up in the process and creates digital art, and fakes, better and faster than humans. It still requires a human to trigger it to do so. And as long as glorified ML has no itent of its own, the risk to society through media and news and social media manipulation is far, far bigger than literal Skynet...

◧◩◪◨
10. Feepin+sf[view] [source] [discussion] 2023-11-22 10:36:37
>>vkou+08
Sure, but conversely you can say "ensuring that OpenAI doesn't get to run the universe is part of AI safety" (right) but not "it's the main and basically only part of AI safety" (wrong). The concept of AI safety spans lots of threats, and we have to avoid all of them. It's not enough to avoid just one.
replies(1): >>vkou+ci1
◧◩◪◨⬒
11. vkou+ci1[view] [source] [discussion] 2023-11-22 16:38:37
>>Feepin+sf
Sure. And as I addressed at the start of this subthread, I don't exactly think that the OpenAI board is perfectly positioned to navigate this problem.

I just know that it's hard to do much worse than putting this question in the hands of a highly optimized profit-first enterprise.
