zlacker

[parent] [thread] 14 comments
1. vkou+(OP)[view] [source] 2023-11-22 07:31:14
Meanwhile, those working on commercialization are by definition going to be gatekeepers and beneficiaries of it, not you. The organizations that pay for it will pay for it to produce results that are of benefit to them, probably at my expense [1].

Do I think Helen has my interests at heart? Unlikely. Do Sam or Satya? Absolutely not!

[1] I can't wait for AI doctors working for insurers to deny me treatments, AI vendors to figure out exactly how much they can charge me for their dynamically-priced product, AI answering machines to route my customer support calls through Dante's circles of hell...

replies(2): >>konsch+i3 >>didntc+sd
2. konsch+i3[view] [source] 2023-11-22 07:55:47
>>vkou+(OP)
> produce results that are of benefit to them, probably at my expense

The world is not zero-sum. Most economic transactions benefit both parties and are a net benefit to society, even considering externalities.

replies(1): >>vkou+K5
3. vkou+K5[view] [source] [discussion] 2023-11-22 08:15:11
>>konsch+i3
> The world is not zero-sum.

No, but some parts of it very much are. The whole point of AI safety is keeping it away from those parts of the world.

How are Sam and Satya going to do that? It's not in Microsoft's DNA to do that.

replies(1): >>concor+Z6
4. concor+Z6[view] [source] [discussion] 2023-11-22 08:24:47
>>vkou+K5
> The whole point of AI safety is keeping it away from those parts of the world.

No, it's to ensure it doesn't kill you and everyone you love.

replies(2): >>hef198+v9 >>vkou+Pc
5. hef198+v9[view] [source] [discussion] 2023-11-22 08:42:59
>>concor+Z6
No, we are far, far from Skynet. So far, AI fails at driving a car.

AI is an incredibly powerful tool for spreading propaganda, and that is used by people who want to kill you and your loved ones (usually radicals trying to get into a position of power, who show little regard for normal folks regardless of which "side" they are on). That's the threat, not Skynet...

replies(1): >>concor+pk
6. vkou+Pc[view] [source] [discussion] 2023-11-22 09:11:58
>>concor+Z6
My concern isn't some kind of runaway science-fantasy Skynet or gray-goo scenario.

My concern is far more banal evil. Organizations with power and wealth using it to further consolidate their power and wealth, at the expense of others.

replies(2): >>Feepin+hd >>concor+xk
7. Feepin+hd[view] [source] [discussion] 2023-11-22 09:16:57
>>vkou+Pc
Yes, well, then your concern is not AI safety.
replies(1): >>vkou+Ze
8. didntc+sd[view] [source] 2023-11-22 09:18:37
>>vkou+(OP)
Ideally I'd like no gatekeeping, i.e. open model release, but that's not something OAI or most "AI ethics"-aligned people are interested in (though luckily others are). So if we must have a gatekeeper, I'd rather it be one with plain old commercial interests than ideological ones. It's like the C. S. Lewis quote about robber barons vs. busybodies again.

Yet again, the free-market principle of "you can have this if you pay me enough" offers more freedom to society than the central planner's "you can have this if we decide you're allowed it".

9. vkou+Ze[view] [source] [discussion] 2023-11-22 09:30:28
>>Feepin+hd
You're wrong. This is exactly AI safety, as we can see from the OpenAI charter:

> Broadly distributed benefits

> We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

Hell, it's the first bullet point on it!

You can't just define AI safety concerns to be 'the set of scenarios depicted in fairy tales', and then dismiss them as 'well, fairy tales aren't real...'

replies(2): >>concor+Xj >>Feepin+rm
10. concor+Xj[view] [source] [discussion] 2023-11-22 10:11:55
>>vkou+Ze
The many different definitions of "AI safety" are ridiculous.
11. concor+pk[view] [source] [discussion] 2023-11-22 10:14:25
>>hef198+v9
How far we are from Skynet is a matter of much debate, but the median guess among experts was a mere 40 years to human-level AI last I checked, which was admittedly a few years back.

Is that "far, far" in your view?

replies(1): >>hef198+1l
12. concor+xk[view] [source] [discussion] 2023-11-22 10:16:01
>>vkou+Pc
That's AI Ethics.
13. hef198+1l[view] [source] [discussion] 2023-11-22 10:21:18
>>concor+pk
Because we've been 20 years away from fusion and 2 years away from Level 5 FSD for decades.

So far, "AI" writes better than some / most humans making stuff up in the process and creates digital art, and fakes, better and faster than humans. It still requires a human to trigger it to do so. And as long as glorified ML has no itent of its own, the risk to society through media and news and social media manipulation is far, far bigger than literal Skynet...

14. Feepin+rm[view] [source] [discussion] 2023-11-22 10:36:37
>>vkou+Ze
Sure, but conversely you can say "ensuring that OpenAI doesn't get to run the universe is AI safety" (right) but not "is the main and basically only part of AI safety" (wrong). The concept of AI safety spans lots of threats, and we have to avoid all of them. It's not enough to avoid just one.
replies(1): >>vkou+bp1
15. vkou+bp1[view] [source] [discussion] 2023-11-22 16:38:37
>>Feepin+rm
Sure. And as I addressed at the start of this subthread, I don't exactly think that the OpenAI board is perfectly positioned to navigate this problem.

I just know that it's hard to do much worse than putting this question in the hands of a highly optimized profit-first enterprise.
