zlacker

[parent] [thread] 10 comments
1. silenc+(OP)[view] [source] 2023-11-22 07:00:14
Honestly "Safety" is the word in the AI talk that nobody can quantify or qualify in any way when it comes to these conversations.

I've stopped caring about anyone who uses the word "safety". It's a vague, hand-wavy way to paint your opponents as dangerous without any proof or agreed-upon standard for who or what makes something "unsafe", or why.

replies(3): >>antupi+53 >>fsloth+83 >>garden+L51
2. antupi+53[view] [source] 2023-11-22 07:20:09
>>silenc+(OP)
I like "alignment" more: it is pretty quantifiable, and it sometimes goes against 'safety', because Claude and OpenAI are censoring their models.
3. fsloth+83[view] [source] 2023-11-22 07:20:47
>>silenc+(OP)
Exactly this. The 'safety' people sound like delusional quacks.

The "but they are so smart…" argument is BS. Nobody can be presumed to be exceptionally competent outside their own specific niche. See Linus Pauling and vitamin C.

Until we have at least a hint of a mechanistic model of an AI-driven extinction event, nobody can be an expert on it, and all talk in that vein is self-important, delusional hogwash.

Nobody is pro-apocalypse! We are drowning in things an AI could really help with.

With the amount of energy needed for any sort of meaningful AI result, you can always pull the plug if stuff gets too weird.

replies(1): >>JumpCr+k4
◧◩
4. JumpCr+k4[view] [source] [discussion] 2023-11-22 07:30:03
>>fsloth+83
Now do nuclear.
replies(1): >>fsloth+96
◧◩◪
5. fsloth+96[view] [source] [discussion] 2023-11-22 07:43:18
>>JumpCr+k4
War or power production? :)

Those are different things.

Nuclear war is exactly the kind of thing for which we do have excellent expertise. Unlike AI safety, which seems more like a bogus cult at the moment.

Nuclear power would be the best form of large-scale power production for many situations. And at smaller scale too, in the form of emerging SMRs.

replies(1): >>JumpCr+28
◧◩◪◨
6. JumpCr+28[view] [source] [discussion] 2023-11-22 07:57:25
>>fsloth+96
I suppose the whole regime. I'm not an AI safetyist, mostly because I don't think we're anywhere close to AI. But if you were sitting on the precipice of atomic power, as AI safetyists believe they are, wouldn't caution be prudent?
replies(1): >>fsloth+Ed
◧◩◪◨⬒
7. fsloth+Ed[view] [source] [discussion] 2023-11-22 08:40:16
>>JumpCr+28
I'm not an expert, just my gut talking. If they had a god in a box, the US state would be much more hands-on. Right now it looks more like an attempt at regulatory capture to stifle competition: "Think of the safety!" "Lock this away!" If they actually had Skynet, the US government has very effective and very discreet methods to handle such a clear and present danger (barring an intelligence failure, of course, but those happen mostly because something slips under your radar).
replies(1): >>JohnPr+5S
◧◩◪◨⬒⬓
8. JohnPr+5S[view] [source] [discussion] 2023-11-22 13:55:31
>>fsloth+Ed
Could you give a clear mechanistic model of how the US would handle such a danger?
replies(2): >>fsloth+H92 >>JumpCr+Gu2
9. garden+L51[view] [source] 2023-11-22 14:53:48
>>silenc+(OP)
I broadly agree but there needs to be some regulation in place. Check out https://en.wikipedia.org/wiki/Instrumental_convergence#Paper...
◧◩◪◨⬒⬓⬔
10. fsloth+H92[view] [source] [discussion] 2023-11-22 19:42:13
>>JohnPr+5S
For example: two guys come in and say, "Give us the godbox or your company ceases to exist. Here is a list of companies that ceased to exist because they did not do as told."

Pretty much the same method was used to shut down Rauma-Repola submarines https://yle.fi/a/3-5149981

After? They get the godbox. I have no idea what happens to it after that. The model weights are stored on secure government servers, and installed backdoors are used to sweep the corporate systems clean of any lingering model weights. Etc.

◧◩◪◨⬒⬓⬔
11. JumpCr+Gu2[view] [source] [discussion] 2023-11-22 21:28:57
>>JohnPr+5S
Defense Production Act, something something.