zlacker

[parent] [thread] 35 comments
1. Meekro+(OP)[view] [source] 2023-11-18 23:26:07
This idea that ChatGPT is going to suddenly turn evil and start killing people is based on a lot of imagination and no observable facts. No one has ever been able to demonstrate an "unsafe" AI of any kind.
replies(8): >>cthalu+H >>arisAl+b1 >>resour+K1 >>threes+h3 >>xcv123+85 >>MVisse+di >>chasd0+ql >>macOSC+7s
2. cthalu+H[view] [source] 2023-11-18 23:30:07
>>Meekro+(OP)
I do not believe AGI poses an exponential threat. I honestly don't believe we're particularly close to anything resembling AGI, and I certainly don't think transformers are going to get us there.

But this is a bad argument. No one is saying ChatGPT is going to turn evil and start killing people. The argument is that an AGI is so far beyond anything we have experience with and that there are arguments to be made that such an entity would be dangerous. And of course no one has been able to demonstrate this unsafe AGI - we don't have AGI to begin with.

replies(1): >>sho_hn+b5
3. arisAl+b1[view] [source] 2023-11-18 23:32:48
>>Meekro+(OP)
Almost all top AI scientists, including the top three (Bengio, Hinton, and Ilya) and Sam, actually think there is a good probability of that. Let me think: listen to the guy who actually built GPT-4, or some redditor who knows best?
replies(1): >>laidof+K2
4. resour+K1[view] [source] 2023-11-18 23:35:47
>>Meekro+(OP)
Factually inaccurate results = unsafety. This cannot be fixed under the current model, which has no concept of truth. What kind of "safety" are they talking about then?
replies(3): >>Meekro+N2 >>spacem+J4 >>s1arti+Dc
5. laidof+K2[view] [source] [discussion] 2023-11-18 23:40:38
>>arisAl+b1
I think smart people can quickly become out of touch and high on their own sense of self-importance. They think they're Oppenheimer; they're closer to Martin Cooper.
replies(2): >>Gigabl+eP >>arisAl+Fa1
6. Meekro+N2[view] [source] [discussion] 2023-11-18 23:41:14
>>resour+K1
In the context of this thread, "safety" refers to making sure we don't create an AGI that turns evil.

You're right that wrong answers are a problem, but plain old capitalism will sort that one out-- no one will want to pay $20/month for a chatbot that gets everything wrong.

replies(1): >>resour+j6
7. threes+h3[view] [source] 2023-11-18 23:43:40
>>Meekro+(OP)
> No one has ever been able to demonstrate an "unsafe" AI of any kind

"A man has been crushed to death by a robot in South Korea after it failed to differentiate him from the boxes of food it was handling, reports say."

https://www.bbc.com/news/world-asia-67354709

replies(3): >>kspace+15 >>sensei+87 >>s1arti+9c
8. spacem+J4[view] [source] [discussion] 2023-11-18 23:51:25
>>resour+K1
If factually inaccurate results = unsafety, then the internet must be the most unsafe place on the planet!
replies(1): >>resour+09
9. kspace+15[view] [source] [discussion] 2023-11-18 23:53:15
>>threes+h3
This is an "AI is too dumb" danger, whereas the AI prophets of doom want us to focus on "AI is too smart" dangers.
replies(1): >>Davidz+5Y
10. xcv123+85[view] [source] 2023-11-18 23:53:38
>>Meekro+(OP)
> No one has ever been able to demonstrate an "unsafe" AI of any kind.

Do you believe AI trained for military purposes is going to be safe and friendly to the enemy?

replies(2): >>Meekro+B9 >>curtis+tf
11. sho_hn+b5[view] [source] [discussion] 2023-11-18 23:53:40
>>cthalu+H
I don't think we need AIs to possess superhuman intelligence to cause us a lot of work - legislatively regulating and policing good old limited humans already requires a lot of infrastructure.
replies(1): >>cthalu+Zc
12. resour+j6[view] [source] [discussion] 2023-11-18 23:57:48
>>Meekro+N2
How can the thing be called "AGI" if it has no concept of truth? Is it like "60% accuracy is not an AGI, but 65% is"? The argument can be made that 90% accuracy is worse than 60% (people will become confident enough to trust the results blindly).
13. sensei+87[view] [source] [discussion] 2023-11-19 00:01:44
>>threes+h3
Oh no, do not use that. That was servo based. AI drones, I think, are the real "safety issue":

>>38199233

replies(1): >>threes+C9
14. resour+09[view] [source] [discussion] 2023-11-19 00:10:59
>>spacem+J4
The internet is not called "AGI". It's the notion of AGI that brought "safety" to the forefront. AI folks became victims of their own hype. Renaming the term to something less provocative/controversial (ML?) could reduce expectations to the level of the internet - problem solved?
replies(1): >>autoex+0f
15. Meekro+B9[view] [source] [discussion] 2023-11-19 00:14:13
>>xcv123+85
No more than any other weapon. When we talk about "safety in gun design", that's about making sure the gun doesn't accidentally kill someone. "AI safety" seems to be a similar idea -- making sure it doesn't decide on its own to go on a murder spree.
replies(1): >>xcv123+Qr
16. threes+C9[view] [source] [discussion] 2023-11-19 00:14:14
>>sensei+87
All robots are servo based.

And there is every reason to believe this is an ML classification issue since similar robots are in widespread use.

17. s1arti+9c[view] [source] [discussion] 2023-11-19 00:29:16
>>threes+h3
And someone lost their fingers in the garbage disposal. A robot packer is not AI any more than my toilet or a landslide.
18. s1arti+Dc[view] [source] [discussion] 2023-11-19 00:32:05
>>resour+K1
Truth has very little to do with the safety questions raised by AI.

Factually accurate results also = unsafety. Knowledge = unsafety, free humans = unsafety.

replies(1): >>resour+9e
19. cthalu+Zc[view] [source] [discussion] 2023-11-19 00:33:48
>>sho_hn+b5
Certainly. I think at current "AI" just enables us to continue making the same bad decisions we were already making, though, albeit at a faster pace. It's existential in that those same bad decisions might lead to existential threats, e.g. climate change, continued inter-nation aggression and warfare, etc., I suppose, but I don't think the majority of the AI safety crowd is worried about the LLMs of today bringing about the end of the world, and talking about ChatGPT in that context is, to me, a misrepresentation of what they are actually most worried about.
20. resour+9e[view] [source] [discussion] 2023-11-19 00:42:03
>>s1arti+Dc
But they (AI folks) keep talking about "safety" all the time. What is their definition of safety then? What are they trying to achieve?
replies(1): >>s1arti+Jj
21. autoex+0f[view] [source] [discussion] 2023-11-19 00:47:53
>>resour+09
> The internet is not called "AGI"

Neither is anything else in existence. I'm glad that philosophers are worrying about what AGI might one day mean for us, but it has nothing to do with anything happening in the world today.

replies(1): >>resour+oi
22. curtis+tf[view] [source] [discussion] 2023-11-19 00:50:26
>>xcv123+85
That a military AI helps to kill enemies doesn't look particularly "unsafe" to me, at least not more "unsafe" than a fighter jet or an aircraft carrier is; they're all complex systems accurately designed to kill enemies in a controlled way; killing people is the whole point of their existence, not an unwanted side effect. If, on the other hand, a military AI starts autonomously killing civilians, or fighting its "handlers", then I would call it "unsafe", but nobody has ever been able to demonstrate an "unsafe" AI of any kind according to this definition (so far).
replies(1): >>chasd0+am
23. MVisse+di[view] [source] 2023-11-19 01:08:56
>>Meekro+(OP)
You should read the GPT-4 safety paper. It can easily manipulate humans to attain its goals.
replies(1): >>mattkr+CD
24. resour+oi[view] [source] [discussion] 2023-11-19 01:09:45
>>autoex+0f
I fully agree with that. But if you read this thread or any other recent HN thread, you will see "AGI... AGI... AGI" as if it's a real thing. The whole OpenAI debacle with firing/rehiring sama revolves around (non-existent) "AGI" and its imaginary safety/unsafety, and if you dare to question this whole narrative, you will get beaten up.
25. s1arti+Jj[view] [source] [discussion] 2023-11-19 01:20:59
>>resour+9e
I don't think it has a fixed definition. It is an ambiguous idea that AI will not do or lead to bad things.
26. chasd0+ql[view] [source] 2023-11-19 01:35:06
>>Meekro+(OP)
The “safety” they’re talking about isn’t about actual danger but more like responses that don’t comply with the political groupthink du jour.
27. chasd0+am[view] [source] [discussion] 2023-11-19 01:41:08
>>curtis+tf
> If, on the other hand, a military AI starts autonomously killing civilians, or fighting its "handlers", then I would call it "unsafe"

So is “unsafe” just another word for buggy then?

replies(1): >>curtis+zf1
28. xcv123+Qr[view] [source] [discussion] 2023-11-19 02:16:00
>>Meekro+B9
Strictly obeying their overlords. Ensuring that we don't end up with Skynet and Terminators.
29. macOSC+7s[view] [source] 2023-11-19 02:16:53
>>Meekro+(OP)
An Uber self-driving car killed a person.
30. mattkr+CD[view] [source] [discussion] 2023-11-19 03:31:55
>>MVisse+di
Does it have goals beyond “find a likely series of tokens that extends the input?”

Is the idea that it will hack into NORAD and launch a first strike to increase the log-likelihood of “WWIII was begun by…”?
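
(For concreteness, here is a minimal toy sketch of what "find a likely series of tokens that extends the input" means, using a hand-rolled bigram table in place of a neural network; the corpus, names, and greedy choice here are invented purely for illustration and are not anything from OpenAI.)

    # Toy illustration of next-token extension: a real LLM replaces this
    # bigram table with a neural network, but the loop is the same idea.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate the food".split()

    # "Train": count which token tends to follow which.
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    def extend(prompt, n_tokens=5):
        tokens = prompt.split()
        for _ in range(n_tokens):
            followers = bigrams.get(tokens[-1])
            if not followers:
                break
            # Greedily append the single most likely continuation.
            tokens.append(followers.most_common(1)[0][0])
        return " ".join(tokens)

    print(extend("the cat"))  # the prompt plus five greedily chosen tokens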

replies(1): >>Davidz+xY
31. Gigabl+eP[view] [source] [discussion] 2023-11-19 04:57:16
>>laidof+K2
This applies equally to their detractors.
32. Davidz+5Y[view] [source] [discussion] 2023-11-19 06:26:01
>>kspace+15
This sort of prediction is by its nature speculative. The argument is not--or should not be--certain doom, but rather that the uncertainty on outcomes is so large that even the extreme tails have nontrivial weight.
33. Davidz+xY[view] [source] [discussion] 2023-11-19 06:31:30
>>mattkr+CD
I think this is misguided. There can be goals internal to the system which do not arise from the goals of the external system. For example, when simulating a chess game, it behaves identically to a system that has a goal of winning the game. This is not an explicitly written goal but an emergent one, just as the goals of a human emerge from a biological system whose components, at the cellular level, have very different goals.
34. arisAl+Fa1[view] [source] [discussion] 2023-11-19 08:37:51
>>laidof+K2
So, in a vacuum, if top experts are telling you X is Y, and you, without being a top expert yourself, had to choose, you would choose that they are high rather than that you misunderstood something?
replies(1): >>laidof+QJt
35. curtis+zf1[view] [source] [discussion] 2023-11-19 09:24:41
>>chasd0+am
Buggy in a way that harms unintended targets, yes.
36. laidof+QJt[view] [source] [discussion] 2023-11-27 23:54:16
>>arisAl+Fa1
Correct, because experts in one domain are not immune to fallacious thinking in an adjacent one. Part of being an expert is communicating to the wider public, and if you sound as grandiose as some of the AI doomers to the layman you've failed already.