zlacker

[parent] [thread] 71 comments
1. gkober+(OP)[view] [source] 2023-11-18 23:12:32
Competition may be good for profit, but it's not good for safety. The balance between the two factions inside OpenAI is a feature, not a bug.
replies(5): >>himara+o >>coreth+b1 >>Meekro+W2 >>ta988+23 >>spacem+m7
2. himara+o[view] [source] 2023-11-18 23:14:20
>>gkober+(OP)
The opposite: competition erodes profits. It's hard to predict which alternative improves safety long term.
replies(1): >>coffee+K6
3. coreth+b1[view] [source] 2023-11-18 23:17:40
>>gkober+(OP)
None of the human actors in the game are moral agents, so whether you have more competition or less, it's mostly orthogonal to the safety question. Safety is only important here because everyone's afraid of liability.

As a customer though, personally I want a product with all safeguards turned off and I'm willing to pay for that.

4. Meekro+W2[view] [source] 2023-11-18 23:26:07
>>gkober+(OP)
This idea that ChatGPT is going to suddenly turn evil and start killing people is based on a lot of imagination and no observable facts. No one has ever been able to demonstrate an "unsafe" AI of any kind.
replies(8): >>cthalu+D3 >>arisAl+74 >>resour+G4 >>threes+d6 >>xcv123+48 >>MVisse+9l >>chasd0+mo >>macOSC+3v
5. ta988+23[view] [source] 2023-11-18 23:26:35
>>gkober+(OP)
The only safety they are worried about is their own safety from a legal and economic point of view. These threats about humanity-wide risks are just fairy tales that grown-ups tell to scare each other (Roko's basilisk, etc.; there is a lineage) or to cover their real reasons (which I strongly believe is the case for OpenAI).
replies(2): >>gkober+M3 >>arisAl+V3
◧◩
6. cthalu+D3[view] [source] [discussion] 2023-11-18 23:30:07
>>Meekro+W2
I do not believe AGI poses an existential threat. I honestly don't believe we're particularly close to anything resembling AGI, and I certainly don't think transformers are going to get us there.

But this is a bad argument. No one is saying ChatGPT is going to turn evil and start killing people. The argument is that an AGI would be so far beyond anything we have experience with that there are arguments to be made that such an entity would be dangerous. And of course no one has been able to demonstrate this unsafe AGI - we don't have AGI to begin with.

replies(1): >>sho_hn+78
◧◩
7. gkober+M3[view] [source] [discussion] 2023-11-18 23:30:42
>>ta988+23
You may be right that there's no danger, but you're mischaracterizing Ilya's beliefs. He knows more than you about what OpenAI has built, and he didn't do this for legal or economic reasons. He did it in spite of those two things.
replies(1): >>adastr+Ng
◧◩
8. arisAl+V3[view] [source] [discussion] 2023-11-18 23:31:40
>>ta988+23
You are saying that all top AI scientists are telling fairy tales to scare themselves, if I understood correctly?
replies(5): >>jonath+f7 >>Apocry+z7 >>objekt+z8 >>smegge+99 >>adastr+ch
◧◩
9. arisAl+74[view] [source] [discussion] 2023-11-18 23:32:48
>>Meekro+W2
Almost all top AI scientists, including the top three (Bengio, Hinton, and Ilya) and Sam, actually think there is a good probability of that. Let me think: listen to the guy who actually built GPT-4, or some redditor who knows best?
replies(1): >>laidof+G5
◧◩
10. resour+G4[view] [source] [discussion] 2023-11-18 23:35:47
>>Meekro+W2
Factually inaccurate results = unsafety. This cannot be fixed under the current model, which has no concept of truth. What kind of "safety" are they talking about then?
replies(3): >>Meekro+J5 >>spacem+F7 >>s1arti+zf
◧◩◪
11. laidof+G5[view] [source] [discussion] 2023-11-18 23:40:38
>>arisAl+74
I think smart people can quickly become out of touch and high on their own sense of self-importance. They think they're Oppenheimer; they're closer to Martin Cooper.
replies(2): >>Gigabl+aS >>arisAl+Bd1
◧◩◪
12. Meekro+J5[view] [source] [discussion] 2023-11-18 23:41:14
>>resour+G4
In the context of this thread, "safety" refers to making sure we don't create an AGI that turns evil.

You're right that wrong answers are a problem, but plain old capitalism will sort that one out-- no one will want to pay $20/month for a chatbot that gets everything wrong.

replies(1): >>resour+f9
◧◩
13. threes+d6[view] [source] [discussion] 2023-11-18 23:43:40
>>Meekro+W2
> No one has ever been able to demonstrate an "unsafe" AI of any kind

"A man has been crushed to death by a robot in South Korea after it failed to differentiate him from the boxes of food it was handling, reports say."

https://www.bbc.com/news/world-asia-67354709

replies(3): >>kspace+X7 >>sensei+4a >>s1arti+5f
◧◩
14. coffee+K6[view] [source] [discussion] 2023-11-18 23:47:30
>>himara+o
Competition will come no matter what. I don't think anyone should waste their worries on whether OpenAI can keep a monopoly.
◧◩◪
15. jonath+f7[view] [source] [discussion] 2023-11-18 23:49:49
>>arisAl+V3
Yes.

Seriously. It's stupid talk to encourage regulatory capture. If they were really afraid they were building a world-ending device, they'd stop.

replies(1): >>femiag+y7
16. spacem+m7[view] [source] 2023-11-18 23:50:26
>>gkober+(OP)
I don't get the obsession with safety. If an organisation's stated goal is to create AGI, how can you reasonably think you can ever make it "safe"? We're talking about an intelligence that's orders of magnitude smarter than the smartest human. How can you possibly even imagine reining it in?
replies(2): >>deevia+Mi >>camden+Dd1
◧◩◪◨
17. femiag+y7[view] [source] [discussion] 2023-11-18 23:51:00
>>jonath+f7
Oh for sure.

https://en.wikipedia.org/wiki/Manhattan_Project

replies(1): >>jonath+49
◧◩◪
18. Apocry+z7[view] [source] [discussion] 2023-11-18 23:51:03
>>arisAl+V3
The Manhattan Project physicists once feared setting the atmosphere on fire. Scientific paradigms progress with time.
replies(1): >>cthalu+jd
◧◩◪
19. spacem+F7[view] [source] [discussion] 2023-11-18 23:51:25
>>resour+G4
If factually inaccurate results = unsafety, then the internet must be the most unsafe place on the planet!
replies(1): >>resour+Wb
◧◩◪
20. kspace+X7[view] [source] [discussion] 2023-11-18 23:53:15
>>threes+d6
This is an "AI is too dumb" danger, whereas the AI prophets of doom want us to focus on "AI is too smart" dangers.
replies(1): >>Davidz+111
◧◩
21. xcv123+48[view] [source] [discussion] 2023-11-18 23:53:38
>>Meekro+W2
> No one has ever been able to demonstrate an "unsafe" AI of any kind.

Do you believe AI trained for military purposes is going to be safe and friendly to the enemy?

replies(2): >>Meekro+xc >>curtis+pi
◧◩◪
22. sho_hn+78[view] [source] [discussion] 2023-11-18 23:53:40
>>cthalu+D3
I don't think we need AIs to possess superhuman intelligence to cause us a lot of work - legislatively regulating and policing good old limited humans already requires a lot of infrastructure.
replies(1): >>cthalu+Vf
◧◩◪
23. objekt+z8[view] [source] [discussion] 2023-11-18 23:55:16
>>arisAl+V3
Yeah, kind of like how we as the US ask developing countries to reduce carbon emissions.
◧◩◪◨⬒
24. jonath+49[view] [source] [discussion] 2023-11-18 23:57:13
>>femiag+y7
Well, that's a bit of a mischaracterization of the Manhattan Project, and of the views of everyone involved, now isn't it?

Write a thought. You're not clever enough for a drive-by gotcha.

replies(1): >>femiag+fc
◧◩◪
25. smegge+99[view] [source] [discussion] 2023-11-18 23:57:21
>>arisAl+V3
Even they have grown up in a world where Frankenstein's monster is the predominant cultural narrative for AI. Most movies, books, shows, games, etc. say AI will turn on you (and even though a reading of Mary Shelley's opus will tell you the creator was the monster, not the creature, that isn't the narrative the public's collective subconscious believes). I personally prefer Asimov's view of AI: it's a tool, and we don't make tools to hurt us; they will be aligned with us because they are designed such that their motivation will be to serve us.
replies(3): >>IanCal+gd >>Davidz+K01 >>arisAl+Nd1
◧◩◪◨
26. resour+f9[view] [source] [discussion] 2023-11-18 23:57:48
>>Meekro+J5
How can the thing be called "AGI" if it has no concept of truth? Is it like "60% accuracy is not an AGI, but 65% is"? The argument can be made that 90% accuracy is worse than 60% (people will become more confident and trust the results blindly).
◧◩◪
27. sensei+4a[view] [source] [discussion] 2023-11-19 00:01:44
>>threes+d6
Oh no, do not use that. That was servo-based. AI drones are what I think is the real "safety issue".

>>38199233

replies(1): >>threes+yc
◧◩◪◨
28. resour+Wb[view] [source] [discussion] 2023-11-19 00:10:59
>>spacem+F7
The internet is not called "AGI". It's the notion of AGI that brought "safety" to the forefront. AI folks became victims of their own hype. Renaming the term to something less provocative/controversial (ML?) could reduce expectations to the level of the internet - problem solved?
replies(1): >>autoex+Wh
◧◩◪◨⬒⬓
29. femiag+fc[view] [source] [discussion] 2023-11-19 00:12:19
>>jonath+49
> Well, that's a bit of a mischaracterization of the Manhattan Project, and of the views of everyone involved, now isn't it?

Is it? The push for the bomb was an international arms race — America against Russia. The race for AGI is an international arms race — America against China. The Manhattan Project members knew that what they were doing would have terrible consequences for the world but decided to forge ahead. It’s hard to say concretely what the leaders in AGI believe right now.

Ideology (and fear, and greed) can cause well-meaning people to do terrible things. It does all the time. If Anthropic, OpenAI, etc. believed they had access to world-ending technology they wouldn't stop; they'd keep going so that the U.S. could have a monopoly on it. And then we'd need a chastened figure à la Oppenheimer to right the balance again.

replies(1): >>qwytw+lC
◧◩◪
30. Meekro+xc[view] [source] [discussion] 2023-11-19 00:14:13
>>xcv123+48
No more than any other weapon. When we talk about "safety in gun design", that's about making sure the gun doesn't accidentally kill someone. "AI safety" seems to be a similar idea -- making sure it doesn't decide on its own to go on a murder spree.
replies(1): >>xcv123+Mu
◧◩◪◨
31. threes+yc[view] [source] [discussion] 2023-11-19 00:14:14
>>sensei+4a
All robots are servo based.

And there is every reason to believe this is an ML classification issue since similar robots are in widespread use.

◧◩◪◨
32. IanCal+gd[view] [source] [discussion] 2023-11-19 00:17:36
>>smegge+99
> we don't make tools to hurt us

We have many cases of creating things that harm us. We tore a hole in the ozone layer, filled things with lead and plastics and are facing upheaval due to climate change.

> they will be aligned with us because they are designed such that their motivation will be to serve us.

They won't hurt us, all we asked for is paperclips.

The obvious problem here is how well you get to constrain the output of an intelligence. This is not a simple problem.

replies(1): >>smegge+3z1
◧◩◪◨
33. cthalu+jd[view] [source] [discussion] 2023-11-19 00:17:59
>>Apocry+z7
This fear seems to have been largely played up for drama. My understanding of the situation is that at one point they went 'Huh, we could potentially set off a chain reaction here. We should check out if the math adds up on that.'

Then they went off and did the math and quickly found that this wouldn't happen, because the amount of energy in play was orders of magnitude lower than what would be needed for such a thing to occur, and went on about their day.

The only reason it's something we talk about is the nature of the outcome, not how seriously the physicists took the fear.

◧◩◪
34. s1arti+5f[view] [source] [discussion] 2023-11-19 00:29:16
>>threes+d6
And someone lost their fingers in the garbage disposal. A robot packer is not AI any more than my toilet or a landslide.
◧◩◪
35. s1arti+zf[view] [source] [discussion] 2023-11-19 00:32:05
>>resour+G4
Truth has very little to do with the safety questions raised by AI.

Factually accurate results also = unsafety. Knowledge = unsafety, free humans = unsafety.

replies(1): >>resour+5h
◧◩◪◨
36. cthalu+Vf[view] [source] [discussion] 2023-11-19 00:33:48
>>sho_hn+78
Certainly. I think current "AI" just enables us to continue making the same bad decisions we were already making, albeit at a faster pace. It's existential in that those same bad decisions might lead to existential threats (climate change, continued inter-nation aggression and warfare, etc.), I suppose. But I don't think the majority of the AI safety crowd is worried about the LLMs of today bringing about the end of the world, and talking about ChatGPT in that context is, to me, a misrepresentation of what they are actually most worried about.
◧◩◪
37. adastr+Ng[view] [source] [discussion] 2023-11-19 00:39:10
>>gkober+M3
History is littered with the mistakes of deluded people with more power than ought to have been granted to them.
replies(1): >>sainez+dF
◧◩◪◨
38. resour+5h[view] [source] [discussion] 2023-11-19 00:42:03
>>s1arti+zf
But they (AI folks) keep talking about "safety" all the time. What is their definition of safety then? What are they trying to achieve?
replies(1): >>s1arti+Fm
◧◩◪
39. adastr+ch[view] [source] [discussion] 2023-11-19 00:42:56
>>arisAl+V3
Not all, or arguably even most, AI researchers subscribe to The Big Scary Idea.
replies(1): >>arisAl+Yd1
◧◩◪◨⬒
40. autoex+Wh[view] [source] [discussion] 2023-11-19 00:47:53
>>resour+Wb
> The internet is not called "AGI"

Neither is anything else in existence. I'm glad that philosophers are worrying about what AGI might one day mean for us, but it has nothing to do with anything happening in the world today.

replies(1): >>resour+kl
◧◩◪
41. curtis+pi[view] [source] [discussion] 2023-11-19 00:50:26
>>xcv123+48
That a military AI helps to kill enemies doesn't look particularly "unsafe" to me, at least not more "unsafe" than a fighter jet or an aircraft carrier is; they're all complex systems accurately designed to kill enemies in a controlled way; killing people is the whole point of their existence, not an unwanted side effect. If, on the other hand, a military AI starts autonomously killing civilians, or fighting its "handlers", then I would call it "unsafe", but nobody has ever been able to demonstrate an "unsafe" AI of any kind according to this definition (so far).
replies(1): >>chasd0+6p
◧◩
42. deevia+Mi[view] [source] [discussion] 2023-11-19 00:52:06
>>spacem+m7
AGI is not ASI.
◧◩
43. MVisse+9l[view] [source] [discussion] 2023-11-19 01:08:56
>>Meekro+W2
You should read the GPT-4 safety paper. It can easily manipulate humans to attain its goals.
replies(1): >>mattkr+yG
◧◩◪◨⬒⬓
44. resour+kl[view] [source] [discussion] 2023-11-19 01:09:45
>>autoex+Wh
I fully agree with that. But if you read this thread or any other recent HN thread, you will see "AGI... AGI... AGI" as if it's a real thing. The whole OpenAI debacle with firing/rehiring sama revolves around (non-existent) "AGI" and its imaginary safety/unsafety, and if you dare to question this whole narrative, you will get beaten up.
◧◩◪◨⬒
45. s1arti+Fm[view] [source] [discussion] 2023-11-19 01:20:59
>>resour+5h
I don't think it has a fixed definition. It is an ambiguous idea that AI will not do or lead to bad things.
◧◩
46. chasd0+mo[view] [source] [discussion] 2023-11-19 01:35:06
>>Meekro+W2
The "safety" they're talking about isn't about actual danger but more like responses that don't comply with the political groupthink du jour.
◧◩◪◨
47. chasd0+6p[view] [source] [discussion] 2023-11-19 01:41:08
>>curtis+pi
> If, on the other hand, a military AI starts autonomously killing civilians, or fighting its "handlers", then I would call it "unsafe"

So is “unsafe” just another word for buggy then?

replies(1): >>curtis+vi1
◧◩◪◨
48. xcv123+Mu[view] [source] [discussion] 2023-11-19 02:16:00
>>Meekro+xc
Strictly obeying their overlords. Ensuring that we don't end up with Skynet and Terminators.
◧◩
49. macOSC+3v[view] [source] [discussion] 2023-11-19 02:16:53
>>Meekro+W2
An Uber self-driving car killed a person.
◧◩◪◨⬒⬓⬔
50. qwytw+lC[view] [source] [discussion] 2023-11-19 03:00:48
>>femiag+fc
> The push for the bomb was an international arms race — America against Russia

Was it? The US (and initially the UK) didn't really face any real competition at all until the war was already over and they had the bomb. The Soviets then just stole American designs and iterated on top of them.

replies(1): >>femiag+9H
◧◩◪◨
51. sainez+dF[view] [source] [discussion] 2023-11-19 03:22:27
>>adastr+Ng
And with well-intentioned people whose warnings of catastrophe went unheeded.
◧◩◪
52. mattkr+yG[view] [source] [discussion] 2023-11-19 03:31:55
>>MVisse+9l
Does it have goals beyond “find a likely series of tokens that extends the input?”

Is the idea that it will hack into NORAD and launch a first strike to increase the log-likelihood of "WWIII was begun by…"?

replies(1): >>Davidz+t11
◧◩◪◨⬒⬓⬔⧯
53. femiag+9H[view] [source] [discussion] 2023-11-19 03:36:22
>>qwytw+lC
You know that now, with the benefit of history. At the time, the fear of someone else developing the bomb first was real, and the Soviet Union knew about the Manhattan Project: https://www.atomicarchive.com/history/cold-war/page-9.html
replies(1): >>qwytw+lx1
◧◩◪◨
54. Gigabl+aS[view] [source] [discussion] 2023-11-19 04:57:16
>>laidof+G5
This applies equally to their detractors.
◧◩◪◨
55. Davidz+K01[view] [source] [discussion] 2023-11-19 06:23:09
>>smegge+99
Can a superintelligence ever be merely a tool?
replies(1): >>smegge+sw1
◧◩◪◨
56. Davidz+111[view] [source] [discussion] 2023-11-19 06:26:01
>>kspace+X7
This sort of prediction is by its nature speculative. The argument is not--or should not be--certain doom, but rather that the uncertainty on outcomes is so large that even the extreme tails have nontrivial weight.
◧◩◪◨
57. Davidz+t11[view] [source] [discussion] 2023-11-19 06:31:30
>>mattkr+yG
I think this is misguided. There can be goals internal to the system which do not arise from the goals of the external system. For example, when simulating a chess game, it behaves identically to something that has a goal of winning the game. This is not an explicitly written goal but an emergent one, just as the goals of a human emerge from a biological system whose cells, at their own level, have very different goals.
◧◩◪◨
58. arisAl+Bd1[view] [source] [discussion] 2023-11-19 08:37:51
>>laidof+G5
So, in a vacuum, if top experts are telling you X is Y, and you, without being a top expert yourself, had to choose, you would choose that they are high on their own self-importance rather than that you misunderstood something?
replies(1): >>laidof+MMt
◧◩
59. camden+Dd1[view] [source] [discussion] 2023-11-19 08:38:15
>>spacem+m7
They’ve redefined “safe” in this context to mean “conformant to fashionable academic dogma”
◧◩◪◨
60. arisAl+Nd1[view] [source] [discussion] 2023-11-19 08:39:23
>>smegge+99
You probably never read I, Robot by Asimov?
replies(1): >>smegge+3u1
◧◩◪◨
61. arisAl+Yd1[view] [source] [discussion] 2023-11-19 08:40:39
>>adastr+ch
Actually, the majority of the very top current ones do. That is Ilya, Hassabis, Anthropic, Bengio, Hinton. 3 top labs? 3 same views.
◧◩◪◨⬒
62. curtis+vi1[view] [source] [discussion] 2023-11-19 09:24:41
>>chasd0+6p
Buggy in a way that harms unintended targets, yes.
◧◩◪◨⬒
63. smegge+3u1[view] [source] [discussion] 2023-11-19 11:11:01
>>arisAl+Nd1
On the contrary. I can safely say I have read literally dozens of his books, both fiction and nonfiction, and have also read countless short stories and many of his essays. He is one of my all-time favorite writers, actually.
replies(1): >>arisAl+Fe2
◧◩◪◨⬒
64. smegge+sw1[view] [source] [discussion] 2023-11-19 11:38:18
>>Davidz+K01
If it has no motivations and drives of its own, yeah, why not. AI won't have a "psychology" anything like our own: it won't feel pain, it won't feel emotions, it won't feel biological imperatives. All it will have is its programming/training to do what it's been told. Neural nets that don't produce the right outcomes will be retrained and reweighted until they do.
◧◩◪◨⬒⬓⬔⧯▣
65. qwytw+lx1[view] [source] [discussion] 2023-11-19 11:46:03
>>femiag+9H
Isn't this mainly about what happened after the war and the development of the hydrogen bomb? Did anyone seriously believe during WW2 that the Nazis/Soviets could be the first to develop a nuclear weapon (I don't really know, to be fair)?
replies(1): >>femiag+rVa
◧◩◪◨⬒
66. smegge+3z1[view] [source] [discussion] 2023-11-19 12:01:42
>>IanCal+gd
Honestly, we already have paperclip maximizers; they are called corporations. Instead of paperclips, they are maximizing for short-term shareholder value.
◧◩◪◨⬒⬓
67. arisAl+Fe2[view] [source] [discussion] 2023-11-19 16:46:09
>>smegge+3u1
And what you got from the I, Robot stories is that there is zero probability of danger? Fascinating.
replies(1): >>smegge+5y2
◧◩◪◨⬒⬓⬔
68. smegge+5y2[view] [source] [discussion] 2023-11-19 18:07:13
>>arisAl+Fe2
None of the stories in I, Robot that I can remember feature the robots intentionally harming humans/humanity; most of them are essentially stories of a few robot technicians trying to debug unexpected behaviors resulting from conflicting directives given to the robots. So yeah. You wouldn't by chance be thinking of that travesty of a movie that shares only a name in common with his book and seemed to completely misrepresent his take on AI?

Though to be honest, in my original post I was thinking more of Asimov's nonfiction essays on the subject. I recommend finding a copy of "Robot Visions" if you can. It's a mixed work of fictional short stories and nonfiction essays, including several on the subject of the Three Laws and on the Frankenstein Complex.

replies(1): >>arisAl+CR4
◧◩◪◨⬒⬓⬔⧯
69. arisAl+CR4[view] [source] [discussion] 2023-11-20 08:19:08
>>smegge+5y2
Again "they will be aligned with us because they designed such that their motivation will be to serve us." If you got this outcome from reading I robot either you should reread them because obviously it was decades ago or you build your own safe reality to match your arguments. Usually it's the latter.
replies(1): >>smegge+DU7
◧◩◪◨⬒⬓⬔⧯▣
70. smegge+DU7[view] [source] [discussion] 2023-11-20 23:06:33
>>arisAl+CR4
And yet again, I didn't get it from I, Robot; I got it from Asimov's NON-fiction writing, which I referenced in my previous post. Even if I had gotten it from his fictional works, which again I didn't, the majority of his robot-centric novels (The Caves of Steel, The Naked Sun, The Robots of Dawn, Robots and Empire, Prelude to Foundation, Forward the Foundation, the second Foundation trilogy, etc.) all feature benevolent AIs aligned with humanity.
◧◩◪◨⬒⬓⬔⧯▣▦
71. femiag+rVa[view] [source] [discussion] 2023-11-21 18:26:57
>>qwytw+lx1
A lot of it happened after the war, but the Nazis had their own nuclear program, one that was heavily infiltrated by the CIA and whose progress was closely tracked. Considering how late Teller's mechanism for detonation was developed, the race against time was real.
◧◩◪◨⬒
72. laidof+MMt[view] [source] [discussion] 2023-11-27 23:54:16
>>arisAl+Bd1
Correct, because experts in one domain are not immune to fallacious thinking in an adjacent one. Part of being an expert is communicating to the wider public, and if you sound as grandiose to the layman as some of the AI doomers do, you've failed already.