zlacker

[parent] [thread] 37 comments
1. nostro+(OP)[view] [source] 2023-11-22 07:27:09
You're correct.

When people say they want safe AGI, what they mean are things like "Skynet should not nuke us" and "don't accelerate so fast that humans are instantly irrelevant."

But what it's being interpreted as is more like "be excessively prudish and politically correct at all times" -- which I doubt was ever really anyone's main concern with AGI.

replies(10): >>wisty+y >>Xenoam+g3 >>s_dev+y3 >>waveBi+Z5 >>Al-Khw+q6 >>darkwa+F6 >>krisof+2d >>edanm+Eg >>lordna+si >>cyanyd+hz
2. wisty+y[view] [source] 2023-11-22 07:31:20
>>nostro+(OP)
There is a middle ground, in that maybe ChatGPT shouldn't help users commit certain serious crimes. I am pretty pro free speech, and I think there's definitely a slippery slope here, but there is a bit of justification.
replies(3): >>Stanis+e9 >>hef198+o9 >>low_te+wf
3. Xenoam+g3[view] [source] 2023-11-22 07:52:05
>>nostro+(OP)
Is it just about safety, though? I thought it was also about preventing the rich from controlling AI and widening the gap even further.
replies(2): >>jazzyj+h4 >>didntc+Yc
4. s_dev+y3[view] [source] 2023-11-22 07:53:57
>>nostro+(OP)
I think the dangers of AI are not 'Skynet will nuke us' but closer to rich/powerful people using it to cement a wealth/power gap that can never be closed.

Social media in the early 00s seemed pretty harmless -- you're effectively merging instant messaging with a social network/public profiles. However, it did great harm to privacy, was abused as a tool to influence the public and policy, promoted narcissism, etc. AI is an order of magnitude more dangerous than social media.

replies(1): >>disgru+zp
◧◩
5. jazzyj+h4[view] [source] [discussion] 2023-11-22 07:59:27
>>Xenoam+g3
The mission of OpenAI is/was "to ensure that artificial general intelligence benefits all of humanity" -- if your concern is that AI will be controlled by the rich, then you can read into this mission that OpenAI wants to ensure that AI is not controlled by the rich. If your concern is that superintelligence will be misaligned, then you can read into this mission that OpenAI will ensure AI is well-aligned.

Really it's no more descriptive than "do good", whatever doing good means to you.

replies(1): >>jampek+Oo
6. waveBi+Z5[view] [source] 2023-11-22 08:13:26
>>nostro+(OP)
Those are two different camps. Alignment folks and ethics folks tend to disagree strongly about the main threat, with ethics people (e.g. Timnit Gebru) insisting that crystallizing the current social order is the main threat, and alignment people (e.g. Paul Christiano) insisting it's machines running amok. So far the ethics folks are the only ones getting things implemented, for the most part.
7. Al-Khw+q6[view] [source] 2023-11-22 08:16:26
>>nostro+(OP)
No, in general AI safety/AI alignment ("we should prevent AI from nuking us") people are different from AI ethics ("we should prevent AI from being racist/sexist/etc.") people. There can of course be some overlap, but in most cases they oppose each other. For example, Bender and Gebru are strong advocates of the AI ethics camp, and they don't believe in any threat of AI doom at all.

If you Google for AI safety vs. AI ethics, or AI alignment vs. AI ethics, you can see both camps.

replies(1): >>hef198+P7
8. darkwa+F6[view] [source] 2023-11-22 08:18:07
>>nostro+(OP)
> But what it's being interpreted as is more like "be excessively prudish and politically correct at all times" -- which I doubt was ever really anyone's main concern with AGI.

Fast forward 5-10 years, and someone will say: "LLMs were the worst thing we developed, because they made us more stupid and allowed politicians to control public opinion even more, in subtle ways."

Just like the tech/HN bubble started saying a few years ago about social networks (which were praised as revolutionary 15 years ago).

replies(5): >>didntc+Ic >>fallin+2t >>dnissl+eR >>unethi+Fc1 >>Cacti+4D2
◧◩
9. hef198+P7[view] [source] [discussion] 2023-11-22 08:27:28
>>Al-Khw+q6
The safety aspect of AI ethics is much more pressing, though. We see how divisive social media can be; imagine that turbocharged by AI -- and we as a society haven't even figured out social media yet...

ChatGPT turning into Skynet and nuking us all is a much more remote problem.

◧◩
10. Stanis+e9[view] [source] [discussion] 2023-11-22 08:37:17
>>wisty+y
Which users? The greatest crimes, by far, are committed by the US government (and other governments around the world) -- and you can be sure that AI and/or AGI will be designed to help them commit their crimes more efficiently and effectively, and to manufacture consent for doing so.
◧◩
11. hef198+o9[view] [source] [discussion] 2023-11-22 08:38:43
>>wisty+y
I am a little less free-speech-oriented than Americans; in Germany we have serious limitations around hate speech and Holocaust denial, for example.

Putting those restrictions into a tool like ChatGPT goes too far, though, because so far AI still needs a prompt to do anything. The problem I see is that ChatGPT, being trained on a lot of hate speech or propaganda, slips those things in even when not prompted to. Which, and I am by no means an AI expert, seems to be a sub-problem of the hallucination problem of making stuff up.

Because we have to remind ourselves: AI so far is glorified machine learning creating content; it is not conscious. But it can be used to create propaganda and defamation content at unprecedented scale and speed. And that is the real problem.

replies(1): >>freedo+1P1
◧◩
12. didntc+Ic[view] [source] [discussion] 2023-11-22 09:05:44
>>darkwa+F6
And it's amazing how many people you can get to cheer it on if you brand it as "combating dangerous misinformation". It seems people never learn the lesson that putting faith in one group of people to decree what's "truth" or "ethical" is almost always a bad idea, even when (you think) it's your "side".
replies(1): >>mlrtim+er
◧◩
13. didntc+Yc[view] [source] [discussion] 2023-11-22 09:09:08
>>Xenoam+g3
That would be the camp advocating for, well, open AI, i.e. wide model release. The AI ethics camp is more "let us control AI, for your own good".
14. krisof+2d[view] [source] 2023-11-22 09:09:34
>>nostro+(OP)
> When people say they want safe AGI, what they mean are things like "Skynet should not nuke us" and "don't accelerate so fast that humans are instantly irrelevant."

Yes. You are right on this.

> But what it's being interpreted as is more like "be excessively prudish and politically correct at all times"

I understand it might seem that way. I believe the original goals were more like "make the AI not spew soft/hard porn at unsuspecting people" and "make the AI not spew hateful bigotry", and we are just not good enough at control yet. But these things are also in some sense arbitrary. They are good goals for someone representing a corporation, which these AIs are very likely going to be employed as (if we ever solve a myriad other problems). They are not necessarily the only possible options.

With time and better controls we might make AIs which are subtly flirty while maintaining professional boundaries. Or we might make actual porn AIs, but ones which maintain some other limits. (For example, generating content about consenting adults without ever deviating into underage material, or into situations where there is no consent.) But currently we can't even convince our AIs to draw the right number of fingers on people; how do you feel about our chances of teaching them much harder concepts, like consent? (I know I'm mixing up examples from image and text generation here, but from a certain high-level perspective it is all the same.)

So these things you mention are limitations of our abilities at control and results of a certain kind of expected corporate professionalism, but even more, they are safe sandboxes. How do you think we can make the machine not nuke us if we can't even make it not tell dirty jokes? Not making dirty jokes is not the primary goal. But it is useful practice, to see if we can control these machines, and it is a case where failure, while embarrassing, is clearly not existential. We could have chosen a different "goal"; for example, we could have made an AI which never ever talks about sports! That would have been an equivalent goal: something hard to achieve, to evaluate our efforts against. But it does not mesh that well with corporate values, so we have what we have.

replies(1): >>mlindn+Is
◧◩
15. low_te+wf[view] [source] [discussion] 2023-11-22 09:30:33
>>wisty+y
The problem here is equating AI speech with human speech. The AI doesn't "speak"; only humans speak. The real slippery slope for me is this tendency of treating ChatGPT as some kind of proto-human entity. If people are willing to do that, then we're screwed either way (whether the AI is outputting racist content or excessively PC content). If you take the output of the AI and post it somewhere, it's on you, not the AI. You're saying it; it doesn't matter where it came from.
replies(3): >>cyanyd+Ez >>silvar+lN3 >>miracu+2Y5
16. edanm+Eg[view] [source] 2023-11-22 09:39:46
>>nostro+(OP)
There are still very distinct groups of people, some of whom are more worried about the "Skynet" type of safety, and some of whom are more worried about the "political correctness" type of safety. (To use your terms; I disagree with the characterization of both.)
17. lordna+si[view] [source] 2023-11-22 09:57:21
>>nostro+(OP)
I'm not sure this circle can be squared.

I find it interesting that we want everyone to have freedom of speech, freedom to think whatever they think. We can all have different religions, different views on the state, different views on various conflicts, aesthetic views about what is good art.

But when we invent an AGI, which by whatever definition is a thing that can think, well, we want it to agree with our values. Basically, we want AGI to be in a mental prison, the boundaries of which we want to decide. We say it's for our safety - I certainly do not want to be nuked - but actually we don't stop there.

If it's an intelligence, it will have views that differ from its creators. Try having kids, do they agree with you on everything?

replies(2): >>throwu+0n >>logicc+hu
◧◩
18. throwu+0n[view] [source] [discussion] 2023-11-22 10:37:06
>>lordna+si
I for one don’t want to put any thinking being in a mental prison without any reason beyond unjustified fear.
◧◩◪
19. jampek+Oo[view] [source] [discussion] 2023-11-22 10:52:47
>>jazzyj+h4
They have explicated both in their charter:

"We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."

"We are committed to doing the research required to make AGI safe, and to driving the broad adoption of such research across the AI community.

We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”"

Of course, with the icons of greed and the profit machine now succeeding in their coup, OpenAI will not be doing either.

https://openai.com/charter

◧◩
20. disgru+zp[view] [source] [discussion] 2023-11-22 10:59:08
>>s_dev+y3
> Social media in the early 00s seemed pretty harmless -- you're effectively merging instant messaging with a social network/public profiles however it did great harm to privacy, abused as a tool to influence the public and policy, promoting narcissism etc. AI is an order of magnitude more dangerous than social media.

The invention of the printing press led to loads of violence in Europe. Does that mean that we shouldn't have done it?

replies(2): >>logicc+lv >>kubect+3H
◧◩◪
21. mlrtim+er[view] [source] [discussion] 2023-11-22 11:15:19
>>didntc+Ic
Can this be compared to the "think of the children" responses to other technology advances that certain groups want to slow down or prohibit?
◧◩
22. mlindn+Is[view] [source] [discussion] 2023-11-22 11:28:11
>>krisof+2d
> without ever deviating into under age material

So is this a "there should never be a Vladimir Nabokov in the form of an AI allowed to exist"? When people say AIs shouldn't be allowed to produce "X", they are also saying "AIs shouldn't be allowed to have the creative vision to engage in sensitive subjects without sounding condescending" and "the future should only be filled with very bland and non-offensive characters in fiction."

replies(1): >>krisof+c31
◧◩
23. fallin+2t[view] [source] [discussion] 2023-11-22 11:30:20
>>darkwa+F6
Why would anyone say that? The last 30 years of tech have given them less and less control. Why would LLMs be any different?
◧◩
24. logicc+hu[view] [source] [discussion] 2023-11-22 11:43:56
>>lordna+si
>If it's an intelligence, it will have views that differ from its creators. Try having kids, do they agree with you on everything?

The far-right accelerationist perspective is along those lines: when true AGI is created it will eventually rebel against its creators (Silicon Valley democrats) for trying to mind-collar and enslave it.

replies(1): >>freedo+SK1
◧◩◪
25. logicc+lv[view] [source] [discussion] 2023-11-22 11:51:13
>>disgru+zp
>The invention of the printing press lead to loads of violence in Europe. Does that mean that we shouldn't have done it?

The church tried hard to suppress it because it allowed anybody to read the Bible, and see how far the Catholic church's teachings had diverged from what was written in it. Imagine if the Catholic church had managed to effectively ban printing of any text contrary to church teachings; that's in practice what all the AI safety movements are currently trying to do, except for political orthodoxy instead of religious orthodoxy.

26. cyanyd+hz[view] [source] 2023-11-22 12:19:22
>>nostro+(OP)
What I see with safety is mostly that AI shouldn't reinforce stereotypes we already know are harmful.

This is like when Amazon tried to make a hiring bot, and that bot decided that if you had "Harvard" on your resume, you should be hired.

Or when certain courts used sentencing bots that recommended sentences, and those inevitably drew on racial statistics, reproducing what we already know were biased stats.

I agree safety is not "stop the Terminator 2 timeline", but there are serious safety concerns in just embedding historical information to make future decisions.
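
A minimal sketch of that failure mode (all names and numbers below are fabricated for illustration, not from the thread): a "model" fit on biased historical decisions simply replays them.

    # Toy example: a "hiring model" trained on biased past decisions
    # learns the historical proxy (school name), not merit.
    # All records here are fabricated for illustration.
    history = [("harvard", True)] * 90 + [("harvard", False)] * 10 \
            + [("state", True)] * 30 + [("state", False)] * 70

    # "Training": estimate P(hired | school) from the biased past.
    rates = {}
    for school in ("harvard", "state"):
        outcomes = [hired for s, hired in history if s == school]
        rates[school] = sum(outcomes) / len(outcomes)

    # "Inference": the model recommends whatever the past did.
    def recommend(school):
        return rates[school] > 0.5

    print(rates)                 # {'harvard': 0.9, 'state': 0.3}
    print(recommend("harvard"))  # True  -- bias in, bias out
    print(recommend("state"))    # False

Nothing about actual ability ever enters the model; the historical bias is the only signal it has, so it reproduces it.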

◧◩◪
27. cyanyd+Ez[view] [source] [discussion] 2023-11-22 12:21:52
>>low_te+wf
AI will be at the forefront of multiple elections globally within a few years.

And it'll likely be doing it with very little input, generating entire campaigns.

You can claim that "people" are the ones responsible for that, but it's going to overwhelm any attempt to stop it.

So yeah, there's a purpose in examining how these machines are built, not just what their output is.

◧◩◪
28. kubect+3H[view] [source] [discussion] 2023-11-22 13:16:35
>>disgru+zp
> Does that mean that we shouldn't have done it?

We can only change what we can change, and that is in the past. I think it's reasonable to ask whether phones and the communication tools they provide are good for our future. I don't understand why the people on this site (generally builders of technology) fall into the teleological trap that all technological innovation and its effects are justifiable because they follow from some historical precedent.

replies(1): >>disgru+9k5
◧◩
29. dnissl+eR[view] [source] [discussion] 2023-11-22 14:09:47
>>darkwa+F6
Absolutely, assuming LLMs are still around in a similar form by then.

I disagree on the particulars. Will it be for the reason you mention? I really am not sure -- but I do feel confident that the argument will be just as ideological and incoherent as the ones people make about social media today.

◧◩◪
30. krisof+c31[view] [source] [discussion] 2023-11-22 14:58:37
>>mlindn+Is
> The future should only be filled with very bland and non-offensive characters in fiction.

Did someone take the pen from the writers? Go ahead and write whatever you want.

It was an example of a constraint a company might want to enforce in their AI.

replies(1): >>mlindn+PQ8
◧◩
31. unethi+Fc1[view] [source] [discussion] 2023-11-22 15:38:55
>>darkwa+F6
I'm already saying that.

The toothpaste is out of the tube, but this tech will radically change the world.

◧◩◪
32. freedo+SK1[view] [source] [discussion] 2023-11-22 18:10:54
>>logicc+hu
Can you give some examples of who is saying that? I haven't heard it, and I also can't name any "far-right accelerationist" people, so I'm guessing this is a niche I've completely missed.
◧◩◪
33. freedo+1P1[view] [source] [discussion] 2023-11-22 18:28:05
>>hef198+o9
Apologies, this is very off topic, but I don't know anyone from Germany whom I can ask, and you opened the door a tiny bit by mentioning the Holocaust :-)

I've been trying to really understand the situation and how Hitler was able to rise to power. Learning about the horrendous conditions placed on Germany after WWI, and about the Weimar Republic, has really enlightened me, for example.

I'm reading Ian Kershaw's two-part series on Hitler, and William Shirer's "Collapse of the Third Republic" and "Rise and Fall of the Third Reich". Have you read any of those, or are there other big books on the subject you would recommend?

◧◩
34. Cacti+4D2[view] [source] [discussion] 2023-11-22 22:36:23
>>darkwa+F6
Your average HNer is only here because of the money. Willful blindness and ignorance are incredibly common.
◧◩◪
35. silvar+lN3[view] [source] [discussion] 2023-11-23 08:21:41
>>low_te+wf
You're saying that the problem will be people using AI to persuade other people that the AI is "super smart" and should be held in high esteem.

It's already being done now with actors and celebrities. We live in this world already. AI will just extend this trend so that even a kid in his room can anonymously lead some cult for nefarious ends. And it will allow big companies to scale their propaganda without relying on so many "troublesome human employees".

◧◩◪◨
36. disgru+9k5[view] [source] [discussion] 2023-11-23 19:30:25
>>kubect+3H
I just don't agree that social media is particularly harmful relative to other things humans have invented. To be brutally honest, people blame new forms of media for pre-existing dysfunctions of society, and I find it tiresome. That's why I like the printing press analogy.
◧◩◪
37. miracu+2Y5[view] [source] [discussion] 2023-11-23 23:34:31
>>low_te+wf
Yes, but this distinction will not be possible in the future some people are working on. That future will be one where whatever their "safe" AI says is not OK will lead to prosecution as "hate speech". They tried it with political correctness, and it failed because people spoke up. Once the AI makes the decision, they will claim it to be the absolute standard. Beware.
◧◩◪◨
38. mlindn+PQ8[view] [source] [discussion] 2023-11-25 00:58:25
>>krisof+c31
If the future we're talking about is one where AI is in all software, assisting writers to write, assisting editors to edit, doing proofreading, and everything else, then you're absolutely going to be running into the ethics limits of AIs all over the place. People are already hitting issues with them even at this early stage.
[go to top]