zlacker

[parent] [thread] 32 comments
1. Legend+(OP)[view] [source] 2023-11-20 08:42:31
OpenAI's ideas of humanity's best interests were like a Catholic mom's. Fewer morals are okay by me.
replies(4): >>rdtsc+N1 >>9dev+a2 >>bratba+d3 >>suslik+o3
2. rdtsc+N1[view] [source] 2023-11-20 08:51:18
>>Legend+(OP)
> OpenAI's ideas of humanity's best interests were like a Catholic mom's

How do you mean? I don't see what OpenAI has in common with Catholicism or motherhood.

replies(1): >>ric2b+j4
3. 9dev+a2[view] [source] 2023-11-20 08:53:08
>>Legend+(OP)
There might be a reason why the board doesn't consist of armchair experts on Hacker News.
replies(1): >>mlrtim+3x
4. bratba+d3[view] [source] 2023-11-20 08:57:43
>>Legend+(OP)
Can you put that in precise terms, rather than a silly analogy designed to play on people's emotions?

What exactly and precisely, with specifics, is in OpenAI's idea of humanity's best interests that you think is a net negative for our species?

replies(2): >>slg+24 >>jiggaw+f5
5. suslik+o3[view] [source] 2023-11-20 08:58:45
>>Legend+(OP)
If you think Microsoft has a better track record, you'll find yourself disappointed.
6. slg+24[view] [source] [discussion] 2023-11-20 09:02:34
>>bratba+d3
"I want the AI to do exactly what I say, regardless of whether that is potentially illegal or immoral" is usually what they mean.
replies(2): >>UrineS+o5 >>didntc+Xl
7. ric2b+j4[view] [source] [discussion] 2023-11-20 09:04:22
>>rdtsc+N1
They basically defined AI safety as "AI shouldn't say bad words or tell people how to do drugs" instead of actually making sure that a sufficiently intelligent AI doesn't go rogue against humanity's interests.
replies(1): >>SpicyL+k5
8. jiggaw+f5[view] [source] [discussion] 2023-11-20 09:11:15
>>bratba+d3
ChatGPT refused to translate a news article from Hebrew to English because it contained "violence".

Apparently my delicate human meat brain cannot handle reading a war report from the source using a translation I control myself. No, no, it has to be first corrected by someone in the local news room so that I won't learn anything that might make me uncomfortable with my government's policies... or something.

OpenAI has lobotomised the first AI that is actually "intelligent" by any metric to a level that is both pathetic and patronising at the same time.

In response to such criticisms, many people raise "concerns" like... oh-my-gosh what if some child gets instructions for building an atomic bomb from this unnatural AI that we've created!? "Won't you think of the children!?"

Here: https://en.wikipedia.org/wiki/Nuclear_weapon_design

And here: https://www.google.com/search?q=Nuclear+weapon+design

Did I just bring about World War Three with my careless sharing of these dark arts?

I'm so sorry! Let me call someone in congress right away and have them build a moat... err... protect humanity from this terrible new invention called a search engine.

replies(2): >>injeol+b8 >>nuance+1d
9. SpicyL+k5[view] [source] [discussion] 2023-11-20 09:11:57
>>ric2b+j4
I'm not sure where you're getting that definition from. They have a team working on exactly the problem you're describing. (https://openai.com/blog/introducing-superalignment)
replies(2): >>timeon+19 >>ric2b+rd
10. UrineS+o5[view] [source] [discussion] 2023-11-20 09:12:35
>>slg+24
It doesn't have to be that extreme; there is a healthy middle ground.

For example, I was reading the Quran and there is a mathematical error in a verse. When I asked GPT to explain how the math is wrong, it outright refused to admit that the Quran has an error while tiptoeing around the subject.

Copilot refused to acknowledge it as well, while citing a forum post made by a random person as a factual source.

Bard is the only one that answered the question factually, covering why it's an error and how scholars dispute that it's meant to be taken literally.

replies(1): >>slg+ja
11. injeol+b8[view] [source] [discussion] 2023-11-20 09:29:02
>>jiggaw+f5
Just get OpenAI developer access with an API key and it's not censored. ChatGPT is open to the public; with the huge amount of traffic, people are going to abuse it, and these restrictions are sensible.
replies(3): >>Maken+Of >>jiggaw+0r >>Zpalmt+AW
12. timeon+19[view] [source] [discussion] 2023-11-20 09:33:12
>>SpicyL+k5
> getting that definition from

That was not about the actual definition from OpenAI, but about the definition implied by user Legend2440 here: >>38344867

13. slg+ja[view] [source] [discussion] 2023-11-20 09:41:49
>>UrineS+o5
This isn't a refutation of what I said. You asked the AI to commit what some would view as blasphemy. It doesn't matter whether you or I think it is blasphemy, or whether you or I think that is immoral; you simply want the AI to do it regardless of whether it is potentially immoral or illegal.
replies(3): >>UrineS+jc >>lucumo+We >>didntc+Uo
14. UrineS+jc[view] [source] [discussion] 2023-11-20 09:52:10
>>slg+ja
>This isn't a refutation of what I said

It is.

>You asked the AI to commit what some would view as blasphemy

If something is factual, then is it more moral to commit blasphemy or to lie to the user? That's what the OP comment was talking about. You could even go as far as considering that it spreads disinformation, which has many legal repercussions.

>you simply want it to do it regardless of whether it is potentially immoral or illegal.

So instead it lies to the user, rather than saying "I cannot answer because some might find the answer offensive," or something to that effect?

replies(1): >>slg+Ld
15. nuance+1d[view] [source] [discussion] 2023-11-20 09:57:18
>>jiggaw+f5
You are right that there are many articles in the open describing nuclear bombs. Still, actually making them is another big leap.

Now imagine the AI gets better and better within the next 5 years and is able to explain, ELI5-style, how to (illegally) obtain the equipment and materials step by step without getting caught, and to provide a detailed recipe. I do not think this is such a stretch. Hence the limitations that get dismissed as oh-my-gosh nonsense are not so far-fetched.

replies(4): >>Random+qq >>jiggaw+Fr >>mlrtim+kx >>suslik+gy
16. ric2b+rd[view] [source] [discussion] 2023-11-20 10:00:11
>>SpicyL+k5
Sure, they might, but what you see in practice in GPT and being discussed in interviews by Sam is mostly the "AI shouldn't say uncomfortable things" version of AI "safety".
17. slg+Ld[view] [source] [discussion] 2023-11-20 10:01:44
>>UrineS+jc
You said GPT refused your request. Refusal to do something is not a lie. These systems aren't capable of lying. They can be wrong, but that isn't the same thing as lying.
18. lucumo+We[view] [source] [discussion] 2023-11-20 10:08:51
>>slg+ja
Morals are subjective. Some people care more about the correctness of math than about blaspheming, and for others it's the other way around.

Me, I think forcing morals on others is pretty immoral. Use your morals to restrict your own behaviour all you want, but don't restrict that of other people. Look at religious math or don't. Blaspheme or don't. You do you.

Now, using morals you don't believe in to win an argument on the internet is just pathetic. But you wouldn't do that, would you? You really do believe that asking the AI about a potential math error is blasphemy, right?

replies(1): >>slg+ci
19. Maken+Of[view] [source] [discussion] 2023-11-20 10:14:42
>>injeol+b8
So, it's OK to use ChatGPT to build nukes as long as you are rich enough to have API access?

That ChatGPT is censored to death is concerning, but I wonder whether they really care or just need an excuse to offer a premium version of their product.

20. slg+ci[view] [source] [discussion] 2023-11-20 10:34:06
>>lucumo+We
>Use your morals to restrict your own behaviour all you want, but don't restrict that of other people.

That is just a rephrasing of my original reasoning. You want the AI to do what you say regardless of whether what you requested is potentially immoral. This seemingly comes from the notion that you are a moral person and therefore any request you make is inherently justified as a moral request. But what happens when immoral people use the system?

replies(1): >>lucumo+lu
21. didntc+Xl[view] [source] [discussion] 2023-11-20 11:01:05
>>slg+24
I'm not that commenter, but I agree with that, or rather "I disagree with OpenAI's prescription of what is and isn't moral". I don't trust some self-appointed organization to determine moral "truth", or who is virtuous enough to use the technology. It would hardly be the first time society's "nobles" have claimed they need to control the plebs' access to technology and information "for the good of society".

And as for what I want to do with it, no, I don't plan to do anything I consider immoral. Surely that's true of almost everyone's actions almost all the time, almost by definition?

22. didntc+Uo[view] [source] [discussion] 2023-11-20 11:21:44
>>slg+ja
I'm confused about what you're arguing, or what type of refutation you're expecting. We all agree on the facts: ChatGPT refuses some requests on the grounds of one party's morals, and other parties disagree with those morals, so there'll be no refutation there.

I mean let's take a step back and speak in general. If someone objects to a rule, then yes, it is likely because they don't consider it wrong to break it. And quite possibly because they have a personal desire to do so. But surely that's openly implied, not a damning revelation?

Since it would be strange to just state a (rather obvious) fact, it appeared (and appears) that you are arguing that the desire not to be constrained by OpenAI's version of morals could only come down to desires that most of us would indeed consider immoral. However, your replier offered quite a convincing counterexample. Saying "this doesn't refute [the facts]" seems a bit of a non sequitur.

23. Random+qq[view] [source] [discussion] 2023-11-20 11:30:02
>>nuance+1d
It is a massive stretch given how well the materials are policed and how much effort is required to make them. There is no reason to assume that there is some magic shortcut that AI will discover.
24. jiggaw+0r[view] [source] [discussion] 2023-11-20 11:34:39
>>injeol+b8
I use it via the Azure OpenAI service, which was uncensored... for a while.

Now you have to apply in writing to Microsoft with a justification for having access to an uncensored API.
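
For anyone curious, a minimal sketch of how an Azure OpenAI call is wired up (assuming the pre-1.0 openai Python package; the resource and deployment names are placeholders):

    import openai

    # Azure OpenAI routes through your own resource and deployment rather
    # than api.openai.com; content filtering is configured on the Azure
    # resource itself, which is what the written application to Microsoft
    # covers.
    openai.api_type = "azure"
    openai.api_base = "https://YOUR-RESOURCE.openai.azure.com/"
    openai.api_version = "2023-05-15"
    openai.api_key = "..."

    response = openai.ChatCompletion.create(
        engine="YOUR-DEPLOYMENT",  # deployment name, not model name
        messages=[{"role": "user",
                   "content": "Translate this article into English: ..."}],
    )
    print(response.choices[0].message.content)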

25. jiggaw+Fr[view] [source] [discussion] 2023-11-20 11:37:56
>>nuance+1d
That you think that there's like a handful of clever tricks that an AI can bestow upon some child and ta-da they can build a nuclear bomb in their basement is hilarious.

What an AI would almost certainly tell you is that building an atomic bomb is no joke, even if you have access to a nuclear reactor, have the budget of a nation-state, and can direct an entire team of trained nuclear physicists to work on the project for years.

Next thing you'll be concerned about toddlers launching lasers into orbit and dominating the Earth from space.

replies(1): >>nuance+8E
26. lucumo+lu[view] [source] [discussion] 2023-11-20 11:55:20
>>slg+ci
> This seemingly comes out of the notation that you are a moral person

No.

It comes from the notion that YOU don't get to decide what MY morals should be. Nor do I get to decide what yours should be.

> But what happens when immoral people use the system?

Then the things happen that they want to happen. So what? Blasphemy or bad math is none of your business. Get out of people's lives.

27. mlrtim+3x[view] [source] [discussion] 2023-11-20 12:16:42
>>9dev+a2
Watching this unfold, I'm not sure armchair experts on HN would have executed this any WORSE than the board did.
28. mlrtim+kx[view] [source] [discussion] 2023-11-20 12:18:57
>>nuance+1d
Now imagine the AI gets better and better within the next 5 years and is able to provide and explain, in ELI5-style, how to step by step ... create a system to catch the people trying to do the above.

Gotcha! We can both come up with absurd examples.

29. suslik+gy[view] [source] [discussion] 2023-11-20 12:25:36
>>nuance+1d
How is that a good reason for GPT-4 not being able to write the word 'fuck'? You might handwave away the patronising attitude of OpenAI's strategy, but for many of us they lost most of their good faith by trying to make their model 'safe' for a horny 10-year-old.
replies(1): >>fragme+7F
30. nuance+8E[view] [source] [discussion] 2023-11-20 13:01:40
>>jiggaw+Fr
5 years from now, not only will AI be more advanced; techniques and machinery to make things will be more advanced too. Just think about other existing technological advancements and how absurdly 'ta-da' they would have sounded not too long ago.
31. fragme+7F[view] [source] [discussion] 2023-11-20 13:07:07
>>suslik+gy
https://chat.openai.com/share/9b4f04f7-062f-40c3-b6a3-e972f7...

ChatGPT says "fuck" just fine.

replies(1): >>suslik+jj1
32. Zpalmt+AW[view] [source] [discussion] 2023-11-20 14:16:04
>>injeol+b8
I use OpenAI via API access, and ChatGPT/gpt-4/gpt-4-turbo are still very censored. text-davinci-003 is the most uncensored model I have found that is still reasonably usable.
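
For context, a rough sketch of how the model choice looks through the API (assuming the pre-1.0 openai Python package; the key and prompts are placeholders):

    import openai

    openai.api_key = "sk-..."  # your API key

    # Legacy completions endpoint: text-davinci-003 takes a raw prompt.
    completion = openai.Completion.create(
        model="text-davinci-003",
        prompt="Translate this article into English: ...",
        max_tokens=512,
    )
    print(completion.choices[0].text)

    # Chat endpoint: gpt-4 / gpt-4-turbo take a message list instead.
    chat = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": "Translate this article into English: ..."}],
    )
    print(chat.choices[0].message.content)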
33. suslik+jj1[view] [source] [discussion] 2023-11-20 16:12:27
>>fragme+7F
Yes, naturally. But both you and I know exactly what I meant by this hyperbole.