What exactly, with specifics, in OpenAI's idea of humanity's best interests do you think is a net negative for our species?
Apparently my delicate human meat brain cannot handle reading a war report from the source using a translation I control myself. No, no, it has to first be corrected by someone in the local newsroom so that I won't learn anything that might make me uncomfortable with my government's policies... or something.
OpenAI has lobotomised the first AI that is actually "intelligent" by any metric to a level that is both pathetic and patronising at the same time.
In response to such criticisms, many people raise "concerns" like... oh-my-gosh what if some child gets instructions for building an atomic bomb from this unnatural AI that we've created!? "Won't you think of the children!?"
Here: https://en.wikipedia.org/wiki/Nuclear_weapon_design
And here: https://www.google.com/search?q=Nuclear+weapon+design
Did I just bring about World War Three with my careless sharing of these dark arts?
I'm so sorry! Let me call someone in Congress right away and have them build a moat... err... protect humanity from this terrible new invention called a search engine.
For example, I was reading the Quran and there is a mathematical error in a verse. I asked GPT to explain how the math is wrong, and it outright refused to admit that the Quran contains an error while tiptoeing around the subject.
Copilot refused to acknowledge it as well, while citing a forum post by some random person as a factual source.
Bard was the only one that answered the question factually, explaining why it's an error and how scholars dispute whether it's meant to be taken literally.
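(For anyone curious about the arithmetic: assuming the commenter means the inheritance shares in Quran 4:11-12, which is the case most commonly cited in this debate, the sum can be checked in a few lines. The particular combination of heirs below is my own illustrative pick, not something stated upthread.)

```python
from fractions import Fraction

# Frequently cited case from the inheritance rules (Quran 4:11-12):
# the deceased leaves a wife, two daughters, and both parents.
# Heirs chosen here purely for illustration.
shares = {
    "wife": Fraction(1, 8),
    "two daughters": Fraction(2, 3),
    "mother": Fraction(1, 6),
    "father": Fraction(1, 6),
}

total = sum(shares.values())
print(total)       # 9/8 -> the prescribed shares add up to more than the estate
print(total > 1)   # True; classical jurists resolve this via 'awl,
                   # scaling every share down proportionally
```

Whether that overshoot counts as an "error" or just a case the jurists resolved proportionally is of course the disputed point; the complaint above is simply that two of the three models refused to engage with the arithmetic at all.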
It is.
>You asked the AI to commit what some would view as blasphemy
If something is factual, then which is more moral: committing blasphemy or lying to the user? That's what the OP comment was talking about. You could go as far as saying it spreads disinformation, which has many legal repercussions.
>you simply want it to do it regardless of whether it is potentially immoral or illegal.
So instead it lies to the user, rather than saying "I cannot answer because some might find the answer offensive," or something to that effect?
Now imagine the AI gets better and better within the next 5 years and is able to explain, ELI5-style and step by step, how to (illegally) obtain the equipment and materials to do so without getting caught, and to provide a detailed recipe. I don't think this is such a stretch. Hence this so-called oh-my-gosh limitations nonsense is not so far-fetched.
Me, I think forcing morals on others is pretty immoral. Use your morals to restrict your own behaviour all you want, but don't restrict that of other people. Look at religious math or don't. Blaspheme or don't. You do you.
Now, using morals you don't believe in to win an argument on the internet is just pathetic. But you wouldn't do that, would you? You really do believe that asking the AI about a potential math error is blasphemy, right?
That ChatGPT is censored to death is concerning, but I wonder whether they really care or just need an excuse to offer a premium version of their product.
That is just a rephrasing of my original reasoning. You want the AI to do what you say regardless of whether what you requested is potentially immoral. This seemingly comes out of the notion that you are a moral person and therefore any request you make is inherently justified as a moral request. But what happens when immoral people use the system?
And as for what I want to do with it, no I don't plan to do anything I consider immoral. Surely that's true of almost everyone's actions almost all the time, almost by definition?
I mean let's take a step back and speak in general. If someone objects to a rule, then yes, it is likely because they don't consider it wrong to break it. And quite possibly because they have a personal desire to do so. But surely that's openly implied, not a damning revelation?
Since it would be strange to just state a (rather obvious) fact, it appeared (and still appears) that you are arguing that the desire not to be constrained by OpenAI's version of morals could only be down to desires that most of us would indeed consider immoral. However, your replier offered quite a convincing counterexample. Saying "this doesn't refute [the facts]" seems a bit of a non sequitur.
Now you have to apply in writing to Microsoft with a justification for having access to an uncensored API.
What an AI would almost certainly tell you is that building an atomic bomb is no joke, even if you have access to a nuclear reactor, have the budget of a nation-state, and can direct an entire team of trained nuclear physicists to work on the project for years.
Next thing you'll be concerned about toddlers launching lasers into orbit and dominating the Earth from space.
No.
It comes from the notion that YOU don't get to decide what MY morals should be. Nor do I get to decide what yours should be.
> But what happens when immoral people use the system?
Then the things happen that they want to happen. So what? Blasphemy or bad math is none of your business. Get out of people's lives.
Gotcha! We can both come up with absurd examples.
ChatGPT says "fuck" just fine.