It’s a bit tragic that Ilya and company apparently achieved the exact opposite of what they intended, driving the people they tried to slow down into the arms of those with more money and fewer scruples. Well.
What exactly, with specifics, is in OpenAI's idea of humanity's best interests that you think is a net negative for our species?
For example, I was reading the Quran and there is a mathematical error in a verse. I asked GPT to explain how the math is wrong, and it outright refused to admit that the Quran contains an error, tiptoeing around the subject instead.
Copilot refused to acknowledge it as well, while citing a forum post by a random person as a factual source.
Bard was the only one that answered the question factually, explaining why it's an error and how scholars dispute whether it's meant to be taken literally.
It is.
>You asked the AI to commit what some would view as blasphemy
If something is factual, then is it more moral to commit blasphemy or to lie to the user? That's what the OP comment was talking about. You could even go as far as considering it disinformation, which can carry legal repercussions.
>you simply want it to do it regardless of whether it is potentially immoral or illegal.
So instead it lies to the user, rather than saying "I cannot answer because some might find the answer offensive," or something to that effect?