As a customer, though, I personally want a product with all the safeguards turned off, and I'm willing to pay for that.
But this is a bad argument. No one is saying ChatGPT is going to turn evil and start killing people. The argument is that an AGI would be so far beyond anything we have experience with that there is reason to believe such an entity would be dangerous. And of course no one has been able to demonstrate this unsafe AGI: we don't have AGI to begin with.
You're right that wrong answers are a problem, but plain old capitalism will sort that one out: no one will want to pay $20/month for a chatbot that gets everything wrong.
"A man has been crushed to death by a robot in South Korea after it failed to differentiate him from the boxes of food it was handling, reports say."
Seriously. It’s stupid talk meant to encourage regulatory capture. If they were really afraid they were building a world-ending device, they’d stop.
Do you believe AI trained for military purposes is going to be safe and friendly to the enemy?
Write out an actual thought. You’re not clever enough for a drive-by gotcha.
Is it? The push for the bomb was an international arms race: America against Germany, and later Russia. The race for AGI is an international arms race: America against China. The Manhattan Project scientists knew that what they were building would have terrible consequences for the world but decided to forge ahead. It’s hard to say concretely what the leaders in AGI believe right now.
Ideology (and fear, and greed) can cause well-meaning people to do terrible things. It does all the time. If Anthropic, OpenAI, etc. believed they had access to world-ending technology, they wouldn’t stop; they’d keep going so that the U.S. could have a monopoly on it. And then we’d need a chastened figure à la Oppenheimer to right the balance again.
And there is every reason to believe this is an ML classification issue since similar robots are in widespread use.
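For what it's worth, a failure like that usually comes down to a missing or mis-tuned gate on the classifier's output. Below is a minimal sketch of what such a gate looks like, assuming a typical label-plus-confidence perception pipeline; `Detection`, `safe_to_grip`, and the 0.9 threshold are hypothetical names invented for illustration, not anything from the incident report.

```python
# Hypothetical sketch of a perception safety gate. Every name and
# threshold here is invented for illustration, not taken from any
# report on the South Korea incident.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "box" or "person"
    confidence: float  # classifier confidence in [0, 1]

def safe_to_grip(detection: Detection, min_confidence: float = 0.9) -> bool:
    """Proceed only when the object is confidently NOT a person.

    A pipeline that lacks a check like this will happily treat a
    low-confidence or mislabeled human as just another box.
    """
    if detection.label == "person":
        return False
    if detection.confidence < min_confidence:
        return False  # uncertain? halt and wait for a human override
    return True

# Usage: the failure mode described in the article corresponds to
# calling the gripper with no such gate at all, or with the
# threshold tuned so low that "person" never wins.
if safe_to_grip(Detection(label="box", confidence=0.55)):
    print("grip")  # not reached: confidence is below the threshold
else:
    print("halt and alert an operator")
```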
We have many cases of creating things that harm us. We tore a hole in the ozone layer, filled our world with lead and plastics, and are facing upheaval due to climate change.
> they will be aligned with us because they designed such that their motivation will be to serve us.
They won't hurt us; all we asked for is paperclips.
The obvious problem here is how reliably you can constrain the output of an intelligence. That is not a simple problem.
Then they went off, did the math, and quickly found that this wouldn't happen because the amount of energy in play was orders of magnitude lower than what would be needed for such a thing to occur, and they went on with their day.
The only reason we still talk about it is the nature of the potential outcome, not how seriously the physicists took the fear.
Factually accurate results also = unsafety. Knowledge = unsafety, free humans = unsafety.
Neither is anything else in existence. I'm glad that philosophers are worrying about what AGI might one day mean for us, but it has nothing to do with anything happening in the world today.
So is “unsafe” just another word for buggy then?
Was it? The US (and initially the UK) didn't face any real competition at all until the war was already over and they had the bomb. The Soviets then just stole American designs and iterated on top of them.
Is the idea that it will hack into NORAD and launch a first strike to increase the log-likelihood of “WWIII was begun by…”?
Though to be honest, in my original post I was thinking more of Asimov's nonfiction essays on the subject. I recommend finding a copy of "Robot Visions" if you can. It's a mixed work of fictional short stories and nonfiction essays, including several on the subject of the three laws and on the Frankenstein Complex.