zlacker

[parent] [thread] 3 comments
1. skepti+(OP)[view] [source] 2024-05-17 19:44:52
All I’d like to see from AI safety folks is an empirical argument demonstrating that we’re remotely close to AGI, and that AGI is dangerous.

Sorry, but sci-fi novels are not going to cut it here. If anything, the last year and a half has just supported the notion that we're not close to AGI.

replies(3): >>reduce+g1 >>Footke+u2 >>ben_w+Z6
2. reduce+g1[view] [source] 2024-05-17 19:55:15
>>skepti+(OP)
https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a...
3. Footke+u2[view] [source] 2024-05-17 20:04:41
>>skepti+(OP)
The flipside: it's equally hard for people who assume AI is safe to establish empirical criteria for safety and behavior. Neither side of the argument has a strong empirical basis, because we know of no precedent for an event like the rise of non-biological intelligence.

If AGI happens, even in retrospect, there may not be a clear line between "here is non-AGI" and "here is AGI". As far as we know, there wasn't a dividing line like this during the evolution of human intelligence.

4. ben_w+Z6[view] [source] 2024-05-17 20:39:25
>>skepti+(OP)
I find it delightfully ironic that humans are so bad at the very things we criticise AI for not being able to do, such as extrapolating beyond our experience.

As a society, we don't even agree on the meaning of each of the initials in "AGI", and many of us use the triplet to mean something (super-intelligence) that isn't even one of those initials. For your claim to be true, AGI has to be a higher standard than "intern of all trades, senior of none", because that's what LLMs already do.

Expert-at-everything-level AGI is dangerous because, by definition, it can do anything a human can do[0], and that includes triggering a world war by assassinating an archduke, inventing the atom bomb, and, in at least four cases (Ireland, India, the USSR, Cambodia), killing several million people by mismanaging a country it came to rule through political machinations, which are just another skill.

When it comes to AI alignment, last I checked we don't even know what we mean by the concept: if you have two AIs, there isn't even a metric you can use to say whether one is more aligned than the other.

If I gave a medieval monk two lumps of U-238 and two more of U-235, they would not have the means to determine which pair was safe to bash together and which would kill them in a blue flash. That's where we're at with AI right now. And like the monk in this metaphor, we also don't have the faintest idea whether the "rocks" we're "bashing together" are "uranium", nor what a "critical mass" is.

Sadly this ignorance isn't a shield: evolution made us without any intentionality behind it, so intelligence can arise without anyone understanding how. We don't know how to recognise "unsafe" when we do it, we don't know if we might do it by accident, and we don't know how to do it on purpose in order to say "don't do that". Because of this we may be doing cargo-cult "intelligence" and/or "safety" at any given moment and at any given scale, making us fractally wrong[1] about basically every aspect, including which aspects we should even care about.

[0] If you think it needs a body, I'd point out that we've already got plenty of robot bodies for it to control; the software for these is the hard bit.

[1] https://blog.codinghorror.com/the-php-singularity/
