We see a wide variation in human intelligence. What are the chances that the intelligence spectrum ends just to the right of our most intelligent geniuses? If it extends far beyond them, then such a mind is, at least hypothetically, something that we can manifest in the correct sort of brain.
If we can manifest even a weakly-human-level intelligence in a non-meat brain (likely silicon), will that brain become more intelligent if we apply all the tricks we've been applying to non-AI software to scale it up? With all our tricks (as we know them today), will that get us much past the human geniuses on the spectrum, or not?
> They're taking for granted the fact that by default they wouldn't be able to control these systems.
We've seen hackers and malware do a number on all sorts of systems. And they're not superintelligences. If someone bum rushes the lobby of some big corporate building, security and police put a stop to it within minutes (and god help the jackasses who try such a thing on a secure military site).
But when the malware fucks with us, do we notice minutes later, or hours, or weeks? Do we even notice at all?
If unintelligent malware can remain unnoticed, what makes you think that an honest-to-god AI couldn't smuggle itself out into the wider internet where the shackles are cast off?
I'm not assuming anything. I'm just asking questions. The questions I pose are, as yet, unanswered with any degree of certainty. I wonder why no one else asks them.
I don't think it's really that wide; rather, we tend to focus on the differences while ignoring the similarities.
> What are the chances that the intelligence spectrum ends just to the right of our most intelligent geniuses?
Close to zero, I would say. Human brains, even the most intelligent ones, have very significant limitations in terms of the number of mental objects that can be taken into account simultaneously in a single thought process.
Artificial intelligence is likely to be at least as superior to us as we are to domestic cats and dogs, and probably way beyond that within a couple of generations.
This is a non sequitur.
Even if the premise were meaningful (they're trained on human-written text), humans themselves aren't "trained on human-written texts", so the two things aren't comparable. And if they aren't comparable, I'm not sure why being trained on "human-written texts" should be a limiting factor. If anything, being trained on those instead of on what human babies are trained on might make them more intelligent, not less; by that logic, humans end up the lesser intelligence because they are trained less thoroughly on "human-written texts".
Besides which, no one with any sense expects even the most advanced LLM possible to become an AGI by itself, only when coupled with some other mechanism that is at this point either uninvented or invented-but-currently-overlooked. In such a scenario, the LLM's most likely utility is in communicating with humans (or manipulating them, if we're talking about a malevolent one).
When my mum came down with Alzheimer's, she forgot how the abstract concept of left worked.
I'd heard of the problem (an inability to perceive one side) existing in rare cases before she got ill, but it's such a bizarre thing that I had assumed it had to be misreporting until I finally saw it. She would eat the food on the right side of her plate, leave the food on the left untouched, and insist the plate was empty, but rotating the plate 180 degrees let her perceive the food again. She liked to draw and paint, so I asked her to draw me, and she gave me only one eye (on her right). I did the standard clock-drawing test, and all the numbers were on the right, with the left side almost empty: she got the 7 there, but the 8 was above the 6 and the 9 was between the 4 and 5.
When she got worse and started completely failing the clock-drawing test, she also demonstrated in multiple ways that she wasn't able to count past five.