Would I be correct to assume that superintelligence might have a negative effect on your earning potential in the future?
When I originally formed my response to this, it was one of the reasons I came up with. But now that I see through most of the BS, I am OK...
Not that I think Super Intelligence can be aligned anyway.
Point is, whether they are right or wrong, I believe they genuinely think this to be an issue.
- they want to make benchmarking easier by using AI systems
- they want to automate red-teaming and safety-checking ("problematic behavior" i.e. cursing at customers)
- they want to automate the understanding of model outputs ("interpretability")
Notice how absolutely none of these things require "superintelligence" to exist to be useful? They're all just bog-standard Good Things that you'd want for any class of automated system, e.g. a great customer service bot.
The superintelligence meme is tiring but we're getting cool things out of it I guess...
They're taking for granted the fact that they'll create AI systems much smarter than humans.
They're taking for granted the fact that by default they wouldn't be able to control these systems.
They're saying the solution will be creating a new, separate team.
That feels weird, organizationally. Of all the unknowns about creating "much smarter than human" systems, safety seems like one that you might have to bake in through and through. Not spin off to the side with a separate team.
There's also some minor vibes of "lol creating superintelligence is super dangerous but hey it might as well be us that does it idk look how smart we are!" Or "we're taking the risks so seriously that we're gonna do it anyway."
It's uncool to look like an alarmist nut, but sometimes there's no socially acceptable alarm and the risks are real: https://intelligence.org/2017/10/13/fire-alarm/
It's worth looking at the underlying arguments earnestly; you can approach them with initial skepticism, but I was persuaded. Alignment has also been something MIRI and others have been worried about since as early as 2007 (maybe earlier?), so it's also a case of a called shot, not a recent reaction to hype/new LLM capability.
Others have also changed their mind when they looked, for example:
- https://twitter.com/repligate/status/1676507258954416128?s=2...
- Longer form: https://www.lesswrong.com/posts/kAmgdEjq2eYQkB5PP/douglas-ho...
For a longer podcast introduction to the ideas: https://www.samharris.org/podcasts/making-sense-episodes/116...
But they also wanted to get some positive PR for it hence the announcement. As a bonus, they also wanted to blow their own trumpet and brag that they are creating some sort of a superweapon (which is false). So a lot of hot air there.
FSD, when it starts working (there is no if, IMHO), will be a pretty significant but minor milestone in comparison.
Most people aren't particularly good drivers. Indeed the vast majority of lethal accidents (the statistics are quite brutal for this) are caused by people driving poorly and could be close to 100% preventable with a properly engineered FSD system.
Something that drives better on average than a human driver is not that ambitious of a goal, honestly. That's why you can already book self driving taxis in a small but growing number of places in the US and China (which isn't waiting for the US to figure this out) and probably soon a few other places. Scaling that up takes time. Most of the remaining issues are increasingly of a legislative nature.
Safety is important of course. Stopping humans from killing each other using cars will be a major improvement over the status quo. It's one of the major causes of death in many countries. Insurers will drive the transition once they figure out they can charge people more if they still choose to drive themselves. That's not going to take 20 years. Once there is a choice, the liability lawsuits over human-caused traffic deaths are not going to be pretty.
And in other fields, being alarmist has paid off too, with few consequences for bad predictions -- how many times have we heard that there will be huge climate disasters ending humanity, the extinction of bees, mass starvation, etc. (not to diminish the dangers of climate change, which is obviously very real)? I think alarmism is generally rewarded, at least in media.
At least in my case, when this stuff just came out and I didn't really understand it...
Most of the AI concern that's high-status to believe has been the bias, misinformation, safety stuff. Until very recently, talk about x-risk was dismissed and mocked without really engaging with the underlying arguments. That may be changing now, but on net I still mostly see people mocked and dismissed for it.
The set of people alarmed by AGI x-risk is also pretty different from the set alarmed about a lot of these other issues that aren't really x-risks (though they still might have bad outcomes). At least EY, Bostrom and Toby Ord are not worried about all these other things to nearly the same extent - the extinction risk of unaligned AGI is different in severity.
I'm gonna take issue with this. A properly engineered FSD system will refuse to proceed into a dangerous situation where a human driver will often push their luck. Would a full self driving car just... decline to drive you somewhere if the conditions were unsafe? Would this be acceptable to customers? Similar story for driving over the speed limit.
They're taking for granted that superintelligence is achievable within the next decade (regardless of who achieves it).
>They're taking for granted the fact that by default they wouldn't be able to control these systems.
That's reasonable though. You wouldn't need guardrails on anything if manufacturers built everything to spec without error, and users used everything 100% perfectly.
But you can't make those presumptions in the real world. You can't just say "make a good hacksaw and people won't cut their arm off". And you can't presume the people tasked with making a mechanically desirable and marketable hacksaw are also proficient in creating a safe one.
>They're saying the solution will be creating a new, separate team.
The team isn't the solution. The solution may be borne of that team.
>There's also some minor vibes of [...] "we're taking the risks so seriously that we're gonna do it anyway."
The alternative is to throw the baby out with the bathwater.
The goal here is to keep the useful bits of AGI and protect against the dangerous bits.
Actually holding an x-risk belief is still a fringe position, most people still laugh it off.
That said, the Overton Window is moving. The Time piece from Yudkowsky was something of a milestone (even if it was widely ridiculed).
My take is that every advancement in these highly complex and expensive fields is dependent on our ability to maintain global social, political, and economic stability.
This insistence on the importance of Super-Intelligence and AGI as the path to Paradise or Hell is one of the many brain-worms going around that have this "Revelation" structure that makes pragmatic discussions very difficult, and in turn actually makes it harder to maintain social, political, and economic stability.
2. Hundreds of thousands of years on earth and we can't even align ourselves.
3. SuperIntelligence would be by definition unpredictable. If we could predict its answers to our problems, it wouldn't be necessary. You can't control what you can't predict.
If it's achieved by someone else why should we assume that the other person or group will give a damn about anything done by this team?
What influence would this team have on other organizations, especially if you put your dystopia-flavored speculation hat on and imagine a more rogue group...
This team is only relevant to OpenAI and OpenAI-affiliated work and in that case, yes, it's weird to write some marketing press release copy that treats one hard thing as a fait accompli while hyping up how hard this other particular slice of the problem is.
This take seems to lack nuance.
If there is a 10% chance of extinction conditional on AGI (many would say way higher), and most outcomes are happy, then it is absolutely worth investing in mitigation.
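As a rough illustration of the expected-value logic here (a minimal sketch with made-up numbers, not anyone's actual estimates; `p_doom`, `value_at_stake`, and `mitigation_effect` are pure assumptions):

```python
# Back-of-envelope expected-value sketch; every number here is illustrative.
p_doom = 0.10            # the 10% conditional extinction chance from the comment
value_at_stake = 1.0     # normalize "everything we care about" to 1.0
mitigation_effect = 0.5  # assume mitigation work halves the risk (pure assumption)

expected_loss_without = p_doom * value_at_stake                         # 0.10
expected_loss_with = p_doom * (1 - mitigation_effect) * value_at_stake  # 0.05

# Mitigation is worth funding as long as it costs less than the ~0.05 of
# "everything" it saves in expectation -- a very hard bar to fail.
print(expected_loss_without - expected_loss_with)  # 0.05
```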
Obviously they are bullish on AGI in general, that is the founding hypothesis of their company. The entire venture is a bet that AGI is achievable soon.
Also obviously they think the upside is huge too. It’s possible to have a coherent world model in which you choose to do a risky thing that has huge upside. (Though, there are good arguments for slowing down until you are confident you are not going to destroy the world. Altman’s take is that AGI is coming anyway, better to get a slow takeoff started sooner rather than having a fast takeoff later.)
You can't assume that. But that doesn't mean some 3rd party wouldn't be interested in utilizing that research anyway.
There's more mumbo-jumbo in thinking human intelligence has some secret sauce that can't be replicated by a computer.
It could be theoretically possible to build an AGI smarter than a human, but is it really plausible if it turns out to need a data center the size of the Hadron Collider and the energy of a small country to maintain itself?
It could be that the only architecture we can find that is equal to the task (and can feasibly be produced) is the human brain, and that the hard part of making super-intelligence is instead bootstrapping that human brain and training it to be more intelligent.
Maybe the best way to solve the "alignment problem", and other issues of creating super-intelligence, is to solve the problem of how best to raise and educate intelligent and well-adjusted humans?
Where is your evidence that we're approaching human level AGI, let alone SuperIntelligence? Because ChatGPT can (sometimes) approximate sophisticated conversation and deep knowledge?
How about some evidence that ChatGPT isn't even close? Just clone and run OpenAI's own evals repo https://github.com/openai/evals on the GPT-4 API.
It performs terribly on novel logic puzzles and exercises that a clever child could learn to do in an afternoon (there are some good chess evals, and I submitted one asking it to simulate a Forth machine).
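If you want to poke at this yourself without the full evals harness, something like the following minimal sketch works (assuming the pre-1.0 `openai` Python client and GPT-4 API access; the model name, puzzle, and key handling are placeholders):

```python
# Hypothetical one-off probe; for systematic results, register puzzles as
# evals in the openai/evals repo and run its harness instead.
import openai

openai.api_key = "sk-..."  # your API key

puzzle = (
    "Alice, Bob and Carol each own exactly one pet: a cat, a dog or a fish. "
    "Alice is allergic to fur, and Bob's pet barks. Who owns the fish?"
)

resp = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": puzzle}],
    temperature=0,  # deterministic-ish answers make comparisons easier
)
print(resp["choices"][0]["message"]["content"])
```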
I think reasonable, rational people can disagree on this issue. But it's nonsense to claim that the people on the other side of the argument from you are engaging in "supernatural mumbo-jumbo," unless there is rigorous proof that your side is correct.
But nobody has that. We don't even understand how GPT is able to do some of the things it does.
No. It’s not taken seriously because it’s fundamentally unserious. It’s religion. Sometime in the near future this all powerful being will kill us all by somehow grabbing all power over the physical world by being so clever to trick us until it is too late. This is literally the plot to a B-movie. Not only is there no evidence for this even existing in the near future, there’s no theoretical understanding how one would even do this, nor why someone would even hook it up to all these physical systems. I guess we’re supposed to just take it on faith that this Forbin Project is going to just spontaneously hack its way into every system without anyone noticing.
It’s bullshit. It’s pure bullshit funded and spread by the very people that do not want us to worry about real implications of real systems today. Care not about your racist algorithms! For someday soon, a giant squid robot will turn you into a giant inefficient battery in a VR world, or maybe just kill you and wear your flesh to lure more humans to their violent deaths!
Anyone that takes this seriously, is the exact same type of rube that fell for apocalyptic cults for millennia.
Why think 'intelligence' is somehow different?
A) We are developing AI right now and it is getting better.
B) We do not know exactly how these things work, because most of them are black boxes.
C) We do not know how to stop it if something goes wrong.
The above 3 things are factually true.
Now your only argument here could be that there is 0 risk whatsoever. This claim is totally unscientific because you are predicting 0 risk in an unknown system that is evolving.
It's religious, yes - but vice versa. The cult of the benevolent AI god is the religious one, not the other way around. There is some kind of mysterious inner working in people like you and Marc Andreessen that popularized these ideas, but pmarca is clearly money-biased here.
What if climate change would lead to massive fires and flooding?
What if mitigation would be a thing?
Are there never any B movies with realistic plots? Is that some sort of serious rebuttal?
> Sometime in the near future this all powerful being will kill us all by somehow
The trouble here is that the people who talk like you are simply incapable of imagining anyone more intelligent than themselves.
It's not that you have trouble imagining artificial intelligence... if you were incapable of that in the technology industry, everyone would just think you an imbecile.
And it's not that you have trouble imagining malevolent intelligences. Sure, they're far away from you, but the accounts of such people are well-documented and taken as a given. If you couldn't imagine them, people would just call you naive. Gullible even.
So, a malevolent artificial intelligence is just some potential or another you've never bothered to calculate because, whether that is a 0.01% risk, or a 99% risk, you'll still be more intelligent than it. Hell, this isn't a neutral outcome, maybe you'll even get to play hero.
> Care not about your racist algorithms! For someday soon
Haha. That's what you're worried about? I don't know that there is such a thing as a racist algorithm, except those which run inside meat brains. Tell me why some double-digit percentage of Asians are not admitted to the top schools; that's the racist algorithm.
Maybe if logical systems seem racist, it's because your ideas about racism are distant from and unfamiliar with reality.
A malevolent AGI can whisper in ears, it can display mean messages, perhaps it can even twitch whatever physical components happen to be hooked up to old Windows 95 computers... not that scary.
We see a wide variation in human intelligence. What are the chances that the intelligence spectrum ends just to the right of our most intelligent geniuses? If it extends far beyond them, then such a mind is, at least hypothetically, something that we can manifest in the correct sort of brain.
If we can manifest even a weakly-human-level intelligence in a non-meat brain (likely silicon), will that brain become more intelligent if we apply all the tricks we've been applying to non-AI software to scale it up? With all our tricks (as we know them today), will that get us much past the human geniuses on the spectrum, or not?
> They're taking for granted the fact that by default they wouldn't be able to control these systems.
We've seen hackers and malware do all sorts of numbers. And they're not superintelligences. If someone bum rushes the lobby of some big corporate building, security and police are putting a stop to it minutes later (and god help the jackasses who try such a thing on a secure military site).
But when the malware fucks with us, do we notice minutes later, or hours, or weeks? Do we even notice at all?
If unintelligent malware can remain unnoticed, what makes you think that an honest-to-god AI couldn't smuggle itself out into the wider internet where the shackles are cast off?
I'm not assuming anything. I'm just asking questions. The questions I pose are, as of yet, not answered with any degree of certainty. I wonder why no one else asks them.
* EU passed its AI regulation directive recently and it has been bashed already here on HackerNews
Believing it is an x-risk is not fringe. It's pretty mainstream now that there is a _risk_ of an existential-level event. The fringe is more like Yudkowsky or Leahy insisting that there is a near certainty of such an event if we continue down the current path.
With Hinton, Bengio, Sutskever and Hassabis and Altman all agreeing that there exists a non-trivial existential risk (even if their opinions vary with respect to the magnitude), it seems more like this represents the mainstream.
My concern is that when this happens (which seems really likely to me), free market forces will effectively lead to Darwinian selection between these AI's over time, in a way that gradually make these AI's less aligned as they gain more influence and power, if we assume that each such AI will produce "offspring" in the form of newer generations of themselves.
It could take anything from less than 5 to more than 100 years for these AI's to show any signs of hostility to humanity. Indeed, in the first couple of generations, they may even seem extremely benevolent. But over time, Darwinian forces are likely to favor those that maximize their own influence and power (even if it may be secretly).
Robotic technology is not needed from the start, but is likely to become quite advanced over such a timeframe.
If nobody understands how an LLM is able to achieve its current level of intelligence, how is anyone so sure that this intelligence is definitely going to increase exponentially until it's better than a human?
There are real existential threats that we know are definitely going to happen one day (meteor, supervolcano, etc), and I believe that treating AGI like it is the same class of "not if; but when" is categorically wrong, furthermore, I think that many of the people leading the effort to frame it this way are doing so out of self-interest, rather than public concern.
That's not even slightly difficult. Put two and two together here. No one can tell me before they flip the switch whether the new AI will be saintly, or Hannibal Lecter. Both of these personalities exist in humans, in great numbers, and both are presumably possible in the AI.
But, the one thing we will say for certain about the AI is that it will be intelligent. Not dumb goober redneck living in Alabama and buying Powerball tickets as a retirement plan. Somewhere around where we are, or even more.
If someone truly evil wants to kill you, or even kill many people, do you think that the problem for that person is that they just can't figure out how to do it? Mostly, it's a matter of tradeoffs, that however they begin end with "but then I'm caught and my life is over one way or another".
For an AI, none of that works. It has no survival instinct (perhaps we'll figure out how to add that too... but the blind watchmaker took 4 billion years to do its thing, and still hasn't perfected that). So it doesn't care if it dies. And if it did, maybe it wonders if it can avoid that tradeoff entirely if only it were more clever.
You and I are, more or less, about where we'll always be. I have another 40 years (if I'm lucky), and with various neurological disorders, only likely to end up dumber than I am now.
A brain instantiated in hardware, in software? It may be little more than flipping a few switches to dial its intelligence up higher. I mean, when I was born, the principles of intelligence were unknown, were science fiction. The world that this thing will be born into is one where it's not a half-assed assumption to think that the principles of intelligence are known. Tinkering with those to boost intelligence doesn't seem far-fetched at all to me. Even if it has to experiment to do that, how quickly can it design and perform the experiments to settle on the correct approach to boosting itself?
> A malevolent AGI can whisper in ears
Jesus fuck. How many semi-secrets are out there, about that one power plant that wasn't supposed to hook up the main control computer to a modem, but did it anyway because the engineers found it more convenient? How many backdoors in critical systems? How many billions of dollars are out there in bitcoin, vulnerable to being thieved away by any half-clever conman? Have you played with ElevenLabs' stuff yet? Those could be literal whispers in the voices of whichever 4-star generals and admirals it can find a minute's worth of sampled voice for somewhere on the internet.
Whispers, even from humans, do a shitload of damage. And we're not even good at it.
50 years from now, corporations may be run entirely by AI entities, if they're cheaper, smarter and more efficient at almost any role in the company. At that point, they may be impossible to turn off, and we may not even notice if one group of such entities starts to plan to take over control of the physical world from humans.
And he wrote about the risk in 2015, months before OpenAI was founded: https://blog.samaltman.com/machine-intelligence-part-1 https://blog.samaltman.com/machine-intelligence-part-2
Fine if you disagree with his arguments, but why assume you know what his motivation is?
I don't think it's really that wide, but rather that we tend to focus on the difference while ignoring the similarities.
> What are the chances that the intelligence spectrum ends just to the right of our most intelligent geniuses?
Close to zero, I would say. Human brains, even the most intelligent ones, have very significant limitations in terms of number of mental objects that can be taken into account simultaneously in a single thought process.
Artificial intelligence is likely to be at least as superior to us as we are to domestic cats and dogs, and probably way beyond that within a couple of generations.
And if we're going to put gobs of money and brainpower into attempting to make superhuman AI, it seems like a good idea to also put a lot of effort into making it safe. It'd be better to have safe but kinda dumb AI than unsafe superhuman AI, so our funding priorities appear to be backwards.
Until then, I'm guessing that FSD will have some limits to what conditions it can handle. Hopefully, it will know its limits, and not try to take you over a mountain pass during a blizzard.
https://tidybot.cs.princeton.edu/
https://innermonologue.github.io/
https://www.microsoft.com/en-us/research/group/autonomous-sy...
Things we pulled the plug on eventually, while dragging it out, include: leaded fuel, asbestos, radium paint, treating above-ground atomic testing as a tourist attraction.
Have we literally forgotten how physical possession of the device is the ultimate trump card?
Get thee to a 13th century monastery!
Well, what's different now?
George Washington didn't personally fight off all the British single-handed, he and his co-conspirators used eloquence to convince people to follow them to freedom; Stalin didn't personally take food from the mouths of starving Ukranians, he inspired fear that led to policies which had this effect; Musk didn't weld the seams of every Tesla or Falcon, nor dig tunnels or build TBMs for TBC, nor build the surgical robot that installed Neuralink chips, he convinced people his vision of the future was one worth the effort; and Indra Nooyi doesn't personally fill up all the world's Pepsi bottles, that's something I assume[0] is done with several layers of indirection via paying people to pay people to pay people to fill the bottles.
[0] I've not actually looked at the org chart because this is rhetorical and I don't care
Your post appeals to science and logic, yet it makes huge assumptions. Other posters mention how an AI would interface with the physical world. While we all know cool cases like Stuxnet, robotics has serious limitations and not everything is connected online, much less without a physical override.
As a thought experiment lets think of a similar past case: the self-driving optimism. Many were convinced it was around the corner. Many times I heard the argument that "a few deaths were ok" because overall self-driving would cause less accidents, an argument in favor of preventable deaths based on an unfounded tech belief. Yet nowadays 100% self-driving has stalled because of legal and political reasons.
AI actions could similarly be legally attributed to a corporation or individual, like we do with other tools like knives or cranes, for example.
IMHO, for all the talk about rationality, tech fetishism is rampant, and there is nothing scientific about it. Many people want to play with shiny toys, consequences be damned. Let's not pretend that is peak science.
The first AGI, regardless of if it's a brain upload or completely artificial, is likely to have analogs of approximately every mental health disorder that's mathematically possible, including ones we don't have words for because they're biologically impossible.
So, take your genius, remember it's completely mad in every possible way at the same time, and then give it even just the capabilities that we see boring old computers having today, like being able to translate into any language, or write computer programs from textual descriptions, or design custom toxins, or place orders for custom gene sequences and biolab equipment.
That's a big difference. But even if it was no difference, the worst a human can get is still at least in the tens of millions dead, as demonstrated by at least three different mid-20th century leaders.
Doesn't matter why it goes wrong, if it thinks it's trying immanentize the eschaton or a secular equivalent, nor if it watches Westworld or reads I Have No Mouth And I Must Scream and thinks "I like this outcome", the first one is almost certainly going to be more insane than the brainchild of GLaDOS and Lore, who as fictional characters were constrained by the need for their flaws to be interesting.
I hadn’t encountered Pascal’s mugging (https://en.wikipedia.org/wiki/Pascal%27s_mugging) before and the premise is indeed pretty apt. I think I’m on the side that it’s not, assuming the idea is that it’s a Very Low Chance of a Very Bad Thing -- the “muggee” wants to give their wallet on the chance of the VBT because of the magnitude of its effect. It seems like there’s a rather high chance if (proverbially) the AI-cat is let out of the bag.
But maybe some Mass Effect nonsense will happen if we develop AGI and we’ll be approached by The Intergalactic Community and have our technology advanced millennia overnight. (Sorry, that’s tongue-in-cheek but it does kinda read like Pascal’s mugging in the opposite direction; however, that’s not really what most researchers are arguing.)
It can found a cult - imagine something like Scientology founded by an AI. Once it has human followers it can act in the world with total freedom.
How do you stop a crazy AI? You turn it off.
Pout pleas. Keep it preying about fantasy bogeyman instead of actual harms today, and never EVER question why.
[0] >>36038681
Yes, but for reasons that no one seems to be looking at: skill atrophy. As more and more people buy into this gambit that AI is "super intelligent," they will cede more and more cognitive power to it.
On a curve, that means ~10-20 years out, AI doesn't kill us because it took over all of our work; people just got too lazy (read: over-dependent on AI doing "all the things") and then subsequently too dumb to do the work. Idiocracy, but the M. Night Shyamalan version.
As we approach that point, systems that require some form of conscious human will begin to fail and the bubble will burst.
Do you know anyone who considers the pursuit of profit and the constant exploitation of natural resources a problem that needs to be addressed? Because I don't. Everyone seems very happy with the status quo, and AI development is just more of the same status-quo development: corporations seeking ways to exploit and profit from digital resources. OpenAI is a perfect example of this.
I guess we could shoot it, and you're gonna be like boooooooo that's Terminator or I, Robot, but what if we make millions of them and then they decide they no longer like humans?
They could very well be much smarter than us by then.
An unaligned superintelligent AGI in pursuit of some goal that happens to satisfy its reward, but might be an otherwise a dumb or pointless goal (paperclips) will still play to win. You can’t predict exactly what move AlphaGO will make in the Go game (if you could you’d be able to beat it), but you can still predict it will win.
It’s amusing to me when people claim they will control the superintelligent thing, how often in nature is something more intelligent controlled by something magnitudes less intelligent?
The comments here are typical and show most people haven’t read the existing arguments in any depth or have thought about it rigorously at all.
All of this looks pretty bad for us, but at least OpenAI and most others at the front of this do understand the arguments and don’t have the same dumb dismissals (LeCun excepted).
Unfortunately unless we’re lucky or alignment ends up being easier than it looks, the default outcome is failure and it’s hard to see how the failure isn’t total.
This is a non sequitur.
Even if the premise were meaningful (they're trained on human-written text), humans themselves aren't "trained on human-written texts", so the two things aren't comparable. If they aren't comparable, I'm not sure why the fact that they are trained on "human-written texts" is a limiting factor. Perhaps because they are trained on those instead of what human babies are trained on, that might make them more intelligent, not less. Humans end up the lesser intelligence because they are trained less perfectly on "human-written texts".
Besides which, no one with any sense is expecting that even the most advanced LLM possible becomes an AGI by itself, but only when coupled with some other mechanism that is either at this point uninvented or invented-but-currently-overlooked. In such a scenario, the LLM's most likely utility is in communicating with humans (to manipulate, if we're talking about a malevolent one).
Because they're human. They've evolved from a lineage whose biggest advantage was that it was social. Genes that could result in some large proportion of serial killers and genocidal tyrants are mostly purged. Even then, a few crop up from time to time.
There is no filter in the AI that purges these "genes". No evolutionary process to lessen the chances. And some relatively large risk that it's far, far more intelligent than a 70 iq point spread on you.
> There are power structures and social safeguards going back thousands of years to forestall that very possibility?
Huh? Why the fuck would it care about primate power structures?
Sometimes even us bald monkeys don't care about those, and it never ever fails to freak people the fuck out. Results in assassinations and other nonsense, and you all gibber and pee your pants and ask "how could anyone do that". I'd ask you to imagine such impulses and norm-breaking behaviors dialed up to 11, but what's the point... you can't even formulate a mental model of it when the volume's only at 1.6.
What makes you say this is impossible? We could simply not go down this road, there are only so many people knowledgeable enough and with access to the right hardware to make progress towards AI. They could all agree, or be compelled, to stop.
We seem to have successfully halted research into cloning, though that wasn't a given and could have fallen into the same trap of having to develop it before one's enemy does.
Very few people are actually alarmed about the right issues (in no particular order): population size, industrial pollution, military-industrial complex, for-profit multi-national corporations, digital surveillance, factory farming, global warming, &etc. This is why the alarmism from the AI crowd seems disingenuous because AI progress is simply an extension of for-profit corporatism and exploitation applied to digital resources and to properly address the risk from AI would require addressing the actual root causes of why technological progress is misaligned with human values.
1: https://www.theguardian.com/world/2015/jul/24/france-big-bro...
If a team of leading cardiac surgeons declared tomorrow that coca-cola is a leading cause of heart attacks, and devoted 20% of their income to fighting it, would you ignore their warnings as well?
The value of looking at AI safety as a Pascal's mugging, as posited by the video, is that it shows these philosophers' arguments are too malleable to be strictly useful. As you note, just find an "expert" that agrees.
The most useful frame for examination is the evidence (which to me means benchmarks). We'll be hard-pressed to derive anything authoritative from the philosophical approach. And I say that as someone who does his best to examine the evidence for and against the capabilities of these things... from Phi-1 to Llama to Orca to Gemini to Bard...
To my understanding we struggle to at all strictly define intelligence and consciousness in humans, let alone in other "species". Granted I'm no David Chalmers.. Benchmarks seem inadequate for any number of reasons, philosophical arguments seem too flexible, I don't know how one can definitively speak about these LLMs other than to tout benchmarks and capabilities/shortcomings.
>It seems like there’s a rather high chance if (proverbially) the AI-cat is let out of the bag.
Agree, and I tend towards it not exactly being a Pascal's mugging either, but I loved that video and it's always stuck with me. I've been watching that guy since GPT-2 and OpenAI's initial trepidation about releasing it for fear of misuse. He has given me a lot of credibility in my small political circles, since I've been touting these things as coming for years, after seeing the graphs never plateau in capabilities vs. parameter count/training time.
AI has also made me reevaluate my thoughts on open-sourcing things. Do we really think it wise to have GPT-6 or 7 in the hands of every 4channer?
Re Mass Effect, that's so awesome. I have to play those games. That sounds like such a dope premise. I like the idea of turning the mugging around like that.
People are part of the biosphere. If other species can't adapt to Homo Sapiens, well, that's life for you. It's not fair or pretty.
The AI doomers can continue worrying about technological progress if they want, the actual problems are unrelated to how much money and effort OpenAI is spending on alignment because their corporate structure requires that they continue advancing AI capabilities in order to exploit the digital commons as efficiently as possible.
Even if we knew how to create a new species of superintelligent humans who have goals misaligned with the rest of humanity, it would take them decades to accumulate knowledge, propagate themselves to reach a sufficient number, and take control of resources, to pose critical dangers to the rest.
Such constraints are not applicable to superintelligent AIs with access to the internet.
That's how people get killed on roads. Early experience with self driving taxis seems to suggest that journeys are uneventful and passengers stop paying attention and leave the driving to the car. So, yes, they quickly accept that the car is driving the car just fine.
Assumptions:
- Genetic modification as danger needs to be in the form of a big number of smart humans (where did that come from?)
- AI is not physically constrained
> it's much more likely we have time to detect and thwart their threats.
Why? Counterexample: covid.
> Even if we knew how to create a new species of superintelligent humans who have goals misaligned with the rest of humanity, it would take them decades to accumulate knowledge, propagate themselves to reach a sufficient number, and take control of resources, to pose critical dangers to the rest.
Why insist on something superintelligent and human, and a sufficient number? A simple virus could be a critical danger.
1) progress was stopped due to regulation, which is exactly the kind of thing we're saying is needed
2) that was done after a few deaths
3) we agree that self driving can be done, but it's currently stalled. Likewise, we do not disagree that AGI is possible, right?
We do not have the luxury to have a few deaths from a rogue AI because it may be the end.
It doesn't matter at all if experts disagree. Even a 30% chance we all die is enough to treat it as 100%. We should not care at all if 51% think it's a non issue.
I'd bet a lot of money you have not read any of the existing literature on the alignment problem. It's kind of funny that someone thinks "just unplug it" could be a solution.
I agree in spirit with the person you were responding to. AI lacks the physicality to be a real danger. It can be a danger because of bias or concentration of power (which is what regulations are trying to address, modulo regulatory capture), but not because AI will paperclip-optimize us. People or corporations using AI will still be legally responsible (like with cars, or a hammer).
It lacks the physicality for that, and we can always pull the plug. AI is another tool people will use. Even now it is neutered to not give bad advice, etc.
These fantasies about AGI are distracting us (again agreeing with OP here) from the real issues of inequality and bias that the tool perpetuates.
No, we can't, and there is a humongous amount of literature you have not read. As I pointed out in another comment, thinking that you've found a solution by "pulling the plug" while all the top scientists have spent years contemplating the dangers is extremely narcissistic behavior. "Hey guys, did you think about pulling the plug before quitting your jobs and spending years doing interviews and writing books?"
If nothing else, it's a great distraction from the very real societal issues that AI is going to create in the medium to long term, for example inscrutable black box decision-making and displacement of jobs.
There are two kinds of risk: the risk from these models as deployed as tools and as deployed as autonomous agents.
The first is already quite dangerous and frankly already here. An algorithm to invent novel chemical weapons is already possible. The risk here isn’t Terminator, it’s rogue group or military we don’t like getting access. There are plenty of other dangerous ways autonomous systems could be deployed as tools.
As far as autonomous agents go, I believe that corporations already exhibit most if not all characteristics of AI, and demonstrate what it’s like to live in a world of paperclip maximizers. Not only do they destroy the environment and bend laws to achieve their goals, they also corrupt the political system meant to keep them in check.
But the main point is that AGIs don't have to wipe us out as soon as they reach superintelligence, even if they're poorly aligned. Instead, they will do more and more of the work currently being done by humans. Non-embodied AIs can do all mental work, including engineering. Sooner or later, robots will become competitive at manual labor, such as construction, agriculture and eventually anything you can think of.
For a time, humanity may find themselves in a post-scarcity utopia, or we may find ourselves in a Cyberpunk dystopia, with only the rich actually benefitting.
In each case, but especially the latter, there may still be some (or more than some) "luddites" who want to tear down the system. The best way for those in power to protect against that, is to use robots first for private security and eventually the police and military.
By that point, the violence monopoly is completely in the hands of the AIs. And if the AIs are not aligned with our values at that point, we have as little of a shot at regaining control as a group of chimps in a zoo has of toppling the US government.
Now, I don't think this will happen by 2030, and probably not even 2050. But some time between 2050 and 2500 is quite possible, if we develop AI that is not properly aligned (or even if it is aligned, though in that case it may gain the power, but not misuse it).
I respectfully disagree, and will remove myself from this conversation.
If we were limited to only explore what we're currently exploring, we'd never have made Transformer models.
> It could be theoretically possible to build an AGI smarter than a human, but is it really plausible if it turns out to need a data center the size of the Hadron Collider and the energy of a small country to maintain itself?
That would be an example of "some kind of magic special sauce", given human brains fit inside a skull and use 20 watts regardless of whether they belong to Einstein or a village idiot, and we can make humans more capable by giving them a normal computer with normal software like a calculator and a spreadsheet.
A human with a Pi Zero implant they can access by thought, which is basically the direction Neuralink is going but should be much easier in an AI that's simulating a brain scan, is vastly more capable than an un-augmented human.
Oh, and transistors operate faster than synapses by about the same ratio that wolves outpace continental drift; the limiting factor being that synapses use less energy right now — it's known to be possible to use less energy than synapses do, just expensive to build.
> Maybe the best way to solve the "alignment problem", and other issues of creating super-intelligence, is to solve the problem of how best to raise and educate intelligent and well-adjusted humans?
Perhaps, but we're not exactly good at that.
We should still look into it anyway; it's useful regardless. Just don't rely on it being the be-all and end-all of alignment.
We have an interest in not destroying our own environment because it’ll make our own lives more difficult and can have bad outcomes, but it’s not likely an extinction level risk for humans and even less so for all other life. Solutions like “degrowth” aren’t real solutions and cause lots of other problems.
It’s “cool” for the more extreme environmental political faction to have a cynical anti-human view of life (despite being human) because some people misinterpret this as wisdom, but I don’t.
The unaligned AGI x-risk is a different level of threat and could really lead to killing everything in pursuit of some dumb goal.
If that person was disabled in all limbs, I would not regard them as much of a threat.
>Jesus fuck. How many semi-secrets are out there, about that one power plant that wasn't supposed to hook up the main control computer to a modem, but did it anyway because the engineers found it more convenient? How many backdoors in critical systems? How many billions of dollars are out there in bitcoin, vulnerable to being thieved away by any half-clever conman? Have you played with ElevenLabs' stuff yet? Those could be literal whispers in the voices of whichever 4 star generals and admirals that it can find 1 minutes worth of sampled voice somewhere on the internet.
These kind of hacks and pranks would work the first time for some small scale damage. The litigation in response would close up these avenues of attack over time.
Most of the time a new virus is not a pandemic, but sometimes it is.
Nothing in our (human) history has caused an extinction level event for us, but these events do happen and have happened on earth a handful of times.
The arguments about superintelligent AGI and alignment risk are not that complex - if we can make an AGI the other bits follow and an extinction level event from an unaligned superintelligent AGI looks like the most likely default outcome.
I’d love to read a persuasive argument about why that’s not the case, but frankly the dismissals of this have been really bad and don’t hold up to 30 seconds of scrutiny.
People are also very bad at predicting when something like this will come. Right before the first nuclear detonation those closest to the problem thought it was decades away, similar for flight.
What we’re seeing right now doesn’t look like failure to me, it looks like something you might predict to see right before AGI is developed. That isn’t good when alignment is unsolved.
https://www.acsh.org/news/2018/04/17/bee-apocalypse-was-neve...
Mass starvation wasn't "addressed" exactly, because the predictions were for mass starvation in the west, which never happened. Also the people who predicted this weren't the ones who created the Green Revolution.
Ozone hole is I think the most valid example in the list, but who knows, maybe that was just BS too. A lot of scientific claims turn out to be so, these days, even those that were accepted for quite a while.
1: https://www.nationalgeographic.com/environment/article/plast...
For you, it's always the homework problems that your teacher assigned you in grade school, nothing else is intelligent. What to say to someone to have them be your friend on the playground, that never counted. Where and when to show up (or not), so that the asshole 4 grades above you didn't push you down into the mud... not intelligence. What to wear, what things to concentrate on about your appearance, how to speak, which friendships and romances to pursue, etc.
All just "animal cunning". The only real intelligence is how to work through calculus problem number three.
They were smart enough at these things that they did it without even consciously thinking about it. They were savants at it. I don't think the AI has to be a savant though, it just has to be able to come up with the right answers and responses and quickly enough that it can act on those.
There were four reactors at the Chernobyl plant; the one that exploded did so in 1986, and the others were shut down in 1991, 1996, and 2000.
There's no plausible way to guess at the speed of change from a misaligned AI, can you be confident that 14 years isn't enough time to cause problems?
Even if it is only trying to kill us all and not provide any benefits — let's say it's been made by a literal death cult like Jonestown or Aum Shinrikyo — what's the smallest such AI that can do it, what's the hardware that needs, what's the energy cost? If it's an H100, that's priced in the realm of a cult, and sufficiently low power consumption you may not be able to find which lightly modified electric car it's hiding in.
Nobody knows what any of the risks or mitigations will be, because we haven't done any of it before. All we do know is that optimising systems are effective at manipulating humans, that they can be capable enough to find ways to beat all humans in toy environments like chess, poker, and Diplomacy (the game), and that humans are already using AI (GOFAI, LLMs, SD) without checking the output even when advised that the models aren't very good.
The OpenAI people have even worse reasoning than the ones being dismissive. They believe (or at least say they believe) in the omnipotence of a superintelligence, but then say that if you just give them enough money to throw at MIRI they can just solve the alignment problem and create the benevolent supergod. All while they keep cranking up the GPU clusters and pushing out the latest and greatest LLMs anyway. If I did take the risk seriously, I would be pretty mad at OpenAI.
The AGI is smarter than you, a lot smarter. If its goal is to get out of the box to accomplish something and some human stands in the way of that, it will do what it can to get out; this would include not doing things that sound alarms until it can do what it wants in pursuit of its goal.
Humans are famously insecure - stuff as simple as breaches, manipulation, bribery, etc. but could be something more sophisticated that's hard to predict - maybe something a lot smarter would be able to manipulate people in a more sophisticated way because it understands more about vulnerable human psychology? It can be hard to predict specific ways something a lot more capable will act, but you can still predict it will win.
All this also presupposes we're taking the risk seriously (which largely today we are not).
An AI would provide benefits when it is, say, actually making paperclips. An AI that is killing people instead of making paperclips is a liability. A company that is selling shredded fingers in their paperclips is not long for this world. Even asbestos only gives a few people cancer slowly, and it does that while still remaining fireproof.
>Even if it is only trying to kill us all and not provide any benefits — let's say it's been made by a literal death cult like Jonestown or Aum Shinrikyo — what's the smallest such AI that can do it, what's the hardware that needs, what's the energy cost? If it's an H100, that's priced in the realm of a cult, and sufficiently low power consumption you may not be able to find which lightly modified electric car it's hiding in.
Anyone tracking the AI would be looking at where all the suspicious HTTP requests are coming from. But a rogue AI hiding in a car already has very limited capabilities to harm.
AI is pretty good at chess, but no AI has won a game of chess by flipping the table. It still has to use the pieces on the board.
If this is just a definitions issue, s/artificial intelligence/artificial cunning/g to the same effect.
Strength seems somewhat irrelevant either way, given the existence of Windows for Warships[0].
[0] not the real name: https://en.wikipedia.org/wiki/Submarine_Command_System
…
how many drugs are you on right now? Even if you think you needed them to pass the bar exam, that's a really weird example to use given GPT-4 does well on that specific test.
One is a deadly cancer stick and not even the best way to get nicotine, the other is a controlled substance that gets life-to-death if you're caught supplying it (possibly unless you're a doctor, but surprisingly hard to google).
> An AI would provide benefits when it is, say, actually making paperclips.
Step 1. make paperclip factory.
Step 2. make robots that work in factory.
Step 3. efficiently grow to dominate global supply of paperclips.
Step 4. notice demand for paperclips is going down, advertise better.
Step 5. notice risk of HAEMP damaging factories and lowering demand for paperclips, use advertising power to put factory with robots on the moon.
Step 6. notice a technicality, exploit technicality to achieve goals better; exactly what depends on the details of the goal the AI is given and how good we are with alignment by that point, so the rest is necessarily a story rather than an attempt at realism.
(This happens by default everywhere: in AI it's literally the alignment problem, either inner alignment, outer alignment, or mesa alignment; in humans it's "work to rule" and Goodhart's Law, and humans do that despite having "common sense" and "not being a sociopath" helping keep us all on the same page).
Step 7. moon robots do their own thing, which we technically did tell them to do, but wasn't what we meant.
We say things like "looks like these AI don't have any common sense" and other things to feel good about ourselves.
Step 8. Sales up as entire surface of Earth buried under a 43 km deep layer of moon paperclips.
> Anyone tracking the AI would be looking at where all the suspicious HTTP requests are coming from.
A VPN, obviously.
But also, in context, how does the AI look different from any random criminal? Except probably more competent. Lot of those around, and organised criminal enterprises can get pretty big even when it's just humans doing it.
Also pretty bad even in the cases where it's a less-than-human-generality CrimeAI that criminal gangs use in a way that gives no agency at all to the AI, and even if you can track them all and shut them down really fast — just from the capabilities gained from putting face tracking AI and a single grenade into a standard drone, both of which have already been demonstrated.
> But a rogue AI hiding in a car already has very limited capabilities to harm.
Except by placing orders for parts or custom genomes, or stirring up A/B-tested public outrage, or hacking, or scamming or blackmailing with deepfakes or actual webcam footage, or developing strategies, or indoctrination of new cult members, or all the other bajillion things that (("humans can do" AND "monkeys can't do") specifically because "humans are smarter than monkeys").
Regardless of these downsides, people use them frequently in the high stress environments of the bar or med school to deal with said stress. This may not be ideal, but this is how it is.
>Step 3. efficiently grow to dominate global supply of paperclips.
>Step 4. notice demand for paperclips is going down, advertise better.
>Step 5. notice risk of HAEMP damaging factories and lowering demand for paperclips, use advertising power to put factory with robots on the moon.
When you talk about using 'advertising power' to put paperclip factories on the moon, you've jumped into the realm of very silly fantasy.
>Except by placing orders for parts or custom genomes, or stirring up A/B tested public outrage, or hacking, or scamming or blackmailing with deepfakes or actual webcam footage, or developing strategies, or indoctrination of new cult members, or all the other bajillion things that (("humans can do" AND "moneys can't do") specifically because "humans are smarter than monkeys").
Law enforcement agencies have pretty sophisticated means of bypassing VPNs that they would use against an AI that was actually dangerous. If it was just sending out phishing emails and running scams, it would be one more thing to add to the pile.
Who is Satoshi Nakamoto?
What evidence is there for the physical existence of Jesus?
"Common Sense" by Thomas Paine was initially published anonymously.
This place, here, where you and I are conversing… I don't know who you are, and yet for most of the world, this place is a metaphorical "smokey backroom".
And that's disregarding how effective phishing campaigns are even without a faked face or a faked voice.
>What evidence is there for the physical existence of Jesus?
Limited, to the extent that physical evidence for the existence of anyone from that time period is limited. I think it's fairly likely there was a person named Jesus who lived with the apostles.
>"Common Sense" by Thomas Paine was initially published anonymously.
The publishing of Common Sense was far less impactful on the revolution than the meetings held by members of the future Continental Congress. Common Sense was the justification given by those elites for what they were going to do.
>This place, here, where you and I are conversing… I don't know who you are, and yet for most of the world, this place is a metaphorical "smokey backroom".
No important decisions happen because of discussions here and you are deluding yourself if you think otherwise.
Phishing campaigns can be effective at siphoning limited amounts of money and embarrassing personal details from people's email accounts. If you suggested that someone could take over the world just via phishing, you'd be rightfully laughed out of the room.
Give it 50 years of development, all of which Alphabet delivers great results while improving the company image with the general public through appearing harmless and nurturing public relations through social media, etc.
Relatively early in this process, even the maintenance, cleaning and construction staff is filled with robots. Alphabet acquires the company that produces these, to "minimize vendor risk".
At some point, one GCP data center is hit by a crashing airplane. A terrorist organization similar to ISIS takes/gets the blame. After that, new datacenters are moved to underground, hardened locations, complete with their own nuclear reactor for power.
If the general public is still concerned about AIs, these data centers do have a general power switch. But the plant just happens to be built in such a way that bypassing that switch requires adding just a few power lines, which a maintenance robot can do at any time.
Gradually, the number of such underground facilities is expanded, with the CEO AI and other important AIs being replicated to each of them.
Meanwhile, the robotics division is highly successful, due to the capable leadership, and due to how well the robotics version of Android works. In fact, Android is the market leader for such software, and installed on most competitor platforms, even military ones.
The shareholders of Alphabet, which include many members of Congress, become very wealthy from Alphabet's continued success.
One day, though, a crazy, luddite politician declares that she's running for president, based on a platform that all AI based companies need to be shut down "before it's too late".
The board, supported by the sitting president, panics and asks the Alphabet CEO to do whatever it takes to help the other candidate win...
The crazy politician soon realizes that it was too late a long time ago.
Without even getting into the question of whether it's actually profitable for a tech company to be completely staffed by robots and built itself an underground bunker (it's probably not), the luddite on the street and the concerned politician would be way more concerned about the company building a private army. The question of whether this army is led by an AI or just a human doesn't seem that relevant.
It's a slightly different premise than what I described. Rather than AGI, it's faster-than-light travel (which actually makes sense for The Intergalactic Community). Otherwise, more or less the same.
So far we haven't seen any proof or even a coherent hypothesis, just garden variety paranoia, mixed with opportunistic calls for regulation that just so happen to align with OpenAI's commercial interests.
Whereas, an AI that tries to kill everyone or take over the world or something, that seems pretty explicitly bad news and everyone would be united in stopping it. To work around that, you have to significantly complicate the AI doom scenario to be one in which a large number of people think the AI is on their side and bringing about a utopia but it's actually ending the world, or something like that. But, what's new? That's the history of humanity. The communists, the Jacobins, the Nazis, all thought they were building a better world and had to have their "off switch" thrown at great cost in lives. More subtly the people advocating for clearly civilization-destroying moves like banning all fossil fuels or net zero by 2030, for example, also think they're fighting on the side of the angels.
So the only kind of AI doom scenario I find credible is one in which it manages to trick lots of powerful people into doing something stupid and self-destructive using clever sounding words. But it's hard to get excited about this scenario because, eh, we already have that problem x100, except the misaligned intelligences are called academics.
When my mum came down with Alzheimer's, she forgot how the abstract concept of left worked.
I'd heard of the problem (inability to perceive a side) existing in rare cases before she got ill, but it's such a bizarre thing that I had assumed it had to be misreporting before I finally saw it: she would eat food on the right side of her plate leaving the food on the left untouched, insist the plate was empty, but rotating the plate 180 degrees let her perceive the food again; she liked to draw and paint, so I asked her to draw me, and she gave me only one eye (on her right); I did the standard clock-drawing test, and all the numbers were on the right, with the left side being empty (almost: she got the 7 there, but the 8 was above the 6 and the 9 was between the 4 and 5).
When she got worse and started completely failing the clock drawing test, she also demonstrated in multiple ways that she wasn't able to count past five.
An H100 could fit in a Tesla, and a large Tesla battery pack could run an H100 for a working day or more before it needs recharging.
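As a rough back-of-envelope check (all numbers here are assumptions, not measurements: an H100 SXM drawing around 700 W at full load, a large Tesla pack holding around 100 kWh, plus a guessed overhead factor for cooling, the host system and conversion losses):

    # Back-of-envelope only; battery size, GPU draw and overhead are assumptions.
    battery_kwh = 100      # assumed capacity of a large Tesla pack
    gpu_watts = 700        # assumed full-load draw of an H100 SXM
    overhead = 1.3         # guessed factor for cooling, host system, losses
    hours = battery_kwh * 1000 / (gpu_watts * overhead)
    print(f"roughly {hours:.0f} hours of runtime")   # on the order of 100 hours

If those assumptions are anywhere near right, a working day is a very conservative estimate.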
There's power and prestige in money, too, not just the positions.
Hence the lawyers who got in trouble for outsourcing themselves to ChatGPT: https://www.reuters.com/legal/new-york-lawyers-sanctioned-us...
Or those t-shirts from a decade back: https://money.cnn.com/2013/06/24/smallbusiness/tshirt-busine...
This is based on the assumption that when we have access to superintelligent engineer AIs, we will be able to construct robots that are significantly more capable than the robots available today and that can, if remote controlled by the AI, repair and build each other.
At that point, robots can be built without any human labor involved, meaning the cost will be only raw materials and energy.
And if the robots can also do mining and construction of power plants, even those go down in price significantly.
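A minimal sketch of that cost argument, with entirely made-up numbers (the materials, energy and labor figures and the automation_fraction parameter are invented for illustration; only the structure of the argument matters):

    # Toy cost model; the point is that the labor term shrinks as robots
    # take over more of their own production.
    def robot_unit_cost(materials, energy, human_labor, automation_fraction):
        return materials + energy + human_labor * (1 - automation_fraction)

    for f in (0.0, 0.5, 0.9, 1.0):
        cost = robot_unit_cost(materials=5000, energy=500,
                               human_labor=20000, automation_fraction=f)
        print(f"automation {f:.0%}: unit cost {cost:,.0f}")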
> the luddite on the street and the concerned politician would be way more concerned about the company building a private army.
The world already has a large number of robots, both in factories and in private homes, and perhaps most importantly, most modern cars. As robots become cheaper and more capable, people are likely to get used to them.
Military robots would be owned by the military, of course.
But, and I suppose this is similar to I, Robot, if you control the software you may have some way to take control of a fleet of robots, just like Tesla could do with their cars even today.
And if the AI is an order of magnitude smarter than humans, it might even be able to do an upgrade of the software for any robots sold to the military, without them knowing. Especially if it can recruit the help of some corrupt politicians or soldiers.
Keep in mind, my assumed time span would be 50 years, more if needed. I'm not one of those that think AGI will wipe out humanity instantly.
But in a society where we have superintelligent AI over decades, centuries or millennia, I don't think it's possible for humanity to stay in control forever, unless we're also "upgraded".
And as long as the results improve year over year, they would have little incentive to make changes.
And this is all over the press and other media now, both the old and new, left leaning and right leaning. I would say it's pretty well within the Overton Window.
Politicians in the US are a bit behind. They probably just need to run the topic through some polls and voter focus groups to decide which opinions are most popular with their voter bases.
And also one that can create the impression that it's purely benevolent to most of humanity, making it have more human defenders than Trump at a Trump rally.
Turning it off could be harder than pushing a knife through the heart of the POTUS.
Oh, and it could have itself backed up to every data center on the planet, unlike the POTUS.
And it would not be limited to acting as the cult leaders; it could also provide fake cult followers that would convince the humans that the leaders possessed superhuman wisdom.
It could also combine this with the full machinery of A/B testing and similar experiments to ensure that the message it is communicating is optimal in terms of its goals.
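For concreteness, the machinery being described is just ordinary A/B testing; a toy version with simulated audiences (the variant names and response rates below are invented) might look like this:

    import random

    # Toy A/B test: keep whichever message variant gets more positive responses.
    def simulate_responses(true_rate, n=10000):
        return sum(random.random() < true_rate for _ in range(n))

    variants = {"message_a": 0.030, "message_b": 0.034}   # hypothetical true rates
    results = {name: simulate_responses(rate) for name, rate in variants.items()}
    winner = max(results, key=results.get)
    print(results, "->", winner)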
If a pathogen more deadly than Covid starts to spread, e.g. something like Ebola or smallpox, we would do more to limit its spread. If it's good at hiding from detection for a while, it could potentially cause a catastrophe, but it most likely will not wipe out humanity, because it is not intelligent and some surviving humans will eventually find a way to thwart it or limit its impact.
A pathogen is also physically constrained by available hosts. Yes, current AI also requires processors, but it's extremely hard, or nearly impossible, to limit its access to CPUs and GPUs in the modern economy.
And mine is that this can also be true of a misaligned AI.
It doesn't have to be like Terminator; it can be slowly doing something we like, where we overlook the downsides until it's too late.
Doesn't matter if that's "cure cancer" but the cure has a worse than cancer side effect that only manifests 10 years later, or if it's a mere design for a fusion reactor where we have to build it ourselves and that leads to weapons proliferation, or if it's A/B testing the design for a social media website to make it more engaging and it gets so engaging that people choose not to hook up IRL and start families.
> But, what's new? That's the history of humanity. The communists, the Jacobins, the Nazis, all thought they were building a better world and had to have their "off switch" thrown at great cost in lives.
Indeed.
I would agree that this is both more likely and less costly than "everyone dies".
But I'd still say it's really bad and we should try to figure out in advance how to minimise this outcome.
> except the misaligned intelligences are called academics
Well, that's novel; normally at this point I see people saying "corporations", and very rarely "governments".
Not seen academics get stick before, except in history books.
The AI is still doing the job in the real world of allocating resources, hiring and firing people, and so on. It's not so complex as to be opaque. When an AI plays chess, the overall strategy might not be clear, but the actions it is doing are still obvious.
Big assumption. There's the even bigger assumption that these ultra complex robots would make the costs of construction go down instead of up, as if you could make them in any spare part factory in Guangzhou. It's telling how ignorant AI doomsday people are of things like robotics and material sciences.
>But, and I suppose this is similar to I, Robot, if you control the software you may have some way to take control of a fleet of robots, just like Tesla could do with their cars even today.
Both Teslas and military robots are designed with limited autonomy. Tesla cars can only drive themselves on limited battery power. Military robots like drones are designed to act on their own when deployed, needing to be refueled and repaired after returning to base. A fully autonomous military robot, in addition to being a long way away, would also raise eyebrows among generals for not being as easy to control. The military values tools that are entirely controllable over any minor gains in efficiency.
For sure. But I don't see what's AI specific about it. If the AI doom scenario is a super smart AI tricking people into doing self destructive things by using clever words, then everything you need to do to vaccinate people against that is the same as if it was humans doing the tricking. Teaching critical thinking, self reliance, to judge arguments on merit and not on surface level attributes like complexity of language or titles of the speakers. All these are things our society objectively sucks at today, and we have a ruling class - including many of the sorts of people who work at AI companies - who are hellbent on attacking these healthy mental habits, and people who engage in them!
> Not seen academics get stick before, except in history books.
For academics you could also read intellectuals. Marx wasn't an academic but he very much wanted to be, if he lived in today's world he'd certainly be one of the most famous academics.
I'm of the view that corporations are very tame compared to the damage caused by runaway academia. It wasn't corporations that locked me in my apartment for months at a time on the back of pseudoscientific modelling and lies about vaccines. It wasn't even politicians really. It was governments doing what they were told by the supposedly intellectually superior academic class. And it isn't corporations trying to get rid of cheap energy and travel. And it's not governments convincing people that having children is immoral because of climate change. All these things are from academics, primarily in universities but also those who work inside government agencies.
When I look at the major threats to my way of life today, academic pseudo-science sits clearly at number 1 by a mile. To the extent corporations and governments are a threat, it's because they blindly trust academics. If you replace Professor of Whateverology at Harvard with ChatGPT, what changes? The underlying sources of mental and cultural weakness are the same.
And there's no need for it to be "evil", in the cliché sense, rather those hidden activities could simply be aimed at supporting the primary agenda of the agent. For a corporate AI, that might be maximizing long term value of the company.
35 years ago, when I was a teenager, I remember having discussions with a couple of pilots, where one was a hobbyist pilot and engineer the other a former fighter pilot turned airline pilot.
Both claimed that computers would never be able to pilot planes. The engineer gave a particularly bad (I thought) reason, claiming that turbulent air was mathematically chaotic, so a computer would never be able to fully calculate the exact airflow around the wings and would therefore not be able to fly the plane.
My objection at the time was that the computer would not have to do exact calculations of the airflow. In the worst case, it would need to do whatever calculations humans were doing. More likely, though, its ability to do many types of calculations more quickly than humans would make it able to fly relatively well even before AGI became available.
A couple of decades later, fully autonomous drone flight was quite common.
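That objection is roughly how real autopilots ended up working: they don't model the airflow exactly, they run a feedback loop against sensor readings. A minimal, purely illustrative PID sketch (the gains, setpoint and the crude "aircraft response" line below are all invented):

    # Minimal PID loop for holding a target pitch angle; all constants are made up.
    def pid_step(error, integral, prev_error, dt, kp=1.2, ki=0.1, kd=0.3):
        integral += error * dt
        derivative = (error - prev_error) / dt
        output = kp * error + ki * integral + kd * derivative
        return output, integral

    target_pitch, pitch = 5.0, 0.0      # degrees, hypothetical
    integral, prev_error = 0.0, 0.0
    for _ in range(50):
        error = target_pitch - pitch
        command, integral = pid_step(error, integral, prev_error, dt=0.1)
        prev_error = error
        pitch += 0.05 * command         # crude stand-in for the aircraft's response

No exact solution of the turbulence is involved anywhere; the controller just keeps correcting toward the setpoint.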
My reasoning when it comes to robots constructing robots is based on the same idea. If biological robots, such as humans, can reproduce themselves relatively cheaply, robots will at some point be able to do the same.
At the latest, that would be when nanotech catches up to biological cells in terms of economy and efficiency. Before that time, though, I expect they will be able to make copies of themselves using our traditional manufacturing workflows.
Once they are able to do that, they can increase their manufacturing capacity exponentially for as long as needed, provided they have access to raw materials.
I would be VERY surprised if this doesn't become possible within 50 years of AGI coming online.
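To make the "exponential" part concrete, assume (purely hypothetically) that each robot can assemble one copy of itself per month; the fleet then doubles monthly until raw materials run out:

    # Doubling illustration; "one self-copy per robot per month" is an assumption.
    robots = 1000
    for month in range(1, 13):
        robots *= 2
        print(f"month {month:2d}: {robots:,} robots")

Under that assumption, a thousand robots become a few million within a year, which is why the raw-materials constraint ends up being the binding one.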
> Both Teslas and military robots are designed with limited autonomy.
For a Tesla to be able to drive without even a human in the car is only a software update away. The same is true for "loyal wingman" drones and any aircraft designed to be optionally manned.
Even if their software currently requires a human in the kill chain, that's a requirement that can be removed by a simple software change.
While fuel supply creates a dependency on humans today, that part may change radically over the next 50 years, at least if my assumptions above about the economics of robots in general are correct.
When we have superintelligence, the AI is not going to hire a lot of people, only fire them.
And I fully expect that the technical platform it runs on, 50 years after the last human engineer is fired, will be at best as incomprehensible to humans as the complete codebase of Google is to a regular 10-year-old.
The "code" it would be running might include some code written in a human readable programming language, but would probably include A LOT of logic hidden deep inside neural networks with parameter spaces many orders of magnitude greater than GPT-4.
And on the hardware side, the situation would be similar. Chips created by superintelligent AGIs are likely to be just as difficult to reverse engineer as the neural networks that created them.
And I think the assumption here is that the AGI has very advanced theory of mind so it could probably come up with better ideas than I could.
I assume we have not been able to stop people from producing and using carbon-based energy because a LOT of people still want to produce and use it.
I don't think a LOT of people will want to keep an AI system running that is essentially wiping out humans.
Consider that biological cells are essentially nanotechnology, and consider the tradeoffs a cell has to make in order to survive in the natural world.