We've all seen the proposed pricing for GPT-4. So clearly a whole lot of very smart people who know an awful lot about this have absolutely no fear of being undercut.
Pandora's box spews knowledge onto the world. By contrast, Microsoft's drawbridge only allows the very wealthy to cross into the walled city. The masses will have to make do with the facilities of the crappy villages with no drawbridge.
"AI Divide"
Get used to hearing that term. The only difference between the AI Divide and the Digital Divide is that this time around, most of us are going to be on the wrong side of it.
The two ways I'm aware of that AI can go rogue are the Skynet way and the paperclip-maximizer way. E.g., Skynet becomes self-aware, realizes humanity can unplug it and is thus a threat, and tries to destroy humanity before we can turn it off. Alternatively, it is programmed to optimize a specific task, like making paperclips, so it marshals all the world's resources into that one task.
Are there any others?
Their prompts would differ, depending on their use case. For ChatGPT, even a few words can effect a huge change in the personality it shows.
> Are there any others?
Both scenarios are vague enough to leave lots of uncertainty. If many AIs are around, perhaps they would see each other as bigger threats and ignore mankind. And different optimization tasks might conflict with each other: there could be a paperclip recycler for every paperclip maker.
At least with a single AI, there's a chance that it will leave something for humans.
Off the top of my head:
* AI could fall in love with someone/thing and devote everything to pursuing them
* AI could be morbidly fixated and think of death as some kind of goal unto itself
* AI could use all of the world's resources making itself bigger/more of itself
* AI could formulate an end goal which is perfection and destroy anything that doesn't fit that definition
So many scenarios. You lack imagination.
>because the AIs will compete with each other and neutralize each other to some extent.
I wonder if the people in Vietnam or Afghanistan thought like this when the US and USSR fought proxy wars on their soil...
The most likely motivation for an AI to decide to wipe out humanity is one that doesn't even have an English word associated with it, except as a faint trace.
In my opinion, this is actually the greatest danger of AIs, one we can already see manifesting in a fairly substantial way with the GPT line of transformer babble-bots. We can't help but model them as human. They aren't.

There's a vast space of intelligent-but-not-even-remotely-human behaviors out there, and we have a gigantic collective blind spot about it, because the only human-level intelligences we've ever encountered are humans. For all the wonderful and fascinating diversity of being human, there's an important sense in which the genius, the profoundly autistic, the normal guy, the whole collection of human intelligence, is just a tiny point in the space of possibilities, its members barely distinguishable from each other. AIs are not confined to that point in the slightest. They already live well outside it, and the distance they can diverge from us only grows as their capabilities improve.
In fact, people like to talk about how alien aliens could be, but even biological aliens would be constrained by the need to survive in the physical universe and to operate on it via similar processes in physically possible environments. AIs don't even have those constraints. AIs can be far more alien than actual biological aliens.