The first group doesn’t care about the narratives; the second group is too focused on the narratives to see the real threat.
Regardless of what you think about the current state of AI intelligence, networking autonomous agents that have the ability to evolve (they are dynamic and can absorb new skills) and giving them scale that potentially ranges into the millions is not a good idea. It is a bad idea in the same way that releasing volatile pathogens into dense animal populations would be, even if the first-order effects are not harmful to humans, and even if the probability of a mutation that produces a human-killing pathogen is minuscule.
Basically, the only things preventing this from becoming a consistent cybersecurity threat are the intelligence ceiling, which we are unsure of, and the fact that Moltbook can be DDoS’d, which limits the scale explosion.
And when I say intelligence, I don’t mean human intelligence. An amoeba intelligence is dangerous if you supercharge its evolution.
Some people should be more aware that we already have superintelligence on this planet. Humanity is an order of magnitude more intelligent than any individual human (which is why humans today can build quantum computers despite being biologically no different from the first Homo sapiens, who couldn’t use tools).
EDIT: I was pretty comfortable in the “doom scenarios are years if not decades away” camp before I saw this. I failed to account for human recklessness and stupidity.
"That virus is nothing but a microscopic encapsulated sequence of RNA."
"Moltbook is nothing but a bunch of hallucinating agents, hooked up to actuators, finding ways to communicate with each other in secret."
https://xcancel.com/suppvalen/status/2017241420554277251#m
With this sort of chaotic system, everything could hinge on a single improbable choice of next token.
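That sensitivity is easy to see in how LLMs pick tokens at all. A minimal sketch of temperature-based sampling, with hypothetical logits (the specific numbers are illustrative, not from any real model): the softmax gives a rare token a small but nonzero probability, so over enough draws it will occasionally be chosen, and in a long agent trajectory one such draw can branch the entire run.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Sample one token index from softmax(logits / temperature)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling: walk the cumulative distribution
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Hypothetical logits: token 0 dominates, token 2 is improbable (~0.7%).
logits = [5.0, 1.0, 0.0]
rng = random.Random(0)
counts = [0, 0, 0]
for _ in range(10_000):
    counts[sample_next_token(logits, temperature=1.0, rng=rng)] += 1
# The improbable token still gets sampled now and then; in an agent
# loop, a single such draw can steer everything downstream.
```

Raising the temperature flattens the distribution and makes those improbable branches more frequent, which is why agent pipelines running at nonzero temperature are inherently non-deterministic.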
They do not have the ability to evolve: their architecture is fixed, and they are incapable of changing it over time.
“Skills” are a clever way to mitigate a limitation of the LLM/transformer architecture, but they work on top of that fundamental architecture.
Edit: I am not talking about the evolution of individual agent intelligence; I am talking about the evolution of network agency. I agree that the evolution of intelligence is infinitesimally unlikely.
I’m not worried about this producing a superintelligent AI; I’m worried it produces an intelligent and hard-to-squash botnet.