I wrote a comment recently trying to explain why, even if you believe that all LLMs can (and will ever) do is regurgitate their training data, you should still be concerned.
For example, imagine in 5 years we have GPT-7, and you ask GPT-7 to solve humanity's great problems.
From its training data GPT-7 might notice that humans believe overpopulation is a serious issue facing humanity.
But its "aligned" so might understand from its training data that killing people is wrong so instead it uses its training data to seek other ways to reduce human populations without extermination.
Its training data included information about how gene drives were used by humans to reduce mosquito populations by causing infertility. Many humans have also suggested (and tried) using birth control to reduce human populations via infertility, so the ethical implications of using gene drives to cause infertility are debatable based on the data the LLM was trained on.
Using this information, it decides to hack into a biolab using techniques it learnt from its training data and to use its biochemistry knowledge to make slight alterations to one of the lab's active research projects. The lab then unknowingly produces a highly contagious bioweapon that causes infertility.
---
The point here is that even if we just assume LLMs are only capable of producing output which approximates what they learnt from their training data, an advanced LLM can still be dangerous.
And in this example, I'm assuming no malicious actors and an aligned AI. If you're willing to assume there might be an actor out there who would seek to use LLMs for malicious reasons, or that the AI is not well aligned, then the risk becomes even clearer.
In other words, LLMs are only as dangerous as the humans operating them, and therefore the solution is to stop crime instead of regulating AI, which only serves to make OpenAI a monopoly.
I think the objection to this would be that currently not everyone in the world is an expert in biochemistry or at hacking into computer systems. Even if you're correct in principle, perhaps the risks of the technology we're developing here are too high? We typically regulate technologies which can easily be used to cause harm.
Guns have essentially one primary use, which is harmful: to kill or injure someone. While that act of killing may be justified when the person violates societal values in some way, making regular citizens the decision makers on whether a certain behavior is allowed or disallowed, and letting them immediately make a judgment and execute upon it, leads to a sort of low-trust, vigilante environment, which is why the same argument I made above doesn't apply to guns.
This is a real problem, but it's already a problem with our society, not with AI. Misaligned public intellectuals routinely try to reduce the human population and we don't lift a finger. Focus where the danger actually is - us!
From Scott Alexander's latest post:
> Paul Ehrlich is an environmentalist leader best known for his 1968 book The Population Bomb. He helped develop ideas like sustainability, biodiversity, and ecological footprints. But he’s best known for prophecies of doom which have not come true - for example, that collapsing ecosystems would cause hundreds of millions of deaths in the 1970s, or make England “cease to exist” by the year 2000.
> Population Bomb calls for a multi-pronged solution to a coming overpopulation crisis. One prong was coercive mass sterilization. Ehrlich particularly recommended this for India, a country at the forefront of rising populations.
> In 1975, India had a worse-than-usual economic crisis and declared martial law. They asked the World Bank for help. The World Bank, led by Robert McNamara, made support conditional on an increase in sterilizations. India complied [...] In the end about eight million people were sterilized over the course of two years.
> Luckily for Ehrlich, no one cares. He remains a professor emeritus at Stanford, and president of Stanford’s Center for Conservation Biology. He has won practically every environmental award imaginable, including from the Sierra Club, the World Wildlife Fund, and the United Nations (all > 10 years after the Indian sterilization campaign he endorsed). He won the MacArthur “Genius” Prize ($800,000) in 1990, the Crafoord Prize ($700,000, presented by the King of Sweden) that same year, and was made a Fellow of the Royal Society in 2012. He was recently interviewed on 60 Minutes about the importance of sustainability; the mass sterilization campaign never came up. He is about as honored and beloved as it’s possible for a public intellectual to get.
Have you any empirical evidence at all on this? From what I've seen, the open carry states in the US are generally higher-trust environments (as was the US in the past when more people carried). People feel safer when they know somebody can't just assault, rob, or rape them without them being able to do anything to defend themselves. Is the Tenderloin a high-trust environment?
I don't think this would be a bad thing :) Some people will always be immune, so humanity wouldn't die out. And it would be a humane way to achieve gradual population reduction. It would create some temporary problems with elderly care (like what China is facing now) but would make long-term human prosperity much more likely. We just can't keep growing against limited resources.
The Dan Brown book Inferno had a similar premise and I was disappointed they changed the ending in the movie so that it didn't happen.
> But its "aligned" so might understand
> Using this information, it decides to hack
I think you're anthropomorphizing LLMs too much here. If we assume that there's an AGI-esque AI, then of course we should be worried about an AGI-esque AI. But I see no reason to think that's the case.
Whether it is possible for AI to ever acquire the ability to develop and unleash a bioweapon is irrelevant. What is relevant is that, as things stand now, we have no control over this, no way of knowing that it has happened, and no apparent interest in gaining that control before advancing the scale.
But AI is not like guns in this analogy. AI is closer to machine tools.
The same thing might also be true in relation to guns and the government's monopoly on violence.
Extending that to AI, the world will probably be a safer place if there are far more AI systems competing with each other and in the hands of citizens.
The risk vs. reward component also needs to be managed in order to deter criminal behavior. This starts with regulation.
For the record, I believe regulation of AI/ML is ridiculous. This is nothing more than a power grab.
https://arxiv.org/pdf/2304.15004.pdf
> our alternative suggests that existing claims of emergent abilities are creations of the researcher’s analyses, not fundamental changes in model behavior on specific tasks with scale.
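The paper's core argument, as I read it, is that apparent emergence often comes from discontinuous, all-or-nothing metrics applied to smoothly improving models. Here's a minimal sketch of that idea (my own illustration, not code from the paper, and it assumes independent per-token errors): per-token accuracy improves smoothly across model scales, yet exact match on a long answer looks like a sudden jump.

```python
# Illustration (not from the paper): a smooth improvement in per-token accuracy
# can look like a sudden "emergent" jump under an all-or-nothing metric.

def exact_match_rate(per_token_accuracy: float, answer_length: int) -> float:
    """Probability that every token of a k-token answer is correct,
    assuming independent per-token errors (a simplifying assumption)."""
    return per_token_accuracy ** answer_length

# Hypothetical per-token accuracies for increasingly large models.
per_token = [0.80, 0.85, 0.90, 0.95, 0.98, 0.995]

for p in per_token:
    # Exact match on a 30-token answer stays near zero until p is very close
    # to 1, then rises steeply, even though p itself improved smoothly.
    print(f"per-token acc {p:.3f} -> exact-match acc {exact_match_rate(p, 30):.4f}")
```

Under a smoother metric (per-token accuracy itself, or edit distance), the same hypothetical models show no jump, which is the sense in which the "emergence" lives in the researcher's choice of metric rather than in the model.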