In other words, LLMs are only as dangerous as the humans operating them, and therefore the solution is to stop crime rather than regulate AI; regulation would only serve to make OpenAI a monopoly.
I think the objection to this would be that, currently, not everyone in the world is an expert in biochemistry or at hacking into computer systems. Even if you're correct in principle, perhaps the risks of the technology we're developing here are too high? We typically regulate technologies which can easily be used to cause harm.
Guns have one primary use, which is to kill or injure someone. While that act of killing may be justified when the person violates societal values in some way, making regular citizens the decision makers in whether a certain behavior is allowed or disallowed, and letting them immediately make that judgment and act on it, leads to a sort of low-trust, vigilante environment, which is why the same argument I made above doesn't apply to guns.
Have you any empirical evidence at all on this? From what I've seen, the open-carry states in the US are generally higher-trust environments (as was the US in the past, when more people carried). People feel safer when they know somebody can't just assault, rob, or rape them without them being able to do anything to defend themselves. Is the Tenderloin a high-trust environment?
But AI is not like guns in this analogy. AI is closer to machine tools.
The same thing might also be true in relation to guns and the government's monopoly on violence.
Extending that to AI, the world will probably be a safer place if there are far more AI systems competing with each other, in the hands of citizens.
The risk-versus-reward calculus also needs to be managed in order to deter criminal behavior, and that starts with regulation.
For the record, I believe regulation of AI/ML is ridiculous. This is nothing more than a power grab.