Those here who don't believe AI should be regulated, do you not believe AI can be dangerous? Or is it that you believe a dangerous AI is so far away that we don't need to start regulating now?
Do you accept that if someone develops a dangerous AI tomorrow, there's no way to travel back in time and retroactively regulate its development?
It just seems so obvious to me that there should be oversight of the development of a potentially dangerous technology that I can't understand why people would be against it, especially when the arguments against are as weak as "it's not dangerous yet".
AI needs to be regulated and controlled; the alternative is chaos.
Unfortunately, the current demented, fossil, greedy monopolist-led system is most likely incapable of creating a sane and fair environment for the development of AI. I can only hope I'm wrong.
Very soon they'll be addicted to AI, as with every other major change.