To those here who don't believe AI should be regulated: do you not believe AI can be dangerous? Or is it that you believe a dangerous AI is so far away that we don't need to start regulating now?
Do you accept that if someone develops a dangerous AI tomorrow, there's no way to travel back in time and retroactively regulate its development?
It just seems so obvious to me that there should be oversight of the development of a potentially dangerous technology that I can't understand why people would be against it, especially given arguments as weak as "it's not dangerous yet."
I think the discussion is wrapped in a thick veil of anxiety and sci-fi movies. It's very heavy on hypotheticals and very light on actual evidence of harm caused by AI or the actors deploying it. "We need to act before there is evidence" is hard to argue with; thankfully, human imagination is not regulated yet.