- Lord Altman
Predictable. Disappointing, but predictable.
We've seen this dance so many times before.
No need to innovate when you can regulate.
And it raises First Amendment issues as well. I think it's morally wrong to prohibit the development of software, which is what AI models are, especially if it's done in a personal capacity.
How would they even know that the author is based in the US anyway? Just use a Russian or Chinese Git hosting provider, where these laws don't exist.
And by the way, foreign developers won't have to jump through these hoops in the first place, so this law will only put the US at a disadvantage relative to the rest of the world.
If these lobbyists get their way and restrict AI development in both the US and the EU, it will be darkly funny if Russia, of all places, ends up as one of the few large countries where its development remains unrestricted.
Even better: if Russia splits up, we'll have a new wild west for this kind of thing...
For anyone who really believes that AI is dangerous, having some reasonable regulations on it is logical. It's a good start on not being doomed. It goes against everyone's egalitarian/libertarian impulses, though.
The thing is, AI doesn't seem nearly as dangerous as a fully automatic machine gun. For now. It's just generating text (and video) for fun, right?
It's funny how all Microsoft properties end up in a dominant position in their markets.
The point shared by both AI alarmists and AI advocates is that AI will ultimately be highly resistant to regulation, as dictated by the market for it. Nobody will want to regulate something, assuming they even could, whose free operation underpins everyone's chance of survival against competing systems.
If anything, I find the danger inherent in the habit of casually labeling things "dangerous".
I'm still working out whether my issue is the laziness of the alarmist vocabulary itself, used without the explanation it requires, or the suspicion that it's emotional manipulation, an attempt to circumvent having to actually explain one's reasoning.
Already, AI pessimists are well on their way to losing whatever window they have for their arguments to be heard and taken seriously. You can tell by their parroting of the word "dangerous" as the total substance of their case, which will soon be a laughable defense. They'd better learn more words.