What we'd need to see:
* A breakthrough that either decouples capability from parameter count, or allows parameter counts to increase without correspondingly larger training sets.
* Any evidence that it's doing anything more than asymptotically crawling towards human-comparable performance.
The entire ZOMG SUPERAI faction suffers from the assumption that thinking more and faster somehow means thinking better. It doesn't. There's no evidence pointing in that direction.
We currently have ~8B human-level intelligences. They haven't managed to produce anything above human-level intelligence. Where's the indication that emulating their mode of thinking at scale will result in something breaking that threshold?
If anything, machine intelligence is doing worse, because any slight increase in capacity is paid for in large amounts of "hallucination".
The mistake of course was assuming we were stuck with CNNs. And we will probably also not keep using LLMs. We already know there are more effective architectures, as animals implement one of them.
Thinking more does not equate to thinking better. Or even to thinking well.
As for "animals implement them": it's worth noting that we'd mostly qualify for an award for our impressive lack of understanding in that area. Even with exponential improvements, that is not going to change within the next five years.
The "but we just don't know" argument is useless. It applies equally to aliens landing on this planet next week and capturing the government. Theoretically possible, but not a pressing concern.
Should we think about what AI regulations look like? Yes. Should we enact regulations on something that doesn't even really exist, without deeply understanding it, at the behest of the party that stands to gain financially from it? Fuck no.