zlacker

[return to "Governance of Superintelligence"]
1. groby_+D7[view] [source] 2023-05-22 18:19:31
>>davidb+(OP)
There is not a single shred of evidence we'll reach superintelligence any time soon. There's however a lot of evidence that regulation benefits incumbents.

You can do the math from there.

2. cubefo+Jv[view] [source] 2023-05-22 20:27:29
>>groby_+D7
AI progress keeps accelerating more and more --> "not a single shred of evidence"
3. groby_+MY[view] [source] 2023-05-22 23:38:51
>>cubefo+Jv
Does it? Does it really? Yes, we've seen huge improvements. We're still not close to human-level intelligence. (No, test-taking doesn't count.) Worse, we've got pretty clear evidence that the growth in training material for larger LLMs is basically hitting an upper bound.

What we'd need to see:

* A breakthrough that either decouples capability from parameter count or allows parameter-count increases with smaller training sets.

* Any evidence that it's doing anything more than asymptotically crawling towards human-comparable performance.

The entire ZOMG SUPERAI faction suffers from the assumption that somehow thinking more and faster is thinking better. It's not. There's no evidence pointing in that direction.

We currently have ~8B human-level intelligences. They haven't managed to produce anything above human-level intelligence. Where's the indication that emulating their mode of thinking at scale will result in something breaking that threshold?

If anything, machine intelligence is doing worse, because any slight increase in capability is paid for with large amounts of "hallucination".
