[return to "Governance of Superintelligence"]
1. groby_+D7 2023-05-22 18:19:31
>>davidb+(OP)
There is not a single shred of evidence that we'll reach superintelligence any time soon. There is, however, plenty of evidence that regulation benefits incumbents.

You can do the math from there.

2. cubefo+Jv 2023-05-22 20:27:29
>>groby_+D7
AI progress keeps accelerating --> "not a single shred of evidence"
3. groby_+MY 2023-05-22 23:38:51
>>cubefo+Jv
Does it? Does it really? Yes, we've seen huge improvements. We're still nowhere near human-level intelligence. (No, test taking doesn't count.) Worse, we've got pretty clear evidence that the supply of training material for ever-larger LLMs is hitting an upper bound (see the back-of-the-envelope after the list below).

What we'd need to see:

* A breakthrough that either decouples capability from parameter count, or allows parameter counts to grow with smaller training sets.

* Any evidence that it's doing anything more than asymptotically crawling towards human-comparable intelligence.
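
To put rough numbers on that data ceiling: a minimal sketch, assuming the Chinchilla compute-optimal ratio (Hoffmann et al. 2022, roughly 20 training tokens per parameter) and commonly cited estimates of the high-quality public text stock. The figures are illustrative assumptions, not measurements.

    # Minimal sketch, assuming Chinchilla's compute-optimal ratio
    # (Hoffmann et al. 2022): ~20 training tokens per parameter.
    TOKENS_PER_PARAM = 20

    for params in (70e9, 400e9, 1e12, 10e12):
        tokens = params * TOKENS_PER_PARAM
        print(f"{params / 1e9:>6.0f}B params -> {tokens / 1e12:>5.1f}T tokens")

    # Commonly cited estimates put the stock of high-quality public
    # text in the low tens of trillions of tokens, so the bottom rows
    # already exceed what's available. That's the upper bound.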

The entire ZOMG SUPERAI faction suffers from the assumption that somehow thinking more and faster is thinking better. It's not. There's no evidence pointing in that direction.

We currently have ~8B human-level intelligences. They haven't managed to produce anything above human-level intelligence. Where's the indication that emulating their mode of thinking at scale will result in something breaking that threshold?

If anything, machine intelligence is doing worse, because any slight increase in capacity is paid for in large amounts of "hallucination".

4. cubefo+s31 2023-05-23 00:21:19
>>groby_+MY
That's like arguing 10 years ago that ChatGPT was impossible in the next ten years because AlexNet could only recognize objects in photos with middling reliability, and there was no path to scale CNNs to something like ChatGPT.

The mistake, of course, was assuming we were stuck with CNNs. And we will probably not keep using LLMs either. We already know more effective architectures exist, since animals implement one of them.

5. groby_+2K2 2023-05-23 14:29:57
>>cubefo+s31
None of the architectures we know of implies giant leaps in intelligence in any way. That's the main sticking point.

Thinking more does not equate to thinking better. Or even to thinking well.

As for "animals implement them": it's worth noting that we mostly qualify for an award for our impressive lack of understanding in that area. Even with exponential improvements, that's not going to change within the next five years.

The "but we just don't know" argument is useless. That also applies to aliens landing on this planet next week and capturing the government. Theoretically possible, but not a pressing concern.

Should we think about what AI regulations look like? Yes. Should we enact regulations on something that doesn't even really exist, without deeply understanding it, at the behest of the party that stands to gain financially from it? Fuck no.
