zlacker

[return to "Elon Musk sues Sam Altman, Greg Brockman, and OpenAI [pdf]"]
1. okhuma+3k 2024-03-01 12:41:48
>>modele+(OP)
AI is going to continue to make incremental progress, particularly now through hardware gains. No one can even define what AGI is or what it will look like, let alone whether it would be something that OpenAI would own. Further progress is too incremental to suddenly pop out with "AGI". Fighting about it seems like a distraction.
2. root_a+kO 2024-03-01 16:07:37
>>okhuma+3k
There's also no reason to believe that incremental progress in transformer models will eventually lead to "AGI".
3. snapca+qQ 2024-03-01 16:17:26
>>root_a+kO
Yes, but I think everyone would agree that the chance isn't 0%.
4. root_a+sR 2024-03-01 16:23:34
>>snapca+qQ
I don't agree, I think many people would argue the chance is 0%.
5. snapca+bS 2024-03-01 16:26:25
>>root_a+sR
Are you one of those people? How can you be so confident? I think everyone should have updated their priors after seeing how surprising the emergent behaviors in GPT3+ are.
6. root_a+dV 2024-03-01 16:40:48
>>snapca+bS
I don't think GPT3's "emergent behavior" was very surprising; it was a natural progression from GPT2, and the entire purpose of GPT3 was to test assumptions about how much more performance you could gain by growing the size of the model. That isn't to say GPT3 isn't impressive, but its behavior was within the cone of anticipated possibilities.

Based on a similar understanding, the idea that transformer models will lead to AGI seems obviously incorrect. As impressive as they are, they are just statistical pattern matchers over tokens, not systems that understand the world from first principles. And in case you're among those who believe "humans are just pattern matchers", that might be true, but humans model the world from real-time integrated sensory input, not from statistical patterns in a selection of text posted online. There's simply no reason to believe that AGI can come out of that.
