zlacker

[return to "Imagen, a text-to-image diffusion model"]
1. james-+V4 2022-05-23 21:17:30
>>kevema+(OP)
Metaculus, a crowd forecasting site, has steadily brought forward its predicted date for a weakly general AI. Jaw-dropping advances like this only increase my confidence in that prediction. "The future is now, old man."

https://www.metaculus.com/questions/3479/date-weakly-general...

2. tpmx+V9 2022-05-23 21:46:22
>>james-+V4
I don't see how this gets us (much) closer to general AI. Where is the reasoning?
3. 6gvONx+8i 2022-05-23 22:34:15
>>tpmx+V9
Big pretrained models are good enough now that we can pipe them together in really cool ways, and our representations of text and images seem to capture what we “mean.”
4. tpmx+oi 2022-05-23 22:36:11
>>6gvONx+8i
Yeah, it seems like it. But they're still just complicated statistical models. Again, where is the reasoning?
5. 6gvONx+8k 2022-05-23 22:47:39
>>tpmx+oi
I don’t care whether it reasons its way from “3 teddy bears below 7 flamingos” to a picture of that or gets there some other way.

But also, some of the magic in having good enough pretrained representations is that you don’t need to train them further for downstream tasks, which means non-differentiable tasks like logic could soon become more tenable.
