zlacker

[return to "Imagen, a text-to-image diffusion model"]
1. james-+V4 2022-05-23 21:17:30
>>kevema+(OP)
Metaculus, a mass forecasting site, has steadily brought forward its predicted date for a weakly general AI. Jaw-dropping advances like this only increase my confidence in that prediction. "The future is now, old man."

https://www.metaculus.com/questions/3479/date-weakly-general...

2. tpmx+V9 2022-05-23 21:46:22
>>james-+V4
I don't see how this gets us (much) closer to general AI. Where is the reasoning?

3. 6gvONx+8i 2022-05-23 22:34:15
>>tpmx+V9
Big pretrained models are good enough now that we can pipe them together in really cool ways, and their learned representations of text and images seem to capture what we “mean.”

4. tpmx+oi 2022-05-23 22:36:11
>>6gvONx+8i
Yeah, it seems like it. But these are still just complicated statistical models. Again, where is the reasoning?

5. london+ml 2022-05-23 22:58:18
>>tpmx+oi
All it takes is one 'trick' to give these models the ability to do reasoning.

For example, the discovery that language models get far better at answering complex questions when asked to show their working step by step, i.e. chain-of-thought reasoning, as on page 19 of the PaLM paper [1]. The explanations of novel jokes on page 38 of the same paper are also worth checking out. While it is, like you say, all statistics, if the output is indistinguishable from valid reasoning, then perhaps it doesn't matter.
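
To make that concrete, here's a minimal sketch of what chain-of-thought prompting amounts to: a few-shot prompt whose exemplar spells out its intermediate steps. (The tennis-ball exemplar is the standard one from the chain-of-thought literature, not lifted from the PaLM paper, and the model call itself is omitted since any text-completion endpoint works.)

    # Chain-of-thought prompting, sketched in Python. The only "trick" is
    # that the few-shot exemplar shows its working, so the model imitates
    # that style on the new question instead of guessing a bare answer.
    COT_PROMPT = """\
    Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
       Each can has 3 tennis balls. How many tennis balls does he have now?
    A: Roger started with 5 balls. 2 cans of 3 tennis balls each is
       6 tennis balls. 5 + 6 = 11. The answer is 11.

    Q: The cafeteria had 23 apples. If they used 20 to make lunch and
       bought 6 more, how many apples do they have?
    A:"""

    # Send COT_PROMPT to any large language model's completion endpoint;
    # prompted this way, models tend to continue with their own worked
    # steps ("23 - 20 = 3. 3 + 6 = 9. The answer is 9.") rather than
    # jumping straight to an (often wrong) final number.
    print(COT_PROMPT)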

[1]: https://arxiv.org/pdf/2204.02311.pdf
