zlacker

[return to "Imagen, a text-to-image diffusion model"]
1. james-+V4[view] [source] 2022-05-23 21:17:30
>>kevema+(OP)
Metaculus, a mass-forecasting site, has steadily brought forward its predicted date for a weakly general AI. Jaw-dropping advances like this only increase my confidence in that prediction. "The future is now, old man."

https://www.metaculus.com/questions/3479/date-weakly-general...

2. tpmx+V9[view] [source] 2022-05-23 21:46:22
>>james-+V4
I don't see how this gets us (much) closer to general AI. Where is the reasoning?
3. 6gvONx+8i[view] [source] 2022-05-23 22:34:15
>>tpmx+V9
Big pretrained models are good enough now that we can pipe them together in really cool ways and our representations of text and images seem to capture what we “mean.”
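As a rough sketch of what I mean (mine, not anything from the Imagen paper; it assumes the openai/clip-vit-base-patch32 checkpoint and a local image file): a CLIP-style model embeds a caption and an image into the same space, so a simple similarity score tells you how well the picture matches what the text "means".

    # Minimal CLIP-style text/image matching sketch (assumes torch, transformers, Pillow installed).
    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("corgi_on_a_surfboard.png")  # hypothetical local file
    captions = ["a corgi riding a surfboard", "a bowl of soup"]

    # Embed the text and the image into the same space, then compare.
    inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)

    # Each column is a caption; a higher probability means a better semantic match.
    print(out.logits_per_image.softmax(dim=-1))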
4. tpmx+oi[view] [source] 2022-05-23 22:36:11
>>6gvONx+8i
Yeah, it seems like it. But it's still just complicated statistical models. Again, where is the reasoning?
5. marvin+8l[view] [source] 2022-05-23 22:55:09
>>tpmx+oi
I still think we're missing some fundamental insights into how layered planning/forecasting/deduction/reasoning works, and that figuring this out will be necessary in order to create AI that we could say "reasons".

But with the recent advances/demonstrations, it seems more likely today than in 2019 that our current computational resources are sufficient to perform magnificently spooky stuff if they're used correctly. They are doing that already, and that's without deliberately making the software do anything except draw from a vast pool of examples.

I think it's reasonable, based on this, to update one's expectations of what we'd be able to do if we figured out ways of doing things that aren't based on first seeing a hundred million examples of what we want the computer to do.

Things that do this can obviously exist; we are living examples. Does figuring it out seem likely to be many decades away?

6. tpmx+1n[view] [source] 2022-05-23 23:10:23
>>marvin+8l
That's a well-balanced response that I can agree with.

I'm not an AGI skeptic. I'm just a bit skeptical that the topic of this thread is the path forward. It seems to me like an exotic detour.

And, of course, intelligence isn't magic. We're producing new intelligent entities at a rate of about 5 per second globally, every day.

> Does figuring it out seem likely to be many decades away?

1-7 decades?
