zlacker

[parent] [thread] 11 comments
1. tpmx+(OP)[view] [source] 2022-05-23 21:46:22
I don't see how this gets us (much) closer to general AI. Where is the reasoning?
replies(3): >>_joel+65 >>quirin+Y7 >>6gvONx+d8
2. _joel+65[view] [source] 2022-05-23 22:14:37
>>tpmx+(OP)
Perhaps the confluence of NLP and something generative?
replies(2): >>astran+g8 >>Semant+p8
3. quirin+Y7[view] [source] 2022-05-23 22:32:31
>>tpmx+(OP)
I think this serves at least as a clear demonstration of how advanced the current state of AI is. I had played with GPT-3, and that was very impressive, but I couldn't have dreamed that something as good as DALL-E 2 was already possible.
4. 6gvONx+d8[view] [source] 2022-05-23 22:34:15
>>tpmx+(OP)
Big pretrained models are good enough now that we can pipe them together in really cool ways and our representations of text and images seem to capture what we “mean.”
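A toy sketch of that shared "meaning", if you want to poke at it yourself — this uses CLIP via Hugging Face transformers; the model name is the standard public checkpoint and the image path is a placeholder:

    # Score how well a few captions "mean" the same thing as an image.
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("apple.jpg")  # placeholder: any local image
    texts = ["a red apple", "a green apple", "a flamingo"]
    inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)

    # Text and image land in one embedding space, so similarity acts as
    # a "does this caption match this picture?" score.
    probs = model(**inputs).logits_per_image.softmax(dim=-1)
    print(dict(zip(texts, probs[0].tolist())))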
replies(1): >>tpmx+t8
5. astran+g8[view] [source] [discussion] 2022-05-23 22:34:31
>>_joel+65
That doesn’t even lead in the direction of an AGI. The larger and more expensive a model is, the less like an “AGI” it is - an independent agent would be able to learn online for free, not need millions in TPU credits to learn what color an apple is.
6. Semant+p8[view] [source] [discussion] 2022-05-23 22:35:10
>>_joel+65
Yes, Metaculus mostly bets a magic number based on "perhaps" - and tbh, why not: the interaction of NLP and vision is mysterious and has potential. However, those magic numbers should still be considered magic numbers. I agree that by 2040 the interactions will have been extensively studied, but the conclusion on whether we can go much further on cross-model synergies is totally unknown, or pessimistic.
7. tpmx+t8[view] [source] [discussion] 2022-05-23 22:36:11
>>6gvONx+d8
Yeah, it seems like it. But it's still just complicated statistical models. Again, where is the reasoning?
replies(4): >>6gvONx+da >>renewi+Ka >>marvin+db >>london+rb
8. 6gvONx+da[view] [source] [discussion] 2022-05-23 22:47:39
>>tpmx+t8
I don’t care whether it reasons its way from “3 teddy bears below 7 flamingos” to a picture of that or if it gets there some other way.

But also, some of the magic in having good enough pretrained representations is that you don’t need to train them further for downstream tasks, which means non-differentiable tasks like logic could soon become more tenable.
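A toy version of that pattern (the sentence-transformers model name and the four-line "dataset" are just for illustration): keep the big encoder frozen and fit something on top of its embeddings that never needs gradients flowing back through the model.

    # Frozen pretrained features + a classic classifier: no gradients ever
    # flow through the big model, so the downstream piece can be anything,
    # including something non-differentiable.
    from sentence_transformers import SentenceTransformer
    from sklearn.linear_model import LogisticRegression

    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stays frozen
    texts = ["2 + 2 = 4", "2 + 2 = 5",
             "all men are mortal", "all men are immortal"]
    labels = [1, 0, 1, 0]  # toy "is this statement true?" task

    X = encoder.encode(texts)  # plain numpy features
    clf = LogisticRegression().fit(X, labels)
    print(clf.predict(encoder.encode(["3 + 3 = 6"])))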

9. renewi+Ka[view] [source] [discussion] 2022-05-23 22:51:04
>>tpmx+t8
An oft-shared belief is that sufficiently complicated statistical models are indistinguishable from reasoning.
10. marvin+db[view] [source] [discussion] 2022-05-23 22:55:09
>>tpmx+t8
I still think we're missing some fundamental insights into how layered planning/forecasting/deduction/reasoning works, and that figuring this out will be necessary in order to create AI that we could say "reasons".

But with the recent advances/demonstrations, it seems more likely today than in 2019 that our current computational resources are sufficient to perform magnificently spooky stuff if they're used correctly. They are doing that already, and that's without deliberately making the software do anything except draw from a vast pool of examples.

I think it's reasonable, based on this, to update one's expectations of what we'd be able to do if we figured out ways of doing things that aren't based on first seeing a hundred million examples of what we want the computer to do.

Things that do this can obviously exist; we are living examples. Does figuring it out seem likely to be many decades away?

replies(1): >>tpmx+6d
11. london+rb[view] [source] [discussion] 2022-05-23 22:58:18
>>tpmx+t8
All it takes is one 'trick' to give these models the ability to do reasoning.

Like, for example, the discovery that language models get far better at answering complex questions if asked to show their work step by step with chain-of-thought reasoning, as on page 19 of the PaLM paper [1]. Worth checking out the explanations of novel jokes on page 38 of the same paper. While it is, like you say, all statistics, if it's indistinguishable from valid reasoning, then perhaps it doesn't matter.

[1]: https://arxiv.org/pdf/2204.02311.pdf
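A minimal sketch of the trick (the complete() stub is a placeholder for whichever LLM API you have; the worked example is the usual few-shot chain-of-thought setup):

    # Chain-of-thought prompting: prepend a worked example so the model
    # "shows its work" before committing to an answer.
    def complete(prompt: str) -> str:
        """Placeholder: swap in a real LLM API call here."""
        print("--- prompt ---\n" + prompt)
        return ""

    question = ("Roger has 5 tennis balls. He buys 2 more cans of 3 tennis "
                "balls each. How many tennis balls does he have now?")

    direct = complete("Q: " + question + "\nA:")  # often just a (wrong) number

    cot_prefix = (
        "Q: There are 3 cars in the parking lot and 2 more arrive. "
        "How many cars are in the parking lot?\n"
        "A: There are 3 cars. 2 more arrive. 3 + 2 = 5. The answer is 5.\n\n"
    )
    cot = complete(cot_prefix + "Q: " + question + "\nA:")  # steps first, then answer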

12. tpmx+6d[view] [source] [discussion] 2022-05-23 23:10:23
>>marvin+db
That's a well-balanced response that I can agree with.

I'm not an AGI skeptic. I'm just a bit skeptical that the topic of this thread is the path forward. It seems to me like an exotic detour.

And, of course, intelligence isn't magic. We're producing new intelligent entities at a rate of about 5 per second globally, every day.

> Does figuring it out seem likely to be many decades away?

1-7?
