zlacker

[parent] [thread] 22 comments
1. james-+(OP)[view] [source] 2022-05-23 21:17:30
Metaculus, a mass forecasting site, has steadily brought forward its predicted date for a weakly general AI. Jaw-dropping advances like this only increase my confidence in that prediction. "The future is now, old man."

https://www.metaculus.com/questions/3479/date-weakly-general...

replies(3): >>sydthr+t >>tpmx+05 >>chias+4M
2. sydthr+t[view] [source] 2022-05-23 21:21:09
>>james-+(OP)
How can we prepare for this?

This will result in mass social unrest.

replies(3): >>refulg+T >>aaaaaa+P6 >>dougmw+Vg
3. refulg+T[view] [source] [discussion] 2022-05-23 21:23:53
>>sydthr+t
You think so? I'm very high on the Kool-Aid; image generation and text transformation models (Midjourney, GPT-3) are core parts of my workflow.

It's still an unruly 7-year-old at best. Results need to be verified. Prompt engineering and a sense of creativity are core competencies.

replies(1): >>visarg+S3
4. visarg+S3[view] [source] [discussion] 2022-05-23 21:39:39
>>refulg+T
> Prompt engineering and a sense of creativity are core competencies.

It's funny that people are also prompting each other. Parents, friends, teachers, doctors, priests, politicians, managers and marketers are all prompting (advising) us to trigger desired behaviour. Powerful stuff - having a large model and knowing how to prompt it.

5. tpmx+05[view] [source] 2022-05-23 21:46:22
>>james-+(OP)
I don't see how this gets us (much) closer to general AI. Where is the reasoning?
replies(3): >>_joel+6a >>quirin+Yc >>6gvONx+dd
6. aaaaaa+P6[view] [source] [discussion] 2022-05-23 21:56:31
>>sydthr+t
Stock up on guns, ammo, cigarettes, water filters, canned food, and toilet paper.
replies(1): >>boppo1+l8
◧◩◪
7. boppo1+l8[view] [source] [discussion] 2022-05-23 22:05:06
>>aaaaaa+P6
Nah, learn Spanish and first-aid. Being able to fix people is more useful than having commodities that will make you a target.
8. _joel+6a[view] [source] [discussion] 2022-05-23 22:14:37
>>tpmx+05
Perhaps the confluence of NLP and something generative?
replies(2): >>astran+gd >>Semant+pd
9. quirin+Yc[view] [source] [discussion] 2022-05-23 22:32:31
>>tpmx+05
I think this serves at least as a clear demonstration of how advanced the current state of AI is. I had played with GPT-3, and that was very impressive, but I couldn't have dreamed that something as good as DALL-E 2 was already possible.
10. 6gvONx+dd[view] [source] [discussion] 2022-05-23 22:34:15
>>tpmx+05
Big pretrained models are good enough now that we can pipe them together in really cool ways and our representations of text and images seem to capture what we “mean.”
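A toy sketch of that piping, using an off-the-shelf CLIP checkpoint via Hugging Face (model name and image path are placeholders of mine): one pretrained model embeds both captions and an image into the same space, so you can score how well each caption "means" the picture.

    from PIL import Image
    import torch
    from transformers import CLIPModel, CLIPProcessor

    # One pretrained model that embeds text and images into a shared space.
    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("photo.jpg")  # placeholder path
    texts = ["a photo of a cat", "a photo of a dog"]

    inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)

    # Higher probability = that caption sits closer to the image in the shared space.
    probs = outputs.logits_per_image.softmax(dim=1)
    print(dict(zip(texts, probs[0].tolist())))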
replies(1): >>tpmx+td
11. astran+gd[view] [source] [discussion] 2022-05-23 22:34:31
>>_joel+6a
That doesn’t even lead in the direction of an AGI. The larger and more expensive a model is, the less like an “AGI” it is: an independent agent would be able to learn online for free, not need millions in TPU credits to learn what color an apple is.
12. Semant+pd[view] [source] [discussion] 2022-05-23 22:35:10
>>_joel+6a
Yes, Metaculus mostly bets a magic number based on "perhaps", and honestly, why not: the interaction of NLP and vision is mysterious and has potential. However, those magic numbers should still be treated as magic numbers. I agree that by 2040 the interactions will have been studied extensively, but whether we can go much further on cross-model synergies is completely unknown, or even looks pessimistic.
13. tpmx+td[view] [source] [discussion] 2022-05-23 22:36:11
>>6gvONx+dd
Yeah, it seems like it. But these are still just complicated statistical models. Again, where is the reasoning?
replies(4): >>6gvONx+df >>renewi+Kf >>marvin+dg >>london+rg
14. 6gvONx+df[view] [source] [discussion] 2022-05-23 22:47:39
>>tpmx+td
I don’t care whether it reasons its way from “3 teddy bears below 7 flamingos” to a picture of that or if it gets there some other way.

But also, some of the magic in having good enough pretrained representations is that you don’t need to train them further for downstream tasks, which means non-differentiable tasks like logic could soon become more tenable.
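A rough sketch of what that enables (library and model names are my assumptions): keep the pretrained encoder frozen and bolt a non-differentiable learner, here a random forest, on top of its embeddings; no gradients ever flow into the big model.

    from sentence_transformers import SentenceTransformer
    from sklearn.ensemble import RandomForestClassifier

    # Frozen pretrained encoder: used only for inference, never fine-tuned.
    encoder = SentenceTransformer("all-MiniLM-L6-v2")

    texts = ["loved every minute", "a waste of time",
             "surprisingly good", "terrible, fell asleep"]
    labels = [1, 0, 1, 0]

    # The downstream model is non-differentiable; it only sees fixed vectors.
    X = encoder.encode(texts)
    clf = RandomForestClassifier().fit(X, labels)
    print(clf.predict(encoder.encode(["really enjoyed it"])))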

15. renewi+Kf[view] [source] [discussion] 2022-05-23 22:51:04
>>tpmx+td
A belief oft shared is that sufficiently complicated statistical models are indistinguishable from reasoning.
16. marvin+dg[view] [source] [discussion] 2022-05-23 22:55:09
>>tpmx+td
I still think we're missing some fundamental insights into how layered planning/forecasting/deduction/reasoning works, and that figuring this out will be necessary in order to create AI that we could say "reasons".

But with the recent advances/demonstrations, it seems more likely today than in 2019 that our current computational resources are sufficient to perform magnificently spooky stuff if used correctly. They are doing that already, and that's without deliberately making the software do anything except draw from a vast pool of examples.

I think it's reasonable, based on this, to update one's expectations of what we'd be able to do if we figured out ways of doing things that aren't based on first seeing a hundred million examples of what we want the computer to do.

Things that do this can obviously exist; we are living examples. Does figuring it out seem likely to be many decades away?

replies(1): >>tpmx+6i
17. london+rg[view] [source] [discussion] 2022-05-23 22:58:18
>>tpmx+td
All it takes is one 'trick' to give these models the ability to do reasoning.

For example, the discovery that language models get far better at answering complex questions when asked to show their working step by step with chain-of-thought reasoning, as on page 19 of the PaLM paper [1]. The explanations of novel jokes on page 38 of the same paper are also worth checking out. While it is, like you say, all statistics, if it's indistinguishable from valid reasoning, then perhaps it doesn't matter.
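To make that concrete, a minimal chain-of-thought prompt in the style the paper tests (wording approximate): the worked example nudges the model to produce its own intermediate steps before the final answer.

    Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
       Each can has 3 tennis balls. How many tennis balls does he have now?
    A: Roger started with 5 balls. 2 cans of 3 tennis balls each is
       6 more balls. 5 + 6 = 11. The answer is 11.

    Q: <your harder question here>
    A: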

[1]: https://arxiv.org/pdf/2204.02311.pdf

18. dougmw+Vg[view] [source] [discussion] 2022-05-23 23:01:31
>>sydthr+t
I think the serious answer is that it is yet another labor multiplier, like electricity and software. Our tech since the industrial revolution has allowed us to elevate ourselves from a largely agrarian society to space and cyberspace. AI, by all appearances, continues to be a tool, just the latest in a long line of better tools. It still requires a human to provide intent and direction. Right now in my job, I command the collective output of a million medieval scribes. In the future I will command a million Michelangelos.

Should ML/AI deliver on the wildest promises, it will be like a SpaceX Starship for the mind.

replies(1): >>sydthr+dv
19. tpmx+6i[view] [source] [discussion] 2022-05-23 23:10:23
>>marvin+dg
That's a well-balanced response that I can agree with.

I'm not an AGI skeptic. I'm just a bit skeptical that the topic of this thread is the path forward. It seems to me like an exotic detour.

And of course intelligence isn't magic. We're producing new intelligent entities at a rate of about 5 per second globally, every day.

> Does figuring it out seem likely to be many decades away?

1-7?

20. sydthr+dv[view] [source] [discussion] 2022-05-24 01:00:03
>>dougmw+Vg
Well, anyone over 40 will be fucked. There goes your utopia.
replies(2): >>dougmw+Hp1 >>machia+Qz1
21. chias+4M[view] [source] 2022-05-24 04:02:42
>>james-+(OP)
"The future is already here — It’s just not very evenly distributed"
22. dougmw+Hp1[view] [source] [discussion] 2022-05-24 10:41:36
>>sydthr+dv
Computers didn't fuck anyone over 40, but they did create new opportunities for young people that slowly took over the labor market and provided a steady stream of productivity growth. Right now these are impressive benchmarks and neat toys that cost millions to train. This is going to be a slow transition to a new paradigm. We are not going to end up in a utopia any more than computers created a utopia.
23. machia+Qz1[view] [source] [discussion] 2022-05-24 12:06:15
>>sydthr+dv
No, because once this is live, creating private (teaching) assistants and good UX will be cheaper.