zlacker

1. projec+(OP) 2023-11-20 14:10:26
Well, great to see that the potentially dangerous future of AGI is in good hands.
replies(2): >>cactus+36 >>solard+Uc
2. cactus+36 2023-11-20 14:30:17
>>projec+(OP)
They will never discover AGI with this approach because 1) they are brute-forcing the results and 2) none of this is actually science.
replies(2): >>garden+eb >>captai+fe
3. garden+eb 2023-11-20 14:51:30
>>cactus+36
Can you explain for those of us not up to date with AI developments?
replies(2): >>cactus+1e >>visarg+jP
4. solard+Uc 2023-11-20 14:59:04
>>projec+(OP)
Poor little geepeet is witnessing their first custody battle :(

Daddies, mommy, don't you love me? Don't you love each other? Why are you all leaving?

5. cactus+1e 2023-11-20 15:05:24
>>garden+eb
Search YouTube for videos where Chomsky talks about AI. Current approaches to AI do not even attempt to understand cognition.
replies(1): >>projec+TB
6. captai+fe 2023-11-20 15:06:32
>>cactus+36
1) It may be possible to brute-force a model into something that sufficiently resembles AGI for most use cases (at least well enough to merit concern about who controls it). 2) Deep learning has never been terribly scientific, but here we are.
replies(1): >>cactus+WI
7. projec+TB 2023-11-20 17:11:59
>>cactus+1e
Chomsky takes as axiomatic that there is some magical element of human cognition beyond simply stringing words together. We may not be as special as we like to believe.
8. cactus+WI 2023-11-20 17:33:31
>>captai+fe
If it can't digest a math textbook and work through the equations, how would AGI be accomplished? So many problems come down to advanced mathematics.
replies(1): >>captai+9U
9. visarg+jP 2023-11-20 17:55:45
>>garden+eb
Imagine you are in a car race, and your car has a few tweakable knobs. But you don't know what any of them do; you can only make random perturbations and see what happens. Slowly you work out which knob does what, but you might still never be 100% sure.

That's how AI research and development works. I know, it is pretty weird. We don't really understand it; we know some basic stuff about how neurons and gradients work, and then we hand-wave our way to "language model", "vision model", etc. It's all a black box, magic.

How do we make progress if we don't understand this beast? We prod and poke, make little theories, and then test them on a few datasets. It's basically blind search (see the toy sketch at the end of this comment).

Whenever someone finds anything useful, everyone copies it within like 2 weeks. So ML research is a community thing: the main research happens in the community, not inside anyone's head. We stumble onto models like GPT-4, and then it takes us months to have even a vague understanding of what they are capable of.

Besides that, there are issues with academic publishing: the volume, the quality, peer review, attribution, replicability... they have all gotten out of hand. And we have another set of issues with benchmarks: what they mean, how much we can trust them, what metrics to use.

And yet somehow here we are with GPT-4V and others.
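
To make "blind search" concrete, here is a toy sketch in Python. Everything in it is hypothetical: lap_time is a made-up stand-in for "train the model, run the benchmark", not any real training loop. The idea is just the analogy in code: treat the score as a black box, randomly perturb one knob at a time, and keep a change only when it helps.

    import random

    def lap_time(knobs):
        # Black box: we only observe the score, not why it changes.
        # (Hypothetical stand-in for a full train-and-evaluate cycle.)
        return sum((k - t) ** 2 for k, t in zip(knobs, [0.3, -1.2, 0.8]))

    knobs = [0.0, 0.0, 0.0]
    best = lap_time(knobs)

    for _ in range(1000):
        candidate = list(knobs)
        # Random perturbation: tweak one knob and see what happens.
        candidate[random.randrange(len(candidate))] += random.gauss(0, 0.1)
        score = lap_time(candidate)
        if score < best:  # keep the tweak only if the car got faster
            knobs, best = candidate, score

    print(knobs, best)

The catch in real life is that each call to lap_time is a full training run plus an evaluation, which is exactly why everyone copies a tweak as soon as someone shows it works.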

10. captai+9U 2023-11-20 18:13:04
>>cactus+WI
Right, I do agree that the current LLM paradigm probably won't achieve true AGI; but I think the current trajectory could produce a generalist agent model powerful enough to put AI ethics to the test from pretty much every angle.