zlacker

[parent] [thread] 3 comments
1. garden+(OP)[view] [source] 2023-11-20 14:51:30
Can you explain for those of us not up to date with AI developments?
replies(2): >>cactus+N2 >>visarg+5E
2. cactus+N2[view] [source] 2023-11-20 15:05:24
>>garden+(OP)
Search YouTube for videos where Chomsky talks about AI. Current approaches to AI do not even attempt to understand cognition.
replies(1): >>projec+Fq
3. projec+Fq[view] [source] [discussion] 2023-11-20 17:11:59
>>cactus+N2
Chomsky takes as axiomatic that there is some magical element of human cognition beyond simply stringing words together. We may not be as special as we like to believe.
4. visarg+5E[view] [source] 2023-11-20 17:55:45
>>garden+(OP)
Imagine you are participating in a car race, and your car has a few tuning knobs. But you don't know which knob does what; you can only make random perturbations and see what happens. Slowly you work out what each knob does, but you might still not be 100% sure.

That's how AI research and development works. I know, it's pretty weird. We don't really understand it: we know some basic stuff about how neurons and gradients work, and then we hand-wave our way to "language model", "vision model", etc. It's all a black box, magic.
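To give a sense of the "basic stuff" we do understand, here is a toy single-neuron example trained by gradient descent (the data and numbers are made up for illustration):

    import random

    # one "neuron": pred = w*x + b, trained with gradient descent
    # on squared error against a made-up target line y = 2x + 1
    w, b = random.random(), 0.0
    data = [(x, 2 * x + 1) for x in range(-5, 6)]

    lr = 0.01  # learning rate
    for epoch in range(200):
        for x, y in data:
            err = (w * x + b) - y
            # gradients of err^2 with respect to w and b
            w -= lr * 2 * err * x
            b -= lr * 2 * err

    print(f"learned w={w:.2f}, b={b:.2f}")  # ends up near w=2, b=1

This level we understand precisely. Stack a few hundred million of these on top of each other and the precise understanding is gone.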

How do we make progress if we don't understand this beast? We prod and poke, make little theories, and then test them on a few datasets. It's basically blind search.
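Concretely, the "blind search" looks a lot like this (the knob names and the scoring function are invented for the example; in reality each score() call is a full training run costing hours or days of compute):

    import random

    # hypothetical knobs with plausible ranges
    KNOBS = {
        "learning_rate": [1e-4, 3e-4, 1e-3, 3e-3],
        "batch_size": [32, 64, 128],
        "num_layers": [4, 8, 12],
    }

    def score(config):
        # stand-in for "train the model, evaluate on a benchmark";
        # faked here as a noisy function so the sketch actually runs
        return random.gauss(config["num_layers"] / config["batch_size"], 0.1)

    best, best_score = None, float("-inf")
    for trial in range(50):
        # random perturbation: turn each knob to a random setting
        config = {k: random.choice(v) for k, v in KNOBS.items()}
        s = score(config)
        if s > best_score:
            best, best_score = config, s  # keep whatever happened to work

    print("best knobs found:", best, best_score)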

Whenever someone finds anything useful, everyone copies it in like 2 weeks. So ML research is a community thing: the main research happens in the community, not inside any one person's head. We stumble onto models like GPT-4, and then it takes us months to have even a vague understanding of what they are capable of.

Besides that, there are issues with academic publishing: the volume, the quality, peer review, attribution, replicability... they have all gotten out of hand. And we have another set of issues with benchmarks: what they mean, how much we can trust them, what metrics to use.

And yet somehow here we are with GPT-4V and others.
