zlacker

[return to "OpenAI staff threaten to quit unless board resigns"]
1. projec+p8[view] [source] 2023-11-20 14:10:26
>>skille+(OP)
Well, great to see that the potentially dangerous future of AGI is in good hands.
2. cactus+se[view] [source] 2023-11-20 14:30:17
>>projec+p8
They will never discover AGI with this approach because 1) they are brute forcing the results and 2) none of this is actually science.
3. garden+Dj[view] [source] 2023-11-20 14:51:30
>>cactus+se
Can you explain for those of us not up to date with AI developments?
4. visarg+IX[view] [source] 2023-11-20 17:55:45
>>garden+Dj
Imagine you're in a car race, and your car has a few tweakable knobs. But you don't know which knob does what, and you can only make random perturbations and see what happens. Slowly you work out which is which, but you still might not be 100% sure.

That's how AI research and development works. I know, it's pretty weird. We don't really understand it: we know some basic stuff about how neurons and gradients work, and then we hand-wave our way to "language model", "vision model", etc. It's all a black box, magic.

How do we make progress if we don't understand this beast? We prod and poke, make little theories, and then test them on a few datasets. It's basically blind search.
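That prod-and-poke loop is essentially random hyperparameter search. Here's a minimal sketch; the knob names and the toy scoring function are made up for illustration (a real run would replace `train_and_score` with an actual training job):

```python
import random

def train_and_score(lr, batch_size):
    # Toy stand-in for a full training run that returns validation accuracy.
    # In practice this is hours of compute; here it's just a made-up function
    # that peaks near lr=0.001 and batch_size=64.
    return 1.0 - abs(lr - 0.001) * 100 - abs(batch_size - 64) / 1000

def random_search(trials=50, seed=0):
    rng = random.Random(seed)
    best_score, best_knobs = float("-inf"), None
    for _ in range(trials):
        # Random perturbations of the "knobs": we don't know what's what,
        # so we just sample and see what happens.
        lr = 10 ** rng.uniform(-5, -1)
        batch_size = rng.choice([16, 32, 64, 128])
        score = train_and_score(lr, batch_size)
        if score > best_score:
            best_score, best_knobs = score, {"lr": lr, "batch_size": batch_size}
    return best_score, best_knobs

best_score, best_knobs = random_search()
print(best_knobs)
```

No gradients through the search itself, no theory of why the winning knobs won; you just keep whatever scored best.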

Whenever someone finds anything useful, everyone copies it in like 2 weeks. So ML research is a community thing: the main research happens in the community, not inside anyone's head. We stumble onto models like GPT-4, and then it takes us months to even have a vague understanding of what they are capable of.

Besides that, there are issues with academic publishing: the volume, the quality, peer review, attribution, replicability... they have all gotten out of hand. And we have another set of issues with benchmarks: what they mean, how much we can trust them, what metrics to use.

And yet somehow here we are with GPT-4V and others.
