zlacker

1. adamsm+(OP) 2023-05-22 14:16:20
>It can't translate whale song, or an extraterrestrial language, though it may opine on how to do so.

Ok guys, pack it up, LLMs can't be intelligent because they can't translate whale song. GG.

I mean, of all the AI goalposts to be moved, this one really takes the cake.

replies(1): >>srslac+Cd
2. srslac+Cd 2023-05-22 15:18:54
>>adamsm+(OP)
It was just an example. I saw some stupid MSNBC video a month ago about an organization specifically using ChatGPT to translate whale song. So again, you misunderstand my point. The model "fits the data." Much like when you train for segmentation tasks on images, the model doesn't just work on the images it was trained on; ideally, it's an approximated function. But that doesn't mean the segmentation can magically work on a concept it's never seen (never mind the failure cases it already has). These are just approximated functions. They're biased towards what we deem "intelligent language" pulled from the web, with a few nuggets of "understanding" in there, if you want to call it that, to fit the data. But they're fundamentally stateless and not really capable of understanding anything outside their corpus (if that), when it doesn't help them minimize the loss during training.
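
To make the "fits the data" point concrete, here's a toy sketch in numpy (purely illustrative; the sine target and the polynomial degree are arbitrary assumptions, not anything from this thread): a regression fit that looks great on its training range falls apart as soon as you ask it about inputs outside that range.

    # Fit a polynomial regression on x in [0, 1], then evaluate it
    # both inside and outside the training range. The out-of-range
    # error explodes: the model "fits the data" rather than learning
    # the underlying function.
    import numpy as np

    rng = np.random.default_rng(0)
    x_train = rng.uniform(0.0, 1.0, 200)          # training "corpus"
    y_train = np.sin(2 * np.pi * x_train)         # function being approximated

    model = np.poly1d(np.polyfit(x_train, y_train, deg=9))

    x_in = np.linspace(0.0, 1.0, 100)             # in-distribution inputs
    x_out = np.linspace(1.5, 2.5, 100)            # out-of-distribution inputs

    err_in = np.abs(model(x_in) - np.sin(2 * np.pi * x_in)).mean()
    err_out = np.abs(model(x_out) - np.sin(2 * np.pi * x_out)).mean()
    print(f"mean error inside training range:  {err_in:.3g}")   # tiny
    print(f"mean error outside training range: {err_out:.3g}")  # huge

Good interpolation on the training distribution says nothing about extrapolation beyond it; that's the gap being argued about here.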

It's a human language calculator. You're imparting magical qualities of general understanding to regression-based function approximation. They "fit" the data. That's not generalizable, nor adaptable. But that's also why they're powerful: the ability to bias them towards that subset of language. No one said it's not an amazing technology, and no one said it was a stochastic parrot. I'm saying that it fits the data, and is not, and cannot be, a general or adaptable intelligence.
