zlacker

[return to "Sam Altman goes before US Congress to propose licenses for building AI"]
1. srslac+I7[view] [source] 2023-05-16 12:00:15
>>vforgi+(OP)
Imagine thinking that regression-based function approximators are capable of anything other than fitting the data you give them. Then imagine willfully hyping them up and scaring people who don't understand them: because the model can predict words, you exploit the human tendency to anthropomorphize, and it supposedly follows that it's something capable of generalized, adaptable intelligence.
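
(To make the "fits the data" point concrete, here's a minimal numpy sketch of regression-based function approximation, my own toy example rather than anyone's actual model. It nails the interval it was trained on and falls apart outside it:)

    import numpy as np

    # Toy "corpus": sin(x) sampled only on [0, 3].
    rng = np.random.default_rng(0)
    x_train = rng.uniform(0, 3, 200)
    y_train = np.sin(x_train)

    # Regression-based function approximation: least-squares polynomial fit.
    model = np.poly1d(np.polyfit(x_train, y_train, deg=9))

    # In distribution: the fit is excellent.
    x_in = np.linspace(0, 3, 5)
    print(np.abs(model(x_in) - np.sin(x_in)).max())    # tiny error

    # Out of distribution: the same function diverges wildly.
    x_out = np.linspace(5, 8, 5)
    print(np.abs(model(x_out) - np.sin(x_out)).max())  # enormous error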

Shame on everyone involved in this: the people in these companies, the journalists who shovel shit (hope they get replaced real soon), researchers who should know better, and dementia-ridden legislators.

So utterly predictable and slimy. To all of you so gravely concerned about "alignment" in this context: give yourselves a pat on the back for hyping up science-fiction stories and enabling regulatory capture.

◧◩
2. tgv+jd[view] [source] 2023-05-16 12:32:32
>>srslac+I7
I'm squarely in the "stochastic parrot" camp (I know it's not a simple Markov model, but still, ChatGPT doesn't think), and it's certainly possible to read this as grifting, but your argument is too simple.

You're leaving out the essentials. These models do more than fit the data they're given. They can output it in a variety of ways and, through their approximation, synthesize data as well. They can produce things that weren't in the original data, tailored to a specific request, in a tiny fraction of the time it would take a normal person to look up and understand that information.

Your argument is almost like saying, "give me your RSA keys, because they're just two prime numbers, and I know how to list them."
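
(That analogy checks out arithmetically. A back-of-envelope sketch using the prime number theorem, pi(n) ~ n/ln(n); the nanosecond-per-candidate rate is my own assumption:)

    import math

    # How many candidates is "I know how to list them" for 1024-bit primes?
    # Prime number theorem: pi(n) ~ n / ln(n), computed in log10 space
    # because 2**1024 overflows a float.
    ln_n = 1024 * math.log(2)                              # ln(2^1024) ~ 709.8
    log10_primes = 1024 * math.log10(2) - math.log10(ln_n)
    print(f"~10^{log10_primes:.0f} primes below 2^1024")   # ~10^305

    # One candidate per nanosecond since the Big Bang (~4.4e17 s)
    # checks only ~4.4e26 of them. Enumeration is not an attack.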

◧◩◪
3. adamsm+BI[view] [source] 2023-05-16 15:09:03
>>tgv+jd
Please explain how stochastic parrots can perform chain-of-thought reasoning and answer out-of-distribution questions from exams like the GRE or the bar.
◧◩◪◨
4. srslac+Sy2[view] [source] 2023-05-17 00:56:20
>>adamsm+BI
Probably because it fits the data. CoT and out-of-distribution exam questions say nothing about whether it can generalize and adapt to things outside of its corpus.
◧◩◪◨⬒
5. adamsm+b34[view] [source] 2023-05-17 14:30:25
>>srslac+Sy2
It's incredibly easy to show that you're wrong and that the models perform at high levels on questions that are clearly not in their training data.

Unless you think OpenAI is blatantly lying about this:

"A.1 Sourcing. We sourced either the most recent publicly-available official past exams, or practice exams in published third-party 2022-2023 study material which we purchased. We cross-checked these materials against the model’s training data to determine the extent to which the training data was not contaminated with any exam questions, which we also report in this paper."

"As can be seen in tables 9 and 10, contamination overall has very little effect on the reported results."

They also report results on uncontaminated data, which show basically no statistical difference.

https://cdn.openai.com/papers/gpt-4.pdf
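
(For anyone curious what that cross-check amounts to: the report describes a substring-match test against the training data. A simplified sketch of that idea; the normalization and the exact sampling parameters here are my guesses, not OpenAI's code:)

    import random

    def is_contaminated(question: str, training_docs: list[str],
                        n_samples: int = 3, sub_len: int = 50) -> bool:
        """Flag an exam question if any randomly sampled substring
        of it appears verbatim in a training document."""
        text = " ".join(question.split())      # collapse whitespace
        if len(text) <= sub_len:
            return any(text in doc for doc in training_docs)
        for _ in range(n_samples):
            start = random.randrange(len(text) - sub_len + 1)
            sub = text[start:start + sub_len]
            if any(sub in doc for doc in training_docs):
                return True
        return False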

◧◩◪◨⬒⬓
6. srslac+3L8[view] [source] 2023-05-18 20:44:08
>>adamsm+b34
You seem to misunderstand my point.

I'm saying that the "intelligence" is specialized, not generalized and adaptable.

It's an approximated function. We're talking about regression-based function approximation. This is a model of language.

"Emergent behavior", when it's not just a mirage of wishful researchers and if it even exists, is only a side effect of the regression based function approximation to generate a structure that encapsulates all substantive chains of words (a model).

We then guide the model further towards a narrow portion of the language latent space that aligns with our perception of intelligent behavior.
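
(A cartoon of that "guiding", as I read it: the base model defines a distribution over next tokens, and tuning reweights it toward a preferred region. Toy numbers and a made-up vocabulary, nothing from a real model:)

    import numpy as np

    vocab = ["helpful", "rambling", "toxic", "precise", "evasive"]
    base_logits = np.array([1.0, 1.2, 0.8, 0.9, 1.1])

    preferred = {"helpful", "precise"}   # stand-in for a reward signal
    bias = np.array([2.0 if t in preferred else 0.0 for t in vocab])

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    print(dict(zip(vocab, softmax(base_logits).round(3))))
    print(dict(zip(vocab, softmax(base_logits + bias).round(3))))
    # Same underlying function; its mass is just squeezed into the
    # subset we label "intelligent behavior".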

It can't translate whale song, or an extraterrestrial language, though it may opine on how to do so.

The underpinning technology of language models matters more than "general and adaptable intelligence." It matters more than something that is going to, or is even capable of, escaping the box and killing us all. It functions as a universal induction machine, capable of modeling, and "comprehending," the latent structure within any form of signal.

The output of that function approximation, though, is simply a model. A specialized intelligence. A non-adaptable intelligence outside of its corpus, outside of the data that it "fits."

The approximated function does not magically step outside of its box; nor is it capable of doing so. It fits the data.

◧◩◪◨⬒⬓⬔
7. adamsm+XCi[view] [source] 2023-05-22 14:16:20
>>srslac+3L8
>It can't translate whale song, or an extraterrestrial language, though it may opine on how to do so.

Ok guys, pack it up: LLMs can't be intelligent because they can't translate whale song. GG.

I mean, of all the AI goalposts to be moved, this one really takes the cake.

◧◩◪◨⬒⬓⬔⧯
8. srslac+zQi[view] [source] 2023-05-22 15:18:54
>>adamsm+XCi
It was just an example; I saw some stupid MSNBC video a month ago about an organization specifically using ChatGPT to translate whale song. So again, you misunderstand my point. The model "fits the data." It's like training for segmentation tasks on images: ideally the model doesn't just work on the exact images it was trained on, because it's an approximated function. But that doesn't mean the segmentation can magically work on a concept it has never seen (to say nothing of the failure cases it already has). These are just approximated functions. They're biased toward what we deem "intelligent language" pulled from the web, with a few nuggets of "understanding" in there, if you want to call it that, to fit the data. But they are fundamentally stateless and not really capable of understanding anything outside of their corpus, if even that, when it doesn't help minimize the loss during training.
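
(The segmentation point, as a toy: a nearest-centroid classifier standing in for the trained model. My own illustration. It interpolates fine within its two concepts and has no vocabulary for a third:)

    import numpy as np

    rng = np.random.default_rng(0)
    # Training "corpus": exactly two concepts.
    cats = rng.normal(0.0, 0.5, (100, 2))
    dogs = rng.normal(5.0, 0.5, (100, 2))
    centroids = np.stack([cats.mean(axis=0), dogs.mean(axis=0)])
    labels = ["cat", "dog"]

    def classify(x):
        # The only "world" this function knows is its training data.
        return labels[np.argmin(np.linalg.norm(centroids - x, axis=1))]

    whale = np.array([50.0, -30.0])   # a concept never seen in training
    print(classify(whale))            # confidently "cat" or "dog";
                                      # it has no way to say "something new"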

It's a human-language calculator. You're ascribing magical qualities of general understanding to regression-based function approximation. They "fit" the data. That's not generalizable, nor adaptable. But that's exactly why they're powerful: the ability to bias them toward that subset of language. No one said it's not an amazing technology, and no one said it was a stochastic parrot. I'm saying that it fits the data, and that it is not, and cannot be, a general or adaptable intelligence.

[go to top]