zlacker

[return to "Sam Altman goes before US Congress to propose licenses for building AI"]
1. srslac+I7[view] [source] 2023-05-16 12:00:15
>>vforgi+(OP)
Imagine thinking that regression-based function approximators are capable of anything other than fitting the data you give them. Then imagine willfully hyping up and scaring people who don't understand: because the model can predict words, you exploit the human tendency to anthropomorphize, and suddenly it "follows" that the thing is capable of generalized, adaptable intelligence.

Shame on all of the people involved in this: the people in these companies, the journalists who shovel shit (hope they get replaced real soon), the researchers who should know better, and the dementia-ridden legislators.

So utterly predictable and slimy. All of those who are so gravely concerned about "alignment" in this context, give yourselves a pat on the back for hyping up science fiction stories and enabling regulatory capture.

◧◩
2. tgv+jd[view] [source] 2023-05-16 12:32:32
>>srslac+I7
I'm squarely in the "stochastic parrot" camp (I know it's not a simple Markov model, but still, ChatGPT doesn't think), and it's clearly possible to interpret this as grifting, but your argument is too simple.

You're leaving out the essentials. These models do more than fit the data they're given. They can output it in a variety of ways, and through their approximation they can synthesize data as well. They can output things that weren't in the original data, tailored to a specific request, in a tiny fraction of the time it would take a normal person to look up and understand that information.

Your argument is almost like saying "give me your RSA keys, because it's just two prime numbers, and I know how to list them."
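
To make the analogy concrete (back-of-the-envelope numbers only, nothing to do with any particular model): "it's just two primes and I know how to list them" hides the size of the search space. A quick prime-number-theorem estimate in Python:

    import math

    # Rough count of 1024-bit primes (the size used in RSA-2048 moduli).
    # Prime number theorem: pi(x) ~ x / ln(x). Work in log10 so we don't
    # overflow a float with 2**1024.
    bits = 1024
    log10_count = bits * math.log10(2) - math.log10(bits * math.log(2))
    print(f"roughly 10^{log10_count:.0f} candidate {bits}-bit primes")
    # -> roughly 10^305. Knowing "how to list the primes" doesn't make
    #    enumerating them (or factoring their product) remotely feasible.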

◧◩◪
3. adamsm+BI[view] [source] 2023-05-16 15:09:03
>>tgv+jd
Please explain how stochastic parrots can perform chain-of-thought reasoning and answer out-of-distribution questions from exams like the GRE or the Bar.
◧◩◪◨
4. srslac+Sy2[view] [source] 2023-05-17 00:56:20
>>adamsm+BI
Probably because it fits the data. CoT and out-of-distribution questions from exams say nothing about whether it can generalize and adapt to things outside its corpus.
◧◩◪◨⬒
5. adamsm+b34[view] [source] 2023-05-17 14:30:25
>>srslac+Sy2
It's incredibly easy to show that you're wrong and that the models perform at a high level on questions that are clearly not in their training data.

Unless you think OpenAI is blatantly lying about this:

"A.1 Sourcing. We sourced either the most recent publicly-available official past exams, or practice exams in published third-party 2022-2023 study material which we purchased. We cross-checked these materials against the model’s training data to determine the extent to which the training data was not contaminated with any exam questions, which we also report in this paper."

"As can be seen in tables 9 and 10, contamination overall has very little effect on the reported results."

They also report results on the uncontaminated questions, which show basically no statistical difference.

https://cdn.openai.com/papers/gpt-4.pdf
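
The substring-matching idea behind that cross-check is simple to sketch, for anyone who hasn't read the appendix. The snippet below is a minimal illustration of that style of contamination check, not OpenAI's actual pipeline; the normalization step and the 50-character window are assumptions for the sake of the example.

    import re

    def normalize(text: str) -> str:
        # Lowercase and strip whitespace/punctuation so formatting
        # differences don't hide an exact-text match.
        return re.sub(r"[^a-z0-9]", "", text.lower())

    def is_contaminated(question: str, training_corpus: str, window: int = 50) -> bool:
        # Flag the question if any window-length normalized substring of it
        # appears verbatim in the normalized training corpus.
        q, corpus = normalize(question), normalize(training_corpus)
        if len(q) <= window:
            return q in corpus
        return any(q[i:i + window] in corpus for i in range(len(q) - window + 1))

Questions flagged this way get reported separately, which is what the "uncontaminated" numbers in the tables are.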

[go to top]