zlacker

[return to "Sam Altman goes before US Congress to propose licenses for building AI"]
1. srslac+I7[view] [source] 2023-05-16 12:00:15
>>vforgi+(OP)
Imagine thinking that regression-based function approximators are capable of anything other than fitting the data you give them. Then imagine willfully hyping up and scaring people who don't understand them: because the model can predict words, you exploit the human tendency to anthropomorphize, so that it seems to follow that it is something capable of generalized, adaptable intelligence.

Shame on all of the people involved in this: the people in these companies, the journalists who shovel shit (hope they get replaced real soon), researchers who should know better, and dementia-ridden legislators.

So utterly predictable and slimy. All of you who are so gravely concerned about "alignment" in this context: give yourselves a pat on the back for hyping up science fiction stories and enabling regulatory capture.

2. chaxor+hB[view] [source] 2023-05-16 14:33:08
>>srslac+I7
What do you think about the papers showing mathematical proofs that GNNs (e.g. GATs/transformers) are dynamic programmers and therefore perform algorithmic reasoning?

The fact that these systems can extrapolate well beyond their training data by learning algorithms is quite different from what has come before, and anyone stating that they "simply" predict the next token is severely shortsighted. Things don't have to be 'brain-like' to be useful, or to have reasoning capabilities, but we have evidence that these systems align well with reasoning tasks, perform well at causal reasoning, and we have mathematical proofs that show how.
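
To make "aligns with dynamic programming" concrete, here is a toy sketch (the graph, names, and numbers are mine, not from the papers): one synchronous Bellman-Ford relaxation round has exactly the shape of one message-passing step, with addition as the message function and min as the update.

    INF = float("inf")

    def bellman_ford_round(dist, edges):
        # One synchronous relaxation: dist[v] <- min(dist[v], dist[u] + w).
        new = dict(dist)
        for u, v, w in edges:
            new[v] = min(new[v], dist[u] + w)
        return new

    def message_passing_round(h, edges, message, update):
        # One GNN-style step: h[v] <- update(h[v], min of incoming messages).
        new = dict(h)
        for v in h:
            msgs = [message(h[u], w) for u, t, w in edges if t == v]
            if msgs:
                new[v] = update(h[v], min(msgs))
        return new

    edges = [("a", "b", 1.0), ("b", "c", 2.0), ("a", "c", 5.0)]
    dist0 = {"a": 0.0, "b": INF, "c": INF}

    # With message = addition and update = min, the step *is* Bellman-Ford;
    # the network only has to learn those two simple local functions.
    mp = message_passing_round(dist0, edges, lambda hu, w: hu + w, min)
    assert mp == bellman_ford_round(dist0, edges)

The structural match is what the proofs formalize: the architecture can represent the algorithm out of easy-to-learn local pieces. That is a statement about representability, not a guarantee that any particular trained model learned it, but it is much more than curve fitting.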

So I don't understand your sentiment.

3. rdedev+3F[view] [source] 2023-05-16 14:51:03
>>chaxor+hB
To be fair, LLMs are predicting the next token. It's just that to get better and better predictions they need to understand some level of reasoning and math. However, it feels to me that a lot of this reasoning is brute-forced from the training data. For example, ChatGPT gets some things wrong when adding two very large numbers. If it really knew the algorithm for adding two numbers, it shouldn't be making these mistakes in the first place. I guess the same goes for issues like hallucinations. We can keep pushing the envelope with this technique, but I'm sure we will hit a limit somewhere.
4. visarg+yR[view] [source] 2023-05-16 15:45:00
>>rdedev+3F
> If it really knew the algorithm for adding two numbers, it shouldn't be making these mistakes in the first place.

You're using it wrong. If you asked a human to do the same operation in under 2 seconds without paper, would the human be more accurate?

On the other hand, if you ask for a step-by-step execution, the LLM can solve it.
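
For instance (illustrative only: the prompts are made up and ask_model is a hypothetical stand-in for whatever chat API you use):

    a, b = 4837210965, 9102837465

    direct = f"Compute {a} + {b}. Reply with only the number."
    stepwise = (f"Compute {a} + {b} digit by digit from the rightmost "
                f"column, writing each carry, then give the final sum.")

    # print(ask_model(direct))    # the "under 2 seconds, no paper" setting
    # print(ask_model(stepwise))  # the step-by-step execution
    print(a + b)  # ground truth for checking either answer: 13940048430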

5. catchn+f81[view] [source] 2023-05-16 16:49:36
>>visarg+yR
am i bad at authoring inputs?

no, it’s the LLMs that are wrong.

6. throwu+Vd1[view] [source] 2023-05-16 17:14:47
>>catchn+f81
Create two random 10-digit numbers and sit down and add them up on paper. Write down every bit of inner monologue you have while doing this, or just speak it out loud and record it.

ChatGPT needs to go through the same process to solve the same problem. It hasn't memorized the addition table up to 10 digits, and neither have you.
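
Here is a toy version of that narrated process (the function is made up, just to make the point concrete):

    def add_with_monologue(a: str, b: str) -> str:
        # Schoolbook addition that prints its "inner monologue": one column
        # at a time, right to left, tracking the carry.
        a, b = a.zfill(len(b)), b.zfill(len(a))  # pad to equal length
        carry, digits = 0, []
        for x, y in zip(reversed(a), reversed(b)):
            carry_in = carry
            total = int(x) + int(y) + carry_in
            carry, d = divmod(total, 10)
            digits.append(str(d))
            print(f"{x} + {y} + {carry_in} = {total}: write {d}, carry {carry}")
        if carry:
            digits.append(str(carry))
        return "".join(reversed(digits))

    print(add_with_monologue("4837210965", "9102837465"))  # -> 13940048430

Every line of that trace is something the model has to produce correctly in sequence; skip the trace and you are asking it to pattern-match an 11-digit answer in one shot.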

7. ahoya+us2[view] [source] 2023-05-17 00:11:28
>>throwu+Vd1
This is so far off from how they really work. It isn't reasoning about anything, and, even less human, it hasn't memorized multiplication tables at all; it can't "do" math. It is just memorizing everything anyone has ever said and miming, as best it can, what a human would say in that situation.
8. throwu+Cx2[view] [source] 2023-05-17 00:47:39
>>ahoya+us2
Sorry, you’re wrong. Go read about how deep neural nets work.