zlacker

[parent] [thread] 4 comments
1. Diogen+(OP)[view] [source] 2024-05-16 04:40:10
What's the difference between responding logically and giving answers identical to the ones one would give by applying logic?
replies(1): >>carom+vw4
2. carom+vw4[view] [source] 2024-05-17 18:08:04
>>Diogen+(OP)
The logic does not generalize to things outside the training set. It cannot reason about code very well, but it can write you functions from memorized docs.
replies(1): >>Diogen+BS4
3. Diogen+BS4[view] [source] [discussion] 2024-05-17 20:40:40
>>carom+vw4
Unless you're saying that my exact prompt is already in ChatGPT's training set, the above is an example of successful generalization.
replies(1): >>carom+xM6
4. carom+xM6[view] [source] [discussion] 2024-05-18 19:19:26
>>Diogen+BS4
>All Xs have Ys.

>A Z is an X.

>Therefore a Z has Ys.

I am fairly certain variations of this are in the training set. The tokens that follow it, about "in reality Zs not having Ys", are due to X, Y, and Z being incongruous with the rest of the data.

It is not performing a logical calculation, it is predicting the next token.

Explanations of simple logical chains are also in the training data.

Think of it instead as a set of really good (and flexible) language templates. It can fill in those templates with different things.
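
As a rough illustration of what I mean by a flexible template (not a claim about what a transformer actually computes), here is a toy sketch in Python; the template text and terms are made up for the example:

```python
# Toy version of the "flexible language template" framing: a fixed
# syllogism-shaped template whose slots get filled with arbitrary terms.
# The terms and template are illustrative only.

SYLLOGISM_TEMPLATE = (
    "All {x}s have {y}s. "
    "A {z} is a {x}. "
    "Therefore a {z} has {y}s."
)

def fill_syllogism(x: str, y: str, z: str) -> str:
    """Instantiate the template with concrete terms."""
    return SYLLOGISM_TEMPLATE.format(x=x, y=y, z=z)

if __name__ == "__main__":
    # The template produces a well-formed argument whether or not the
    # conclusion is true in reality -- incongruous terms still "work".
    print(fill_syllogism("bird", "feather", "sparrow"))
    print(fill_syllogism("cloud", "wheel", "sonnet"))
```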

replies(1): >>Diogen+967
5. Diogen+967[view] [source] [discussion] 2024-05-18 22:10:05
>>carom+xM6
> It is not performing a logical calculation, it is predicting the next token.

Those two things are not in any way mutually exclusive. Understanding the logic is an effective way to accurately predict the next token.

> I am fairly certain variations of this are in the training set.

Yes, which is probably how ChatGPT learned that logical principle. It has now learned to correctly apply that logical principle to novel situations. I suspect that this is very similar to how human beings learn logic as well.
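
For concreteness, the principle in that example is just universal instantiation plus modus ponens. A minimal Lean statement of the quoted pattern (the predicate names are only placeholders):

```lean
-- The quoted inference pattern, stated as a checkable fact:
-- if every X has Ys, and z is an X, then z has Ys.
example {α : Type} (isX hasYs : α → Prop)
    (all_X_have_Ys : ∀ a, isX a → hasYs a)
    (z : α) (z_is_X : isX z) : hasYs z :=
  all_X_have_Ys z z_is_X
```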
