zlacker

[parent] [thread] 10 comments
1. carom+(OP)[view] [source] 2024-05-15 16:12:35
I disagree. I use ChatGPT daily as a replacement for Google. It doesn't understand or have logic, though it can spit out information very well. It has a broad knowledge base, but there is no entity there with an understanding of the topic.

This becomes pretty clear when you get to more complex algorithms or low-level details like drawing a stack frame. There is no logic there.

replies(2): >>Diogen+b7 >>root_a+Xe
2. Diogen+b7[view] [source] 2024-05-15 16:46:33
>>carom+(OP)
> It doesn't understand or have logic

I can ask ChatGPT questions that require logic to answer, and it will do just fine in most cases. It has certain limitations, but to say it isn't able to apply logic is just completely contrary to my experience with ChatGPT.

replies(1): >>jobs_t+Fd
3. jobs_t+Fd[view] [source] [discussion] 2024-05-15 17:16:08
>>Diogen+b7
give us an example please
replies(1): >>Diogen+KT
4. root_a+Xe[view] [source] 2024-05-15 17:22:25
>>carom+(OP)
Indeed. It's also obvious when the "hallucinations" create contradictory responses that a conceptual understanding would always preclude. For example, "In a vacuum, 100g of feathers and 100g of iron would fall at the same rate due to the constant force of gravity, thus the iron would hit the ground first". Only a language model makes this type of mistake because its output is statistical, not conceptual.
5. Diogen+KT[view] [source] [discussion] 2024-05-15 21:05:12
>>jobs_t+Fd
I deliberately asked ChatGPT a logical question with a false premise: "If all snakes have legs, and a python is a snake, does a python have legs?"

ChatGPT answers:

> Yes, if we assume the statement "all snakes have legs" to be true and accept that a python is a type of snake, then logically, a python would have legs. This conclusion follows from the structure of a logical syllogism:

> 1. All snakes have legs.

> 2. A python is a snake.

> 3. Therefore, a python has legs.

> However, it’s important to note that in reality, snakes, including pythons, do not have legs. This logical exercise is based on the hypothetical premise that all snakes have legs.

ChatGPT clearly understands the logic of the question, answers correctly, and then tells me that the premise of my question is incorrect.

You can say, "But it doesn't really understand logic. It's just predicting the most likely token." Well, it responds exactly how someone who understands logic would respond. If you assert that that's not the same as applying logic, then I think you're essentially making a religious statement.

replies(1): >>root_a+1j1
6. root_a+1j1[view] [source] [discussion] 2024-05-16 00:18:13
>>Diogen+KT
> Well, it responds exactly how someone who understands logic would respond.

An animation looks exactly like something in motion, but it isn't actually moving.

replies(1): >>Diogen+AE1
7. Diogen+AE1[view] [source] [discussion] 2024-05-16 04:40:10
>>root_a+1j1
What's the difference between responding logically and giving answers that are identical to how one would answer if one were to apply logic?
replies(1): >>carom+5b6
8. carom+5b6[view] [source] [discussion] 2024-05-17 18:08:04
>>Diogen+AE1
The logic does not generalize to things outside of the training set. It cannot reason about code very well, but it can write you functions with memorized docs.
replies(1): >>Diogen+bx6
9. Diogen+bx6[view] [source] [discussion] 2024-05-17 20:40:40
>>carom+5b6
Unless you're saying that my exact prompt is already in ChatGPT's training set, the above is an example of successful generalization.
replies(1): >>carom+7r8
10. carom+7r8[view] [source] [discussion] 2024-05-18 19:19:26
>>Diogen+bx6
>All Xs have Ys.

>A Z is an X.

>Therefore a Z has Ys.

I am fairly certain variations of this are in the training set. The tokens that follow, about "in reality Zs not having Ys", come out because X, Y, and Z are incongruous in the rest of the data.

It is not performing a logical calculation; it is predicting the next token.

Explanations of simple logical chains are also in the training data.

Think of it instead as a set of really good (and flexible) language templates. It can fill in the template for different things.
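
Roughly the kind of thing I mean, as a toy sketch (Python, just to show the shape of the analogy, not a claim about the actual mechanism):

    # Toy illustration of the "flexible language template" analogy only.
    # A hand-written syllogism template gets filled in for different X/Y/Z,
    # with a canned caveat appended when the premise clashes with other data.
    TEMPLATE = (
        "1. All {x}s have {y}s.\n"
        "2. A {z} is a {x}.\n"
        "3. Therefore, a {z} has {y}s.\n"
    )

    KNOWN_FALSE = {("snake", "leg")}  # stand-in for "the rest of the data"

    def fill_syllogism(x, y, z):
        out = TEMPLATE.format(x=x, y=y, z=z)
        if (x, y) in KNOWN_FALSE:
            out += f"However, in reality {x}s do not have {y}s.\n"
        return out

    print(fill_syllogism("snake", "leg", "python"))

Nothing in there "understands" snakes or legs; it just slots words into a pattern that already exists.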

replies(1): >>Diogen+JK8
11. Diogen+JK8[view] [source] [discussion] 2024-05-18 22:10:05
>>carom+7r8
> It is not performing a logical calculation; it is predicting the next token.

Those two things are not in any way mutually exclusive. Understanding the logic is an effective way to accurately predict the next token.

> I am fairly certain variations of this are in the training set.

Yes, which is probably how ChatGPT learned that logical principle. It has now learned to correctly apply that logical principle to novel situations. I suspect that this is very similar to how human beings learn logic as well.
