zlacker

[parent] [thread] 6 comments
1. Jensso+(OP)[view] [source] 2023-11-21 00:48:46
> All apples are red. All apples are fruit. My car is red, therefore my car is a fruit.

I Googled that exact phrase and got solutions. A logic problem that a search engine can solve isn't a valid example: the LLM knows it's a logic puzzle just from how you phrased it, the same way Google knows it's a logic puzzle.

And no, making tiny alterations until you no longer get any Google hits isn't proof that ChatGPT can do logic; it's proof that ChatGPT can parse general structure and find patterns better than a search engine can. You need logic problems that can't easily be mapped onto standard problems with tons of examples in the wild.
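
To spell out why the original example fails: it's the classic undistributed-middle fallacy ("red" only appears as the consequent of "all apples are red", so "my car is red" licenses nothing about fruit). A minimal Lean 4 sketch of the countermodel, purely illustrative and with names of my own choosing:

    -- The quoted syllogism is not a valid inference: here is a
    -- countermodel where something is red but neither apple nor fruit.
    example : ¬ (∀ (α : Type) (Apple Red Fruit : α → Prop) (car : α),
        (∀ x, Apple x → Red x) →    -- all apples are red
        (∀ x, Apple x → Fruit x) →  -- all apples are fruit
        Red car → Fruit car) := by  -- car is red, therefore fruit?
      intro h
      -- Interpretation: nothing is an apple, everything is red,
      -- nothing is a fruit. All premises hold, the conclusion fails.
      exact h Unit (fun _ => False) (fun _ => True) (fun _ => False) ()
        (fun _ hx => hx.elim) (fun _ hx => hx) True.intro

Anything that can actually do logic should reject the inference for this structural reason, not because it has seen the apples-and-cars phrasing before.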

replies(3): >>cmrdpo+eA >>kaoD+G51 >>mister+az1
2. cmrdpo+eA[view] [source] 2023-11-21 04:58:26
>>Jensso+(OP)
Exactly. And you realize how weak GPT is at this by giving it complicated type-system programming problems and watching it fall over, get stuck in circular, illogical patterns, and then get even crazier as you try to correct it.

It can't "reason things through", it just builds logic-like patterns based on the distillation of the work of other minds which did reason -- which works about 80% of the time, but when it fails it can't retrace its steps.

Even a really "stupid" human (that would be me) can be made to work through and find their errors when given guidance by a patient teacher. In my experience, dialectical guidance actually makes ChatGPT worse.
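
To give a flavor of what I mean (a hypothetical sketch of the genre, not the exact problem I gave it), ask it to write or debug a recursive conditional type like this, then try to talk it out of its mistakes:

    // Flatten a nested tuple at the type level, e.g.
    // Flatten<[1, [2, [3]], 4]> should be [1, 2, 3, 4].
    type Flatten<T extends unknown[]> =
      T extends [infer Head, ...infer Rest]
        ? Head extends unknown[]
          ? [...Flatten<Head>, ...Flatten<Rest>]  // recurse into nested tuples
          : [Head, ...Flatten<Rest>]              // keep scalars as-is
        : [];

    // Compile-time check: this line only typechecks if Flatten is right.
    const ok: Flatten<[1, [2, [3]], 4]> = [1, 2, 3, 4];

It can usually pattern-match its way to something of this shape, but introduce one subtle bug and the "corrections" start going in circles.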

3. kaoD+G51[view] [source] 2023-11-21 09:32:08
>>Jensso+(OP)
Yeah, I didn't say this was a good example (I'm not the OP, I was just adding info), but you're moving the goalposts from "you pointed out its error" to "that is in its training data" (which is fair, just not what I was replying to; I was addressing your specific point).

Could you provide an actual example that can't be Googled verbatim and that would test this properly?

replies(1): >>wizzwi+hb1
4. wizzwi+hb1[view] [source] [discussion] 2023-11-21 10:22:54
>>kaoD+G51
Roses are red. Violets are blue. Roses are hot. Therefore, violets are cold.
replies(1): >>kaoD+le1
5. kaoD+le1[view] [source] [discussion] 2023-11-21 10:50:46
>>wizzwi+hb1
The poem you've written follows a structure often used in humorous or nonsensical verses. The first two lines, "Roses are red, violets are blue," are a classic opening for many poems and are typically followed by lines that rhyme and make sense together. However, the next lines, "Roses are hot. Therefore, violets are cold," playfully break this expectation by using a logical structure (a "therefore" statement) but reaching a conclusion that is nonsensical. This twist creates a humorous effect.
replies(1): >>wizzwi+Pk1
6. wizzwi+Pk1[view] [source] [discussion] 2023-11-21 11:45:44
>>kaoD+le1
Are you sure it's nonsensical? Red is to blue as hot is to cold.
7. mister+az1[view] [source] 2023-11-21 13:20:11
>>Jensso+(OP)
> And no, making tiny alterations until you no longer get any Google hits isn't proof that ChatGPT can do logic

Can you show "the" implementation of "can do logic"?

Is it possible to demonstrate that it can do logic?
