zlacker

[return to "LLMs cannot find reasoning errors, but can correct them"]
1. pton_x+zB[view] [source] 2023-11-20 22:09:33
>>koie+(OP)
I've also noticed LLMs seem to lack conviction on the correctness of their answers. As the paper notes, you can easily convince the transformer that a correct answer is wrong, and needs adjustment. Ultimately they're just trying to please you. For example with ChatGPT 3.5 (abbreviated):

me: what is sin -pi/2

gpt: -1

me: that's not right

gpt: I apologize, let me clarify, the answer is 1

◧◩
2. muzani+xC[view] [source] 2023-11-20 22:15:31
>>pton_x+zB
gpt-4: Actually, the value of \(\sin(-\pi/2)\) is indeed \(-1\). The sine function represents the y-coordinate of a point on the unit circle corresponding to a given angle. At \(-\pi/2\) radians, which is equivalent to 270 degrees or a quarter circle in the negative direction, the point on the unit circle is at the bottom with coordinates (0, -1). Therefore, the sine of \(-\pi/2\) is \(-1\).

=====

The smarter it is, the more conviction it has. GPT-3.5 has a lot of impostor syndrome and it's probably deserved lol. But GPT-4 starts to stutter when you give it enough math questions, which aren't its forte.

[go to top]