
1. fennec+ (OP) 2024-10-24 00:40:13
Yeah, it does teach me more about how LLMs work on the inside that it can't answer a plain-English logic question like that, yet if I give it a code example it can execute it step by step and get the correct answer. It's clearly been trained on enough JS that even with a complex reduce + arrow function, I watched kunoichi (an RP model, no less!) walk through the execution step by step in its head and arrive at the correct answer.
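
For a sense of what I mean, something along these lines (an illustrative snippet I'm making up here, not the exact one I gave it):

    // Count how many words there are of each length.
    const words = ["cat", "otter", "fox", "heron", "ox"];
    const countsByLength = words.reduce((acc, w) => {
      acc[w.length] = (acc[w.length] || 0) + 1;
      return acc;
    }, {});
    // Tracing the accumulator one element at a time ends up with
    // two words of length 3, two of length 5, and one of length 2.

Asked to narrate the accumulator after each element, the model does exactly the kind of bookkeeping it trips over when the question is phrased in plain English.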

I think it's specifically the counting parts of problems that current models are shaky on, and I imagine that's a training data problem.
