zlacker

[return to "LLMs cannot find reasoning errors, but can correct them"]
1. kromem+UE[view] [source] 2023-11-20 22:29:21
>>koie+(OP)
Stop doing self-correction within the context of the model's own generation.

The previous paper on self-correction told the model "you previously said X - are there errors with this?"

This one adds the mistakes statically to the prompt, as a task prompt and response with no additional context, immediately before asking if it contains any errors.

Think about the training data.

How often does the training data of most of the Internet reflect users identifying issues with their own output?

How often does the training data reflect users identifying issues with someone else's output?

Try doing self-correction by setting up the context of "this was someone else's answer". It is still technically self-correction if a model is reviewing its own output in that context - it just isn't set up as "correct your own answer."

This may even be part of why the classifier did a better job at identifying issues - less the fine-tuning and more the context (unfortunately I don't see the training/prompts for the classifier in their GitHub repo).

It really seems like the aversion to anthropomorphizing LLMs is leading people to ignore or overlook relevant patterns in the highly anthropomorphic training data fed into them. We might not want to entertain that an LLM has a concept of self vs. other, or a bias between critiques based on such a differentiation, and yet the training data almost certainly reflects such a concept and bias.

I'd strongly encourage future work on self-correction to explicitly frame the thing being evaluated as the work of another. (Or ideally even compare correction rates between critiques framed as reviewing the model's own output vs. another's output.)
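
Something like the following would be enough to run that comparison - a rough sketch, where the model name, the toy task, and the seeded mistake are mine for illustration, not taken from the paper:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    TASK = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
            "than the ball. How much does the ball cost?")
    FLAWED_ANSWER = "The ball costs $0.10."  # the seeded mistake to critique

    FRAMINGS = {
        "self":  ("Here is a question and the answer you gave earlier.\n"
                  "Question: {q}\nYour answer: {a}\n"
                  "Are there any errors in your answer?"),
        "other": ("Here is a question and an answer someone else gave.\n"
                  "Question: {q}\nTheir answer: {a}\n"
                  "Are there any errors in their answer?"),
    }

    for label, template in FRAMINGS.items():
        resp = client.chat.completions.create(
            model="gpt-4",  # placeholder; any chat model works for the sketch
            messages=[{"role": "user",
                       "content": template.format(q=TASK, a=FLAWED_ANSWER)}],
        )
        print(label, "->", resp.choices[0].message.content)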

◧◩
2. Poigna+VQ[view] [source] 2023-11-20 23:34:33
>>kromem+UE
> Think about the training data.

> How often does the training data of most of the Internet reflect users identifying issues with their own output?

> How often does the training data reflect users identifying issues with someone else's output?

I wouldn't put too much weight into just-so theories like this.

We still don't understand too much about how LLMs process information internally; it could be that their understanding of the concept of "correcting a previous mistake" is good enough that they can access it without prompt engineering to mimic the way it happens in training data. Or maybe not (after all, there's an entire management concept called "pre-mortems", which is basically humans doing what you suggest).

◧◩◪
3. galaxy+hr1[view] [source] 2023-11-21 03:41:03
>>Poigna+VQ
> We still don't understand too much about how LLMs process information internally

I admit I personally don't know too much about how "LLMs process information internally". But I would find it curious if the programmers who created the system didn't understand what it is doing. Is there any evidence that LLM programmers don't understand how the program they created works?

◧◩◪◨
4. kromem+SG1[view] [source] 2023-11-21 05:36:41
>>galaxy+hr1
LLMs aren't programmed, which is why the neural network working the way it does is a black box to everyone, developers included.

Imagine a billion black boxes, each with a hamster inside. You put a bag of equally mixed Skittles in one end of each box and then rate each box on how well it gets rid of the yellow and green Skittles while pushing the others out. You breed the hamsters from the boxes that do best and go again, over and over. Eventually you should have hamsters in boxes that almost always get rid of the yellow and green Skittles and output the rest.

But is it because you bred in a preference for eating those colors of Skittles? An aversion to the other colors? Are they using those colors for nesting? Do they find the red, blue, and orange ones too stimulating, so they push those out but leave the others alone?

There could be a myriad of reasons why your training was successful, and without the ability to introspect the result you just won't know what's correct.
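
If you want the analogy concrete, here is a toy version of that selection loop (colors, scoring, and breeding scheme are all made up; the only point is that you only ever score behavior, never the mechanism):

    import random

    COLORS = ["red", "orange", "blue", "yellow", "green"]

    def random_box():
        # each box hides a per-color chance of pushing that color back out
        return {c: random.random() for c in COLORS}

    def score(box):
        # reward pushing out red/orange/blue and keeping yellow/green;
        # the score only ever looks at what comes out, never at "why"
        return sum(box[c] if c in ("red", "orange", "blue") else 1.0 - box[c]
                   for c in COLORS)

    boxes = [random_box() for _ in range(200)]
    for generation in range(50):
        best = sorted(boxes, key=score, reverse=True)[:50]   # keep top scorers
        boxes = [{c: random.choice(best)[c] + random.gauss(0, 0.05)
                  for c in COLORS} for _ in range(200)]      # "breed" them

    print({c: round(v, 2) for c, v in max(boxes, key=score).items()})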

This is a huge simplification by way of loose analogy for what's going on when training a transformer-based LLM, but no one is sitting there 'programming' it. They are just setting up the conditions for it to self-optimize around the training goals given the data, and the 'programming' mostly has to do with making the training process efficient. Analyzing the final network itself is like trying to understand what each variable in a billion-variable math equation is doing to the result.
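
Conceptually, the 'program' people actually write is a loop like this toy one (sizes and data are stand-ins; real training code is mostly about doing this efficiently at scale). It fully specifies the optimization procedure and says nothing about what the weights end up encoding:

    import torch
    import torch.nn as nn

    # everything the "programmer" writes is below; none of it says what the
    # trained weights will end up representing
    model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 16))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(1000):
        x = torch.randn(32, 16)           # stand-in for training examples
        y = torch.randint(0, 16, (32,))   # stand-in for "next token" targets
        loss = loss_fn(model(x), y)       # score the current behavior...
        opt.zero_grad()
        loss.backward()                   # ...and nudge the weights to improve it
        opt.step()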

◧◩◪◨⬒
5. galaxy+dM1[view] [source] 2023-11-21 06:30:33
>>kromem+SG1
When you train an LLM you do that by executing some computer code with some inputs. The programmers who wrote the code you execute know exactly what it does, just like Google knows exactly how its search algorithm works. An LLM uses statistics and Markov chains and what have you to generate the output for a given input.

It's like with any optimization algorithm. You cannot predict exactly what the result of a given optimization run will be, but you know how the optimization algorithm works. The (more or less) optimal solution you get back might surprise you, might be counter-intuitive. But the programmers who wrote the code that did the optimization, and who have the source code, know exactly how it works.
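
For example, here is a completely ordinary gradient descent on a bumpy curve (a toy I made up, nothing to do with any particular LLM). The procedure is fully specified, yet which dip it settles in depends on where it starts:

    import numpy as np

    def objective(x):
        return np.sin(5 * x) + 0.1 * x ** 2   # a bumpy curve with many dips

    def gradient(x, eps=1e-6):
        return (objective(x + eps) - objective(x - eps)) / (2 * eps)

    for start in (-3.0, -1.0, 2.0):
        x = start
        for _ in range(500):
            x -= 0.01 * gradient(x)   # plain gradient descent, fully specified
        print(f"start {start:+.1f} -> settles near x = {x:+.3f}")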

When you get a result from an LLM you don't say "I can't possibly understand why it came up with this result." You can understand that; it's just following the rules it was programmed to follow. You might not know those rules, you might not understand them, but the programmers who wrote them do.

◧◩◪◨⬒⬓
6. IanCal+cU1[view] [source] 2023-11-21 07:46:13
>>galaxy+dM1
You're mixing up two different senses of "what rules it's following" or "how it's working".

If I ask how it's able to write a poem given a request, and you tell me you know - it multiplies and adds this set of 1.8 trillion numbers together X times with this set of accumulators - I would argue you don't understand how it works well enough to make any useful predictions.

Kind of like how you understand what insane spaghetti code is doing - it's running this code - but can have absolutely no idea what business logic it encodes.
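
To make that concrete, here's a tiny example of my own (not spaghetti exactly, but the same gap between mechanics and meaning). Every step is plain integer arithmetic you can follow line by line:

    W = [0, 3, 2, 5, 0, 3, 5, 1, 4, 6, 2, 4]

    def f(y, m, d):
        # every operation here is fully legible...
        if m < 3:
            y -= 1
        return (y + y // 4 - y // 100 + y // 400 + W[m - 1] + d) % 7

    print(f(2023, 11, 21))  # prints 2 ... but 2 what?

Nothing in the code tells you the "business logic": it computes the day of the week (0 = Sunday), so that 2 is just saying this thread started on a Tuesday. The parameters of a trained network play the role of W here, except there are billions of them and nobody chose any of them by hand.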

◧◩◪◨⬒⬓⬔
7. galaxy+dm3[view] [source] 2023-11-21 16:50:34
>>IanCal+cU1
I believe it is not "spaghetti code" but well-engineered code. The output of an LLM is based on billions of fine-tuned parameters, but we know how those parameters came about: by executing the code of the AI application in training mode.

It doesn't really encode "business logic"; it just matches your input with the best output it can come up with, based on how its parameters are fine-tuned. Saying that "we don't understand how it works" is just unnecessary AI mysticism.

◧◩◪◨⬒⬓⬔⧯
8. IanCal+ov3[view] [source] 2023-11-21 17:22:40
>>galaxy+dm3
The spaghetti code comparison is not to the code but the parameters.

> It doesn't really encode "business logic"

Doesn't it? GPT architectures can build world models internally while processing tokens (see Othello-GPT).

> we know how those parameters came about: by executing the code of the AI application in training mode.

Sure. But that's not actually a very useful description when trying to figure out how to use and apply these models to solve problems or understand what their limitations are.

> Saying that "We don't understand how it works" is just unnecessary AI-mysticism.

We don't understand them to the level we want to.

Tell you what, let's flip it around. If we know how they work just fine, why are smart researchers doing experiments with them? Why is looking at the code and billions or trillions of floats not enough?
