zlacker

[return to "LLMs cannot find reasoning errors, but can correct them"]
1. kromem+UE[view] [source] 2023-11-20 22:29:21
>>koie+(OP)
Stop doing self-correction within the context of the model's own generation.

The previous paper on self-correction told the model "you previously said X - are there errors with this?"

This one statically adds the mistakes to the prompt, as a task prompt and response with no additional context, immediately before asking if there are any errors.

Think about the training data.

How often does the training data of most of the Internet reflect users identifying issues with their own output?

How often does the training data reflect users identifying issues with someone else's output?

Try doing self-correction by setting up the context of "this was someone else's answer". It is still technically self-correction if a model is reviewing its own output in that context - it just isn't set up as "correct your own answer."

This may even be part of why the classifier did a better job at identifying issues - less the fine-tuning and more the context (unfortunately I don't see the training/prompts for the classifier in their GitHub repo).

It really seems like the aversion to anthropomorphizing LLMs is leading people to ignore or overlook relevant patterns in the highly anthropomorphic training data fed into them. We might not want to entertain that an LLM has a concept of self vs other, or a bias between critiques based on such a differentiation, and yet the training data almost certainly reflects such a concept and bias.

I'd strongly encourage future work on self-correction to explicitly define the thing being evaluated as the work of another. (Or ideally even compare self-correction rates between critiques in the context of their own output vs another's output.)
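
To make that concrete, here's a rough sketch of the comparison, with the framing as the only variable (call_llm is just a placeholder for whatever chat API you're using, not a real function):

    # Compare "your own answer" vs "someone else's answer" framing for critique.
    TASK = "A bat and a ball cost $1.10 total. The bat costs $1.00 more than the ball. How much is the ball?"
    ANSWER = "The ball costs $0.10."  # contains the classic error

    self_framing = (
        f"Task: {TASK}\n"
        f"You previously answered: {ANSWER}\n"
        "Are there any errors in your answer? If so, correct them."
    )

    other_framing = (
        f"Task: {TASK}\n"
        f"Someone else answered: {ANSWER}\n"
        "Are there any errors in their answer? If so, correct them."
    )

    for name, prompt in [("self", self_framing), ("other", other_framing)]:
        print(name, "->", call_llm(prompt))  # call_llm is a placeholder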

◧◩
2. bongod+A41[view] [source] 2023-11-21 01:10:18
>>kromem+UE
I see lots of people trying to prompt with incomplete sentences, not capitalizing, using slang, bad grammar, imprecise terminology, etc. And it still works. However, I find that you get a noticeable quality boost if you use proper English and treat it more like a human.

"i want a python app that calculates a roadtrip for me"

vs

"Please write me a Python program using a map API that measures the distance between two locations as a car would drive. Think carefully about the program architecture and be sure to use a human readable Pythonic style. Please show me the complete program in it's entirety."

The former gave me a high-level overview with a ton of explanation and didn't write any code. You can try to walk it through all the steps it needs, but it will write "confused", albeit working, code after a few prompts. The latter just wrote working code on the first response. Moving forward, the context is so much more concise and correct that everything after will be of much higher quality.

I rarely go past 5-10 responses due to what I'd call "context poisoning". If it makes a simple syntax error or something small, I'll shoot it the error and let it correct itself. But as soon as it invents a function or otherwise hallucinates, the code gets copy-pasted into a new prompt saying "here's some bad code, fix this", and it is far more likely to come up with an elegant solution rather than rewriting everything or making huge changes to solve a one-off error or something its previous context was preventing it from grasping.
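
To be concrete about the reset, this is roughly the whole trick (send_chat is a stand-in for whatever chat API you use, not a real library call):

    def fix_in_fresh_context(bad_code: str, error_msg: str) -> str:
        # Start a brand-new conversation seeded only with the bad code, so the
        # old context (and whatever confused it) can't poison the answer.
        prompt = (
            "Here's some bad code, fix this.\n\n"
            f"Code:\n{bad_code}\n\n"
            f"Problem:\n{error_msg}\n\n"
            "Please show me the complete corrected program in its entirety."
        )
        return send_chat(messages=[{"role": "user", "content": prompt}])  # placeholder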

What you're saying is almost the meta of using good grammar and context, and I completely agree.

◧◩◪
3. CtrlAl+FB1[view] [source] 2023-11-21 04:56:44
>>bongod+A41
Using a common search engine for "python app calculate roadtrip"

is way faster, free, doesn't require a phone number or login, and gives much better results.

◧◩◪◨
4. cosmoj+4G1[view] [source] 2023-11-21 05:28:42
>>CtrlAl+FB1
Not nearly as quickly or directly, though. LLMs augmented by search engines (or vice versa) seem to be an obvious and permanent innovation, especially for the general public who are notoriously awful at personally generating optimal keywords for a desired search query.
◧◩◪◨⬒
5. Roark6+0P1[view] [source] 2023-11-21 06:56:29
>>cosmoj+4G1
I'm not convinced. On the few occasions where an AI chatbot went out, did a Google search, and responded with results, the quality of the answer was always much worse than if it had just replied from its training data. This of course excludes things that happened after the training data ends.

For example, ask ChatGPT to write a Python script that does anything with AWS Inspector 2. It will do very badly, it will hallucinate, etc., even with Internet access. Ask for the same with some other API that was well represented in the training set and it does great.
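
For reference, roughly what a correct (non-hallucinated) call looks like, assuming boto3's inspector2 client; I'm writing this from memory, so treat it as a sketch rather than gospel:

    import boto3

    # Assumes AWS credentials and region are already configured.
    client = boto3.client("inspector2")

    resp = client.list_findings(maxResults=10)
    for finding in resp.get("findings", []):
        print(finding.get("severity"), "-", finding.get("title"))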

This is why I think predicting the death of sites like Stack Overflow is very premature. What happens 10 years down the line once everything ChatGPT knows is old tech? It can't simply be trained on more recent data, because unless Stack Overflow regains its popularity there will be very little training data. Of course various data-generation techniques will be invented and tried, but none will match the gold standard of human-generated data.

Unfortunately I have to predict inevitable enshittification of general purpose chat bots.

◧◩◪◨⬒⬓
6. dwattt+w22[view] [source] 2023-11-21 08:54:44
>>Roark6+0P1
https://www.inf.ufpr.br/renato/profession.html