alkona (OP) · 2025-05-22 09:22:21
I think this is the key. If you have a problem where it's slow to produce a plausible answer but quick to check whether it's correct (writing a shell script, solving an equation, making up a verse for a song), then you have a good tool. It's the prime-factorization category of problems. Recognizing when you have one, and going to an LLM when you do, is key.
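
To make the produce/verify asymmetry concrete, here is a minimal Python sketch (the function names are mine, purely for illustration): checking a claimed factorization is a single multiplication, while producing one by naive trial division gets slow fast as the number grows.

    from math import prod

    def verify_factorization(n: int, factors: list[int]) -> bool:
        """Cheap check: do the claimed factors multiply back to n?"""
        return prod(factors) == n and all(f > 1 for f in factors)

    def factorize(n: int) -> list[int]:
        """Slow production: naive trial division."""
        factors = []
        d = 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors

    # Verifying is instant even for inputs where factorize() would crawl:
    assert verify_factorization(15, [3, 5])
    assert not verify_factorization(15, [3, 4])

When your problem has this shape, a wrong LLM answer costs you almost nothing, because the check is cheap.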

But what if you _don't_ have that kind of problem? Yes, LLMs can be useful for the above. But for many problems, you ask for a solution and what you get back is a suggested solution that takes a long time to verify. Meaning: unless you're somewhat sure it will solve the problem, you don't want to attempt it. You need some estimate of confidence, and LLMs are useless for that. As a developer I find my problems are very rarely in the first category and much more often in the second.

Yes, it's "using them wrong". It's asking for what they struggle with. But it's also what I struggle with. It's hard to stop yourself when you have a difficult problem and you're weighing googling it for an hour against ChatGPT-ing it for an hour. And I often regret going the ChatGPT route after several hours.
