zlacker

1. miki12 (OP) 2025-04-09 08:07:18
This has always been the case with ML.

ML is good at fuzzy stuff, where you don't have a clear definition of a problem (what is spam? what is porn?), "I know it when I see it", or when you don't have a clear mathematical algorithm to solve the problem (think "distinguishing dogs and cats").

When you have both (think sorting arrays, adding numbers), traditional programming (and that includes Prolog and the symbolic AI stuff) is much better.

LLMs will always be much worse than traditional computer programs at adding large numbers, just as traditional computer programs will always be worse at telling whether the person in the image is wearing proper safety equipment.

For best results, you need to combine both. Use LLMs for the "fuzzy stuff", converting imprecise English or pictures into JSON, Python, Wolfram, Prolog or some other representation that a computer can understand and work with, and then use the computer to do the rest.

Let's say you're trying to calculate how much protein there is per 100 grams of a product. You have a picture of the label, but the label only gives you protein per serving, with the serving size in imperial units. The naive approach most programmers try is to ask an LLM directly for protein per 100g, which is obviously going to fail in some cases. The correct way is to ask the LLM to transcribe the values in whatever units the label uses, and then do the conversion on the backend.
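A minimal sketch of that split, assuming the LLM has already transcribed the label's raw numbers into JSON (the field names here are hypothetical, not any particular API's schema):

```python
# Deterministic backend conversion: the LLM only reads the label,
# the arithmetic happens in ordinary code.
OZ_TO_G = 28.3495  # grams per avoirdupois ounce

def protein_per_100g(label: dict) -> float:
    """Convert per-serving protein (with serving size in oz) to per-100g."""
    serving_g = label["serving_size_oz"] * OZ_TO_G
    return label["protein_g_per_serving"] / serving_g * 100

# Example: a label stating 7 g protein per 2.5 oz serving.
label = {"protein_g_per_serving": 7.0, "serving_size_oz": 2.5}
print(round(protein_per_100g(label), 2))  # ≈ 9.88
```

The unit conversion never varies, so there's no reason to let the model attempt it.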
