zlacker

[return to "My AI skeptic friends are all nuts"]
1. ofjcih+21 2025-06-02 21:18:27
>>tablet+(OP)
I feel like we get one of these articles that address valid AI criticisms with poor arguments every week, and at this point I’m ready to write a boilerplate response because I already know what they’re going to say.

Interns don’t cost 20 bucks a month, but training users in the specifics of your org is important.

Knowing what is important or pointless comes with understanding the skill set.

2. briand+w2 2025-06-02 21:26:08
>>ofjcih+21
> with poor arguments every week

This roughly matches my experience too, but I don't think it applies to this one. It makes a few points that were new to me, and I'm glad I read it.

> I’m ready to write a boilerplate response because I already know what they’re going to say

If you have one that addresses what this one talks about I'd be interested in reading it.

3. slg+L8 2025-06-02 22:01:54
>>briand+w2
>> with poor arguments every week

>This roughly matches my experience too, but I don't think it applies to this one.

I'm not so sure. The claim that any good programming language would inherently eliminate the concern about hallucinations seems pretty weak to me.

4. simonw+5a 2025-06-02 22:10:06
>>slg+L8
Why does that seem weak to you?

It seems obviously true to me: code hallucinations are where the LLM outputs code with incorrect details - syntax errors, incorrect class methods, invalid imports, etc.

If you have a strong linter in a loop, those mistakes can be automatically detected and passed back into the LLM to get fixed.

Surely that's a solution to hallucinations?

It won't catch other types of logic errors, but I would classify those as bugs, not hallucinations.
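
Roughly, I mean something like this - a minimal sketch of the loop, where `llm_complete` is a hypothetical stand-in for whatever model API you use, and `ruff` is just one example of a strict checker:

    import subprocess
    import tempfile

    def llm_complete(prompt: str) -> str:
        """Hypothetical stand-in for your model API call."""
        raise NotImplementedError

    def generate_with_lint_loop(task: str, max_rounds: int = 3) -> str:
        code = llm_complete(task)
        for _ in range(max_rounds):
            # Write the candidate code out and run a real linter over it.
            with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
                f.write(code)
                path = f.name
            result = subprocess.run(
                ["ruff", "check", path],  # exit code 0 means no violations
                capture_output=True, text=True,
            )
            if result.returncode == 0:
                return code  # no hallucinated syntax/imports detected
            # Feed the linter's complaints back to the model and retry.
            code = llm_complete(
                f"{task}\n\nYour previous attempt failed linting:\n"
                f"{result.stdout}\n\nReturn a corrected version."
            )
        return code

In practice you'd swap in the compiler or type checker of whatever language you're generating; the point is just that these mistakes are mechanically detectable.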

5. slg+5c 2025-06-02 22:23:21
>>simonw+5a
>It won't catch other types of logic error, but I would classify those as bugs, not hallucinations.

Let's go a step further: the LLM can produce bug-free code too if we just call the bugs "glitches".

You are making a purely arbitrary decision about how to classify an LLM's mistakes based on how easy they are to catch, regardless of their severity or cause. But simply sorting the mistakes into a different bucket doesn't make them any less of a problem.
