zlacker

[return to "Cursor IDE support hallucinates lockout policy, causes user cancellations"]
1. nerdjo+A84[view] [source] 2025-04-15 21:58:24
>>scared+(OP)
There is a certain irony in people trying really hard to say that hallucinations are not a big problem anymore, and then a company that would benefit from that narrative getting directly hurt by one.

Which of course they are going to try to brush away. Better than admitting that this problem very much still exists and isn't going away anytime soon.

2. anonzz+7H4[view] [source] 2025-04-16 03:13:46
>>nerdjo+A84
Did anyone say that? Hallucinations are an issue everywhere, including in code. But with code I can at least have tooling that automatically checks and feeds back when the model has hallucinated libraries, functions, etc. With ordinary research questions there is no such thing, and you end up spending a lot of time verifying everything.
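
To make that concrete, here is a rough sketch of the kind of check I mean (the names and structure are my own, not any particular tool's API): statically scan generated Python for imports that don't resolve, and feed the misses back to the model.

    # Rough sketch: flag hallucinated (unresolvable) imports in generated code.
    import ast
    import importlib.util
    import sys

    def missing_imports(source: str) -> list[str]:
        """Return top-level module names imported in `source` that cannot be resolved."""
        tree = ast.parse(source)
        modules = set()
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                modules.update(alias.name.split(".")[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
                modules.add(node.module.split(".")[0])
        # find_spec returns None for modules that aren't installed / don't exist.
        return sorted(m for m in modules if importlib.util.find_spec(m) is None)

    if __name__ == "__main__":
        generated = "import numpy\nimport totally_made_up_lib\n"
        bad = missing_imports(generated)
        if bad:
            # In a real loop this message would go back to the model as feedback.
            print("unresolvable imports:", ", ".join(bad), file=sys.stderr)

Hallucinated functions and attributes are harder to catch statically, but even something this crude catches a decent share of the made-up-library failures.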
3. felipe+0R4[view] [source] 2025-04-16 05:02:54
>>anonzz+7H4
Yes, most people who have an incentive to push AI say that hallucinations aren't a problem, since humans aren't correct all the time either.

But in reality, hallucinations either make people using AI lose a lot of time trying to unstick the LLMs from dead ends, or they render those tools unusable.

4. Gormo+m96[view] [source] 2025-04-16 15:25:36
>>felipe+0R4
> Yes, most people who have an incentive to push AI say that hallucinations aren't a problem, since humans aren't correct all the time either.

Humans often make factual errors, but there's a difference between having a process for validating claims against external reality and occasionally getting it wrong, versus having no such process at all, with every output being the product of internal statistical inference.

The LLM is engaging in the same process in all cases. We only call it a "hallucination" when its output isn't consistent with our external expectations. But if we regard "hallucination" as referring to any situation where the output of a wholly endogenous process is mistaken for externally validated information, then LLMs are only ever hallucinating; they are just designed in such a way that what they hallucinate has a greater-than-chance likelihood of representing some external reality.
