zlacker

[return to "Cursor IDE support hallucinates lockout policy, causes user cancellations"]
1. AstroB+8Y3 2025-04-15 20:55:00
>>scared+(OP)
From a Cursor developer: "Hey! We have no such policy. You're of course free to use Cursor on multiple machines.

Unfortunately, this is an incorrect response from a front-line AI support bot. We did roll out a change to improve the security of sessions, and we're investigating to see if it caused any problems with session invalidation. We also do provide a UI for seeing active sessions at cursor.com/settings.

Apologies about the confusion here."

◧◩
2. acedTr+TY3 2025-04-15 21:00:38
>>AstroB+8Y3
lol this is totally the kind of company you should be giving money to
◧◩◪
3. idopms+7Z3 2025-04-15 21:02:26
>>acedTr+TY3
I mean, to be fair, I like that they're putting their money where their mouth is, so to speak: if you want to sell a product based on the idea that AI can handle complex tasks, you should probably have AI handling what should be simple, frontline support.
◧◩◪◨
4. AstroB+024 2025-04-15 21:18:45
>>idopms+7Z3
I don't agree with that at all. Hallucination is a very well-known issue. Sure, leverage AI to improve productivity, but not even having a human look over the responses shows they don't care about their customers.
◧◩◪◨⬒
5. dylan6+H74 2025-04-15 21:52:50
>>AstroB+024
If you had a human support person feeding the support question into the AI to get a hint, do you think that support person is going to know that the AI response is made up and not actually a correct answer? If they knew the correct answer, they wouldn't have needed to ask the AI.
◧◩◪◨⬒⬓
6. _jonas+WW9 2025-04-17 19:42:31
>>dylan6+H74
Exactly, that's why my startup recommends that all LLM outputs come with trustworthiness scores:

https://cleanlab.ai/tlm/
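
For illustration, a minimal sketch of that gating idea in Python, where generate_reply and score_trustworthiness are hypothetical stand-ins (not Cleanlab's actual TLM API): a reply is only auto-sent when its score clears an assumed threshold, and otherwise gets escalated to a human agent.

    # Sketch: gate auto-sent support replies on a trustworthiness score.
    # All names and the threshold below are illustrative assumptions.

    AUTO_SEND_THRESHOLD = 0.9  # assumed cutoff; tune per deployment

    def generate_reply(question: str) -> str:
        # Hypothetical stand-in for an LLM call producing a draft answer.
        return "You can use Cursor on multiple machines."

    def score_trustworthiness(question: str, reply: str) -> float:
        # Hypothetical stand-in for a scoring model (e.g. a TLM-style
        # scorer); real scorers return a value in [0.0, 1.0].
        return 0.42

    def handle_ticket(question: str) -> str:
        reply = generate_reply(question)
        score = score_trustworthiness(question, reply)
        if score >= AUTO_SEND_THRESHOLD:
            return reply  # confident enough to auto-send
        # Low score: queue for a human agent instead of auto-replying.
        return f"[escalated to human, score={score:.2f}] draft: {reply}"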
