zlacker

[return to "Cursor IDE support hallucinates lockout policy, causes user cancellations"]
1. mntrue+1F4[view] [source] 2025-04-16 02:52:24
>>scared+(OP)
(Cursor cofounder)

Apologies - something very clearly went wrong here. We’ve already begun investigating; here are some very early results:

* Any AI responses used for email support are now clearly labeled as such. We use AI-assisted responses as the first filter for email support.

* We’ve made sure this user is completely refunded - the least we can do for the trouble.

For context, this user’s complaint was the result of a race condition that appears on very slow internet connections. The race leads to a bunch of unneeded sessions being created, which crowd out the real sessions. We’ve rolled out a fix.
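To illustrate the kind of race involved (a hypothetical sketch, not our actual code - the names and the single-flight fix below are assumptions for illustration only): on a slow link, a timeout-based retry can fire while the original create-session request is still in flight, so both requests succeed and an extra session is left behind. Reusing the in-flight promise ("single-flight") is one standard fix:

    type SessionId = string;

    // Buggy pattern: on a slow connection the retry timer fires while the
    // first request is still in flight, so the server creates two sessions
    // even though the client only keeps one.
    async function createSessionRacy(
      create: () => Promise<SessionId>,
      retryAfterMs: number,
    ): Promise<SessionId> {
      const first = create();
      const retry = new Promise<SessionId>((resolve) =>
        setTimeout(() => resolve(create()), retryAfterMs),
      );
      return Promise.race([first, retry]); // the "loser" still created a session
    }

    // Fix sketch: single-flight. Reuse the in-flight promise instead of
    // issuing a second request, so at most one create hits the server.
    let inFlight: Promise<SessionId> | null = null;

    function createSessionOnce(create: () => Promise<SessionId>): Promise<SessionId> {
      if (!inFlight) {
        inFlight = create().finally(() => { inFlight = null; });
      }
      return inFlight;
    }

Server-side, an idempotency key on the create call is the more robust version of the same idea, since it also covers clients you don't control.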

Appreciate all the feedback. Will help improve the experience for future users.

2. slotra+5x6[view] [source] 2025-04-16 17:32:15
>>mntrue+1F4
> We use AI-assisted responses as the first filter for email support.

Literally no one wants this. The entire purpose of contacting support is to get help from a human.

3. fragme+ZD6[view] [source] 2025-04-16 18:07:02
>>slotra+5x6
Sorta? I mean, I want my problem fixed, regardless of whether it's a person or not. Having a person listen to me complain about my problems might soothe my conscience, but take "I can't pay my bill" or "why was it so high": if those get answered by a system that is contextualized to my problem and is empowered to fix it, instead of by talking to a brick wall? I wouldn't say I'm totally fine with it, but at the end of the day, if my problem or query gets solved, even if it's a weird one, I can't say I really needed the voice on the other end of the phone to come from a human. If a company's business model isn't sustainable without using AI agents, it's not really my problem that it's not, but also, if I'm using their product, presumably I don't want it to go away.
4. conart+ydg[view] [source] 2025-04-20 18:04:49
>>fragme+ZD6
Isn't the real scary thing here that the AI agent is empowered to control your life?

You're imagining that if you get the answer you want from the AI, you hang up the phone, and if you don't, a human will pick up who has the political power and will to overrule the AI. I think what's more realistic is the way things have played out here: nobody took or had any responsibility, because "they made the AI responsible", and second-guessing that choice isn't second-guessing the AI, it's second-guessing the human leaders who decreed that human support had no value. The result is that the humans would let the AI go about as far as setting fire to the building before some kind of human element with any real accountability steps in. The evidence of this is all the ignored requests presented here.
