Out of all of the tech organizations, frontier labs are the one org you'd expect to be trying out cutting edge forms of support. Out of all of the different things these agents can do, surely most forms of "routine" customer support are the lowest hanging fruit?
I think it's possible for Anthropic to make the kind of experience that delights customers. Service that feels magical. Claude is such an incredible breakthrough, and I would be very interested in seeing what Anthropic can do with Claude let loose.
I also think it's essential for the anthropic platform in the long-run. And not just in the obvious ways (customer loyalty etc). I don't know if anyone has brought this up at Anthropic, but it's such a huge risk for Anthropic's long-term strategic position. They're begging corporate decision makers to ask the question, "If Anthropic doesn't trust Claude to run its support, then why should we?"
But at the same time, they have been hiring folks to help with Non Profits, etc.
Based on their homepage, that doesn't seem to be true at all. Claude Code yes, focuses just on programming, but for "Claude" it seems they're marketing as a general "problem solving" tool, not just for coding. https://claude.com/product/overview
I come from a world where customer support is a significant expense for operations and everyone was SO excited to implement AI for this. It doesn't work particularly well and shows a profound gap between what people think working in customer service is like and how fucking hard it actually is.
Honestly, AI is better at replacing the cost of upper-middle management and executives than it is the customer service problems.
Nicely fitting the pattern where everyone who is bullish on AI seems to think that everyone else's specialty is ripe for AI takeover (but not my specialty! my field is special/unique!)
Don't worry - I'm sure they won't and those stakeholders will feel confident in their enlightened decision to send their most frustrated customers through a chatbot that repeatedly asks them for detailed and irrelevant information and won't let them proceed to any other support levels until it is provided.
I, for one, welcome our new helpful overlords that have very reasonably asked me for my highschool transcript and a ten page paper on why I think the bug happened before letting me talk to a real person. That's efficiency.
OpenAI has been chaotically trying to pivot to more diversified products and revenue sources, and hasn't focused a ton on code/DevEx. This is a huge gap for Anthropic to exploit. But there are still competitors. So they have to provide a better experience, better product. They need to make people want to use them over others.
Famously people hate Google because of their lack of support and impersonality. And OpenAI also seems to be very impersonal; there's no way to track bugs you report in ChatGPT, no tickets, you have no idea if the pain you're feeling is being worked on. Anthropic can easily make themselves stand out from Gemini and ChatGPT by just being more human.
Use the top models and see what works for you.
Their support includes talking to Fin, their AI support with escalations to humans as needed. I dont use Claude and have never used the support bot, but their docs say they have support.
Anthropic has claude code, it's a hit product, SWE's love claude models. Watching Anthropic rather than listening to them makes their goals clear.
and these are people are not junior developers working on trivial apps
It couldn't/shouldn't be responsible for the people management aspect but the decisions and planning? Honestly, no problem.
I worked for a unicorn tech company where they determined that anyone with under 50,000 ARR was too unsophisticated to be worth offering support. Their emails were sent straight to the bin until they quit. The support queue was entirely for their psychological support/to buy a few months of extra revenue.
It didn't matter what their problems were. Supporting smaller people simply wasn't worth the effort statistically.
> I think it's possible for Anthropic to make the kind of experience that delights customers. Service that feels magical. Claude is such an incredible breakthrough, and I would be very interested in seeing what Anthropic can do with Claude let loose.
Are there enough people who need support that it matters?
But do those frustrated customers matter?
The article discusses using Anthropic support. Without much satisfaction, but it seems like you "recently found out" something false.
There are legitimate support cases that could be made better with AI but just getting to them is honestly harder than I thought when I was first exposed. It will be a while.
IMO we can augment this criticism by asking which tasks the technology was demoed on that made them so excited in the first place, and how much of their own job is doing those same tasks--even if they don't want to admit it.
__________
1. "To evaluate these tools, I shall apply them to composing meeting memos and skimming lots of incoming e-mails."
2. "Wow! Look at them go! This is the Next Big Thing for the whole industry."
3. "Concerned? Me? Nah, memos and e-mails are things everybody does just as much as I do, right? My real job is Leadership!"
4. "Anyway, this is gonna be huge for replacing staff that have easier jobs like diagnosing customer problems. A dozen of them are a bigger expense than just one of me anyway."
At one point I observed a conversation which, to me, seemed to be a user attempting to communicate in a good faith manner who was given instructions that they clearly did not understand, and then were subsequently banned for not following the rules.
It seems now they have a policy of
Warning on First Offense → Ban on Second Offense
The following behaviors will result in a warning.
Continued violations will result in a permanent ban:
Disrespectful or dismissive comments toward other members
Personal attacks or heated arguments that cross the line
Minor rule violations (off-topic posting, light self-promotion)
Behavior that derails productive conversation
Unnecessary @-mentions of moderators or Anthropic staff
I'm not sure how many groups moderate in a manner that a second offence off-topic comment is worthy of a ban. It seems a little harsh. I'm not a fan of obviously subjective banable offences.I'm a little surprised that Anthropic hasn't fostered a more welcoming community. Everyone is learning this stuff new, together or not. There is plenty of opportunity for people to help each other.
AI, for a lot of support questions works quite well and does solve lots of problems in almost every field that needs support. The issue is this commonly removes the roadblocks from your users being cautious to doing something incredibly stupid that needs support to understand what they hell they've actually done. Kind of a Jeavons Paradox of support resources.
AI/LLMs also seem to be very good at pulling out information on trends in support and what needs to be sent for devs to work on. There are practical tests you can perform on datasets to see if it would be effective for your workloads.
The company I work at did an experiment on looking at past tickets in a quarterly range and predicting which issues would generate the most tickets in the next quarter and which issues should be addressed. In testing the AI did as well or better than the predictions we had made that the time and called out a number of things we deemed less important that had large impacts in the future.
In companies where your average ARR is 500k+ and large customers are in the millions, it may not be a bad strategy.
'Good' support agents may be cheaper than programmers, but not by that much. The issues small clients have can quite often be as complicated as and eat up as much time as your larger clients depending on what the industry is.
Sure, but when the power of decision making rests with that group of people, you have to market it as "replace your engineers". Imagine engineers trying to convince management to license "AI that will replace large chunks of management"?
https://support.claude.com/en/collections/4078531-claude
> As a paid user of Claude or the Console, you have full access to:
> All help documentation
> Fin, our AI support bot
> Further assistance from our Product Support team
> Note: While we don't offer phone or live chat support, our Product Support team will gladly assist you through our support messenger.
With "legacy industries" in particular, their websites are usually so busted with short session timeouts/etc that it's worth spending a few minutes on hold to get somebody else to do it.
These people don't want the thing done, they want to talk to someone on the phone. The monthly payment is an excuse to do so. I know, we did the customer research on it.
Again, this is something my firm studied. Not UX "interviews," actual behavioral studies with observation, different interventions, etc. When you're operating at utility scale there are a non-negligible number of customers who will do more work to talk to a human than to accomplish the task. It isn't about work, ease of use, or anything else - they legitimately just want to talk.
There are also some customers who will do whatever they can to avoid talking to a human, but that's a different problem than we're talking about.
But this is a digression from my main point. Most of the "easy things" AI can do for customer support are things that are already easily solved in other places, people (like you) are choosing not to use those solutions, and adding AI doesn't reduce the number of calls that make it to your customer service team, even when it is an objectively better experience that "does the work."
These days, a human only gets involved when the business process wants to put some friction between the user and some action. An LLM can't really be trusted for this kind of stuff due to prompt injection and hallucinations.
If you don't offer support, reality meets expectations, which sucks, but not enough for the money machine to care.
My assumption is that Claude isn’t used directly for customer service because:
1) it would be too suggestible in some cases
2) even in more usual circumstances it would be too reasonable (“yes, you’re right, that is bad performance, I’ll refund your yearly subscription”, etc.) and not act as the customer-unfriendly wall that customer service sometimes needs to be.
I was banned two weeks ago without explanation and - in my opinion - without probable cause. Appeal was left without response. I refuse to join Discord.
I've checked bot support before but it was useless. Article you've linked mentions DSA chat for EU users. Invoking DSA in chat immediately escalated my issue to a human. Hopefully at least I'll get to know why Anthropic banned me.
Every company we talk to has been told "if you just connect openai to a knowledgebase, you can solve 80% of calls." Which is ridiculous.
The amount of work that goes in to getting any sort of automation live is huge. We often burn a billion tokens before ever taking a call for a customer. And as far as we can tell, there are no real frameworks that are tackling the problem in a reasonable way, so everything needs to be built in house.
Then, people treat customer support like everything is an open-and-shut interaction, and ignore the remaining company that operates around the support calls and actually fulfills expectations. Seeing other CX AI launches makes me wonder if the companies are even talking to contact center leaders.
The default we've seen is naive implementations are a wash. Bad AI agents cause more complex support cases to be created, and also make complex support cases the ones that reach reps (by virtue of only solving easy ones). This takes a while to truly play out, because tenured rep attrition magnifies the problem.
We've found that just a "Hey, how can I help?" will get many of these customers to dump every problem they've ever had on you, and if you can make turn two actually productive, then the odds of someone dropping out of the interaction is low.
The difference between "I need to cancel my subscription!" leading to "I can help with that! To find your subscription, what's your phone number?" or "The XYZ subscription you started last year?" is huge.
If their support is bad and you can get cut off from it with no recourse, is that a good reason to supply our fellow HN readers with misinformation based on rumor? We should just say false things to each other and it's OK as long as they're bad things about the right people? That is certainly how a lot of the internet works but I have higher hopes for us here.
We can just say "their support is bad and you can get cut off from it with no recourse" without also supporting misinformation.