As a reasonably technical user capable of using search, the only way this could really happen is if there was no web/app interface for something I wanted to do, but there was a chatbot/AI interface for it.
Perhaps companies will decide to go chatbot-first for these things, and perhaps customers will prefer that. But I doubt it to be honest - do people really want to use a fuzzy-logic CLI instead of a graphical interface? If not, why won't companies just get AI to implement the functionality in their other UIs?
Outside of customer service, I'm working on a website that has a huge amount of complexity to it, and would require a much larger interface than normal people would have patience for. So instead, those complex facets are exposed to an LLM as tools it can call, as appropriate based on a discussion with the user, and it can discuss the options with the user to help solve the UI discoverability problem.
I don't know yet if it's a good idea, but it does potentially solve one of the big issues with complex products: they can provide a simple interface to extreme complexity without overwhelming the user with an incredibly complex interface and demanding that they spend the time to learn it. Traditionally, designers have handled this by just dumbing down every consumer-facing product, and I'd love to see how users respond to this other setup.
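Concretely, the tool-exposure pattern looks something like this. This is a minimal sketch using an OpenAI-style function-calling schema; all tool names, fields, and the example feature are invented for illustration:

```python
# Illustrative sketch: exposing a complex feature to an LLM as a callable
# tool instead of building a dedicated UI control for it.
# All names here are hypothetical.

ADVANCED_FILTER_TOOL = {
    "name": "set_advanced_filter",
    "description": "Apply a complex search filter the user described in conversation.",
    "parameters": {
        "type": "object",
        "properties": {
            "field": {"type": "string"},
            "operator": {"type": "string", "enum": ["eq", "lt", "gt", "contains"]},
            "value": {"type": "string"},
        },
        "required": ["field", "operator", "value"],
    },
}

def set_advanced_filter(field: str, operator: str, value: str) -> dict:
    """The real implementation the model's tool call gets dispatched to."""
    return {"applied": True, "filter": f"{field} {operator} {value!r}"}

# When the model emits a tool call, the app routes it to the matching function:
TOOL_REGISTRY = {"set_advanced_filter": set_advanced_filter}

def dispatch(tool_call: dict) -> dict:
    fn = TOOL_REGISTRY[tool_call["name"]]
    return fn(**tool_call["arguments"])

result = dispatch({
    "name": "set_advanced_filter",
    "arguments": {"field": "price", "operator": "lt", "value": "100"},
})
print(result)
```

The model only ever sees the schema and the conversation; the discoverability problem moves from "find the right submenu" to "describe what you want."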
If you need an LLM spin to convince management, maybe you can say something about "bring your own agent" and "openclaw", or something else along those lines?
I can see it working for complex products, for functionality I only want to use once in a blue moon. If it's something I'm doing regularly, I'd rather the LLM just tell me which submenu to find it in, or what command to type.
That said, I 100% left every call center job I had when I couldn’t put up with the bullshit middle manager crap anymore.
Nothing like having a “team leader” who knows literally nothing about the product who then has to come up with the most nitpicky garbage because they’re required to have criticism on call reviews. Meanwhile some other asshole starts yelling at him to yell at you for not being on the phones enough when the reason I’m not on the phone is because everyone on the team turns to me to ask questions to because, unlike our illustrious leader, I know what I’m doing.
I don't know if I would call it idealism. I feel like what we're discovering is that while the efficiency of communication is important, the efficacy of communication is more important. And chatbots are far less reliable at communicating the important/relevant information correctly. It doesn't really matter how easy it is to send an email if the email simply says the wrong thing.
To your point though, it's just rude. I've already seen a few cases where people have been chastised for checking out of a conversation and effectively letting their chatbot engage for them. Those conversations revolved around respect and good faith, not efficiency (or even efficacy, for that matter).
"Don't make me talk to your [customer support] chatbot" reads like "Don't make me go to an ATM for a cash withdrawal". If I can solve a thing quickly and effectively without waiting forever to speak to an overworked customer support agent on another continent, I would very much like that!
Well, anyways, the post is not about that. It's about posting AI-generated text (blog posts, PR summaries). Which I agree with, although there are a bunch of holes in the argument, such as:
> 1. Figure out what you want to say. 2. Say it. That first figuring-out part is important.
Well, yeah, I can figure out what I want to say, then have the chatbot say it. So looks like the second part is important, too.
Though maybe people will start supplying context like "no em dashes, and occasionally misspell a word or two", and soon you won't even be able to tell that.
Who knows how many of the comments on any website are written by humans now; yeah, there are plenty of tells so it can be obvious, but that might only be true for the exceptionally bad posts.
Checked LinkedIn and found four posts in a row that had "here's the kicker".
LinkedIn has always been a place full of low-effort posts for people trying to self-promote, so I guess it makes sense to have a robot actually do the thinking for something that is and always has been inherently mindless.
When I worked at Microsoft, it cost over $20 to have a human customer support agent pick up the phone when someone called in for help. That was greater than our product margin. Every time someone called for help, we basically lost the entire profit on that sale, and then some.
The most common support calls were for things that were explained in the manual, the out-of-box experience, tutorial documents, FAQ pages, and so on and so forth.
Did we have actual support issues that needed fixing? Yes, of course. And the insanely high cost of customer support drove us to improve our first-use experience. But holy cow, people don't realize how expensive support calls are.
Edit: To explain some of the costs - This was back when people worked in physical call centers, so first off we were paying for physical office space. Next up training, each CSR had to be trained on our product. This took time and we had to pay for that training time. We also had to write support material, and update that support material for each new version that came out. All of this gets amortized into the cost of support. Because workers tend not to stay long, you pay for a lot of training.
Add in all the other costs associated with running a call center and the cost per call, even for off shore call centers, is not cheap.
In a reasonable world we'd just raise the price of the product by $x based on what % of people we expect to call in for support (ignore for a minute that estimating that number is hard), but the world isn't reasonable. Downwards price pressure comes from all sides, primarily VC backed competitors who are OK burning $$ to gain market share, and competitors at other FAANGs that are OK burning money to gain market share.
The result is that everyone is going to try and reduce support costs, because holy cow, per-user margins are low nowadays for huge swaths of product categories (Apple's iPhone being a notable exception...)
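The "raise the price by $x" math from the parent is simple in principle. A back-of-envelope sketch, with all numbers invented for illustration:

```python
# Back-of-envelope: fold the expected support cost into the sticker price.
# All numbers are invented for illustration.

cost_per_call = 20.00   # fully loaded cost of one support call
call_rate = 0.15        # fraction of buyers expected to call at least once
calls_per_caller = 1.5  # average calls among those who do call

expected_support_cost = cost_per_call * call_rate * calls_per_caller
print(f"Add ${expected_support_cost:.2f} to the price to break even on support")
```

The hard part, as the parent notes, is estimating `call_rate` up front, and competitive price pressure makes even a correct estimate hard to pass on.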
Obviously not a problem with the technology itself; it was like that with more primitive answering machines as well, which were often there only to answer the obvious things, or to stonewall people who had real problems with the product or service, hoping they'd just give up and take the loss.
"We are experiencing a greater than usual call volume, please wait while an agent becomes available" only to be randomly disconnected has been a thing for most of my life.
Everyone seems to be hyping open claw at the moment; soon it's just going to be LLMs talking to LLMs... I wonder if they'll develop a shorthand and start talking in wingdings.
I would think that's close to an hourly rate for first level support and calls are mostly resolved in ~10 mins?
Then you also have to pay them regardless of whether someone calls.
A rough rule of thumb is that the fully burdened cost of an hourly office knowledge worker is two to three times the gross hourly wage.
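Putting the rule of thumb together with the ~10-minute handle time mentioned above gives a quick cost-per-call estimate. The wage, utilization, and multiplier below are invented for illustration:

```python
# Rough cost-per-call estimate using the "2-3x fully burdened" rule of thumb.
# All inputs are invented for illustration.

hourly_wage = 18.00       # gross hourly wage of the agent
burden_multiplier = 2.5   # fully burdened cost is ~2-3x the gross wage
handle_time_min = 10      # average minutes per resolved call
utilization = 0.75        # fraction of paid time actually spent on calls

burdened_hourly = hourly_wage * burden_multiplier
cost_per_call = burdened_hourly * (handle_time_min / 60) / utilization
print(f"~${cost_per_call:.2f} per call")
```

The utilization term captures the point above: you pay agents whether or not someone is calling, so idle time gets amortized into every call that does happen.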
That being said, your example of customers calling for support on things they should be capable of figuring out themselves is probably where AI is going to shine as first-line support. Once (if?) AI voice chat is good enough to replace chatbots, we may not even realize we're talking with an AI unless it tells us.
It certainly won't be cheap to run real-time AI voice chat, or any real-time AI chat. The AI costs that you currently see are heavily subsidized, just like OP's example of "VC backed competitors who are OK burning $$ to gain market share"; it's the same dynamic. These AI companies are far from profitable, burning billions to insert themselves into customer support pipelines and everywhere else they can, and then the other shoe will drop. Uber and Lyft are far more expensive today than when they started, and the price to run "AI" will also inflate when these companies have to pay off all the billions they've spent but didn't earn. I doubt it will end up costing much less than human support, if less at all, and with worse results.
All alternatives which are capable of actually serving the customer are systematically driven out of business.
Had they built a better, more intuitive product, they would get fewer support calls and wouldn't be struggling with costs.
As I mentioned, due to high support costs we worked to improve the UX and we ended up dropping our support costs dramatically.
Doesn't change the fact that everyone who did call cost us more than our profit on the sale.
Customer support is expensive.
Microsoft used to charge for customer support back in the day (90s). The way it worked was that if it was your fault, you paid, if it was a product bug, there was no cost for support. While not a perfect system, it at least aligned everyone's incentives in the right direction. (The huge glaring flaw being it was MS that decided if they were going to charge you for the support call or not...)
I doubt it. I suspect the number one tech support call is "I forgot my password" and everything else is a long way below that.
I'll slag on Microslop all day, but users are dumber than dumb.
We product makers get to think about our one little product all day, and it's our job to make our product work for the "dumb" users. It's not their job to adapt to us.
And if it turns out to be your mistake (faulty product or missing documentation) as opposed to something the user could have reasonably solved by themselves, refund the charge and possibly provide compensation for the inconvenience.
This doesn't seem like a bad thing when it comes to aligning incentives (assuming customers actually want a product they don't need help to use).
My brother used to work at tech support for XBox Live.
He said that 80% of his calls were for password resets, something users can easily self-service. There's literally an option on the login form for "Forgot Password", and people would rather spend time calling up support, waiting on hold, and verifying their identity to a support agent than click a button.
And it's not like the password reset flow was any easier going through support. He'd just trigger a password reset e-mail to be sent, exactly like the user hitting Forgot Password.
And this is after the phone tree tells them "If you forgot your password, click the Forgot Password link".
I always think about this when people demand they should be able to talk to a human. The overwhelming majority of callers to tech support don't need a human. Giving everybody the ability to speak to a human just isn't feasible.
I have an uncle that works tech support for XFinity. Half his calls are resolved by just power cycling the modem/router. People shouldn't need a human to tell them to do that.
Comcast deserves every penny of customer service expenses they're incurring if their own purpose-built modem/routers are so flaky they're responsible for half the problems people experience with their service. Customers should not be expected to endure shitty products without even seeking help from the service provider that owes them better.
If the AI output was actually better than talking to a real human, more useful, more concise, serving the job to be done, then no one would have a problem with it. In fact they would appreciate it. That future is not here in many areas.
The problem is that people are wielding AI right now and either [a] the models they are using are not good enough, [b] they aren't being given enough context, or [c] they are deployed in a way that makes the output sloppy.
(Insert joke about whether this comment is AI. It's not, but joke away)
LLMs won't add information to context, so if the output is larger than the input then it's slop. They're much better at picking information out of context. If I have a corpus of information and prompt an extraction, the result may well contain more information than the prompt. It's not necessarily feasible to transfer the entire context, and also I've curated that specific result as suitably conveying the message I intend to convey.
This does all take effort.
My take is also that I am interested in what people say: I have priors for how worthwhile I expect it to be to read stuff written by various people, and I will update my priors when they give me things to read. If they give me slop, that's going to affect what I think of them, and I expect the same in return. I'm willing to work quite hard to avoid asking my colleagues to read or review slop.
(I could have sworn there was a popular HN submission a while back of this or a similar blog post, but damned if I can find it now.)
This is so obviously true to intelligent people ... it's sad that you're getting downvoted.
The OP wrote
> When I talk to a person, I expect that they are telling me things out of their head — that they have developed a belief and are trying to communicate it to me.
But when I'm having a conversation about a subject (rather than with a friend, partner, or other person with whom I have a relationship and the conversation is part of the having of that relationship) I don't care what is in that person's head, I care about the truth of the matter, so I'm far more interested in their sources, their logic and the validity of same. Unless I'm a psychologist doing a survey, why should I care about some random person's beliefs?

Since I'm a truth seeker, I care about their arguments, and of course the quality of their arguments is of paramount importance. I appreciate people who can back up their arguments, and LLM summaries that are chock full of facts gleaned from the massive training data that includes a vast amount of human knowledge are fully appreciated, while being aware that hallucination is possible, so I often double check things regardless of the source.

OTOH, the pushback to this is from people I consider worse than irrelevant: they not only are willfully ignorant but they reject knowledge seeking for irrational ideological reasons. (I myself see the LLM industry as extremely problematic, but as long as LLMs exist and are capable of producing quality signal, which is the given here, then I will use them.)
This whole page is illustrative: so many people are telling us things out of their head ... that have nothing to do with the article because they didn't read it. So they blather about their beliefs and opinions about support--because that's how they interpreted the title. These comments are useless.
"Anything invented after you're thirty-five is against the natural order of things." (Douglas Adams)
But frankly LLMs suck at writing. It's not only formulaic, it's uninspired!! So I worry that we're entering an era of mediocre writing. I like the "Have you considered writing?" suggestion. I've been trying to make a habit of writing book reviews so I can counter some of the writing atrophy I've developed. Hopefully it will help me become a better thinker too. As Ray says here: "Understanding your own point of view is an enriching exercise."
Seems to me like you're doing fine so far. (I hope I haven't just been letting my standards go down the drain...)
> It's not only formulaic, it's uninspired!
Heh.
How many times at work have you been talking to someone else where they're using common words as jargon? Maybe it's something like "the online system" or "the platform". And it's perfectly clear to them what they mean, but everyone else in the company either doesn't know what that actually is, or they have a distorted idea based on the conventional definitions of the words. Even without LLMs in the mix, this can lead to people coming out of meetings with completely different understandings of what's going on.
My experience is few people are actually providing the relevant context to the LLM to explain what they mean in situations like this. Or they don't have the actual knowledge and are using the LLM in the hopes it'll fill in for their ignorance. The LLMs are RLHFed to sound confident, so they won't convey that they don't know what a piece of jargon means. Instead they'll use a combination of the common meaning and the rest of the context to invent something. When this gets copy/pasted and sent around, it causes everyone who isn't familiar to get the wrong idea. Hence "misunderstanding amplifier".
To the point of the article, this is soluble if people take the time to actually figure out what they are trying to convey. But if they did that, they wouldn't need the LLM in the first place.
I was recently dealing with the Amazon robot. After correctly identifying the items in the order, it then proceeded to use short terms which were wrong, but which make sense as what a classifier might have spit out. Instead of understanding being a shared thing, it falls entirely on the user. For a sufficiently adept user, this is fine. But a lot of users aren't sufficiently adept.
Companies want people to spend as much as possible while doing the minimum work on the product.
Chatbots let companies spend almost nothing while pretending to provide long-term support.
I wonder if something similar to a copyleft license could help. What if there was a contractual "fair business" pledge that companies could add? I imagine that good enough lawyers could craft something that essentially said, "You can only display this contract if you legally guarantee that you do X, Y, Z and do not do A, B C."
The only way to deal with this is to make the implementation not worth it, by constantly bypassing it to speak to a human and/or making it cost money by getting it to give you things you're not otherwise entitled to.
I really wonder how these things will handle prompt injection and similar things. I have no confidence any of this is secure.
Wait until this comes to healthcare and it'll be chatbots handling prior authorization calls, wasting even more physician time.
This article is not about support chatbots. It's about clearing up your writing/thoughts and communicating clearly even if you used a chatbot to get there.
When it comes to technical discussions, there are so many people on here just regurgitating what they read on an earlier thread. Maybe to test if what they heard was true. Maybe because they just want to sound smart. Not a lot of people actually trying things.
It's human nature to want to share your dreams, because they are fascinating to you.
However, it's also human nature to want to punch someone in the face when they start talking about this crazy dream they had last night ... because it has nothing to do with you, and doesn't interest you at all.
Similarly, when an AI says something useful to you, in response to your prompts, it's very particular to you. When you try to share it with others ... you get the article.
The bigger problem to me is "help" is always framed as my needing to be educated, not a problem with the service. This is especially prevalent for technical customers who are legitimately trying to draw attention to a bug in the platform only to get how-to help articles pasted back to them.
Forcing a customer into anything beyond that is RUDE!
Don't make me talk to a chatbot while there is zero forward progress in solving the problem.
That's the aspect I don't understand. The information I want is almost always something some other customers have asked already. I'd much prefer to avoid their customer support maze entirely and help myself on a searchable wiki. Unfortunately, most companies' online product support FAQs usually only contain answers to obvious shit on the order of RTFM and "is it plugged in." Why not just post the doc their advanced tier 3 support people share amongst themselves? It could sit under a warning label like 'preliminary advanced info for engineers'.
I realize people like me represent only around 2-3% of the customers seeking support but it's 2-3% that is able to self-serve and takes more time than average because we invariably have to work through front-line support to get escalated to someone with the non-obvious info that's still been asked many times before. So maybe we're only ~2% but we suck up 4% of support bandwidth and we probably take up closer to ~20% of Tier 3 support - the most expensive, scarce type.
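A quick sanity check of the claim that a small advanced cohort eats an outsized share of support time. All numbers below are invented for illustration; only the ~2% advanced share comes from the comment above:

```python
# Sketch: a small group of advanced users consuming a disproportionate
# share of support minutes. Numbers other than the 2% are invented.

total_customers = 100_000
advanced_share = 0.02      # ~2% of customers are advanced users
avg_minutes = 12           # average support minutes per typical ticket
advanced_minutes = 24      # advanced issues take roughly 2x as long (escalations)

advanced = total_customers * advanced_share
typical = total_customers - advanced

advanced_load = advanced * advanced_minutes
typical_load = typical * avg_minutes
share = advanced_load / (advanced_load + typical_load)
print(f"Advanced users: {advanced_share:.0%} of customers, {share:.1%} of support minutes")
```

With a 2x handle time, 2% of customers end up near 4% of total support minutes, matching the rough figure above; the tier-3 concentration would be even more skewed.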
A warning label like you mention is a possibility if that is considered to be necessary, although I think it might be better to have a file that you can download and read (or request by mail or telephone or fax, if this becomes necessary in some circumstances; do not assume the computer always works and is compatible with your file), instead of a searchable wiki.
I don't want to contact customer support in the first place. If I'm forced to, it's because something is very wrong, and in that case I don't want to be listening to elevator music and "your call is important to us, please hold" for an hour, only to get my call disconnected and be forced to call again.
The issue is that I've yet to have a chatbot actually fix my problems, or most first-contact human operators for that matter.
The different accents and call center background noise are features in their product.