It’s entirely possible that LLMs will lead people to expect superhuman patience from other people.
I think there was a post here a few days ago about people being “lost” to LLMs.
I want to challenge this assumption. I think ChatGPT is good enough for the use cases of most of its users. However, for specialist/power-user work (e.g. coding, enterprise AI, foundation models for AI tools), there is strong pressure to identify the models with the best performance/cost/latency/procurement characteristics.
I think most "vibe coding" enthusiasts are keenly aware of the difference between Claude 3.7/Gemini Pro 2.5 and GPT-4.1. Likewise, people developing AI chatbots quickly become aware of the latency difference between, e.g., OpenAI's batch API and Claude's (via Bedrock).
This is similar to how most non-professionals can get away with Paint.NET, while professionals in photo editing and graphic design struggle to jump from Photoshop to anything else.
I think that's the point the author made. If the large majority of users want one thing, but software developers want another, companies will obviously focus on the former. That's what recent history confirms, and it's the logical outcome from a capitalist standpoint.
To break it down: developers want intelligence and quality, users want patience and validation. ChatGPT is good at the latter and merely okay (in comparison to competitors) at the former.
I do agree that ChatGPT may simply be good enough that casual users don't find alternatives worth exploring (I'm tired of the constant churn of AI releases too; on that note, there should be a worldwide ban on multiple AI companies releasing similar tools at the same time, as I don't have time to look into all of them at once!) - but those users are definitely not getting a suboptimal deal here. At least not the ones on the paid plan who are aware of the model switcher in the UI.
EDIT: Also, setting gpt-4o as the default model gives ChatGPT another stickiness point: its (AFAIK still) unique image generator, which qualitatively outclasses anything that came before.
Perhaps their use case is so unremarkable and unsophisticated that the quality of output is immaterial to it.
> Most good personal advice does not require substantial intelligence.
Is that what therapy is to this author? "Good advice given unintelligently?"
> They’re platitudes because they’re true!
And the appeal is you can get an LLM to repeat them to you? How exactly is that "appealing?"
> However, they are fundamentally a good fit for doing it because they are
...bad technology that can only solve a limited number of boring problems unreliably. Which is what saying platitudes to a person in trouble is, and not at all what therapy is meant to be.
Patience in answering technical/knowledge questions that I don't want to bother a human being with may be nice, but I get the same patience from a search engine. And the patience an AI offers is offset by the patience I need to get the right answers out of it.
I have endless patience when talking to a human being because I have empathy for them. But I don't have empathy for a machine, and therefore I have no patience at all for the mistakes and hallucinations an AI might produce.
AI for therapy is even worse: the thought that I could receive bad or hallucinated advice from an AI outright scares me.
> the most salient quality of language models is their ability to be infinitely patient

> humans have low tolerance when dealing with someone who changes their mind often, or needs something explained twenty different ways
Something I enjoy when working with language models is the ability to completely abandon days of work because I realize I am on the wrong track. This is difficult to do with humans because of the social element of the sunk cost fallacy: it can be hard to let go of work we invested our own time into.
And yes, despite what they might say, people were not seeking intelligence, which is under-defined and highly misunderstood. They were seeking answers.
I’m less convinced of the good of an AI therapist. Seems too healthcare-y for these current buggy messes. But if somebody is aided by having a digital shoulder to cry on… eh, ok, why not?
Medium-term, that may be the problem. The social aspect of having another person "see you" is important in therapy. But in the immediate term, LLMs are a huge positive in this space. Professional therapy is stupidly expensive in terms of time and money, which makes it unavailable to the majority of people, even those who are rather well-off.
And then there's availability, which matters, beyond what the article discussed, because many people have problems that don't fit well into the typical one-hour sessions 2-4 times a month. LLMs let one have 2+ hour therapy sessions every day, at random hours, for as long as it takes to unburden oneself completely; something that's neither available nor affordable for most people.
I can ask the LLM infinite "stupid questions".
For all the things I know a little about, it can push me toward the average level of knowledge in that field.
I can build lots of little prototypes, find the gaps, then think and come back or ask more; in turn, I learn.
Whilst I do see your point, and I do see the value for prototyping, I don't quite agree that you can learn very much from it. Not more than the many basic "intro to ..." articles can teach.
I can talk to ChatGPT without authentication. The obligation to create a username/password is a barrier so high that no other current chatbot can overcome it. And even if one were so good that I would be willing to authenticate, how would I know? To find out, I'd have to provide a username/password that I'm unwilling to hand over without already knowing.
Technology almost definitionally reduces pain, and this view of LLMs can be seen as removing the "pain" of dealing with impatient, unempathetic humans. I think this might exacerbate the loneliness problems we're seeing.
perrygeo•1mo ago
Converting a dictionary into a list of records when you know that's what you want ... easy, mechanical, boring af, and something we should almost obviously outsource to machines. LLMs are great at this.
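For instance, a minimal Python sketch (the exact record shape here is an assumption for illustration, not something from the comment):

    # Mechanical reshaping: {"a": 1, "b": 2} -> [{"key": "a", "value": 1}, ...]
    def dict_to_records(d: dict) -> list[dict]:
        return [{"key": k, "value": v} for k, v in d.items()]

    assert dict_to_records({"a": 1, "b": 2}) == [
        {"key": "a", "value": 1},
        {"key": "b", "value": 2},
    ]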
Deciding whether to use a dictionary or a stream of records as part of your API? You need to internalize the impacts of that decision. LLMs are generally not going to worry about those details unless you ask. And you absolutely need to ask.
skydhash•1mo ago
That’s when you learn vim or emacs. Instead of editing character-wise, you move to bigger structures. Every editing task becomes a short list of commands, and with the power of macros, repeatable. Then, if you do it often, you can (easily) add a custom command for it.
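As a toy illustration in vim (the specific edit, wrapping each line in quotes, is just an assumed example):

    qa       -- start recording into register a
    I"<Esc>  -- prepend a quote to the line
    A"<Esc>  -- append a closing quote
    j        -- move to the next line
    q        -- stop recording
    10@a     -- replay the whole edit on the next 10 lines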
andyferris•1mo ago
Pressing TAB with Copilot to cover use cases you’ve never needed to discover a command or write a macro for is actually kinda cool, IMO.
TeMPOraL•1mo ago
OTOH, unless I've been immersed in the larger problem space of streaming vs. batching and caching, and generally thinking on this level, there's a good chance LLMs will "think" of more critical edge cases and caveats than I will. I use scare quotes here not because of the "are LLMs really thinking?" question, but because this isn't really a matter of thinking - it's a matter of having all the relevant associations loaded in your mental cache. SOTA LLMs always have them.
Of course I'll get better results if I dive in fully myself and "do it right". But there's only so much time working adults have to "do it right"; one has to be selective about where to focus attention. For everything else, quick consideration + iteration are the way to go, and if I'm going to do something quick, well, it turns out I can do it much better with a good LLM than without, because the LLM will have all the details cached that I don't have time to think up.
(Random example from just now: I asked o3 to help me with power-cycling and preserving the longevity of a screen in an IoT side project; it gave me good tips, and then mentioned I should also take into account the on-board SD card and protect it from power interruptions and wear. I hadn't even remotely considered that, but it was a spot-on observation.)
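(For the curious, the usual mitigation for that failure mode is mounting the root filesystem read-only and keeping frequently written paths in RAM. The /etc/fstab entries below are an illustrative sketch with assumed device names, not what o3 actually said:

    # read-only root, logs kept in RAM (device name assumed)
    /dev/mmcblk0p2  /         ext4   ro,noatime                 0  1
    tmpfs           /var/log  tmpfs  defaults,noatime,size=32m  0  0
)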
This actually worries me a bit, too. Until now, I relied on my experience-honed intuition for figuring out non-obvious and second-order consequences of quick decisions. But if I start to rely on LLMs for this, what will it do to my intuition?
(Also, I talked about time, but it's also about patience - and for those of us with executive functioning issues, that's often the difference between attempting a task and not even bothering with it.)