
Patience too cheap to meter

https://www.seangoedecke.com/patience-too-cheap-to-meter/
76•swah•5mo ago

Comments

BrenBarn•5mo ago
When a person can't do something because it exhausts their patience, we usually describe it not by saying the task is difficult but that it is tedious, repetitive, boring, etc. So this article reinforces my view that the main impact of LLMs is their abilities at the low end, not the high end: they make it very easy to do a bad-but-maybe-adequate job at something that you're too impatient to do yourself.
perrygeo•5mo ago
I agree with this more daily.

Converting a dictionary into a list of records when you know that's what you want ... easy, mechanical, boring af, and something we should almost obviously outsource to machines. LLMs are great at this.
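
The kind of mechanical transform described here can be sketched in a couple of lines. (A hypothetical illustration: the `users` data and the `id` field name are assumptions, not anything from the thread.)

```python
# Hypothetical input: a dict keyed by id, each value a dict of fields.
users = {
    "u1": {"name": "Ada", "age": 36},
    "u2": {"name": "Grace", "age": 45},
}

# Flatten into a list of records, carrying the former key along
# as an explicit "id" field on each record.
records = [{"id": key, **fields} for key, fields in users.items()]

print(records)
# [{'id': 'u1', 'name': 'Ada', 'age': 36}, {'id': 'u2', 'name': 'Grace', 'age': 45}]
```

Exactly the sort of easy, boring rewrite an LLM (or a one-liner) handles; the interesting decision is the one in the next paragraph.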

Deciding whether to use a dictionary or a stream of records as part of your API? You need to internalize the impacts of that decision. LLMs are generally not going to worry about those details unless you ask. And you absolutely need to ask.

skydhash•5mo ago
> easy, mechanical, boring af, and something we should almost obviously outsource to machines

That’s when you learn vim or emacs. Instead of editing character wise, you move to bigger structures. Every editing task becomes a short list of commands and with the power of macros, repeatable. Then if you do it often, you add (easily) a custom command for it.

andyferris•5mo ago
Speaking of tedious and exhausting my patience… learning to use vim and emacs properly. I do like vim but I barely know how to use it and I’ve had well over a decade of opportunity to do so!

Pressing TAB with copilot to cover use cases you’ve never needed to discover a command or write a macro for is actually kinda cool, IMO.

TeMPOraL•5mo ago
> Deciding whether to use a dictionary or a stream of records as part of your API? You need to internalize the impacts of that decision. LLMs are generally not going to worry about those details unless you ask. And you absolutely need to ask.

OTOH, unless I've been immersed in the larger problem space of streaming vs. batching and caching, and generally thinking on this level, there's a good chance LLMs will "think" of more critical edge cases and caveats than I will. I use scare quotes here not because of the "are LLMs really thinking?" question, but because this isn't really a matter of thinking - it's a matter of having all the relevant associations loaded in your mental cache. SOTA LLMs always have them.

Of course I'll get better results if I dive in fully myself and "do it right". But there's only so much time working adults have to "do it right", one has to be selective about focusing attention; for everything else, quick consideration + iteration are the way to go, and if I'm going to do something quick, well, it turns out I can do this much better with a good LLM than without, because the LLM will have all the details cached that I don't have time to think up.

(Random example from just now: I asked o3 to help me with power-cycling and preserving longevity of a screen in an IoT side project; it gave me good tips, and then mentioned I should also take into account the on-board SD card and protect it from power interruptions and wear. I haven't even remotely considered that, but it was a spot-on observation.)

This actually worries me a bit, too. Until now, I relied on my experience-honed intuition for figuring out non-obvious and second-order consequences of quick decisions. But if I start to rely on LLMs for this, what will it do to my intuition?

(Also, I talked about time, but it's also patience - and for those of us with executive functioning issues, that's often the difference between attempting a task and not even bothering with it.)

stuaxo•5mo ago
AI selling itself at the high end is much like car companies showing off shiny sports cars.
ChrisMarshallNY•5mo ago
It takes practice, skill, and self-actualization, to become a really good listener. I know I’m not there, yet, and I’ve been at it, a long time. I suspect most folks aren’t so good at it.

It’s entirely possible that LLMs could make it so that people expect superhuman patience from other people.

I think there was a post, here, a few days ago, about people being “lost” to LLMs.

th0ma5•5mo ago
That's the ultimate goal of these models, though: to exhaust you of any sass. I'd imagine they will eventually approach full hallucination for any sufficiently long context.
Centigonal•5mo ago
>However, there doesn’t seem to be a huge consumer pressure towards smarter models. Claude Sonnet had a serious edge over ChatGPT for over a year, but only the most early-adopter of software engineers moved over to it. Most users are happy to just go to ChatGPT and talk to whatever’s available.

I want to challenge this assumption. I think ChatGPT is good enough for the use cases of most of its users. However, for specialist/power user work (e.g. coding, enterprise AI, foundation models for AI tools) there is strong pressure to identify the models with the best performance/cost/latency/procurement characteristics.

I think most "vibe coding" enthusiasts are keenly aware of the difference between Claude 3.7/Gemini Pro 2.5 and GPT-4.1. Likewise, people developing AI chatbots quickly become aware of the latency difference between e.g. OpenAI's and Claude (via Bedrock)'s batch APIs.

This is similar to how most non-professionals can get away with Paint.NET, while professional photo/graphic design people struggle to jump from Photoshop to anything else.

wobfan•5mo ago
> ChatGPT is good enough for the use cases of most of its users

I think that's the point the author made. If the large majority of users wants this, but software developers want that, they will obviously focus on what the majority wants. It's what recent history confirms, and it's the logical move from a capitalist standpoint.

To break it down: developers want intelligence and quality, users want patience and validation. ChatGPT is good at the latter and okay (in comparison to competitors) at the former.

TeMPOraL•5mo ago
I don't know how "normies" use this, but ChatGPT has been steadily improving. I've been using it much more over the past month than ever before (i.e. the official webapp, as opposed to the API or other providers), simply because o3 is just that good. The integration of search and thinking is beautifully effective. It's definitely the smartest model around for any problem-solving query, whether it's figuring out the pinout of some old electronic component you bought in Shenzhen a decade ago, or figuring out which product to buy to solve a problem and how it compares with alternatives.

I do agree that ChatGPT may just be good enough for casual users that alternatives aren't worth exploring (I'm tired of the constant churn of AI releases too - on that note, there should be a worldwide ban on multiple AI companies releasing similar tools at the same time; I don't have time to look into all of them at once!) - but they're definitely not getting a suboptimal deal here. At least not the ones on the paid plan who are aware of the model switcher in the UI.

EDIT: Also, setting gpt-4o as the default model gives ChatGPT another stickiness point: its (AFAIK still) unique image generator qualitatively outclasses anything that came before.

timewizard•5mo ago
> Most users are happy to just go to ChatGPT and talk to whatever’s available. Why is that?

Perhaps their use case is so unremarkable and unsophisticated that the quality of output is immaterial to it.

> Most good personal advice does not require substantial intelligence.

Is that what therapy is to this author? "Good advice given unintelligently?"

> They’re platitudes because they’re true!

And the appeal is you can get an LLM to repeat them to you? How exactly is that "appealing?"

> However, they are fundamentally a good fit for doing it because they are

...bad technology that can only solve a limited number of boring problems unreliably. Which is what saying platitudes to a person in trouble is and not at all what therapy is meant to be.

em-bee•5mo ago
for me, discussing problems requires human empathy from the listener. AI can't provide that. talking to an AI about personal problems is no better than talking to myself.

patience for answering technical/knowledge questions that i don't want to bother a human being with may be nice, but i get the same patience from a search engine. and the patience an AI provides has to be weighed against the patience i need to get the right answers out of it.

i have endless patience when talking to a human being because i have empathy for them. but i don't have empathy for a machine, and therefore i have no patience at all for the potential mistakes and hallucinations that an AI might produce.

AI for therapy is even worse. the thought that i could receive bad/hallucinated advice from an AI outright scares me.

kepano•5mo ago
I had a similar thought a while ago[1]:

> the most salient quality of language models is their ability to be infinitely patient

> humans have low tolerance when dealing with someone who changes their mind often, or needs something explained twenty different ways

Something I enjoy when working with language models is the ability to completely abandon days of work because I realize I am on the wrong track. This is difficult to do with humans because of the social element of sunk cost fallacy — it can be hard to let go of the work we invested our own time into.

[1] https://x.com/kepano/status/1842274557559816194

124123124•5mo ago
Agreed. I find it rather funny that LLMs can refresh their context, but humans remember their context from day to day, so it is sometimes very hard to explain things to them in an alternative wording.
ggm•5mo ago
"I'm sorry, you have exceeded my budget for today and must either ask again later or pay for a higher level of service" is not infinite patience. More specifically it's also not "too cheap to meter" because its patently both metered, and not too cheap.

And yes, despite what they might say, people were not seeking intelligence, which is under-defined and highly misunderstood. They were seeking answers.

Animats•5mo ago
The near future: receiving a huge OpenAI bill because your kid asked "Why" over and over and got answers.
stuaxo•5mo ago
Silly because search engines work really well for this use case.
bee_rider•5mo ago
Huh, I expected going in that this would actually be about LLMs waiting on customer service lines or whatever. That actually seems like it would be a rare social good produced by these things; plenty of organizations seem to shirk their responsibility to provide prompt customer service by hoping people will give up…

I’m less convinced of the good of an AI therapist. Seems too healthcare-y for these current buggy messes. But if somebody is aided by having a digital shoulder to cry on… eh, ok, why not?

TeMPOraL•5mo ago
> I’m less convinced of the good of an AI therapist. Seems too healthcare-y for these current buggy messes. But if somebody is aided by having a digital shoulder to cry on… eh, ok, why not?

Medium-term, that may be the problem. The social aspect of having another person "see you" is important in therapy. But in the immediate term, LLMs are a huge positive in this space. Professional therapy is stupidly expensive in terms of time and money, which makes it unavailable to the majority of people, even those who are rather well-off.

And then there's availability, which, beyond what the article discussed, matters also because many people have problems that don't fit well into typical 1h sessions 2-4 times a month. LLMs let one have a 2+ hour therapy session every day, at random hours, for as long as it takes to unburden oneself completely; something that's neither available nor affordable for most people.

stuaxo•5mo ago
This idea nails it.

I can ask the LLM infinite "stupid questions".

For all the things I know a little about, it can push me in the direction of an average in that field.

I can do lots of little prototypes and find the gaps, then think and come back or ask more; in turn, I learn.

ktallett•5mo ago
But do you ever get good enough to make a contribution that's worthwhile? And also is your knowledge flawed because the knowledge of the data used to train the model is flawed?

Whilst I do see your point, and I do see the value for prototyping, I don't quite agree that you can learn very much from it. Not more than the many basic "intro to ..." articles can teach.

dusted•5mo ago
Funny, because sometimes it's not the patience of the other side that's the problem, but my own... and with LLMs, I find my patience particularly challenged. With humans, it's often somewhat possible to gauge their level of comprehension and adjust accordingly, but with the idiot-savant-like qualities that most LLMs exhibit, it's really difficult to strike a balance, or even to understand at which point they're irrecoverably lost.
ghssds•5mo ago
>Most users are happy to just go to ChatGPT and talk to whatever’s available. Why is that?

I can talk to ChatGPT without authentication. The obligation to create a username/password is a hurdle so high that no other chatbot can overcome it. And even if one were so good that I'd be willing to authenticate, how would I know? To find out, they ask for a username/password I'm unwilling to provide without knowing.

andrewrn•5mo ago
LLMs and AI are cool and great, but I have my worries. And patience being the most desired quality of LLMs is more worrisome to me than intelligence being it. The reason is that it's just another way to not have to deal with other messy humans.

Technology almost definitionally reduces pain, and this view of LLMs can be seen as removing the "pain" of dealing with impatient, unempathetic humans. I think this might exacerbate the loneliness problems we're seeing.

rlsw – Raylib software OpenGL renderer in less than 5k LOC

https://github.com/raysan5/raylib/blob/master/src/external/rlsw.h
91•fschuett•4h ago•18 comments

"Butt breathing" might soon be a real medical treatment

https://arstechnica.com/science/2025/10/butt-breathing-might-soon-be-a-real-medical-treatment/
49•zdw•2h ago•13 comments

Daniel J. Bernstein updated cdb (Constant database) to go beyond 4GB

https://cdb.cr.yp.to/
17•kreco•1h ago•5 comments

PoE basics and beyond: What every engineer should know

https://www.edn.com/poe-basics-and-beyond-what-every-engineer-should-know/
32•voxadam•5d ago•11 comments

Replacing a $3000/mo Heroku bill with a $55/mo server

https://disco.cloud/blog/how-idealistorg-replaced-a-3000mo-heroku-bill-with-a-55-server/
361•jryio•4h ago•278 comments

LLMs can get "brain rot"

https://llm-brain-rot.github.io/
277•tamnd•10h ago•158 comments

Neural audio codecs: how to get audio into LLMs

https://kyutai.org/next/codec-explainer
328•karimf•12h ago•97 comments

Ask HN: Our AWS account got compromised after their outage

172•kinj28•9h ago•52 comments

ChatGPT Atlas

https://chatgpt.com/atlas
541•easton•7h ago•524 comments

NASA chief suggests SpaceX may be booted from moon mission

https://www.cnn.com/2025/10/20/science/nasa-spacex-moon-landing-contract-sean-duffy
208•voxleone•12h ago•645 comments

Wikipedia says traffic is falling due to AI search summaries and social video

https://techcrunch.com/2025/10/18/wikipedia-says-traffic-is-falling-due-to-ai-search-summaries-an...
259•gmays•23h ago•258 comments

Mathematicians have found a hidden 'reset button' for undoing rotation

https://www.newscientist.com/article/2499647-mathematicians-have-found-a-hidden-reset-button-for-...
100•mikhael•5d ago•71 comments

Doomsday scoreboard

https://doomsday.march1studios.com/
178•diymaker•5h ago•74 comments

Understanding conflict resolution and avoidance in PostgreSQL: a complete guide

https://www.pgedge.com/blog/living-on-the-edge
15•birdculture•1w ago•1 comments

Build your own database

https://www.nan.fyi/database
362•nansdotio•8h ago•61 comments

Getting DeepSeek-OCR working on an Nvidia Spark via brute force with Claude Code

https://simonwillison.net/2025/Oct/20/deepseek-ocr-claude-code/
118•simonw•1d ago•17 comments

Do not accept terms and conditions

https://www.termsandconditions.game/
72•halflife•4d ago•51 comments

Foreign hackers breached a US nuclear weapons plant via SharePoint flaws

https://www.csoonline.com/article/4074962/foreign-hackers-breached-a-us-nuclear-weapons-plant-via...
313•zdw•9h ago•210 comments

Minds, brains, and programs (1980) [pdf]

https://home.csulb.edu/~cwallis/382/readings/482/searle.minds.brains.programs.bbs.1980.pdf
43•measurablefunc•1w ago•12 comments

We need (at least) ergonomic, explicit handles [in Rust]

https://smallcultfollowing.com/babysteps/blog/2025/10/13/ergonomic-explicit-handles/
26•emschwartz•1w ago•1 comments

We rewrote OpenFGA in pure Postgres

https://getrover.substack.com/p/how-we-rewrote-openfga-in-pure-postgres
33•wbadart•4h ago•12 comments

Flexport Is Hiring SDRs in Chicago

https://job-boards.greenhouse.io/flexport/jobs/5690976?gh_jid=5690976
1•thedogeye•8h ago

The Salt and Pepper Shaker Museum

https://www.thesaltandpeppershakermuseum.com
23•NaOH•1w ago•6 comments

What do we do if SETI is successful?

https://www.universetoday.com/articles/what-do-we-do-if-seti-is-successful
115•leephillips•1d ago•181 comments

Show HN: Katakate – Dozens of VMs per node for safe code exec

https://github.com/Katakate/k7
83•gbxk•9h ago•36 comments

60k kids have avoided peanut allergies due to 2015 advice, study finds

https://www.cbsnews.com/news/peanut-allergies-60000-kids-avoided-2015-advice/
264•zdw•21h ago•258 comments

Diamond Thermal Conductivity: A New Era in Chip Cooling

https://spectrum.ieee.org/diamond-thermal-conductivity
164•rbanffy•13h ago•56 comments

The Lottery-fication of Everything

https://www.dopaminemarkets.com/p/the-lottery-fication-of-everything
63•_1729•4h ago•26 comments

Our modular, high-performance Merkle Tree library for Rust

https://github.com/bilinearlabs/rs-merkle-tree
124•bibiver•12h ago•26 comments

The death of thread per core

https://buttondown.com/jaffray/archive/the-death-of-thread-per-core/
71•ibobev•1d ago•30 comments