System76 on Age Verification Laws

https://blog.system76.com/post/system76-on-age-verification/
50•LorenDB•1h ago•16 comments

GPT-5.4

https://openai.com/index/introducing-gpt-5-4/
738•mudkipdev•11h ago•623 comments

Nobody ever got fired for using a struct

https://www.feldera.com/blog/nobody-ever-got-fired-for-using-a-struct
73•gz09•3d ago•47 comments

Where things stand with the Department of War

https://www.anthropic.com/news/where-stand-department-war
341•surprisetalk•5h ago•324 comments

10% of Firefox crashes are caused by bitflips

https://mas.to/@gabrielesvelto/116171750653898304
405•marvinborner•1d ago•217 comments

The Brand Age

https://paulgraham.com/brandage.html
291•bigwheels•12h ago•241 comments

Stop Using Grey Text (2025)

https://catskull.net/stop-using-grey-text.html
59•catskull•6h ago•35 comments

Labor market impacts of AI: A new measure and early evidence

https://www.anthropic.com/research/labor-market-impacts
121•jjwiseman•7h ago•158 comments

Show HN: Swarm – Program a colony of 200 ants using a custom assembly language

https://dev.moment.com/
20•armandhammer10•1h ago•9 comments

CBP tapped into the online advertising ecosystem to track peoples’ movements

https://www.404media.co/cbp-tapped-into-the-online-advertising-ecosystem-to-track-peoples-movements/
419•ece•1d ago•174 comments

A standard protocol to handle and discard low-effort, AI-Generated pull requests

https://406.fail/
138•Muhammad523•7h ago•42 comments

Good software knows when to stop

https://ogirardot.writizzy.com/p/good-software-knows-when-to-stop
389•ssaboum•16h ago•202 comments

Wikipedia was in read-only mode following mass admin account compromise

https://www.wikimediastatus.net
929•greyface-•13h ago•321 comments

TeX Live 2026 is available for download now

https://www.tug.org/texlive/acquire.html
7•jithinraj•36m ago•0 comments

A GitHub Issue Title Compromised 4k Developer Machines

https://grith.ai/blog/clinejection-when-your-ai-tool-installs-another
367•edf13•13h ago•91 comments

Hardware hotplug events on Linux, the gory details

https://arcanenibble.github.io/hardware-hotplug-events-on-linux-the-gory-details.html
130•todsacerdoti•3d ago•10 comments

A ternary plot of citrus genealogy

https://www.jlauf.com/writing/citrus/
113•jlauf•2d ago•20 comments

Hacking Super Mario 64 using covering spaces

https://happel.ai/posts/covering-spaces-geometries-visualized/
26•nill0•3d ago•4 comments

Remotely unlocking an encrypted hard disk

https://jyn.dev/remotely-unlocking-an-encrypted-hard-disk/
112•janandonly•11h ago•57 comments

Show HN: Jido 2.0, Elixir Agent Framework

https://jido.run/blog/jido-2-0-is-here
262•mikehostetler•14h ago•57 comments

Launch HN: Vela (YC W26) – AI for complex scheduling

42•Gobhanu•12h ago•37 comments

Show HN: PageAgent, A GUI agent that lives inside your web app

https://alibaba.github.io/page-agent/
82•simon_luv_pho•12h ago•46 comments

Structured AI (YC F25) Is Hiring

https://www.ycombinator.com/companies/structured-ai/jobs/3cQY6Cu-mechanical-design-engineer-found...
1•issygreenslade•8h ago

Judge orders government to begin refunding more than $130B in tariffs

https://www.wsj.com/politics/policy/judge-orders-government-to-begin-refunding-more-than-130-bill...
886•JumpCrisscross•15h ago•647 comments

How to install and start using LineageOS on your phone

https://lockywolf.net/2026-02-19_How-to-install-and-start-using-LineageOS-on-your-phone.d/index.html
28•todsacerdoti•5h ago•12 comments

AI and the Ship of Theseus

https://lucumr.pocoo.org/2026/3/5/theseus/
76•pixelmonkey•14h ago•84 comments

Breaking Down 50M Pins: A Smarter Way to Design 3D IC Packages

https://www.allaboutcircuits.com/industry-articles/breaking-down-50-million-pins-a-smarter-way-to...
3•WaitWaitWha•1h ago•0 comments

Proton Mail Helped FBI Unmask Anonymous 'Stop Cop City' Protester

https://www.404media.co/proton-mail-helped-fbi-unmask-anonymous-stop-cop-city-protestor/
299•sedatk•8h ago•144 comments

OpenTitan Shipping in Production

https://opensource.googleblog.com/2026/03/opentitan-shipping-in-production.html
97•rayhaanj•11h ago•16 comments

Code World Models for Parameter Control in Evolutionary Algorithms

https://www.alphaxiv.org/abs/2602.22260
3•camilochs•4d ago•0 comments

Patience too cheap to meter

https://www.seangoedecke.com/patience-too-cheap-to-meter/
76•swah•9mo ago

Comments

BrenBarn•9mo ago
When a person can't do something because it exhausts their patience, we usually describe it not by saying the task is difficult but that it is tedious, repetitive, boring, etc. So this article reinforces my view that the main impact of LLMs is their abilities at the low end, not the high end: they make it very easy to do a bad-but-maybe-adequate job at something that you're too impatient to do yourself.
perrygeo•9mo ago
I agree with this more daily.

Converting a dictionary into a list of records when you know that's what you want ... easy, mechanical, boring af, and something we should almost obviously outsource to machines. LLMs are great at this.

Deciding whether to use a dictionary or a stream of records as part of your API? You need to internalize the impacts of that decision. LLMs are generally not going to worry about those details unless you ask. And you absolutely need to ask.
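The mechanical transformation the comment describes might look like this in Python (a minimal sketch; the `dict_to_records` name and the user data are illustrative, not from the thread):

```python
def dict_to_records(d):
    """Flatten {key: {field: value}} into [{"id": key, field: value}, ...]."""
    # Dict unpacking merges each inner mapping into a record keyed by "id".
    return [{"id": key, **fields} for key, fields in d.items()]

users = {
    "u1": {"name": "Ada", "role": "admin"},
    "u2": {"name": "Ben", "role": "viewer"},
}

records = dict_to_records(users)
# [{'id': 'u1', 'name': 'Ada', 'role': 'admin'},
#  {'id': 'u2', 'name': 'Ben', 'role': 'viewer'}]
```

The conversion itself is trivial; the API-shape decision (mapping vs. stream of records) is the part that still needs a human to weigh the consequences.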

skydhash•9mo ago
> easy, mechanical, boring af, and something we should almost obviously outsource to machines

That’s when you learn vim or emacs. Instead of editing character wise, you move to bigger structures. Every editing task becomes a short list of commands and with the power of macros, repeatable. Then if you do it often, you add (easily) a custom command for it.

andyferris•9mo ago
Speaking of tedious and exhausting my patience… learning to use vim and emacs properly. I do like vim but I barely know how to use it and I’ve had well over a decade of opportunity to do so!

Pressing TAB with copilot to cover use cases you’ve never needed to discover a command or write a macro for is actually kinda cool, IMO.

TeMPOraL•9mo ago
> Deciding whether to use a dictionary or a stream of records as part of your API? You need to internalize the impacts of that decision. LLMs are generally not going to worry about those details unless you ask. And you absolutely need to ask.

OTOH, unless I've been immersed in the larger problem space of streaming vs. batching and caching, and generally thinking on this level, there's a good chance LLMs will "think" of more critical edge cases and caveats than I will. I use scare quotes here not because of the "are LLMs really thinking?" question, but because this isn't really a matter of thinking - it's a matter of having all the relevant associations loaded in your mental cache. SOTA LLMs always have them.

Of course I'll get better results if I dive in fully myself and "do it right". But there's only so much time working adults have to "do it right", one has to be selective about focusing attention; for everything else, quick consideration + iteration are the way to go, and if I'm going to do something quick, well, it turns out I can do this much better with a good LLM than without, because the LLM will have all the details cached that I don't have time to think up.

(Random example from just now: I asked o3 to help me with power-cycling and preserving longevity of a screen in an IoT side project; it gave me good tips, and then mentioned I should also take into account the on-board SD card and protect it from power interruptions and wear. I haven't even remotely considered that, but it was a spot-on observation.)

This actually worries me a bit, too. Until now, I relied on my experience-honed intuition for figuring out non-obvious and second-order consequences of quick decisions. But if I start to rely on LLMs for this, what will it do to my intuition?

(Also I talked time, but it's also patience - and for those of us with executive functioning issues, that's often a difference between attempting a task or not even bothering with it.)

stuaxo•9mo ago
AI selling itself at the high end is much like car companies showing off shiny sports cars.
ChrisMarshallNY•9mo ago
It takes practice, skill, and self-actualization, to become a really good listener. I know I’m not there, yet, and I’ve been at it, a long time. I suspect most folks aren’t so good at it.

It’s entirely possible that LLMs could make it so that people expect superhuman patience from other people.

I think there was a post, here, a few days ago, about people being “lost” to LLMs.

th0ma5•9mo ago
That's the ultimate goal of these models, though: to exhaust you of any sass. They will eventually approach full hallucination, I'd imagine, given a long enough context.
Centigonal•9mo ago
>However, there doesn’t seem to be a huge consumer pressure towards smarter models. Claude Sonnet had a serious edge over ChatGPT for over a year, but only the most early-adopter of software engineers moved over to it. Most users are happy to just go to ChatGPT and talk to whatever’s available.

I want to challenge this assumption. I think ChatGPT is good enough for the use cases of most of its users. However, for specialist/power user work (e.g. coding, enterprise AI, foundation models for AI tools) there is strong pressure to identify the models with the best performance/cost/latency/procurement characteristics.

I think most "vibe coding" enthusiasts are keenly aware of the difference between Claude 3.7/Gemini Pro 2.5 and GPT-4.1. Likewise, people developing AI chatbots quickly become aware of the latency difference between e.g. OpenAI's and Claude (via Bedrock)'s batch APIs.

This is similar to how most non-professionals can get away with Paint.NET, while professional photo/graphic design people struggle to jump from Photoshop to anything else.

wobfan•9mo ago
> ChatGPT is good enough for the use cases of most of its users

I think that's the point the author made. If the large majority of users want one thing but software developers want another, companies will obviously focus on the former. That's what recent history confirms, and it's what makes sense from a capitalist standpoint.

To break it down: developers want intelligence and quality; users want patience and validation. ChatGPT is good at the latter and okay (in comparison to competitors) at the former.

TeMPOraL•9mo ago
I don't know how "normies" use this, but ChatGPT has been steadily improving. Myself, I've been using it much more this past month than ever before (i.e. the official webapp, as opposed to the API or other providers), simply because o3 is just that good. The integration of search and thinking is something beautifully effective. It's definitely the smartest model around for any problem-solving queries, whether it's figuring out the pinout of some old electronic component you bought in Shenzhen a decade ago, or figuring out which product to buy to solve a problem and how it compares with alternatives.

I do agree that ChatGPT may just be good enough for casual users to not be worth exploring alternatives (I'm tired of the constant churn of AI releases too - on that note, there should be a worldwide ban on multiple AI companies releasing similar tools at the same time; I don't have time to look into all of them at once!) - but they're definitely not getting a suboptimal deal here. At least not the ones on the paid plan who are aware of the model switcher in the UI.

EDIT: Also, setting gpt-4o as the default model gives ChatGPT another stickiness point: its (AFAIK still) unique image generator qualitatively outclasses anything that came before.

timewizard•9mo ago
> Most users are happy to just go to ChatGPT and talk to whatever’s available. Why is that?

Perhaps their use case is so unremarkable and unsophisticated that the quality of output is immaterial to it.

> Most good personal advice does not require substantial intelligence.

Is that what therapy is to this author? "Good advice given unintelligently?"

> They’re platitudes because they’re true!

And the appeal is you can get an LLM to repeat them to you? How exactly is that "appealing?"

> However, they are fundamentally a good fit for doing it because they are

...bad technology that can only solve a limited number of boring problems unreliably. Which is what saying platitudes to a person in trouble is and not at all what therapy is meant to be.

em-bee•9mo ago
for me to discuss problems requires human empathy from the listener. AI can't provide that. talking to an AI about personal problems is no better than talking to myself.

patience for answering technical/knowledge questions that i don't want to bother a human being with may be nice, but i get the same patience from a search engine. and the patience an AI provides is contrasted with the patience that i need to get the right answers.

i have endless patience when talking to a human being because i have empathy for them. but i don't have empathy for a machine, and therefore i have no patience at all for the potential mistakes and hallucinations that an AI might produce.

AI for therapy is even worse. the thought that i could receive bad/hallucinated advice from an AI outright scares me.

kepano•9mo ago
I had a similar thought a while ago[1]:

> the most salient quality of language models is their ability to be infinitely patient
> humans have low tolerance when dealing with someone who changes their mind often, or needs something explained twenty different ways

Something I enjoy when working with language models is the ability to completely abandon days of work because I realize I am on the wrong track. This is difficult to do with humans because of the social element of sunk cost fallacy — it can be hard to let go of the work we invested our own time into.

[1] https://x.com/kepano/status/1842274557559816194

124123124•9mo ago
Agree, I find it rather funny that LLMs can refresh their context, but humans carry their context from day to day, so it is sometimes very hard to explain things to them in an alternative wording.
ggm•9mo ago
"I'm sorry, you have exceeded my budget for today and must either ask again later or pay for a higher level of service" is not infinite patience. More specifically it's also not "too cheap to meter" because its patently both metered, and not too cheap.

And yes, despite what they might say, people were not seeking intelligence, which is under-defined and highly misunderstood. They were seeking answers.

Animats•9mo ago
The near future: receiving a huge OpenAI bill because your kid asked "Why" over and over and got answers.
stuaxo•9mo ago
Silly because search engines work really well for this use case.
bee_rider•9mo ago
Huh, I expected going in that this would actually be about LLMs waiting on customer service lines or whatever. That actually seems like it would be a rare social good produced by these things; plenty of organizations seem to shirk their responsibility to provide prompt customer service by hoping people will give up…

I’m less convinced of the good of an AI therapist. Seems too healthcare-y for these current buggy messes. But if somebody is aided by having a digital shoulder to cry on… eh, ok, why not?

TeMPOraL•9mo ago
> I’m less convinced of the good of an AI therapist. Seems too healthcare-y for these current buggy messes. But if somebody is aided by having a digital shoulder to cry on… eh, ok, why not?

Medium-term, that may be the problem. The social aspect of having another person "see you" is important in therapy. But in the immediate term, LLMs are a huge positive in this space. Professional therapy is stupidly expensive in terms of time and money, which makes it unavailable to the majority of people, even those who are rather well-off.

And then there's availability, which, beyond what the article discussed, matters also because many people have problems that don't fit well with the typical one-hour sessions 2-4 times a month. LLMs let one have 2+ hour therapy sessions every day, at random hours, for as long as it takes to unburden oneself completely; something that's neither available nor affordable for most people.

stuaxo•9mo ago
This idea nails it.

I can ask the LLM infinite "stupid questions".

For all the things I know a little about, it can push me in the direction of an average in that field.

I can do lots of little prototypes and find the gaps then think and come back or ask more, in turn I learn.

ktallett•9mo ago
But do you ever get good enough to make a contribution that's worthwhile? And is your knowledge flawed because the data used to train the model is flawed?

Whilst I do see your point, and I do see the value for prototyping, I don't quite agree that you can learn very much from it. Not more than the many basic "intro to..." articles can teach.

dusted•9mo ago
Funny, because sometimes it's not the patience of the other side that's the problem, but my own... and with LLMs, I find my patience particularly challenged. Whereas with humans it's often somewhat possible to gauge their level of comprehension and adjust accordingly, with the idiot-savant-like qualities that most LLMs exhibit it's really difficult to strike a balance, or even understand at which point they're irrecoverably lost.
ghssds•9mo ago
>Most users are happy to just go to ChatGPT and talk to whatever’s available. Why is that?

I can talk to ChatGPT without authentication. The obligation to create a username/password is a hurdle so high that no current chatbot can clear it. And even if one were so good that I'd be willing to authenticate, how would I know? To find out, they ask for a username/password I'm unwilling to provide without knowing.

andrewrn•9mo ago
LLMs and AI are cool and great, but I have my worries. And patience being the most desired quality of LLMs is more worrisome to me than intelligence being it, because it's just another way to avoid dealing with other messy humans.

Technology almost definitionally reduces pain, and this view of LLMs can be seen as removing the "pain" of dealing with impatient, unempathetic humans. I think this might exacerbate the loneliness problems we're seeing.