The threat is comfortable drift toward not understanding what you're doing

https://ergosphere.blog/posts/the-machines-are-fine/
142•zaikunzhang•2h ago•72 comments

Talk like caveman

https://github.com/JuliusBrussee/caveman
160•tosh•3h ago•112 comments

Lisette, a little language inspired by Rust that compiles to Go

https://lisette.run/
122•jspdown•5h ago•60 comments

Ubuntu now requires more RAM than Windows 11

https://www.howtogeek.com/ubuntu-now-requires-more-ram-than-windows-11/
43•jnord•48m ago•34 comments

Show HN: A game where you build a GPU

https://jaso1024.com/mvidia/
807•Jaso1024•19h ago•166 comments

German implementation of eIDAS will require an Apple/Google account to function

https://bmi.usercontent.opencode.de/eudi-wallet/wallet-development-documentation-public/latest/ar...
362•DyslexicAtheist•13h ago•327 comments

Hightouch (YC S19) Is Hiring

https://hightouch.com/careers#open-positions
1•joshwget•42m ago

OpenScreen is an open-source alternative to Screen Studio

https://github.com/siddharthvaddem/openscreen
343•jskopek•4d ago•58 comments

Introduction to Computer Music (2009) [pdf]

https://composerprogrammer.com/introductiontocomputermusic.pdf
174•luu•10h ago•54 comments

Sad Story of My Google Workspace Account Suspension

https://zencapital.substack.com/p/sad-story-of-my-google-workspace
37•zenincognito•57m ago•3 comments

Costco sued for seeking refunds on tariffs customers paid

https://arstechnica.com/tech-policy/2026/03/costco-sued-for-seeking-refunds-on-tariffs-customers-...
37•AdmiralAsshat•4d ago•18 comments

Scientists Figured Out How Eels Reproduce (2022)

https://www.intelligentliving.co/scientists-finally-figured-out-how-eels-reproduce/
53•thunderbong•3d ago•4 comments

Aegis – open-source FPGA silicon

https://github.com/MidstallSoftware/aegis
53•rosscomputerguy•6h ago•4 comments

LLM Wiki – example of an "idea file"

https://gist.github.com/karpathy/442a6bf555914893e9891c11519de94f
227•tamnd•19h ago•70 comments

Zml-smi: universal monitoring tool for GPUs, TPUs and NPUs

https://zml.ai/posts/zml-smi/
57•steeve•4d ago•8 comments

How many products does Microsoft have named 'Copilot'?

https://teybannerman.com/strategy/2026/03/31/how-many-microsoft-copilot-are-there.html
694•gpi•17h ago•328 comments

Show HN: OsintRadar – Curated directory for osint tools

https://osintradar.com/
21•lexalizer•6h ago•1 comments

Rubysyn: Clarifying Ruby's Syntax and Semantics

https://github.com/squadette/rubysyn/blob/master/README.md
57•petalmind•4d ago•7 comments

Show HN: I built a small app for FSI German Course

https://detawk.com/
41•syedmsawaid•3d ago•13 comments

Shared mutable state in Rust (2022)

https://draft.ryhl.io/blog/shared-mutable-state/
5•vinhnx•3d ago•1 comments

AWS engineer reports PostgreSQL perf halved by Linux 7.0, fix may not be easy

https://www.phoronix.com/news/Linux-7.0-AWS-PostgreSQL-Drop
331•crcastle•12h ago•96 comments

Show HN: sllm – Split a GPU node with other developers, unlimited tokens

https://sllm.cloud
165•jrandolf•21h ago•81 comments

Show HN: I made open source, zero power PCB hackathon badges

https://github.com/KaiPereira/Overglade-Badges
118•kaipereira•22h ago•11 comments

The Indie Internet Index – submit your favorite sites

https://iii.social
163•freshman_dev•22h ago•31 comments

Demonstrating Real Time AV2 Decoding on Consumer Laptops

http://aomedia.org/blog%20posts/Demonstrating-Real-Time-AV2-Decoding-on-Consumer-Laptops/
38•breve•11h ago•9 comments

Components of a Coding Agent

https://magazine.sebastianraschka.com/p/components-of-a-coding-agent
255•MindGods•23h ago•80 comments

Unverified: What Practitioners Post About OCR, Agents, and Tables

https://idp-software.com/news/idp-accuracy-reckoning-2026/
14•chelm•6h ago•2 comments

Show HN: M. C. Escher spiral in WebGL inspired by 3Blue1Brown

https://static.laszlokorte.de/escher/
100•laszlokorte•17h ago•16 comments

Show HN: Contrapunk – Real-time counterpoint harmony from guitar input, in Rust

https://contrapunk.com/
78•waveywaves•12h ago•34 comments

Ruckus: Racket for iOS

https://ruckus.defn.io/
140•nsm•2d ago•13 comments

Patience too cheap to meter

https://www.seangoedecke.com/patience-too-cheap-to-meter/
76•swah•10mo ago

Comments

BrenBarn•10mo ago
When a person can't do something because it exhausts their patience, we usually describe it not by saying the task is difficult but that it is tedious, repetitive, boring, etc. So this article reinforces my view that the main impact of LLMs is their abilities at the low end, not the high end: they make it very easy to do a bad-but-maybe-adequate job at something that you're too impatient to do yourself.
perrygeo•10mo ago
I agree with this more daily.

Converting a dictionary into a list of records when you know that's what you want ... easy, mechanical, boring af, and something we should almost obviously outsource to machines. LLMs are great at this.
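The mechanical transformation being described can be sketched in a few lines of Python (the function and field names here are hypothetical, just to make the shape concrete):

```python
# Flatten a mapping of key -> attributes into a list of records,
# folding each key back in as an "id" field. Purely mechanical:
# exactly the sort of transformation worth outsourcing.
def dict_to_records(data: dict) -> list[dict]:
    return [{"id": key, **attrs} for key, attrs in data.items()]

inventory = {
    "a1": {"name": "bolt", "qty": 40},
    "b2": {"name": "nut", "qty": 25},
}
records = dict_to_records(inventory)
# records == [{"id": "a1", "name": "bolt", "qty": 40},
#             {"id": "b2", "name": "nut", "qty": 25}]
```

The one-liner is trivial; the interesting work is the decision in the next paragraph about which shape the API should expose in the first place.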

Deciding whether to use a dictionary or a stream of records as part of your API? You need to internalize the impacts of that decision. LLMs are generally not going to worry about those details unless you ask. And you absolutely need to ask.

skydhash•10mo ago
> easy, mechanical, boring af, and something we should almost obviously outsource to machines

That’s when you learn vim or emacs. Instead of editing character wise, you move to bigger structures. Every editing task becomes a short list of commands and with the power of macros, repeatable. Then if you do it often, you add (easily) a custom command for it.

andyferris•10mo ago
Speaking of tedious and exhausting my patience… learning to use vim and emacs properly. I do like vim but I barely know how to use it and I’ve had well over a decade of opportunity to do so!

Pressing TAB with copilot to cover use cases you’ve never needed to discover a command or write a macro for is actually kinda cool, IMO.

TeMPOraL•10mo ago
> Deciding whether to use a dictionary or a stream of records as part of your API? You need to internalize the impacts of that decision. LLMs are generally not going to worry about those details unless you ask. And you absolutely need to ask.

OTOH, unless I've been immersed in the larger problem space of streaming vs. batching and caching, and generally thinking on this level, there's a good chance LLMs will "think" of more critical edge cases and caveats than I will. I use scare quotes here not because of the "are LLMs really thinking?" question, but because this isn't really a matter of thinking - it's a matter of having all the relevant associations loaded in your mental cache. SOTA LLMs always have them.

Of course I'll get better results if I dive in fully myself and "do it right". But there's only so much time working adults have to "do it right", one has to be selective about focusing attention; for everything else, quick consideration + iteration are the way to go, and if I'm going to do something quick, well, it turns out I can do this much better with a good LLM than without, because the LLM will have all the details cached that I don't have time to think up.

(Random example from just now: I asked o3 to help me with power-cycling and preserving longevity of a screen in an IoT side project; it gave me good tips, and then mentioned I should also take into account the on-board SD card and protect it from power interruptions and wear. I haven't even remotely considered that, but it was a spot-on observation.)

This actually worries me a bit, too. Until now, I relied on my experience-honed intuition for figuring out non-obvious and second-order consequences of quick decisions. But if I start to rely on LLMs for this, what will it do to my intuition?

(Also I talked time, but it's also patience - and for those of us with executive functioning issues, that's often a difference between attempting a task or not even bothering with it.)

stuaxo•10mo ago
AI selling itself at the high end is much like car companies showing off shiny sports cars.
ChrisMarshallNY•10mo ago
It takes practice, skill, and self-actualization, to become a really good listener. I know I’m not there, yet, and I’ve been at it, a long time. I suspect most folks aren’t so good at it.

It’s entirely possible that LLMs could make it so that people expect superhuman patience from other people.

I think there was a post, here, a few days ago, about people being “lost” to LLMs.

th0ma5•10mo ago
That's the ultimate goal of these models, though: to exhaust you of any sass. I'd imagine they will eventually approach full hallucination for any sufficiently long context.
Centigonal•10mo ago
>However, there doesn’t seem to be a huge consumer pressure towards smarter models. Claude Sonnet had a serious edge over ChatGPT for over a year, but only the most early-adopter of software engineers moved over to it. Most users are happy to just go to ChatGPT and talk to whatever’s available.

I want to challenge this assumption. I think ChatGPT is good enough for the use cases of most of its users. However, for specialist/power user work (e.g. coding, enterprise AI, foundation models for AI tools) there is strong pressure to identify the models with the best performance/cost/latency/procurement characteristics.

I think most "vibe coding" enthusiasts are keenly aware of the difference between Claude 3.7/Gemini Pro 2.5 and GPT-4.1. Likewise, people developing AI chatbots quickly become aware of the latency difference between e.g. OpenAI's and Claude (via Bedrock)'s batch APIs.

This is similar to how most non-professionals can get away with Paint.NET, while professional photo/graphic design people struggle to jump from Photoshop to anything else.

wobfan•10mo ago
> ChatGPT is good enough for the use cases of most of its users

I think that's the point the author made. If the large majority of users wants one thing but software developers want another, companies will obviously focus on the majority. It's what recent history confirms, and it's the logical outcome from a capitalist standpoint.

To break it down, developers want intelligence and quality, users want patience and validation. ChatGPT is good at the latter and okay (in comparison to competitors) at the former.

TeMPOraL•10mo ago
I don't know how "normies" use this, but ChatGPT has been steadily improving. Myself, I've been using it much more in the recent month than ever before (i.e. official webapp, as opposed to API or other providers), simply because o3 is just that good. The integration of search and thinking is something beautifully effective. It's definitely the smartest model around for any problem-solving queries, whether it's figuring out the pinout of some old electronic component you bought in Shenzhen a decade ago, or figuring out which product to buy to solve a problem and how it compares with alternatives.

I do agree that ChatGPT may just be good enough for casual users to not be worth exploring alternatives (I'm tired of the constant churn of AI releases too - on that note, there should be a worldwide ban on multiple AI companies releasing similar tools at the same time; I don't have time to look into all of them at once!) - but they're definitely not getting a suboptimal deal here. At least not the ones on the paid plan who are aware of the model switcher in the UI.

EDIT: Also, setting gpt-4o as the default model gives ChatGPT another stickiness point: its (AFAIK still) unique image generator, which qualitatively outclasses anything that came before.

timewizard•10mo ago
> Most users are happy to just go to ChatGPT and talk to whatever’s available. Why is that?

Perhaps their use case is so unremarkable and unsophisticated that the quality of output is immaterial to it.

> Most good personal advice does not require substantial intelligence.

Is that what therapy is to this author? "Good advice given unintelligently?"

> They’re platitudes because they’re true!

And the appeal is you can get an LLM to repeat them to you? How exactly is that "appealing?"

> However, they are fundamentally a good fit for doing it because they are

...bad technology that can only solve a limited number of boring problems unreliably. Which is what saying platitudes to a person in trouble is and not at all what therapy is meant to be.

em-bee•10mo ago
for me to discuss problems requires human empathy from the listener. AI can't provide that. talking to an AI about personal problems is no better than talking to myself.

patience for answering technical/knowledge questions that i don't want to bother a human being with may be nice, but i get the same patience from a search engine. and the patience an AI provides is contrasted with the patience that i need to get the right answers.

i have endless patience when talking to a human being because i have empathy for them. but i don't have empathy for a machine, and therefore i have no patience at all for the potential mistakes and hallucinations that an AI might produce.

AI for therapy is even worse. the thought that i could receive bad/hallucinated advice from an AI outright scares me.

kepano•10mo ago
I had a similar thought a while ago[1]:

> the most salient quality of language models is their ability to be infinitely patient
>
> humans have low tolerance when dealing with someone who changes their mind often, or needs something explained twenty different ways

Something I enjoy when working with language models is the ability to completely abandon days of work because I realize I am on the wrong track. This is difficult to do with humans because of the social element of sunk cost fallacy — it can be hard to let go of the work we invested our own time into.

[1] https://x.com/kepano/status/1842274557559816194

124123124•10mo ago
Agree. I find it rather funny that LLMs can refresh their context, but humans carry their context from day to day, so it is sometimes very hard to explain things to them in alternative wording.
ggm•10mo ago
"I'm sorry, you have exceeded my budget for today and must either ask again later or pay for a higher level of service" is not infinite patience. More specifically, it's also not "too cheap to meter", because it's patently both metered and not too cheap.

And yes, despite what they might say, people were not seeking intelligence, which is under-defined and highly misunderstood. They were seeking answers.

Animats•10mo ago
The near future: receiving a huge OpenAI bill because your kid asked "Why" over and over and got answers.
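There is real arithmetic behind the joke: chat APIs bill the entire conversation history as input on every turn, so a long run of follow-up questions grows quadratically in cost. A back-of-envelope sketch, with hypothetical per-token prices (real prices vary widely by model and provider):

```python
# Back-of-envelope: cost of a child asking "why?" N times in one chat,
# assuming hypothetical pricing of $10 per million input tokens and
# $30 per million output tokens.
PRICE_IN = 10 / 1_000_000   # dollars per input token (assumed)
PRICE_OUT = 30 / 1_000_000  # dollars per output token (assumed)

def conversation_cost(turns: int, answer_tokens: int = 300) -> float:
    """Each turn resends the growing history plus one new answer."""
    total_in = 0
    context = 0
    for _ in range(turns):
        context += 5              # the question: "why?" (~5 tokens)
        total_in += context       # the whole history is billed as input
        context += answer_tokens  # the answer joins the context
    return total_in * PRICE_IN + turns * answer_tokens * PRICE_OUT

# The input-token bill grows quadratically with the number of turns,
# so it dominates the output cost long before the child gets bored.
print(f"50 rounds of 'why?': ${conversation_cost(50):.2f}")
```

Not exactly a "huge" bill at these rates, but unlike a parent's patience, it is metered.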
stuaxo•10mo ago
Silly because search engines work really well for this use case.
bee_rider•10mo ago
Huh, I expected going in that this would actually be about LLMs waiting on customer service lines or whatever. That actually seems like it would be a rare social good produced by these things; plenty of organizations seem to shirk their responsibility to provide prompt customer service by hoping people will give up…

I’m less convinced of the good of an AI therapist. Seems too healthcare-y for these current buggy messes. But if somebody is aided by having a digital shoulder to cry on… eh, ok, why not?

TeMPOraL•10mo ago
> I’m less convinced of the good of an AI therapist. Seems too healthcare-y for these current buggy messes. But if somebody is aided by having a digital shoulder to cry on… eh, ok, why not?

Medium-term, that may be the problem. The social aspect of having another person "see you" is important in therapy. But in the immediate term, LLMs are a huge positive in this space. Professional therapy is stupidly expensive in terms of time and money, making it unavailable to the majority of people, even those rather well-off.

And then there's availability, which, beyond what the article discussed, matters also because many people have problems that don't fit well with typical 1-hour sessions 2-4 times a month. LLMs let one have 2+ hour therapy sessions every day, at random hours, for as long as it takes to unburden oneself completely; something that's neither available nor affordable for most people.

stuaxo•10mo ago
This idea nails it.

I can ask the LLM infinite "stupid questions".

For all the things I know a little about, it can push me in the direction of an average in that field.

I can do lots of little prototypes and find the gaps then think and come back or ask more, in turn I learn.

ktallett•10mo ago
But do you ever get good enough to make a contribution that's worthwhile? And also is your knowledge flawed because the knowledge of the data used to train the model is flawed?

Whilst I do see your point and I do see the value for prototyping, I don't quite agree that you can learn very much from it. Not more than the many basic "intro to ..." articles can teach.

dusted•10mo ago
Funny, because sometimes it's not the patience of the other side that's the problem, but my own.. and with LLMs in particular, I find my patience challenged. With humans it's often somewhat possible to gauge their level of comprehension and adjust accordingly, but with the idiot-savant-like qualities that most LLMs exhibit, it's really difficult to strike a balance, or even to understand at which point they're irrecoverably lost.
ghssds•10mo ago
>Most users are happy to just go to ChatGPT and talk to whatever’s available. Why is that?

I can talk to ChatGPT without authentication. The obligation to create a username/password is a hurdle so high that no current chatbot can get me over it. And even if one were so good I'd be willing to authenticate, how would I know? To find out, I'd have to provide a username/password I'm unwilling to give without already knowing.

andrewrn•10mo ago
LLMs and AI are cool and great, but I have my worries. And this is more worrisome to me than if the most desired quality of LLMs were intelligence. The reason being that it's just another way to avoid dealing with other messy humans.

Technology almost definitionally reduces pain, and this view of LLMs can be seen as removing the "pain" of dealing with impatient, unempathetic humans. I think this might exacerbate the loneliness problems we're seeing.