
DDoSecrets publishes 410 GB of heap dumps, hacked from TeleMessage

https://micahflee.com/ddosecrets-publishes-410-gb-of-heap-dumps-hacked-from-telemessages-archive-server/
364•micahflee•6h ago•84 comments

I got fooled by AI-for-science hype–here's what it taught me

https://www.understandingai.org/p/i-got-fooled-by-ai-for-science-hypeheres
63•qianli_cs•2h ago•16 comments

The Windows Subsystem for Linux is now open source

https://blogs.windows.com/windowsdeveloper/2025/05/19/the-windows-subsystem-for-linux-is-now-open-source/
1257•pentagrama•14h ago•815 comments

Have I Been Pwned 2.0

https://www.troyhunt.com/have-i-been-pwned-2-0-is-now-live/
485•LorenDB•9h ago•152 comments

Making Video Games (Without an Engine) in 2025

https://noelberry.ca/posts/making_games_in_2025/
18•selvan•1h ago•3 comments

Jules: An Asynchronous Coding Agent

https://jules.google/
304•travisennis•9h ago•124 comments

What are people doing? Live-ish estimates based on global population dynamics

https://humans.maxcomperatore.com/
113•willbc•5h ago•33 comments

Zod 4

https://zod.dev/v4
663•bpierre•15h ago•193 comments

A shower thought turned into a Collatz visualization

https://abstractnonsense.com/collatz/
93•abstractbill•6h ago•15 comments

Ann, the Small Annotation Server

https://mccd.space/posts/design-pitch-ann/
28•todsacerdoti•3h ago•2 comments

GitHub Copilot Coding Agent

https://github.blog/changelog/2025-05-19-github-copilot-coding-agent-in-public-preview/
402•net01•14h ago•247 comments

Ask HN: Do people actually pay for small web tools?

20•scratchyone•2d ago•18 comments

Claude Code SDK

https://docs.anthropic.com/en/docs/claude-code/sdk
323•sync•13h ago•150 comments

Show HN: A free, privacy preserving, archive of public Discord servers

https://searchcord.io
44•searchcord•4h ago•37 comments

Launch HN: Better Auth (YC X25) – Authentication Framework for TypeScript

206•bekacru•16h ago•81 comments

Kilo: A text editor in less than 1000 LOC with syntax highlight and search

https://github.com/antirez/kilo
124•klaussilveira•10h ago•17 comments

Run GitHub Actions locally

https://github.com/nektos/act
218•flashblaze•3d ago•89 comments

Game theory illustrated by an animated cartoon game

https://ncase.me/trust/
262•felineflock•14h ago•43 comments

A man who visited every country in the world without boarding a plane (2023)

https://www.theguardian.com/travel/2023/aug/16/take-the-high-road-the-man-who-visited-every-country-in-the-world-without-boarding-a-plane
39•thunderbong•2d ago•13 comments

Memory Consistency Models: A Tutorial

https://jamesbornholt.com/blog/memory-models/
33•tanelpoder•5h ago•2 comments

Biff – a batteries-included web framework for Clojure

https://biffweb.com
25•TheWiggles•3h ago•1 comment

The forbidden railway: Vienna-Pyongyang (2008)

http://vienna-pyongyang.blogspot.com/2008/04/how-everything-began.html
175•1317•12h ago•48 comments

Terraform MCP Server

https://github.com/hashicorp/terraform-mcp-server
65•kesor•8h ago•15 comments

Remarks on AI from NZ

https://nealstephenson.substack.com/p/remarks-on-ai-from-nz
150•zdw•4d ago•75 comments

xAI's Grok 3 comes to Microsoft Azure

https://techcrunch.com/2025/05/19/xais-grok-3-comes-to-microsoft-azure/
122•mfiguiere•14h ago•113 comments

Patience too cheap to meter

https://www.seangoedecke.com/patience-too-cheap-to-meter/
41•swah•2d ago•14 comments

Experimentation Matters: Why Nuenki isn't using pairwise evaluations

https://nuenki.app/blog/experimentation_matters_why_we_arent_using_pairwise
4•Alex-Programs•3d ago•0 comments

WireGuard vanity keygen

https://github.com/axllent/wireguard-vanity-keygen
78•simonpure•10h ago•13 comments

Too Much Go Misdirection

https://flak.tedunangst.com/post/too-much-go-misdirection
163•todsacerdoti•15h ago•78 comments

Solving the local optima problem – NQueens

https://github.com/Dpbm/n-rainhas/blob/main/readme-en.md
7•ColinWright•3d ago•1 comment

Patience too cheap to meter

https://www.seangoedecke.com/patience-too-cheap-to-meter/
41•swah•2d ago

Comments

BrenBarn•1d ago
When a person can't do something because it exhausts their patience, we usually describe it not by saying the task is difficult but that it is tedious, repetitive, boring, etc. So this article reinforces my view that the main impact of LLMs is their abilities at the low end, not the high end: they make it very easy to do a bad-but-maybe-adequate job at something that you're too impatient to do yourself.
perrygeo•4h ago
I agree with this more every day.

Converting a dictionary into a list of records when you know that's what you want ... easy, mechanical, boring af, and something we should almost obviously outsource to machines. LLMs are great at this.

Deciding whether to use a dictionary or a stream of records as part of your API? You need to internalize the impacts of that decision. LLMs are generally not going to worry about those details unless you ask. And you absolutely need to ask.
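For concreteness, the mechanical dict-to-records conversion described above might look like this in Python (the data shape here is hypothetical, just to illustrate the transform):

```python
# Hypothetical data: a dict keyed by user name, with per-user fields.
users = {
    "alice": {"age": 30, "role": "admin"},
    "bob": {"age": 25, "role": "viewer"},
}

# Promote each key to a "name" field on its own record,
# turning the keyed dict into a flat list of records.
records = [{"name": name, **fields} for name, fields in users.items()]
# records == [{"name": "alice", "age": 30, "role": "admin"},
#             {"name": "bob", "age": 25, "role": "viewer"}]
```

One line once you know you want it, but pure tedium to type out for a large, irregular dict by hand.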

skydhash•2h ago
> easy, mechanical, boring af, and something we should almost obviously outsource to machines

That’s when you learn vim or emacs. Instead of editing character-wise, you operate on bigger structures. Every editing task becomes a short list of commands, and with the power of macros, repeatable. Then, if you do it often, you can easily add a custom command for it.

andyferris•2h ago
Speaking of tedious and exhausting my patience… learning to use vim and emacs properly. I do like vim but I barely know how to use it and I’ve had well over a decade of opportunity to do so!

Pressing TAB with copilot to cover use cases you’ve never needed to discover a command or write a macro for is actually kinda cool, IMO.

ChrisMarshallNY•5h ago
It takes practice, skill, and self-actualization, to become a really good listener. I know I’m not there, yet, and I’ve been at it, a long time. I suspect most folks aren’t so good at it.

It’s entirely possible that LLMs could make it so that people expect superhuman patience from other people.

I think there was a post, here, a few days ago, about people being “lost” to LLMs.

th0ma5•5h ago
That's the ultimate goal of these models, though: to exhaust you of any sass. I'd imagine they will eventually approach full hallucination for any long enough context.
Centigonal•5h ago
>However, there doesn’t seem to be a huge consumer pressure towards smarter models. Claude Sonnet had a serious edge over ChatGPT for over a year, but only the most early-adopter of software engineers moved over to it. Most users are happy to just go to ChatGPT and talk to whatever’s available.

I want to challenge this assumption. I think ChatGPT is good enough for the use cases of most of its users. However, for specialist/power user work (e.g. coding, enterprise AI, foundation models for AI tools) there is strong pressure to identify the models with the best performance/cost/latency/procurement characteristics.

I think most "vibe coding" enthusiasts are keenly aware of the difference between Claude 3.7/Gemini Pro 2.5 and GPT-4.1. Likewise, people developing AI chatbots quickly become aware of the latency difference between e.g. OpenAI's and Claude (via Bedrock)'s batch APIs.

This is similar to how most non-professionals can get away with Paint.NET, while professional photo/graphic design people struggle to jump from Photoshop to anything else.

wobfan•21m ago
> ChatGPT is good enough for the use cases of most of its users

I think that's the point the author made. If the big majority of users want this, but software developers want that, they obviously focus on the former. It's what recent history confirmed, and it's what makes sense from a capitalist standpoint.

To break it down: developers want intelligence and quality, users want patience and validation. ChatGPT is good at the latter and okay (in comparison to competitors) at the former.

timewizard•4h ago
> Most users are happy to just go to ChatGPT and talk to whatever’s available. Why is that?

Perhaps their use case is so unremarkable and unsophisticated that the quality of output is immaterial to it.

> Most good personal advice does not require substantial intelligence.

Is that what therapy is to this author? "Good advice given unintelligently?"

> They’re platitudes because they’re true!

And the appeal is you can get an LLM to repeat them to you? How exactly is that "appealing?"

> However, they are fundamentally a good fit for doing it because they are

...bad technology that can only solve a limited number of boring problems unreliably. Which is what saying platitudes to a person in trouble is and not at all what therapy is meant to be.

kepano•3h ago
I had a similar thought a while ago[1]:

> the most salient quality of language models is their ability to be infinitely patient

> humans have low tolerance when dealing with someone who changes their mind often, or needs something explained twenty different ways

Something I enjoy when working with language models is the ability to completely abandon days of work because I realize I am on the wrong track. This is difficult to do with humans because of the social element of sunk cost fallacy — it can be hard to let go of the work we invested our own time into.

[1] https://x.com/kepano/status/1842274557559816194

124123124•38m ago
Agreed. I find it rather funny that LLMs can refresh their context, but humans carry their context from day to day, so it is sometimes very hard to explain things to them in an alternative wording.
ggm•2h ago
"I'm sorry, you have exceeded my budget for today and must either ask again later or pay for a higher level of service" is not infinite patience. More specifically it's also not "too cheap to meter" because its patently both metered, and not too cheap.

And yes, despite what they might say, people were not seeking intelligence, which is under-defined and highly misunderstood. They were seeking answers.

Animats•2h ago
The near future: receiving a huge OpenAI bill because your kid asked "Why" over and over and got answers.
bee_rider•1h ago
Huh, I expected going in that this would actually be about LLMs waiting on customer service lines or whatever. That actually seems like it would be a rare social good produced by these things; plenty of organizations seem to shirk their responsibility to provide prompt customer service by hoping people will give up…

I’m less convinced of the good of an AI therapist. Seems too healthcare-y for these current buggy messes. But if somebody is aided by having a digital shoulder to cry on… eh, ok, why not?