
Malus – Clean Room as a Service

https://malus.sh
864•microflash•7h ago•341 comments

AI error jails innocent grandmother for months in North Dakota fraud case

https://www.grandforksherald.com/news/north-dakota/ai-error-jails-innocent-grandmother-for-months...
32•rectang•23m ago•11 comments

Bubble Sorted Amen Break

https://parametricavocado.itch.io/amen-sorting
190•eieio•4h ago•66 comments

Reversing memory loss via gut-brain communication

https://med.stanford.edu/news/all-news/2026/03/gut-brain-cognitive-decline.html
148•mustaphah•4h ago•40 comments

ATMs didn't kill bank teller jobs, but the iPhone did

https://davidoks.blog/p/why-the-atm-didnt-kill-bank-teller
233•colinprince•6h ago•284 comments

The Met releases high-def 3D scans of 140 famous art objects

https://www.openculture.com/2026/03/the-met-releases-high-definition-3d-scans-of-140-famous-art-o...
158•coloneltcb•5h ago•32 comments

Runners who churn butter on their runs

https://www.runnersworld.com/news/a70683169/how-to-make-butter-while-running/
34•randycupertino•1h ago•14 comments

Show HN: OneCLI – Vault for AI Agents in Rust

https://github.com/onecli/onecli
89•guyb3•4h ago•34 comments

Launch HN: IonRouter (YC W26) – High-throughput, low-cost inference

https://ionrouter.io
24•vshah1016•2h ago•10 comments

Bringing Chrome to ARM64 Linux Devices

https://blog.chromium.org/2026/03/bringing-chrome-to-arm64-linux-devices.html
14•ingve•1h ago•13 comments

An old photo of a large BBS (2022)

https://rachelbythebay.com/w/2022/01/26/swcbbs/
110•xbryanx•1h ago•73 comments

WolfIP: Lightweight TCP/IP stack with no dynamic memory allocations

https://github.com/wolfssl/wolfip
66•789c789c789c•5h ago•6 comments

Forcing Flash Attention onto a TPU and Learning the Hard Way

https://archerzhang.me/forcing-flash-attention-onto-a-tpu
7•azhng•4d ago•0 comments

Dolphin Progress Release 2603

https://dolphin-emu.org/blog/2026/03/12/dolphin-progress-report-release-2603/
268•BitPirate•11h ago•44 comments

Converge (YC S23) Is Hiring a Founding Platform Engineer (NYC, Onsite)

https://www.runconverge.com/careers/founding-platform-engineer
1•thomashlvt•4h ago

Big data on the cheapest MacBook

https://duckdb.org/2026/03/11/big-data-on-the-cheapest-macbook
257•bcye•9h ago•234 comments

Show HN: Understudy – Teach a desktop agent by demonstrating a task once

https://github.com/understudy-ai/understudy
56•bayes-song•4h ago•16 comments

Show HN: Axe – A 12MB binary that replaces your AI framework

https://github.com/jrswab/axe
110•jrswab•7h ago•73 comments

Document poisoning in RAG systems: How attackers corrupt AI's sources

https://aminrj.com/posts/rag-document-poisoning/
4•aminerj•7h ago•0 comments

US private credit defaults hit record 9.2% in 2025, Fitch says

https://www.marketscreener.com/news/us-private-credit-defaults-hit-record-9-2-in-2025-fitch-says-...
144•JumpCrisscross•8h ago•290 comments

The Road Not Taken: A World Where IPv4 Evolved

https://owl.billpg.com/ipv4x/
32•billpg•5h ago•53 comments

Are LLM merge rates not getting better?

https://entropicthoughts.com/no-swe-bench-improvement
81•4diii•9h ago•90 comments

Full Spectrum and Infrared Photography

https://timstr.website/blog/fullspectrumphotography.html
37•alter_igel•4d ago•13 comments

NASA's DART spacecraft changed an asteroid's orbit around the sun

https://www.sciencenews.org/article/spacecraft-changed-asteroid-orbit-nasa
86•pseudolus•3d ago•46 comments

The Cost of Indirection in Rust

https://blog.sebastiansastre.co/posts/cost-of-indirection-in-rust/
64•sebastianconcpt•3d ago•30 comments

Show HN: Rudel – Claude Code Session Analytics

https://github.com/obsessiondb/rudel
118•keks0r•7h ago•72 comments

Italian prosecutors seek trial for Amazon, 4 execs in alleged $1.4B tax evasion

https://www.reuters.com/world/italian-prosecutors-seek-trial-amazon-four-execs-over-alleged-14-bl...
217•amarcheschi•5h ago•54 comments

Kotlin creator's new language: talk to LLMs in specs, not English

https://codespeak.dev/
256•souvlakee•6h ago•218 comments

DDR4 SDRAM – Initialization, Training and Calibration

https://www.systemverilog.io/design/ddr4-initialization-and-calibration/
41•todsacerdoti•2d ago•9 comments

Claude now creates interactive charts, diagrams and visualizations

https://claude.com/blog/claude-builds-visuals
147•adocomplete•5h ago•89 comments

The Bitter Lesson Has No Utility Function

https://gfrm.in/posts/bitter-lesson-missing-half/index.html
14•slygent•2h ago

Comments

PaulHoule•1h ago
... well, well, well. I spent a lot of the 2010s revisiting symbolic AI, and I'd say the worst problem it had was "reasoning with uncertainty". If you consider a medical diagnosis system like

https://en.wikipedia.org/wiki/Mycin

the result is probabilistic in nature: there's always some chance you'll get it wrong.
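MYCIN handled that uncertainty with certainty factors rather than full probabilities. A toy sketch of its evidence-combination rule, reconstructed from the published descriptions (not MYCIN's actual Lisp):

```python
def combine_cf(cf1: float, cf2: float) -> float:
    """Combine two certainty factors in [-1, 1], MYCIN-style.

    Confirming evidence accumulates without ever exceeding 1.0;
    disconfirming evidence accumulates symmetrically toward -1.0.
    """
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    # Mixed signs: evidence partially cancels.
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# Two weakly confirming rules strengthen the conclusion,
# but never push certainty past 1.0.
print(combine_cf(0.6, 0.5))  # 0.8
```

The appeal of this scheme was exactly the point above: it let rule-based reasoning degrade gracefully instead of pretending diagnoses were certain.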

Language processing is the same. Language is ambiguous; there can be thousands of possible parse trees for a common sentence. You might be talking with somebody and then get a piece of information that revises your interpretation of what they said an hour ago. It's just like that.

In that time frame I was very interested in the idea that decision theory was the key link between computation and action, whether you were using symbolic methods (e.g. a very plausible set of rules for address matching might be 99.9% reliable in some cases, 97% in others, 2% in others) or learned methods. A model for predicting market prices is priceless, but put it together with a Kelly bettor and you've got a trading strategy.
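Concretely, the Kelly criterion is what turns a probability estimate into an action: a bet size. A minimal sketch for a binary bet (illustrative, not a trading system):

```python
def kelly_fraction(p: float, b: float) -> float:
    """Optimal fraction of bankroll to wager on a binary bet.

    p: estimated probability of winning
    b: net odds received on a win (wager 1 to win b)
    Returns f* = p - (1 - p) / b.
    A negative result means the edge is gone: don't bet.
    """
    return p - (1.0 - p) / b

# A model 60% confident on an even-money bet stakes 20% of bankroll;
# at 40% confidence the fraction goes negative and the answer is "pass".
print(kelly_fraction(0.6, 1.0))
print(kelly_fraction(0.4, 1.0))
```

This is the sense in which decision theory sits between prediction and action: the model supplies p, the utility function decides what to do with it.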

Maybe there is more to his argument than I got, but as I see it he's defending a boundary that isn't there.

ordu•1h ago
The bitter lesson has no utility function, but it has predictive power. Decision theory, Bayesian networks and causality will see niche applications while LLMs get all the money. If the former are as good as their promises, their proponents will keep developing tools and accruing knowledge of how to use them and which problems suit them. That will last until LLMs hit a local maximum and can't move further. They will try to eat even more resources to overcome it, but they'll just get more evidence that LLMs are trapped in a local maximum. Stocks will crash, the market will correct itself, and a lot of smart unemployed people will start looking for ways out of the local maximum.

At that moment things will become really interesting. If decision theory, Bayesianism and causality can show something that combines with LLMs into something marketable, then they will have their big chance. Or maybe those smart people will devise some other way out of the local maximum.

Bayesian methods and causality have their applications, and there are tools for using them, but you can't just feed news into them and get back the most likely structure of a secret global government run by interdimensional lizard people. Perhaps if you combine them with an LLM, the resulting tool will be able to perform that task?

xg15•1h ago
What irks me a bit about the way the Bitter Lesson is interpreted is that it seemingly didn't just throw out handcrafted model/feature generation, but also any attempt to interpret the learned models and features.

Like, in theory, this should be the absolute best time for people interested in analyzing unstructured data: there is this wealth of open-weight models, trained on half the internet, that must have developed all kinds of absolutely insane feature detectors for all kinds of media: programming languages, human-language prose, images, audio, video, whatever you want!

In practice, the models are mostly treated as black boxes and the weights as inscrutable. Which is why we now have the weird situation that our models can understand incredibly subtle and abstract semantic concepts in text, but the pre- and postprocessing is still at the level of regexes and string heuristics, like 50 years ago. There doesn't seem to be any in-between.
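The in-between does exist in interpretability work: fitting a small linear probe on a model's hidden activations to read a concept back out of an otherwise black-box network. A toy numpy sketch on synthetic "activations" (no real model involved; the planted concept direction is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic activations: 256-dim vectors in which one hidden direction
# linearly encodes a binary concept, buried in unit Gaussian noise.
dim, n = 256, 400
concept_dir = rng.normal(size=dim) / np.sqrt(dim)  # unit-scale direction
labels = rng.integers(0, 2, size=n)
signs = labels * 2.0 - 1.0                         # map {0,1} -> {-1,+1}
acts = rng.normal(size=(n, dim)) + 4.0 * np.outer(signs, concept_dir)

# Linear probe via least squares: find w so that acts @ w ~ signs.
w, *_ = np.linalg.lstsq(acts, signs, rcond=None)
preds = (acts @ w > 0).astype(int)
accuracy = (preds == labels).mean()
print(accuracy)
```

The point is only that a few lines of linear algebra can surface a direction the network learned, which is exactly the middle ground between "handcraft everything" and "treat the weights as inscrutable".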

archermarks•1h ago
> Several commenters suggested the original essay was written by an LLM. They were half right. Both that essay and this one were written with Claude as a drafting partner. I directed the argument; the LLM helped with prose. I mention this not as confession but as demonstration: the human brought the utility function, the machine brought the compute. If that division of labour bothers you, I’d suggest the discomfort says more about the Bitter Lesson than about my writing process.

This hits like 100% of the AI prose bingo card.

twoodfin•1h ago
Dammit. “Helping with prose” sounds like “getting a better grade from my English teacher”.

The quality of your prose is important because it increases the effective bandwidth between your thoughts and the reader.

Either the coherent thoughts are there or they’re not. Using an LLM to tune your prose is very much akin to those awful AI-assisted conversions of standard def television to 4K: Inventing details and nonsense structure to fill space.

polotics•24m ago
Yes! The quality of prose is IMHO f(maximal brevity with no loss of clarity, grammar be damned)

LLMs destroy that. always!

TFA is a case in point: It could, so should, fit in four sentences.

exmadscientist•1h ago
> Several commenters suggested the original essay was written by an LLM. They were half right. Both that essay and this one were written with Claude as a drafting partner. I directed the argument; the LLM helped with prose. I mention this not as confession but as demonstration: the human brought the utility function, the machine brought the compute. If that division of labour bothers you, I’d suggest the discomfort says more about the Bitter Lesson than about my writing process.

This paragraph is pretty condescending to your reader. Whatever else is going on with AI authors, the fact is that if your reader can tell you wrote a piece with AI (and I could with this one), you fucked up.

I think one of the longer-term consequences of AI authors will be that writing gets shorter. There's a lot of fluff in a lot of writing (though not as much as there used to be in, say, the 19th century), and much of it's culturally expected. We might end up at a place where writing is much shorter and readers expect their own AI assistants to fill in the gaps. That might not be so bad.

But if you can't write a piece without AI, do you understand what you've written? It could go either way. But the condescension here, combined with the obvious tells, does not make me think highly of this author or his argument.

xg15•1h ago
We have no idea what "drafting partner" means in that case. Maybe the person isn't a native English speaker or is for whatever other reason insecure about their prose? It would be sad if they couldn't make their argument because of that.

I honestly don't like the style of the essay either - maybe reading HN now trains one to view every "It's not X, it's Y" with suspicion. But as long as it's only the style and the author didn't get the entire argument from AI, I think it's worth skipping over it and focusing on what they want to say.

(That's the difference I see from AI slop: with slop, there is no message to parse out because everything is generated. If the author here really only used AI to clean up their prose, I'm fine with it.)

isx726552•1h ago
> Several commenters suggested the original essay was written by an LLM. They were half right. Both that essay and this one were written with Claude as a drafting partner. I directed the argument; the LLM helped with prose.

That’s all well and good, but I think he needs to take a closer look at some of the resulting prose and clarify a little more. Most of it is good, but there are some unclear statements, like this (right after his descriptions of “Camp A” and “Camp B”):

> Sutton says Camp B wins. My essay was filed under Camp A. But decision theory belongs to neither camp.

The second sentence quoted above doesn’t specify, but I’m pretty sure it means that it was filed under Camp A by the commenters, and incorrectly at that. If so, it would probably read better as:

> Sutton says Camp B wins. Commenters seemed to file my essay under Camp A, and then dismissed it. But that’s incorrect; decision theory belongs to neither camp.

Or something along those lines.

I honestly think this isn't nit-picky feedback, either. This is a crucial set of sentences which appear to lay out the main point of the essay, so it's vitally important that they be clear ... who "filed" it under a particular camp, and was that filing correct or incorrect? It should be revised to convey that, as well as better connecting it to whatever incorrect conclusions were drawn as a result. The information can be gleaned from the surrounding context of course, but I found that crucial sentence to throw off the flow of what was otherwise a really great essay.

sshine•6m ago
I think you have a fair point, but you bothered to give feedback on how this article could be more coherent, as if a non-assisted writing process would not warrant similar feedback.

The number of blog sentences that end abruptly halfway through has drastically fallen since I applied AI to my writing process.

There is nuance beyond the feeling that you know the author because he uses the em dash a lot, the word "comprehensive" a lot, bullet points with bold text a lot, or detailed summaries reiterating what was just said. I've read a lot from that author, and I could use a break.