frontpage.

Cockpit is a web-based graphical interface for servers

https://github.com/cockpit-project/cockpit
127•modinfo•3h ago•80 comments

Astral to Join OpenAI

https://astral.sh/blog/openai
1167•ibraheemdev•10h ago•721 comments

Google details new 24-hour process to sideload unverified Android apps

https://arstechnica.com/gadgets/2026/03/google-details-new-24-hour-process-to-sideload-unverified...
410•0xedb•6h ago•474 comments

How the Turner twins are mythbusting modern technical apparel

https://www.carryology.com/insights/how-the-turner-twins-are-mythbusting-modern-gear/
80•greedo•2d ago•38 comments

Return of the Obra Dinn: spherical mapped dithering for a 1bpp first-person game

https://forums.tigsource.com/index.php?topic=40832.msg1363742#msg1363742
213•PaulHoule•3d ago•31 comments

Bombarding gamblers with offers greatly increases betting and gambling harm

https://www.bristol.ac.uk/news/2026/march/bombarding-gamblers-with-offers-greatly-increases-betti...
7•hhs•42m ago•1 comment

Show HN: Three new Kitten TTS models – smallest less than 25MB

https://github.com/KittenML/KittenTTS
287•rohan_joshi•7h ago•94 comments

EsoLang-Bench: Evaluating Genuine Reasoning in LLMs via Esoteric Languages

https://esolang-bench.vercel.app/
48•matt_d•2h ago•14 comments

Be intentional about how AI changes your codebase

https://aicode.swerdlow.dev
42•benswerd•2h ago•20 comments

Noq: n0's new QUIC implementation in Rust

https://www.iroh.computer/blog/noq-announcement
132•od0•5h ago•17 comments

Waymo Safety Impact

https://waymo.com/safety/impact/
173•xnx•3h ago•150 comments

Clockwise acquired by Salesforce and shutting down next week

https://www.getclockwise.com
51•nigelgutzmann•3h ago•33 comments

From Oscilloscope to Wireshark: A UDP Story (2022)

https://www.mattkeeter.com/blog/2022-08-11-udp/
66•ofrzeta•4h ago•12 comments

NanoGPT Slowrun: 10x Data Efficiency with Infinite Compute

https://qlabs.sh/10x
83•sdpmas•4h ago•14 comments

4Chan mocks £520k fine for UK online safety breaches

https://www.bbc.com/news/articles/c624330lg1ko
223•mosura•8h ago•353 comments

“Your frustration is the product”

https://daringfireball.net/2026/03/your_frustration_is_the_product
388•llm_nerd•12h ago•233 comments

Launch HN: Voltair (YC W26) – Drone and charging network for power utilities

43•wweissbluth•6h ago•23 comments

Scaling Karpathy's Autoresearch: What Happens When the Agent Gets a GPU Cluster

https://blog.skypilot.co/scaling-autoresearch/
110•hopechong•6h ago•51 comments

OpenBSD: PF queues break the 4 Gbps barrier

https://undeadly.org/cgi?action=article;sid=20260319125859
175•defrost•9h ago•54 comments

Juggalo makeup blocks facial recognition technology (2019)

https://consequence.net/2019/07/juggalo-makeup-facial-recognition/
216•speckx•10h ago•134 comments

My Random Forest Was Mostly Learning Time-to-Expiry Noise

https://illya.sh/threads/out-of-sample-permutation-feature-importance-for-random
6•iluxonchik•3d ago•0 comments

Minecraft Source Code Is Interesting

https://www.karanjanthe.me/posts/minecraft-source/
8•KMJ-007•1h ago•2 comments

An update on Steam / GOG changes for OpenTTD

https://www.openttd.org/news/2026/03/19/steam-changes-update
248•jandeboevrie•6h ago•179 comments

Android developer verification: Balancing openness and choice with safety

https://android-developers.googleblog.com/2026/03/android-developer-verification.html
20•WalterSobchak•3h ago•8 comments

Tesla: Failure of the FSD's degradation detection system [pdf]

https://static.nhtsa.gov/odi/inv/2026/INOA-EA26002-10023.pdf
143•doener•3h ago•63 comments

The Need for an Independent AI Grid

https://amppublic.com/
12•olalonde•2h ago•1 comment

The Shape of Inequalities

https://www.andreinc.net/2026/03/16/the-shape-of-inequalities/
88•nomemory•9h ago•14 comments

Xiaomi launches next-gen SU7 with 902 km range and Lidar, still undercuts Tesla

https://electrek.co/2026/03/19/xiaomi-launches-next-gen-su7-902-km-range-undercuts-tesla/
55•breve•2h ago•20 comments

macOS 26 breaks custom DNS settings including .internal

https://gist.github.com/adamamyl/81b78eced40feae50eae7c4f3bec1f5a
302•adamamyl•8h ago•150 comments

Connecticut and the 1 Kilometer Effect

https://alearningaday.blog/2026/03/19/connecticut-and-the-1-kilometer-effect/
36•speckx•5h ago•24 comments

Be intentional about how AI changes your codebase

https://aicode.swerdlow.dev
41•benswerd•2h ago

Comments

benswerd•2h ago
I've seen a lot of people talking about how AI is making codebases worse. I reject that: people are making codebases worse by not being intentional about how their AI writes code.

This is my take on how to not write slop.

tabwidth•1h ago
The intention part is right but the bottleneck is review. AI is really good at turning your clean semantic functions into pragmatic ones without you noticing. You ask for a feature, it slips a side effect into something that was pure, tests still pass. By the time you catch it you've got three more PRs built on top.
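The failure mode described here can be made concrete with a toy Python sketch (names and scenario hypothetical): a pure function quietly gains a side effect, and a test that only checks the return value still passes.

```python
# A module-level dict the "refactored" version starts mutating.
CACHE = {}

def normalize_before(email: str) -> str:
    """Pure: the output depends only on the input."""
    return email.strip().lower()

def normalize_after(email: str) -> str:
    """Same signature and return value, but now it mutates
    global state as a side effect."""
    result = email.strip().lower()
    CACHE[result] = True  # the slipped-in side effect
    return result

# A return-value-only test cannot tell the two apart:
assert normalize_before("  Foo@Example.com ") == "foo@example.com"
assert normalize_after("  Foo@Example.com ") == "foo@example.com"

# Only a check on state catches the change:
assert CACHE == {"foo@example.com": True}
```

Both versions pass the same behavioral test, which is why a reviewer who only reads the diff of the tests never sees the purity violation.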
peacebeard•1h ago
In my experience trying to push the onus of filtering out slop onto reviewers is both ineffective and unfair to the reviewer. When you submit code for review you are saying "I believe to the best of my ability that this code is high quality and adequate, but it's best to have another person verify that." If the AI has done things without you noticing, you haven't reviewed its output well enough and shouldn't yet be submitting it to another person.
peacebeard•1h ago
Agreed. When you submit code you must take responsibility for its quality. Blaming AI for low quality code is like blaming hammers for giant holes in the drywall. If you don't know how to use AI tools without confidence that your code is high quality, you need to re-assess how you use those tools. I'm not saying AI tools are bad. They're great. But the prevalence of people pushing the tools beyond their limits is not a failure of the tools. Vibe coding may be fun but tight-leash high-oversight AI usage is underrated in my opinion.
systemsweird•1h ago
I think there’s just a lot of people who would love to push lower quality code for a variety of legitimate and illegitimate reasons (time pressure, cost, laziness, skill issues, bad management, etc). AI becomes a perfect scapegoat for lowered code quality.

And you’re completely right, humans are still the ones in control here. It’s entirely possible to use AI without lowering your standards.

mika-el•1h ago
We did something similar — wrote markdown skill files that teach agents our coding patterns. Naming conventions, which libraries to use, how we structure components. Basically onboarding docs but for agents.

One thing we learned the hard way: shorter rules work better. We started with a 600-line comprehensive guide and the agent actually got worse. Every token in the skill competes for context window space with your actual conversation. Once we cut to under 200 lines per skill, consistency went up significantly.

The semantic vs pragmatic function split in this post is a good frame. I am not sure agents need that level of abstraction explained to them though — what they actually need is concrete examples. "Use pdfplumber not PyPDF2" beats "prefer minimal semantic functions" every time.
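A minimal skill file along the lines described above might look like this (the specific libraries and conventions are hypothetical, chosen to illustrate the concrete-examples-over-abstractions point):

```markdown
# PDF parsing skill

## Libraries
- Use `pdfplumber`, not `PyPDF2`, for text extraction.
- Reuse existing dependencies before adding new ones.

## Conventions
- Component files: `PascalCase.tsx`; hooks: `useThing.ts`.
- Keep helpers pure; route all I/O through the `clients/` layer.

## Don't
- Don't restate generic best practices the model already knows.
```

Kept under a couple hundred lines, a file like this spends context budget only on rules the model could not infer from general training.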

p1necone•1h ago
I haven't really extensively evaluated this, but my instinct is to really aggressively trim any 'instructions' files. I try to keep mine at a mid-double-digit linecount and leave out anything that's not critically important. You should also be skeptical of any instructions that basically boil down to "please follow this guideline that's generally accepted to be best practice" - most current models are probably already aware - stick to things that are unique to your project, or value decisions that aren't universally agreed upon.
w29UiIm2Xz•32m ago
Shouldn't all of this be implicit from the codebase? Why do I have to write a file telling it these things?
cjonas•27m ago
For any sufficiently large codebase, the agent only ever has a very small % of the code loaded into context. Context engineering strategies like "skills" allow the agent to more efficiently discover the key information required to produce consistent code.
cyanydeez•27m ago
mostly because reading the code base fills up the context window; as you aggregate context, you then need to synthesize the basics. these things aren't intelligent; they don't know what's useless and what's useful. they're only as accurate as the structure you surround them with.
keeganpoppen•15m ago
it’s not that shorter rules are intrinsically better, it’s that longer rules tend to have irrelevant junk in them. ceteris paribus, longer rules are better. it’s just most of the time the longer rules fall under the Blaise Pascal-ian “i regret i didn’t have time to make this shorter”.
slopinthebag•13m ago
AI comments are against the rules. Fuck off, bot.
benswerd•9m ago
Wrestled with this a bit. The struggle with this one in particular is that it's as much for people to read as it is for agents, and the agents are secondary in this case.

I generally agree on this as best practice today, though I think it will become irrelevant in the next 2 generations of models.

mrbluecoat•1h ago
..but unintentional AI (aka Modern Chaos Monkey) is so much more fun!
benswerd•1h ago
LOL fr. I've been talking with some friends about RL on chaos monkeying the codebase to benchmark on feature isolation for measuring good code.
ChrisMarshallNY•1h ago
Because of the way that I use AI, I am constantly looking at the code. I usually leave it alone, if I can; even if I don't really like it.

I will often go back, after the fact, and ask for refactors and documentation.

It works. Probably a lot slower than using agents, but I test every step, and it is a lot faster than I would do it, unassisted.

benswerd•1h ago
I don't think testing the product alone is good enough, because when you give it tests it has to pass it prioritizes passing them at the expense of everything else — including code quality. I've seen it pull in random variables, break semantic functions, etc.
ChrisMarshallNY•1h ago
Oh, no. I test. Each. and. Every. Step.

I use a test harness, and step through the code, look at debug logs, and abuse the code, as much as possible.

Kind of a pain, but I find unit tests are a bit of a "false hope" kind of thing: https://littlegreenviper.com/testing-harness-vs-unit/

clbrmbr•1h ago
Page not rendering well on iPhone Safari.

Good content tho!

gravitronic•7m ago
*adds "be intentional" to the prompt*

Got it, good idea.