
How do you capture WHY engineering decisions were made, not just what?

30•zain__t•2h ago
We recently onboarded a senior engineer: strong, eight years of experience. He spent three weeks playing code archaeologist just to understand WHY our codebase looks the way it does.

Not what the code does. That was fast. But the reasoning behind decisions:

- Why Redis over in-memory cache?

- Why GraphQL for this one service but REST everywhere else?

- Why that strange exception in the auth flow for enterprise users?

Answers were buried in closed PRs with no descriptions, 18-month-old Slack threads, and the heads of two engineers who left last year.

We tried ADRs. Lasted 6 weeks. Nobody maintained them. We tried PR description templates. Ignored within a month. We have a Notion architecture doc. Last updated 14 months ago.

Every solution requires someone to manually write something. Nobody does.

Curious how teams at HN actually handle this:

1. Do you have a system that actually works long-term?

2. Has anyone automated any part of this?

3. Or is everyone quietly suffering through this on every new hire?

Comments

airspresso•1h ago
Also wrestling with this challenge at the moment and curious to hear experiences from others. Even though it requires human input, the capture and the way it's updated have to be automated.
zain__t•1h ago
Completely agree, the manual capture is exactly where it breaks down every time. Curious, what's your current setup? GitHub + Slack, or something different?
CGMthrowaway•1h ago
Put the ADR in the PR as a requirement. Then automate extracting the decision info into an actual ADR.
zain__t•1h ago
I am already working to automate the process.
CGMthrowaway•1h ago
AI could even analyze the diff and predictively pre-populate the decision info, though that might be counterproductive in practice
4b11b4•36m ago
But that's still after the decision was made... I guess it's still useful. But maybe that person didn't actually weigh a decision and tradeoffs when they made the change.
_moof•1h ago
This is called rationale and it goes in the design document. As work proceeds, it goes into tickets and meeting notes, and gets fed back into the design doc.
rustyzig•1h ago
> - Why Redis over in-memory cache?

> - Why GraphQL for this one service but REST everywhere else?

> - Why that strange exception in the auth flow for enterprise users?

These are all implementation details that shouldn't actually matter. What does matter is that the properties of your system are accounted for and validated. That goes in your test suite, or type system if your language has a sufficiently advanced type system.

If replacing Redis with an in-memory cache is a problem technically, your tests/compiler should prevent you from switching to an in-memory cache. If you don't have that, that is where you need to start. Once you have those tests/types, many of the questions will also get answered. It won't necessarily answer why Redis over Valkey, but it will demonstrate with clear intent why not an in-memory cache.
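One way to read this suggestion is as a property test: encode the property that justified the choice, not the history of it. A minimal sketch, where the `InMemoryCache`/`SharedCache` classes and `survives_restart` are invented stand-ins (the shared one loosely approximates Redis), not anyone's real stack:

```python
# Sketch: encode "why Redis, not an in-memory dict" as a test rather than a doc.
# All class and function names here are hypothetical illustrations.

class InMemoryCache:
    """A naive per-process cache: state dies with the process."""
    def __init__(self):
        self._data = {}
    def set(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)
    def restart(self):
        # Simulates a process restart (or a second worker): fresh empty state.
        return InMemoryCache()

class SharedCache(InMemoryCache):
    """Stand-in for Redis: state lives outside any one process."""
    _store = {}  # class-level, so every instance sees the same data
    def __init__(self):
        self._data = SharedCache._store
    def restart(self):
        return SharedCache()

def survives_restart(cache_cls) -> bool:
    """The property that justified Redis: cached values outlive one process."""
    cache = cache_cls()
    cache.set("session:42", "alice")
    return cache.restart().get("session:42") == "alice"

assert not survives_restart(InMemoryCache)  # documents why a plain dict won't do
assert survives_restart(SharedCache)        # the Redis-like behaviour passes
```

A test like this fails loudly the moment someone swaps in a cache without the required property, which is exactly the "why" enforcement the comment describes.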

Willamin•1h ago
For context, my engineering team is fairly small – no guarantees this scales well for larger organizations. I capture the reasons for decisions, why code was written a particular way or why a particular architecture was chosen, in commit messages. We follow a squash-and-rebase flow, so each PR is ultimately a single commit before merging. During that squash, I'll update the commit message, sometimes to a few paragraphs. Later, when I'm curious why we made a decision in the past, I can use git blame to navigate back to the point where I can find the answer.
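This blame-driven archaeology can be partly mechanized. A rough sketch that indexes a hand-written, hypothetical fragment of `git blame --porcelain` output (field layout follows git's porcelain format) so each source line maps back to the summary of the squashed commit that introduced it:

```python
# Hypothetical fragment of `git blame --porcelain` output for two lines of a
# file. The hashes, author, and summaries are invented for illustration.
SAMPLE = """\
3f2a1b4c 1 1 1
author Willamin
summary cache: switch sessions to Redis (see body for rationale)
\tcache = RedisCache(url)
9c8d7e6f 2 2 1
author Willamin
summary auth: add enterprise SSO exception
\tif user.is_enterprise: skip_mfa()
"""

def blame_index(porcelain: str) -> dict:
    """Map each source line to the summary of the commit that introduced it."""
    index, summary = {}, ""
    for line in porcelain.splitlines():
        if line.startswith("summary "):
            summary = line[len("summary "):]
        elif line.startswith("\t"):
            # Porcelain prefixes actual file content with a tab.
            index[line[1:]] = summary
    return index

idx = blame_index(SAMPLE)
assert idx["cache = RedisCache(url)"] == \
    "cache: switch sessions to Redis (see body for rationale)"
```

From the summary you can `git show` the full squashed message, which is where the paragraphs of rationale live in this workflow.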
durzo22•1h ago
LLM post
rich_sasha•1h ago
Doesn't really answer your question, but IME this is sort of unavoidable unless you're massive and can afford people whose job is just to document this kind of stuff.

Reason being, a lot of this stuff happens for no good reason, or by accident, or for reasons that no longer apply. Someone liked the tech so used it - then left. Something looked better in a benchmark, but then the requirements drifted and now it's actually worse but no one has the time to rewrite. Something was inefficient but implemented as a stop gap, then stayed and is now too hard to replace.

So you can't explain the reasons when much of the time there aren't any.

The non-solutions are:

- document the high level principles and stick to them. Maybe you value speed of deployment, or stability, or control over codebase. Individual software choices often make sense in light of such principles.

- keep people around and be patient when explaining what happened

- write wiki pages, without that much effort at being systematic and up to date. Yes, they will drift out of sync, but they will provide breadcrumbs to follow.

SsgMshdPotatoes•1h ago
I thought about this too recently. I guess documenting every consideration along the way would take way too much time (would be longer than the documentation of actual implementations), but one of these days this seems likely to change?
zain__t•48m ago
That day is now, because the documentation doesn't have to be written anymore. The conversation that led to the decision already exists — in your PR comments, Slack threads, and tickets. The reasoning is already there; it just needs to be extracted and structured automatically, not written from scratch. That's the shift that makes this viable in 2026 when it wasn't in 2020. LLMs can read the noise and surface the signal, with zero extra time from the developer.
lowenbjer•1h ago
My take after running engineering teams at multiple companies: documentation survives when it lives next to the code. File-level header comments explaining each component's purpose and role in the architecture. A good README tying it all together. If you compartmentalize architecture into folders, a README per folder. This works for humans, LLMs, and GitHub search alike.

ADRs, Notion docs, and Confluence pages die because they're separate from the code. Out of sight, out of mind.

If you want to be really disciplined about it, set up an LLM-as-judge git hook that runs on each PR. It checks whether code changes are consistent with the existing documentation and blocks the merge if docs need updating. That way the enforcement is automated and you only need a little human discipline, not a lot.

There's no way to avoid some discipline though. But the less friction you add, the more likely it sticks.
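The LLM-as-judge hook described above might look roughly like this. The judge is stubbed out with a trivial keyword check, and every name here (`docs_out_of_date`, `keyword_judge`, the README text) is an assumption for illustration, not a real tool:

```python
# Sketch of a pre-merge "docs judge". A real version would replace
# keyword_judge with an LLM call that compares the diff against the docs.

def docs_out_of_date(changed_files, readme_text, judge_fn):
    """Return the changed files the judge says the README fails to cover."""
    return [f for f in changed_files if not judge_fn(f, readme_text)]

def keyword_judge(path, readme_text):
    # Stand-in judge: a module counts as "covered" if the README mentions it.
    name = path.rsplit("/", 1)[-1]
    return name in readme_text

readme = "## Architecture\ncache.py wraps Redis; api.py serves GraphQL."
stale = docs_out_of_date(["src/cache.py", "src/billing.py"],
                         readme, keyword_judge)
assert stale == ["src/billing.py"]  # hook would block the merge here
```

Wired into a pre-merge hook or CI check, a non-empty `stale` list blocks the merge until the docs mention the new component, which automates the enforcement while keeping the human discipline small.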

zain__t•1h ago
The git hook idea for enforcing doc updates is really interesting. Has that actually worked long-term for your team, or does it eventually get bypassed?
sdeframond•1h ago
Sometimes the best way to learn why a (Chesterton's) fence is blocking the road is... to remove it and see what happens!

Sorry, not really an answer to your problem. But I feel you, this is a genuinely hard problem.

Keep in mind that, pretty often, the reason something is the way it is comes down to "no real reason", "that seemed easier at the time", or "we didn't know better". At least if you don't work on critical systems.

4star3star•7m ago
As a counterpoint, it may be quite subtle and hard to notice what goes wrong when you remove something to see what happens. Imagine you see a large SQL query with a bit of logic that doesn't make sense to you. If you change it without knowing why it was that way, and users keep using report output from that query, who is going to notice when they get 982 records in their report instead of 983 one day? It's easy to spot when erroneous data APPEARS, but it's a lot harder to notice when valid data DISAPPEARS. Oh, they really did have a good reason to use OUTER APPLY instead of CROSS APPLY. Oops.
hammadfauz•1h ago
If you do these things:

* File issues in a project tracker (GitHub, Jira, Asana, etc.)

* Use the issue id at the start of every commit message for that issue

* Use a single branch per issue, whose name also starts with the issue id

* Use a single PR to merge that branch and close the issue

* Don't squash merge PRs

You can use `git blame` to get the why.

git blame gives you the change set and the commit message. Use the issue id in the commit message to get to the issue. The issue description and comments provide part of the story.

Use the issue id to track down the branch and PR. The PR comments give you the rest of the story.
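The "issue id at the start of every commit message" convention is easy to enforce or mine mechanically. A sketch, assuming a Jira-style `ABC-123` pattern (the pattern and examples are invented, not from the thread):

```python
import re

# Assumed convention: commit messages start with an id like "AUTH-412 ...".
ISSUE_ID = re.compile(r"^([A-Z]{2,4}-\d+)\b")

def issue_of(commit_message):
    """Extract the leading issue id from a commit message, or None."""
    match = ISSUE_ID.match(commit_message)
    return match.group(1) if match else None

assert issue_of("AUTH-412 allow enterprise SSO bypass") == "AUTH-412"
assert issue_of("fix typo") is None  # untagged commit: no breadcrumb to follow
```

The same regex can run in a commit-msg hook to reject untagged commits, or over `git log` output to join history back to the tracker.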

mb7733•1h ago
Overall I agree with the approach, but just wondering, why do the first point if you are already doing the last two?

> * Use the issue id at the start of every commit message for that issue

> * Use a single branch per issue, whose name also starts with the issue id

>* Use a single PR to merge that branch and close the issue

To me the noise at the start of every message is unnecessary, and given a lot of interfaces only display 80 chars of the message by default, it's not negligible.

hammadfauz•1h ago
If the pattern is consistent, it gets easier to ignore the noise when you don't need it: say, a three- or four-digit number, or three letters and three numbers separated by a hyphen.

Sometimes, an issue might depend on another issue and contain commits from the other branch. Tagging each commit makes it easier to pinpoint the exact reason for that change.

4star3star•13m ago
This seems reasonable to me. Devs and BAs flesh out business processes and ultimately document decisions in our Jira issue comments. When you have an issue id handy, it's not that hard to go read what the rationale was for a feature.

I have been ignoring Jira's AI summary, but I suppose that could be useful if the comments were very long.

TheChelsUK•1h ago
ADRs, but give ownership to the team. They should sit in the most relevant repo, but a central repo called ADRs has issue templates and a README which links off to all the repos and their ADRs. An ADR cannot be approved and its issue closed until all the docs are in place. Everyone can see the open ADRs in the main repo, view the issue, and comment on it. Accountability is there if an assigned issue stays open for days/weeks etc.

GitHub issue templates are perfect for ADR templates. An engineering All Hands is a great place to mention them and for teams to comment on the decision and outcomes.

nonameiguess•1h ago
ADRs are the only way I've ever seen it done well for a sufficiently large project, let alone something like an entire product line or suite of many projects. Sometimes those span multiple organizations; think of the Internet and the IETF RFCs. Yes, they don't give a complete picture, and implementations may not match the specification. I don't really agree they require maintenance: you just have to write up a new one any time you change a decision and give a reason why. Yes, it takes a lot of organizational discipline to do that. You probably can't be in panic mode, and it won't work for a startup that needs to ship in five weeks or they can't make payroll. But there isn't really a substitute for discipline.

As maligned as it can be, the single best organization I've ever been a part of for code archaeology, on a huge multi-decade project that spanned many different companies and agencies of the government, simply made diligent use of the full Atlassian suite. Bitbucket, Jira, Confluence, Fish Eye, and Crucible all had the integrations turned on. Commits and PRs had a Jira ticket number in them. Follow that link to the original story, epic, whatever the hell it was, and that had further links to ADRs with peer review comments. I don't know that I ever really had to ask a question. Just find a line of interest and follow a bunch of links and you've got years of history on exactly what a whole bunch of different people (not just the one who committed code) were thinking and why they made the decisions they made.

I've always thought about the tradeoffs involved. They were waterfall. They didn't deliver fast. Their major customers were constantly trying to replace them with cheaper, more agile alternatives. But competitors could never match the strict non-functional requirements for security, reliability, and performance, and the non-tolerance of regressions, so it never happened, and they've had a decades-long monopoly in what they do because of it.

lwhsiao•1h ago
> Every solution requires someone to manually write something. Nobody does.

Hot take: hire people that value writing. Create a culture around that.

Oxide is a great example of a company culture that values writing, as shown by their rigorous and prolific RFDs: https://rfd.shared.oxide.computer/rfd/0001

See also: https://oxide-and-friends.transistor.fm/episodes/rfds-the-ba...

Many of these RFDs have hit HN by themselves.

hakunin•1h ago
Simple: ask "why" in a PR review, put the answer in a code comment. If there is a bigger / higher level "why", add it to git commit description. This way it's auto-maintained with code, or stays frozen at a point in time in a git commit.

More: https://max.engineer/reasons-to-leave-comment

Much more: https://max.engineer/maintainable-code

hermitcrab•1h ago
I worked on the problem of recording 'design rationale' ~25 years ago. It is a big problem, particularly for long-lived artefacts such as nuclear reactors. Nobody is quite sure exactly why decisions were made, as the original designers have forgotten, retired, or been run over by buses. And this makes changing things difficult and risky. The biggest problem is that there is no real incentive for the people making the decisions to write down why they made them:

* they may see it as reducing their career security

* they may see it as opening them up to potential prosecution

* it takes a lot of time

zain__t•1h ago
This is incredibly valuable context, thank you. The career security point especially is something I hadn't fully articulated, but it explains why ADRs always die: nobody wants to document themselves out of a job. The approach I'm exploring tries to remove the human writing step entirely, passively capturing decisions from PRs, Slack threads, and tickets and auto-drafting the rationale. The human just approves or dismisses in one click. The incentive problem flips: instead of asking someone to document themselves, you're just asking them to approve something already written. Much lower friction. Curious, from your 25 years on this: do you think the passive capture angle addresses the incentive problem, or does the resistance run deeper than the writing effort?
Gggg1234•1h ago
The root problem here is that "why" is a living artifact — it emerges from conversations, ticket comments, Slack threads, PR reviews, and verbal discussions scattered across a dozen tools. Asking engineers to manually consolidate that into a single doc is asking them to do archaeology in real time while also shipping features.

What's worked better in my experience is treating context as something you accumulate continuously rather than document retroactively. The idea is to build a system that:

- Passively ingests signals — PRs, commit messages, linked tickets, review comments, even Slack threads — as they happen, not after the fact

- Infers the "why" from the surrounding discussion and the code diff together, rather than relying on someone to write an explicit rationale

- Stores decisions as structured, queryable knowledge linked to the specific code they relate to, so a new engineer can ask "why does this auth exception exist?" and get a synthesized answer from the original sources

The key insight is that the reasoning already exists — it just lives in unstructured, fragmented form across tools. The job isn't to create new documentation workflows, it's to extract and surface the signal that's already being generated as a natural byproduct of engineering work.

ADRs die because they demand extra effort. A system that passively understands what changed, where, and in what conversational context doesn't ask engineers to change behavior at all.
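One minimal sketch of what "structured, queryable knowledge linked to the specific code" could mean in practice. The `Decision` schema, `DecisionLog`, and the example record (including the PR link) are all invented for illustration; real records would be extracted from PRs, tickets, and chat threads:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    question: str   # e.g. "Why Redis over in-memory cache?"
    rationale: str  # synthesized from the original discussion
    paths: list     # code paths the decision is anchored to
    sources: list = field(default_factory=list)  # PR / ticket / thread links

class DecisionLog:
    """Append-only store of decisions, queryable by code path."""
    def __init__(self):
        self._decisions = []

    def record(self, decision):
        self._decisions.append(decision)

    def why(self, path):
        """Every recorded decision whose anchor covers this file."""
        return [d for d in self._decisions
                if any(path.startswith(p) for p in d.paths)]

log = DecisionLog()
log.record(Decision("Why the enterprise exception in auth?",
                    "Legacy SSO contract; revisit when it expires",
                    ["src/auth/"],
                    ["PR #1841 (hypothetical)"]))

hits = log.why("src/auth/flow.py")
assert [d.question for d in hits] == ["Why the enterprise exception in auth?"]
```

The point of the structure is the `why(path)` query: a new engineer asks about a file and gets back the decisions (with their sources) that touch it, instead of grepping closed PRs.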

zain__t•58m ago
This is the clearest articulation of the problem I've seen; you've basically described exactly what I'm building. The passive ingestion angle, treating reasoning as a byproduct of work already being done rather than a separate documentation task, is the core insight that makes this viable where ADRs failed. I'm in early development. Would you be open to a 15-minute conversation? Your framing here is sharper than anything I've heard from the 20 engineers I've already talked to.
actionfromafar•45m ago
True. It only works if the "why" is an actual, required deliverable. Maybe NASA? Can someone chime in who worked in an org like that?
physicles•1h ago
First, recognize that, for the first time ever, having good docs actually pays dividends. LLMs love reading docs and they're fantastic at keeping them up to date. Just don't go overboard, and don't duplicate anything that can be easily grepped from the codebase.

Second, for #3, it's a new hire's job to make sure the docs are useful for new hires. Whenever they hit friction because the docs are missing or wrong, they go find the info, and then update the docs. No one else remembers what it's like to not know the things they know. And new hires don't yet know that "nobody writes anything" at your company.

In general, like another poster said, docs must live as close as possible to the code. LLMs are fantastic at keeping docs up to date, but only if they're in a place that they'll look. If you have a monorepo, put the docs in a docs/ folder and mention it in CLAUDE.md.

ADRs (architecture decision records) aren't meant to be maintained, are they? They're basically RFCs: a tool for communicating a proposal and holding a discussion. If someone writes a nontrivial proposal in a Slack thread, say "I won't read this until it's in an ADR."

IMHO, PRs and commits are a pretty terrible place to bury this stuff. How would you search through them? Dump all commit descriptions longer than 10 words into a giant .md and ask an LLM? No, you shouldn't rely on commits to tell you the "why" for anything larger in scope than that particular commit.

It's not magic, but I maintain a rude Q&A document that basically has answers to all the big questions. Often the questions were asked by someone else at the company, but sometimes they're to remind myself ("Why Kafka?" is one I keep revisiting because I want to ditch Kafka so badly, but it's not easy to replace for our use case). But I enjoy writing. I'm not sure this process scales.

zain__t•56m ago
The Q&A doc you're maintaining is fascinating; you've essentially hand-built the thing I'm trying to automate. The 'Why Kafka?' entry is exactly the kind of decision that disappears when you leave. The search problem you raised is the core of what I'm solving — not dumping commits into a .md, but extracting structured decisions from the conversation that surrounded the commit: the Slack debate, the PR review, the ticket context. Then making it queryable by the code it relates to. You said you're not sure your process scales; what happens to that Q&A doc if you leave tomorrow?
al_borland•54m ago
If it’s something in the code, that’s where I use comments. It’s the only place people have a chance of seeing it. Even when I add these comments some people ask me about the code instead of reading them. This isn’t just for others, I forget as well. Something to the effect of…

# This previously used ${old-solution}, but has moved to ${new-solution} because ${reason}

Or

# This is ugly and doesn’t make sense, but ${clean-logical-way} doesn’t work due to ${reason}. If you change ${x} it will break.

Or

# This was a requirement from ${person} on ${date}. We want to remove this, but will need to wait until ${person} no longer needs it or leaves the company.

zain__t•51m ago
Those comment templates are actually really well structured; you've invented a mini decision-record format without calling it that. The problem you're hitting is discoverability: the why is there, but only if you happen to read that exact line. What if a new dev could ask 'why does this auth flow work this way?' and your comment were part of the synthesized answer, along with the PR, the Slack thread, and the ticket that created it?
4b11b4•39m ago
Putting decisions inside the code is interesting... but scattered. Some decisions are made way higher up and implicitly touch many places.
iSnow•47m ago
I built an agentic framework that distills ADRs from Teams meetings where everyone discusses freely. Works surprisingly well to record the WHY without someone having to do the job.
4b11b4•39m ago
Sounds pretty cool. Is this published?
gardenhedge•36m ago
Do people discuss detail on Teams in your company? In my place it turns into calls...
vova_hn2•39m ago
I suppose you are trying to "warm up" the audience before announcing your product, which is... fine, I guess.

I also had an idea for a solution to this problem a long time ago.

I wanted to make a thing that would let you record a meeting (in the company where I worked back then, such things were mostly discussed in person), transcribe it, and link parts of the conversation to relevant tickets, pull requests, and git commits.

Back then the tech wasn't ready yet, but now it actually looks relatively easy to do.

For now, I try to leave such breadcrumbs manually, whenever I can. For example, if the reason why a part of the code exists seems non-obvious to me, I will write an explanation in a comment/docstring and leave a link to a ticket or a ticket comment that provides additional context.

MOSI2•37m ago
Cryptic comments left in the code.
wesselbindt•36m ago
> Why Redis over in-memory cache?

Sometimes the answer to "why?" is that the dev had a hammer and the codebase was starting to look an awful lot like a nail. In-memory cache isn't considered as a serious option nearly enough imho.

soniclettuce•35m ago
I can't say ADRs work that great, in my experience, but the flaw was more in connecting them to other architectural material to make them actually discoverable, and in drawing the boundaries in a logical way (what goes into an ADR and what goes into a living design doc?).

"Not maintained" seems kinda weird to me, because at least as I see an ADR, it's like a point in time decision right? "In this situation, we looked at these options, and chose this for these reasons". You don't go back and update it. If you're making a big change, you make a new ADR with your new reasons.

One place I worked did have an interesting idea of basically forcing (not quite) the new hires to take notes on all their onboarding questions/answers as they went and then sticking it in the company docs. It at least meant that incorrect onboarding docs got fixed quickly. Sometimes you had good reasons for stuff, sometimes the reason is "dunno, that's just what we do and it seems hard to change".

water_badger•34m ago
step one: have you ever heard of a guy named Aristotle
4b11b4•32m ago
My master's thesis is wrt scaffolding ADRs. I draw a fine line between required human input and what's safe to scaffold. There's a lot of tooling; I'll omit most details, but it involves recursively scaffolding/pruning and maintenance over time.
4b11b4•31m ago
I have a ton of papers to read wrt making decisions, human-AI interaction, ADRs, etc., if you're interested.
gardenhedge•29m ago
Your company is missing an architect role. An architect would know why Redis over in-memory cache and would have that pattern documented. They would definitely know why GraphQL for the one service but REST everywhere else; they would have it documented from design approval meetings.
andrewf•26m ago
Did he write down everything he learned? That way the next person only needs to cover the intervening time period.

Conceivably LLMs might be good at answering questions from an unorganized mass of timestamped documents/tickets/chat logs. All the stuff that exists anyway without any extra continuous effort required to curate it - I think that's key.

moltar•9m ago
I try to leave as many crumbs as I can in the PR description, which becomes the commit message. I link issues, Slack threads, articles, and docs. Of course, I also explain the reasoning.
mentalgear•5m ago
Keep the reasoning as close to the code as possible.

1. Code should be self-explanatory; so should variable names, function names, and the overall shape.

2. For the remaining non-obvious, bigger design decisions, add a comment header (e.g. jsdoc) above the main section's code block, and possibly refactor it out into its own file. Prefer a large comment header (and possibly some inline comments) outlining an important architectural part over having that knowledge dissipate with time, separate external docs, and departing coworkers.