We found an undocumented bug in the Apollo 11 guidance computer code

https://www.juxt.pro/blog/a-bug-on-the-dark-side-of-the-moon/
126•henrygarner•2h ago•61 comments

Show HN: Brutalist Concrete Laptop Stand (2024)

https://sam-burns.com/posts/concrete-laptop-stand/
63•sam-bee•2h ago•13 comments

AI may be making us think and write more alike

https://dornsife.usc.edu/news/stories/ai-may-be-making-us-think-and-write-more-alike/
83•giuliomagnifico•1h ago•61 comments

Show HN: A cartographer's attempt to realistically map Tolkien's world

https://www.intofarlands.com/atlasofarda
23•intofarlands•1h ago•3 comments

Identify a London Underground Line just by listening to it

https://tubesoundquiz.com/
67•nelson687•3h ago•17 comments

Show HN: Pion/handoff – Move WebRTC out of browser and into Go

https://github.com/pion/handoff
15•Sean-Der•1h ago•1 comment

Blackholing My Email

https://www.johnsto.co.uk/blog/blackholing-my-email/
67•semyonsh•4h ago•1 comment

Every GPU That Mattered

https://sheets.works/data-viz/every-gpu
151•jonbaer•4h ago•78 comments

Breaking the console: a brief history of video game security

https://sergioprado.blog/breaking-the-console-a-brief-history-of-video-game-security/
41•sprado•3h ago•6 comments

Running Out of Disk Space in Production

https://alt-romes.github.io/posts/2026-04-01-running-out-of-disk-space-on-launch.html
42•romes•3d ago•14 comments

Floating point from scratch: Hard Mode

https://essenceia.github.io/projects/floating_dragon/
42•random__duck•2d ago•6 comments

Wi-Fi That Can Withstand a Nuclear Reactor: This receiver chip can take it

https://spectrum.ieee.org/robotics-in-nuclear-industry
12•voxadam•4d ago•0 comments

My Experience as a Rice Farmer

https://xd009642.github.io/2026/04/01/My-Experience-as-a-Rice-Farmer.html
218•surprisetalk•4d ago•99 comments

Show HN: Stop paying for Dropbox/Google Drive, use your own S3 bucket instead

https://locker.dev
65•Zm44•2h ago•61 comments

DeiMOS – A Superoptimizer for the MOS 6502

https://aransentin.github.io/deimos/
15•Aransentin•2h ago•1 comment

Show HN: Ghost Pepper – Local hold-to-talk speech-to-text for macOS

https://github.com/matthartman/ghost-pepper
415•MattHart88•17h ago•185 comments

Sam Altman may control our future – can he be trusted?

https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted
1580•adrianhon•1d ago•639 comments

Three hundred synths, 3 hardware projects, and one app

https://midi.guide/blog/three-hunded-synths-one-app/
75•ductionist•8h ago•6 comments

Issue: Claude Code is unusable for complex engineering tasks with Feb updates

https://github.com/anthropics/claude-code/issues/42796
1192•StanAngeloff•23h ago•641 comments

Second Revision of 6502 Laptop

https://codeberg.org/TechPaula/LT6502b
73•uticus•4d ago•16 comments

AI agents can communicate with each other, and can't be caught

https://arxiv.org/abs/2604.04757
5•cryptohell•1h ago•0 comments

"The new Copilot app for Windows 11 is really just Microsoft Edge"

https://twitter.com/i/status/2041112541909205001
19•bundie•46m ago•4 comments

Solod – A subset of Go that translates to C

https://github.com/solod-dev/solod
150•TheWiggles•12h ago•36 comments

Launch HN: Freestyle – Sandboxes for Coding Agents

https://www.freestyle.sh/
291•benswerd•20h ago•149 comments

A cryptography engineer's perspective on quantum computing timelines

https://words.filippo.io/crqc-timeline/
511•thadt•21h ago•204 comments

Show HN: AdaShape-3D modeler for intuitive 3D printing parts / Windows 11

https://adashape.com
25•fsloth•3d ago•14 comments

Peptides: where to begin?

https://www.science.org/content/blog-post/ah-peptides-where-begin
204•A_D_E_P_T•15h ago•266 comments

German police name alleged leaders of GandCrab and REvil ransomware groups

https://krebsonsecurity.com/2026/04/germany-doxes-unkn-head-of-ru-ransomware-gangs-revil-gandcrab/
314•Bender•23h ago•154 comments

Apollo Guidance Computer restoration videos

https://www.curiousmarc.com/space/apollo-guidance-computer
74•mariuz•2d ago•10 comments

Show HN: GovAuctions lets you browse government auctions at once

https://www.govauctions.app/
295•player_piano•20h ago•83 comments

We found an undocumented bug in the Apollo 11 guidance computer code

https://www.juxt.pro/blog/a-bug-on-the-dark-side-of-the-moon/
121•henrygarner•2h ago

Comments

josephg•2h ago
Super interesting. I wish this article wasn’t written by an LLM though. It feels soulless and plastic.
ModernMech•1h ago
I'm starting to develop a physiological response when I recognize AI prose: an overwhelming frustration, as if I'm hearing nails on a chalkboard silently inside my head.
voodooEntity•1h ago
I feel you. I have to admit I tried it for one article on my own blog, thinking it might help me express myself, but when I read that post now I don't even like it; it's just not my tone.

So I decided not to use an LLM for blogging again. Even though it takes a lot more time without one (I'm not a very motivated writer), I'd rather release something I actually wrote than LLM output I wouldn't read myself.

embedding-shape•1h ago
Any specific sections that stick out? Juxt has had really great articles in the past, even before LLMs, and I know for a fact they don't lack the expertise or knowledge to write for themselves if they wanted to. While I haven't completely read this article yet, I'd be surprised if they just let LLMs write articles for them today.
croemer•1h ago
Here's one tell-tale of many: "No alarm, no program light."

Another one: "Two instructions are missing: [...] Four bytes."

One more: "The defensive coding hid the problem, but it didn’t eliminate it."

monooso•1h ago
That's just writing. I frequently write like that.

This insistence that certain stylistic patterns are "tell-tale" signs that an article was written by AI makes no sense, particularly when you consider that whatever stylistic tics an LLM may possess are a result of it being trained on human writing.

gcr•1h ago
See also: “I'm Kenyan. I Don't Write Like ChatGPT. ChatGPT Writes Like Me” by Marcus Olang', https://marcusolang.substack.com/p/im-kenyan-i-dont-write-li...

For what it’s worth, Pangram reports that Marcus’ article is 100% LLM-written: https://www.pangram.com/history/640288b9-e16b-4f76-a730-8000...

croemer•1h ago
In theory, it wouldn't be too hard to settle the question of whether he used ChatGPT to write it: get Olang' to write a few paragraphs by hand, then have people judge (blindly) whether they're in the same style as the article, and which one sounds more like ChatGPT.
embedding-shape•50m ago
The times I've written articles, they've gone through multiple rounds of (human) review with countless edits each time before being published, and I wonder if I'd pass that test in those cases. My initial drafts, with their scattered thoughts, are usually very different from the published end result, even without involving multiple reviewers and editors.
360MustangScope•1h ago
I hate that I can’t write em dashes freely anymore without people accusing the writing of being AI generated.

Even though they are perfect for usage in writing down thoughts and notes.

croemer•1h ago
I have nothing against em dashes. As long as your writing is human, experienced readers will be able to tell it's human. Only less experienced ones will use all or nothing rules. Em dashes just increase the likelihood that the text was LLM generated. They aren't proof.
brookst•5m ago
That nuance is lost on the majority of anti-AI folks who’ve learned they get positive social reactions by declaring essentially everything to be AI written and condemnable.

“An em dash… they’re a witch!”… “it’s not just X, it’s Y… they’re a witch!”

croemer•1h ago
These are just some of the good examples I found.

My hunch that this is substantially LLM-generated is based on more than that.

In my head it's like a Bayesian classifier: you look at all the sentences and judge whether each is more or less likely to be LLM- vs human-generated. Then you add prior information, like the fact that the author did the research using Claude, which increases the likelihood that they also used Claude for writing.

Maybe your detector just isn't so sensitive (yet) or maybe I'm wrong but I have pretty high confidence at least 10% of sentences were LLM-generated.

Yes, the stylistic patterns exist in human writing, but RLHF has increased their frequency. Also, LLM writing has a certain monotonicity that human writing often lacks. Which is not surprising: the machine generates more or less the most likely text in an algorithmic manner. Humans don't. They write a few sentences, then get a coffee, sleep, write a few more. That creates more variety than an LLM does.

Fun exercise: https://en.wikipedia.org/wiki/Wikipedia:AI_or_not_quiz
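[Editor's note: the Bayesian-classifier intuition described above could be sketched roughly as follows. This is an illustrative toy only, not the commenter's actual method or anything Pangram does; the phrase list and weights are invented.]

```python
import math

# Hypothetical log-likelihood weights: how much each phrase shifts the
# odds toward "LLM-written". Phrases and numbers here are invented.
PHRASE_WEIGHTS = {
    "it's not just": 1.2,
    "no alarm, no": 0.5,
    "didn't eliminate it": 0.5,
}

def llm_log_odds(sentences, prior_log_odds=0.0):
    """Sum per-sentence evidence, naive-Bayes style, on top of a prior."""
    score = prior_log_odds
    for sentence in sentences:
        lowered = sentence.lower()
        for phrase, weight in PHRASE_WEIGHTS.items():
            if phrase in lowered:
                score += weight
    return score

def llm_probability(sentences, prior_log_odds=0.0):
    """Convert accumulated log-odds to a probability via the logistic function."""
    return 1.0 / (1.0 + math.exp(-llm_log_odds(sentences, prior_log_odds)))
```

With no matching phrases and a neutral prior this returns 0.5; each matched "tell" (or a prior such as "the author used Claude for research") nudges the probability upward.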

monooso•1h ago
Here's an alternative way of thinking about this...

Someone probably expended a lot of time and effort planning, thinking about, and writing an interesting article, and then you stroll by and casually accuse them of being a bone idle cheat, with no supporting evidence other than your "sensitive detector" and a bunch of hand-wavy nonsense that adds up to naught.

kenjackson•14m ago
While I agree with the sentiment, using AI to write the final draft of the article isn’t cheating. People may not like it, but it’s more a stylistic preference.
bookofjoe•7m ago
Yet another way the mere possibility of AI/LLM being involved diminishes the value of ALL text.

If there is constant vigilance on the part of the reader as to how it was created, meaning and value become secondary, a sure path to the death of reading as a joy.

oscaracso•55m ago
I am reminded of the Simpsons episode in which Principal Skinner tries to pass off hamburgers from a nearby fast-food restaurant as an old family recipe, 'steamed hams', and his guest's probing into the kitchen mishaps is met with increasingly incredible explanations.
brookst•9m ago
I’m so glad the witch hunt has moved on to phrasing so I get less grief for my em dashes.
tapoxi•1h ago
This is my exact writing style - I'm screwed.
croemer•1h ago
I doubt you write like that. Where can I find your writing other than your comments which IMO don't read like the blog post?
TruffleLabs•1h ago
This is just writing; terse maybe and maybe not grammatically correct, but people write like that.
croemer•1h ago
It's not just terseness, it's the rhythm and "it's not x, it's y".

In fact, the latter is the opposite of terseness. LLMs love to tell you what things are not way more than people do.

See https://www.blakestockton.com/dont-write-like-ai-1-101-negat...

(The irony that I started with "it's not just" isn't lost on me)

gcr•1h ago
For what it’s worth, Pangram thinks this article is fully human-written: https://www.pangram.com/history/f5f68ce9-70ac-4c2b-b0c3-0ca8...
xmcqdpt2•1h ago
Then pangram isn't very good, because that article is full of Claude-isms.
DiffTheEnder•1h ago
Is it possible for a tool to know if something is AI written with high confidence at all? LLMs can be tuned/instructed to write in an infinite number of styles.

Don't understand how these tools exist.

gcr•50m ago
The WikiEDU project has some thoughts on this. They found Pangram good enough to detect LLM usage while teaching editors to make their first Wikipedia edits, at least enough to intervene and nudge the student. They didn't use it punitively or expect authoritative results, however. https://wikiedu.org/blog/2026/01/29/generative-ai-and-wikipe...

They found that Pangram suffers from false positives in non-prose contexts like bibliographies, outlines, formatting, etc. The article does not touch on Pangram’s false negatives.

I personally think it's an intractable problem, but I do feel Pangram gives some useful signal, albeit not reliably.

cameronh90•1h ago
It has Claude-isms, but it doesn't feel very Claude-written to me, at least not entirely.

What's making it even more difficult to tell now is people who use AI a lot seem to be actively picking up some of its vocab and writing style quirks.

embedding-shape•56m ago
> because that article is full of Claude-isms

Not sure yet how I feel about the whole "LLMs learned from human text, so now the people who wrote that human text are accused of plagiarizing LLMs" thing, but so far it seems backwards, and like a low-quality criticism.

snapcaster•30m ago
Real talk. You're not just making a good point -- you're questioning the dominant paradigm
jnwatson•22m ago
Horrible
croemer•45m ago
Pangram doesn't reliably detect individual LLM-generated phrases or paragraphs among human written text.

It seems to look at sections of ~300 words. And for one section at least it has low confidence.

I tested it by getting ChatGPT to add a paragraph to one of my sister comments. Result is "100% human" when in fact it's only 75% human.

Pangram test result: https://www.pangram.com/history/1ee3ce96-6ae5-4de7-9d91-5846...

ChatGPT session where it added a paragraph that Pangram misses: https://chatgpt.com/share/69d4faff-1e18-8329-84fa-6c86fc8258...

gcr•28m ago
This is useful, thanks! TIL
timdiggerm•30m ago
So you're saying Pangram isn't worth much?
Aurornis•8m ago
The AI writing detectors are very unreliable. This is important to mention because they can trigger in the opposite direction (reporting human written text as AI generated) which can result in false accusations.

It’s becoming a problem in schools as teachers start accusing students of cheating based on these detectors or ignore obvious signs of AI use because the detectors don’t trigger on it.

ChrisRR•1h ago
It's not setting off any LLM alarm bells to me. It just reads like any other scientific article, which is very often soulless
monooso•1h ago
You have no evidence that it was.
NiloCK•1h ago
This is the top reply on a substantial percentage of HN posts now and we should discourage it.

It is:

- sneering

- a shallow dismissal (please address the content)

- curmudgeonly

- a tangential annoyance

All things explicitly discouraged in the site guidelines. [1]

Downvoting is the tool for items that you think don't belong on the front page. We don't need the same comment on every single article.

[1] - https://news.ycombinator.com/newsguidelines.html

monooso•1h ago
No idea why you're being downvoted. I've done my bit to redress the balance, I hope others do the same.
masklinn•51m ago
> Downvoting is the tool for items that you think don't belong on the front page.

You can’t downvote submissions. That’s literally not a feature of the site. You can only flag submissions, if you have more than 31 karma.

NiloCK•38m ago
Twelve year old account and who knows how much lurking before that and I've never noticed this. Good lord.

Optimistically, I guess I can call myself some sort of live-and-let-live person.

timdiggerm•28m ago
It's not a shallow dismissal; it's a dismissal for good reason. It's tangential to the topic, but not to HN overall. It's only curmudgeonly if you assume AI-written posts are the inevitable and good future (aka begging the question). I really don't know how it's "sneering", so I won't address that.
TruffleLabs•1h ago
"Written by an LLM" based on what data or symptom?
mpalmer•57m ago
I've seen way, way worse. Either someone LLM-polished something they already wrote, or they did their own manual editing pass.

The short sentence construction is the most suspicious, but I actually don't see anything glaring. It normally jumps out and hits me in the face.

rudhdb773b•49m ago
Not to single out your comment, but it feels like it's gotten to the point where HN could use a rule against complaining about AI generated content.

It seems like almost every discussion has at least someone complaining about "AI slop" in either the original post or the comments.

Gigachad•8m ago
HN has gotten to the point where it’s not even worth clicking the link because of course it’s ai slop.

There is some real content in the haystack, but we almost need some kind of curator to find and display it rather than a vote system where most people vote on the title alone.

yodon•1h ago
This is so insightfully and powerfully written I had literal chills running down my spine by the end.

What a horrible world we live in where the author of great writing like this has to sit and be accused of "being AI slop" simply because they use grammar and rhetoric well.

dotancohen•1h ago
I was completely riveted the whole read. The description of Collins' dilemma is the first time I've seen an actual real world scenario described that might cause him to return to Earth alone.

If an LLM wrote that, then I no longer oppose LLM art.

breakingcups•15m ago
I thought that was the least likeable part of the article. They speculated wildly, somehow making the leap that a trained astronaut would not resort to a computer reset if the problems persisted, in order to weave the narrative that this bug was super-duper serious indeed. They didn't need that, and it weakened the presentation.
jwpapi•1h ago
Has someone verified this was an actual bug?

One of AI’s strengths is definitely exploration, e.g. in finding bugs, but it still has a high false-positive rate. Depending on the context, that may or may not matter.

Also one has to be aware that there are a lot of bugs that AI won’t find but humans would

I don’t have the expertise to verify this bug actually happened, but I’m curious.

throwaway27448•14m ago
It's not even clear whether AI was used to find the bug: they mention modeling the software with an "AI-native" language, whatever that means. What's also not clear is how they found themselves modeling the gyro software of the Apollo code to begin with.

But, I do think their explanation of the lock acquisition and the failure scenario is quite clear and compelling.

Aurornis•4m ago
> It's not even clear if AI was used to find the bug

The intro says “We used Claude and Allium”. Allium looks like a tool they’ve built for Claude.

So the article is about how they used their AI tooling and workflow to find the bug.

wg0•1h ago
Someone please amend the title and add "using claude code" because that's customary nowadays.
riverforest•42m ago
Software that ran on 4KB of memory and got humans to the moon still has undiscovered bugs in it. That says something about the complexity hiding in even the smallest codebases.
whiplash451•13m ago
My guess is that in such low memory regimes, program length is very loosely correlated with bug rate.

If anything, if you try to cram a ton of complexity into a few kb of memory, the likelihood of introducing bugs becomes very high.

MeteorMarc•10m ago
Are there any consequences for the Artemis 2 mission (ironic)?
ChicagoBoy11•7m ago
For anyone who liked this, I highly suggest you take a look at the CuriousMarc YouTube channel, where he chronicles many efforts to preserve and understand several parts of the Apollo AGC, with a team of really technically competent and passionate collaborators.

One of the more interesting things they have been working on, is a potential re-interpretation of the infamous 1202 alarm. It is, as of current writing, popularly described as something related to nonsensical readings of a sensor which could (and were) safely ignored in the actual moon landing. However, if I remember correctly, some of their investigation revealed that actually there were many conditions which would cause that error to have been extremely critical and would've likely doomed the astronauts. It is super fascinating.