
SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
98•valyala•4h ago•16 comments

The F Word

http://muratbuffalo.blogspot.com/2026/02/friction.html
43•zdw•3d ago•11 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
23•gnufx•2h ago•19 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
56•surprisetalk•3h ago•54 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
98•mellosouls•6h ago•176 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
144•AlexeyBrin•9h ago•26 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
101•vinhnx•7h ago•13 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
851•klaussilveira•1d ago•258 comments

I write games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
139•valyala•4h ago•109 comments

First Proof

https://arxiv.org/abs/2602.05192
68•samasblack•6h ago•52 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1093•xnx•1d ago•618 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
7•mbitsnbites•3d ago•0 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
64•thelok•6h ago•10 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
235•jesperordrup•14h ago•80 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
519•theblazehen•3d ago•191 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
94•onurkanbkrc•9h ago•5 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
31•momciloo•4h ago•5 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
13•languid-photic•3d ago•4 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
259•alainrk•8h ago•425 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
49•rbanffy•4d ago•9 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
187•1vuio0pswjnm7•10h ago•267 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
615•nar001•8h ago•272 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
36•marklit•5d ago•6 comments

We mourn our craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
348•ColinWright•3h ago•414 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
125•videotopia•4d ago•39 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
99•speckx•4d ago•117 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
33•sandGorgon•2d ago•15 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
211•limoce•4d ago•119 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
288•isitcontent•1d ago•38 comments

History and Timeline of the Proco Rat Pedal (2021)

https://web.archive.org/web/20211030011207/https://thejhsshow.com/articles/history-and-timeline-o...
20•brudgers•5d ago•5 comments

GPT‑5-Codex and upgrades to Codex

https://simonwillison.net/2025/Sep/15/gpt-5-codex/
57•amrrs•4mo ago

Comments

lostmsu•4mo ago
The pelican is not very good

TiredOfLife•4mo ago
But probably fast

AstroBen•4mo ago
Would be faster if it got on the bike

sanxiyn•4mo ago
> "We find that comments by GPT‑5-Codex are less likely to be incorrect or unimportant" -- less unimportant comments in code is definitely an improvement!

This seems to be a misunderstanding. In the original OpenAI article, "comments" refers to code review comments, not comments in the code.

knowsuchagency•4mo ago
It's annoying to see a link to a Theo video -- same guy who went with Simon to OpenAI's GPT-5 glazefest and had to backpedal when everyone realized what a shill he is.

I know neither of them are journalists -- I'm probably expecting too much -- but Simon should know better.

Ancapistani•4mo ago
While not a journalist, Simon definitely has a background in journalism.

He was one of the original authors of Django, back when it was a “web framework for journalists with deadlines”.

knowsuchagency•4mo ago
Exactly. That's why I said he should know better. He never should have gone to that event to hype GPT-5 under the guise of "testing" it out.

simonw•4mo ago
I did actually consider that quite a bit when I got invited to OpenAI's mysterious recorded launch event (they didn't tell us it was GPT-5 until we got there) - would it damage my credibility as an independent voice in the AI space?

I decided to risk it. Crucially, OpenAI at no point asked for any influence over my content at all, aside from sticking to their embargo (which I've done with other companies before).

doctoboggan•4mo ago
Is it possible that OpenAI let you test a private version of GPT-5 that was better than what was released to the public, like the previous commenter claimed?

simonw•4mo ago
They changed the model ID we were using multiple times in the two weeks we had access to - so clearly they were still iterating on the model during that time.

They weren't deceptive about that - the new model IDs were clearly communicated - but with hindsight it did mean that those early impressions weren't an exact match for what was finally released.

My biggest miss was that I didn't pay attention to the ChatGPT router while I was previewing the models. I think a lot of the early disappointment in GPT-5 was caused by the router sending people to the weaker model.

For what it's worth, the GPT-5 I'm using today feels as impressive to me as the one I had during the preview. It's great at code and great at search, the two things I care most about.

beng-nl•4mo ago
This seems to me like a very harsh take on Theo’s motivations. I don’t know him beyond what I’ve learned from his videos, but given Occam’s razor I’m inclined to believe him: GPT-5 seemed much better during the private demo than the public release. There are many possible explanations, but jumping to ‘shill’ (implying deception) seems uncalled for.

rolymath•4mo ago
Literally the only channel I've ever blocked on YouTube.

pietz•4mo ago
Do we really know that gpt-5-codex is a fine-tune of gpt-5(-thinking)? The article doesn't clearly say that, right?

I suspect this is smaller than gpt-5, or at least a quantized version, similar to what I suspect Opus 4.1 is. That would also explain why it's faster.

simonw•4mo ago
OpenAI say:

"Today, we’re releasing GPT‑5-Codex—a version of GPT‑5 further optimized for agentic coding in Codex."

So yeah, simplifying that to a "fine-tune" is likely incorrect. I just added a correction note about that to my article.

pietz•4mo ago
Thank you for your work, Simon.