
We Mourn Our Craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
185•ColinWright•1h ago•168 comments

I Write Games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
22•valyala•2h ago•6 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
124•AlexeyBrin•7h ago•24 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
17•valyala•2h ago•1 comment

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
65•vinhnx•5h ago•9 comments

U.S. Jobs Disappear at Fastest January Pace Since Great Recession

https://www.forbes.com/sites/mikestunson/2026/02/05/us-jobs-disappear-at-fastest-january-pace-sin...
155•alephnerd•2h ago•106 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
833•klaussilveira•22h ago•250 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
119•1vuio0pswjnm7•8h ago•149 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
57•thelok•4h ago•8 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1061•xnx•1d ago•613 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
79•onurkanbkrc•7h ago•5 comments

Brookhaven Lab's RHIC Concludes 25-Year Run with Final Collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
4•gnufx•57m ago•1 comment

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
489•theblazehen•3d ago•177 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
212•jesperordrup•12h ago•72 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
567•nar001•6h ago•259 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
226•alainrk•6h ago•354 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
40•rbanffy•4d ago•7 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
10•momciloo•2h ago•0 comments

History and Timeline of the Proco Rat Pedal (2021)

https://web.archive.org/web/20211030011207/https://thejhsshow.com/articles/history-and-timeline-o...
19•brudgers•5d ago•4 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
8•languid-photic•3d ago•1 comment

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
29•marklit•5d ago•3 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
114•videotopia•4d ago•33 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
77•speckx•4d ago•82 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
275•isitcontent•22h ago•38 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
201•limoce•4d ago•112 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
288•dmpetrov•22h ago•155 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
22•sandGorgon•2d ago•12 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
557•todsacerdoti•1d ago•269 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
155•matheusalmeida•2d ago•48 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
427•ostacke•1d ago•111 comments

GPT‑5-Codex and upgrades to Codex

https://simonwillison.net/2025/Sep/15/gpt-5-codex/
57•amrrs•4mo ago

Comments

lostmsu•4mo ago
The pelican is not very good
TiredOfLife•4mo ago
But probably fast
AstroBen•4mo ago
Would be faster if it got on the bike
sanxiyn•4mo ago
> "We find that comments by GPT‑5-Codex are less likely to be incorrect or unimportant" -- less unimportant comments in code is definitely an improvement!

This seems to be a misunderstanding. In the original OpenAI article, "comments" refers to code review comments, not comments in code.

knowsuchagency•4mo ago
It's annoying to see a link to a Theo video -- same guy who went with Simon to OpenAI's GPT-5 glazefest and had to backpedal when everyone realized what a shill he is.

I know neither of them are journalists -- I'm probably expecting too much -- but Simon should know better.

Ancapistani•4mo ago
While not a journalist, Simon definitely has a background in journalism.

He was one of the original authors of Django, back when it was a “web framework for journalists with deadlines”.

knowsuchagency•4mo ago
Exactly. That's why I said he should know better. He never should have gone to that event to hype GPT-5 under the guise of "testing" it out.
simonw•4mo ago
I did actually consider that quite a bit when I got invited to OpenAI's mysterious recorded launch event (they didn't tell us it was GPT-5 until we got there) - would it damage my credibility as an independent voice in the AI space?

I decided to risk it. Crucially, OpenAI at no point asked for any influence over my content at all, aside from sticking to their embargo (which I've done with other companies before).

doctoboggan•4mo ago
Is it possible that OpenAI let you test a private version of GPT-5 that was better than what was released to the public, like the previous commenter claimed?
simonw•4mo ago
They changed the model ID we were using multiple times in the two weeks we had access to - so clearly they were still iterating on the model during that time.

They weren't deceptive about that - the new model IDs were clearly communicated - but with hindsight it did mean that those early impressions weren't an exact match for what was finally released.

My biggest miss was that I didn't pay attention to the ChatGPT router while I was previewing the models. I think a lot of the early disappointment in GPT-5 was caused by the router sending people to the weaker model.

For what it's worth, the GPT-5 I'm using today feels as impressive to me as the one I had during the preview. It's great at code and great at search, the two things I care most about.

beng-nl•4mo ago
This seems to me like a very harsh take on Theo's motivations. I don't know him beyond what I've learned from his videos, but given Occam's razor I'm inclined to believe him: GPT-5 seemed much better during the private demo than in the public release. There are many possible explanations, but jumping to "shill" (implying deception) seems uncalled for.
rolymath•4mo ago
Literally the only channel I've ever blocked on Youtube.
pietz•4mo ago
Do we really know that gpt-5-codex is a fine-tune of gpt-5(-thinking)? The article doesn't clearly say that, right?

I suspect that this is smaller than gpt-5 or at least a quantized version. Similar to what I suspect Opus 4.1 is. That would also explain why it's faster.

simonw•4mo ago
OpenAI say:

"Today, we’re releasing GPT‑5-Codex—a version of GPT‑5 further optimized for agentic coding in Codex."

So yeah, simplifying that to a "fine-tune" is likely incorrect. I just added a correction note about that to my article.

pietz•4mo ago
Thank you for your work, Simon.