frontpage.
Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
142•theblazehen•2d ago•42 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
668•klaussilveira•14h ago•202 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
949•xnx•19h ago•551 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
122•matheusalmeida•2d ago•32 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
53•videotopia•4d ago•2 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
229•isitcontent•14h ago•25 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
16•kaonwarb•3d ago•19 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
222•dmpetrov•14h ago•117 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
27•jesperordrup•4h ago•16 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
330•vecti•16h ago•143 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
494•todsacerdoti•22h ago•243 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
381•ostacke•20h ago•95 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•20h ago•181 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
288•eljojo•17h ago•169 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
412•lstoll•20h ago•278 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
19•bikenaga•3d ago•4 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
63•kmm•5d ago•6 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
90•quibono•4d ago•21 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
256•i5heu•17h ago•196 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
32•romes•4d ago•3 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
43•helloplanets•4d ago•42 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
12•speckx•3d ago•4 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
59•gfortaine•12h ago•25 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
33•gmays•9h ago•12 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1066•cdrnsf•23h ago•446 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
150•vmatsiiako•19h ago•67 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
287•surprisetalk•3d ago•43 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
149•SerCe•10h ago•138 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
182•limoce•3d ago•98 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
73•phreda4•13h ago•14 comments

C++: "model of the hardware" vs. "model of the compiler" (2018)

http://ithare.com/c-model-of-the-hardware-vs-model-of-the-compiler/
32•oumua_don17•6mo ago

Comments

gsliepen•6mo ago
Early programming languages had to work within the limited hardware capabilities of the time in order to be efficient. Nowadays we have so much processing power available that the compiler can optimize the code for you, so the language no longer has to mirror hardware capabilities. It's only logical, then, that current languages should work within the limitations of the compilers. Perhaps one day those limitations will be gone as well for practical purposes, and it would be interesting to see what programming languages could be made then.
lmm•6mo ago
Isn't that the tail wagging the dog? If you build the language to fit current compilers then it will be impossible to ever redesign those compilers.
gsliepen•6mo ago
Why would that be impossible? Most programming languages are still Turing complete, so you can build whatever you want in them.
lmm•6mo ago
You said this was an efficiency issue, and Church-Turing says nothing about efficiency.
gpderetta•6mo ago
"Beware of the Turing tar-pit in which everything is possible but nothing of interest is easy."

- Alan Perlis, Epigrams on Programming

rcxdude•6mo ago
Maybe, but if you don't consider the existing compilers you run the risk of making something that is unimplementable in one of them, or perhaps at all. (C++ has had some issues with this in the past, which I think is why implementability is now explicitly a consideration in the standardization process.)
flohofwoe•6mo ago
> Nowadays, we have so much processing power available that the compiler can optimize the code for you, so the language doesn't have to follow hardware capabilities anymore.

That must be why builds today take just as long as in the 1990s, to produce a program that makes people wait just as long as in the 1990s, despite the hardware being thousands of times faster ;)

In reality, people just throw more work at the compiler until build times become "unbearable", and optimize their code only until it feels "fast enough". These limits of "unbearable" and "fast enough" are built into humans and don't change in a couple of decades.

Or as the ancient saying goes: "Software is a gas; it expands to fill its container."

adrianN•6mo ago
At least we can build software systems that are a few orders of magnitude more complex than in the 90s for approximately the same price. The question is whether the extra complexity also offers extra value.
flohofwoe•6mo ago
True, but a lot of that complexity is also just pointless boilerplate / busywork disguised as 'best practices'.
Trex_Egg•6mo ago
I'd be eager for an example of how a "best practice" makes software unbearable or slow.
flohofwoe•6mo ago
Some C++ related 'best practices' off the top of my head:

- put each class into its own header/source file pair (a great way to explode your build times!)

- generally replace all raw pointers with shared_ptr or unique_ptr

- general software patterns like model-view-controller, a great way to turn a handful of lines of code into dozens of files with hundreds of lines each

- use exceptions for error handling (these days widely considered a bad idea, but it wasn't always)

- always prefer the C++ stdlib over self-rolled solutions

- etc etc etc...

It's been a while since I closely followed modern C++ development, so I'm sure there are a couple of new ones, and some which have fallen out of fashion.

aw1621107•6mo ago
> put each class into its own header/source file pair (a great way to explode your build times!)

Is that really sufficient to explode build times on its own? Especially if you're just using the more basic C++ features (no template (ab)use in particular).

pjmlp•6mo ago
Not at all, you can write in the C subset that C++ supports and anti-C++ folks will still complain.

Meanwhile, the C builds done on UNIX workstations (AIX, Solaris, HP-UX) for our applications back in 2000 were taking about an hour per deployment target, hardly blazing fast.

pjmlp•6mo ago
> - put each class into its own header/source file pair (a great way to explode your build times!)

Only if you fail to use binary libraries in the process.

Apparently folks like to explode build times with header-only libraries nowadays, as if C and C++ were scripting languages.

> - generally replace all raw pointers with shared_ptr or unique_ptr

Some folks care about safety.

I have written C applications with handles, doing two-way conversions between pointers and handles, and I am not talking about the 16-bit Windows memory model.

> - general software patterns like model-view-controller, a great way to turn a handful lines of code into dozens of files with hundreds of lines each

I am old enough to have used the Yourdon Structured Method in C applications.

> - use exceptions for error handling (although these days this is widely considered a bad idea, but it wasn't always)

Forced return code checks with automatic stack unwinding are still exceptions, even if they look different.

Also what about setjmp()/longjmp() all over the place?

> - always prefer the C++ stdlib over self-rolled solutions

Overconfidence that everyone knows better than the people paid to write compilers usually turns out badly, unless they are actually top developers.

There are plenty of modern best practices for C as well; that is how we try to avoid making a mess out of what people imagine to be a portable assembler, and industries rely on MISRA, ISO 26262, and similar standards for that matter.

j16sdiz•6mo ago
The problem is: "the platform" is never defined.

When you decouple the language from the hardware and you don't specify an abstraction model (like the Java VM does), "the platform" is just whatever the implementer feels like at that moment.

simonask•6mo ago
It's not really about "limitations" of the hardware, so much as it is about the fact that things have crystallized a lot since the 90s. There are no longer any mainstream architectures using big-endian integers, for example, and there are zero architectures using anything but two's complement. All mainstream computers are von Neumann machines too (programs are stored; functions are data). All bytes are 8 bits wide, and native word sizes are a clean multiple of that.

Endianness will be with us for a while, but modern languages don't really need to consider the other factors, so they can take significant liberties in their design that match the developer's intuition more precisely.

gsliepen•6mo ago
I was thinking more about higher-order things, like a compiler being able to see that your for-loop is just counting the number of bits set in an integer, and replacing it with a popcount instruction, or being able to replace recursion with tail calls, or doing complex things at compile-time rather than run-time.
flohofwoe•6mo ago
At least the popcount example (along with some other optimizations inspired by 'bit twiddling hacks') is just a magic pattern-matching trick that happens fairly late in the compilation process (AFAIK at least). The alternative of simply offering an optional popcount builtin is a completely viable low-tech solution that was already possible in the olden days, and it has the advantage of being entirely predictable instead of depending on magic compiler tricks.

Basic compile-time constant folding isn't anything modern either; even the most primitive 8-bit assemblers of the 1980s allowed you to write macros and expressions that were evaluated at compile time, and that gets you maybe 80% of the way to the much more impressive constant folding over deep call stacks that modern compilers are capable of (e.g. what's commonly known as 'zero-cost abstraction').

deterministic•6mo ago
Nope. Performance really matters. Even today. And even for web applications! Just remember how you feel using a slow sluggish website vs. a snappy fast one. It's night and day.
bluetomcat•6mo ago
What a mess of an article. A pretentious mishmash of scattered references with some vague abstract claims that could be summarised in one paragraph.
flohofwoe•6mo ago
Sort of fitting though, because C++ coroutines turned out to be quite the mess (are they actually usable in real-world code by now?).

I think in the end it's just another story of a C++ veteran living through the inevitable Modern C++ trauma and divorce ;)

(I wonder what he's up to today, ITHare was quite popular in game dev circles in the 2010s for his multiplayer networking blog posts and books)

TuxSH•6mo ago
> C++ coroutines turned out quite the mess (are they actually usable in real world code by now?).

They are; they are extensively used by software like ScyllaDB, which itself is used by Discord, Bluesky, Comcast, etc.

C++ coroutines and "stackless coroutines" in general are just compiler-generated FSMs. As for allocation, you can override operator new for the promise type, and that operator new gets forwarded the coroutine's function arguments.

simonask•6mo ago
They are compiler-generated FSMs, but I think it's worth noting that the C++ design landed in a way that precluded many people from ever seriously considering using them, especially due to the implicit allocation. The reason you are using C++ in the first place is that you care about details like allocation, so to me this is a gigantic fumble.

Rust gets it right, but has its own warts, especially if you're coming from async in a GC world. But there's no allocation; Futures are composable value types.

pjmlp•6mo ago
The C++ model is that in theory there is an allocation; in practice, depending on how a specific library was written, the compiler may be able to elide the allocation.

It is the same principle that drives languages like Rust in regards to being safe by default, in theory stuff like bounds checks cause a performance hit, in practice compilers are written to elide as much as possible.

uep•6mo ago
I think you missed an important point in the parent comment. You can override the allocation for C++ coroutines. You do have control over details like allocation.

C++ coroutines are so lightweight and customizable (for good and ill), that in 2018 Gor Nishanov did a presentation where he scheduled binary searches around cache prefetching using coroutines. And yes, he modified the allocation behavior, though he said it only resulted in a modest improvement on performance.

captainmuon•6mo ago
> The reason you are using C++ in the first place is because you care about details like allocation, so to me this is a gigantic fumble.

I wouldn't say that applies to everybody. I use C++ because it interfaces with the system libraries on every platform, because it has class-based inheritance (like Java and C#, unlike Rust and Zig), and because it compiles to native code without an external runtime. I don't care too much about allocations.

For me the biggest fumble is that C++ provides the async framework, but no actual async stdlib (file I/O and networking). It took a while for options to become available, and while e.g. Asio works nicely, it is crazily over-engineered in places.

pjmlp•6mo ago
I like what Rust offers over C++ in terms of safety and community culture, but I don't enjoy being a tool builder for ecosystem gaps; I'd rather spend the time directly using tools that already exist. Plus I have the Java and .NET ecosystems for safety, as I am really on the automatic resource management side.

Zig is really Modula-2 in C's clothing. I don't like the kind of handmade culture it has around it, and its way of dealing with use-after-free is something I have been able to get in C and C++ for the last thirty years; it is a matter of actually learning the tooling.

Thus C++ it is, for anything that can't be taken over by a compiled managed language.

I would like to use D more, but it seems to have lost its opportunity window, although NASA is now using it, so who knows.

TuxSH•6mo ago
You can write stuff like this:

  void *operator new(std::size_t sz, Foo &foo, Bar &bar) { return foo.m_Buffer; /* should be std::max_align_t-aligned */ }
and force all coroutines of your Coroutine type to take (Foo &, Bar &) as arguments this way (works with as many overloads as you like).
gpderetta•6mo ago
The required allocation makes them awkward to use for short-lived automatic objects like generators. But for async operations, where you are eventually going to need a long-lived context object anyway, it is a non-issue, especially given the ability to customize allocators.

I say this as someone who is not a fan of stackless coroutines in general, and of the C++ solution in particular.

pjmlp•6mo ago
They have always been usable in the real world, as they were initially based on the async model of C++ programming in WinRT, inspired by .NET async/await.

Hence anyone who has done low-level .NET async/await code with awaitables and magic methods will feel right at home in C++ coroutines.

Anyone using WinAppSDK with C++ will eventually make use of them.