frontpage.

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
38•thelok•2h ago•3 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
101•AlexeyBrin•6h ago•18 comments

First Proof

https://arxiv.org/abs/2602.05192
51•samasblack•3h ago•37 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
789•klaussilveira•20h ago•242 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
38•vinhnx•3h ago•5 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
62•onurkanbkrc•5h ago•5 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
462•theblazehen•2d ago•165 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1040•xnx•1d ago•587 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
506•nar001•4h ago•234 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
48•mellosouls•3h ago•49 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
183•jesperordrup•10h ago•65 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
63•1vuio0pswjnm7•7h ago•59 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
186•alainrk•5h ago•280 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
27•rbanffy•4d ago•5 comments

What Is Stoicism?

https://stoacentral.com/guides/what-is-stoicism
15•0xmattf•2h ago•7 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
19•marklit•5d ago•0 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
108•videotopia•4d ago•27 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
58•speckx•4d ago•62 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
268•isitcontent•20h ago•34 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
169•bookofjoe•2h ago•152 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
197•limoce•4d ago•107 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
281•dmpetrov•21h ago•150 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
152•matheusalmeida•2d ago•47 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
548•todsacerdoti•1d ago•266 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
422•ostacke•1d ago•110 comments

Ga68, a GNU Algol 68 Compiler

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
37•matt_d•4d ago•13 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
365•vecti•23h ago•167 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
465•lstoll•1d ago•305 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
341•eljojo•23h ago•209 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
66•helloplanets•4d ago•70 comments

Multi-Stage Programming with Splice Variables

https://tsung-ju.org/icfp25/
54•matt_d•7mo ago

Comments

TimorousBestie•7mo ago
This is fascinating. I could see it being very useful for writing SIMD abstraction layers (like Highway or SIMDe) without so much of the cruft.
kldx•7mo ago
> For example, instead of a power function that uses a loop, you could generate specialized code like x * x * x * x * x directly. This eliminates runtime overhead and creates highly optimized code.

Could anyone explain to me how this is different from templates or parameter pack expansion in C++? I can see the constexpr-ness here is encoded in the type system and appears more composable, but I am not sure if I am missing the point.

I looked at the paper but I can't find anything related to C++.
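
For concreteness, here is a sketch of the kind of C++ template specialization the question has in mind (illustrative names, not from the paper; a parameter pack expansion would achieve much the same thing). The exponent is a template parameter, so the recursion unfolds at instantiation time and the optimizer sees a plain chain of multiplications, assuming C++17:

  #include <iostream>

  // The exponent N is a compile-time constant; `if constexpr` prunes the
  // recursion during instantiation, so pow_unrolled<5>(x) becomes
  // x * x * x * x * x with no runtime loop.
  template <unsigned N>
  constexpr double pow_unrolled(double x) {
      if constexpr (N == 0)
          return 1.0;
      else
          return x * pow_unrolled<N - 1>(x);
  }

  int main() {
      std::cout << pow_unrolled<5>(2.0) << '\n';  // prints 32
  }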

gsliepen•7mo ago
> Could anyone explain to me how this is different from templates or parameter pack expansion in C++?

I don't think it's any different.

> I can see the constexpr-ness here is encoded in the type system

I also see they introduce new constructs like let$, so it's not just a type system thing.

> I looked at the paper but I can't find anything related to C++.

I don't think the author needs to compare their code to C++. That said, it looks to me like it is similar to the reflection capabilities coming in C++26.

naasking•7mo ago
Typically, multistage languages permit program generation at any stage, including runtime. So that would be different from C++.
gsliepen•7mo ago
You can get pretty far in C++ though: https://codereview.stackexchange.com/questions/259045/poor-m...
naasking•7mo ago
This looks like a reinvention of the tagless-final interpreter, which has been a well-known technique in functional languages for a while:

https://okmij.org/ftp/tagless-final/index.html#tagless-final

It's an interpreter though, not a JIT. This kind of programming language thing is a bit of a hobby horse of mine, so see the comment I just posted on this for full details:

https://codereview.stackexchange.com/questions/259045/poor-m...
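
For readers who have not run into the technique being named here, below is a rough C++-template rendition of the tagless-final idea (the canonical presentations on the linked page are in OCaml/Haskell; Eval, Show, and term are illustrative names, not from the thread). One term is written once against an abstract "semantics" and interpreted two different ways, with no AST and no tags:

  #include <iostream>
  #include <string>

  struct Eval {                    // semantics 1: compute an int
      using repr = int;
      static repr lit(int n)          { return n; }
      static repr add(repr a, repr b) { return a + b; }
      static repr mul(repr a, repr b) { return a * b; }
  };

  struct Show {                    // semantics 2: build a string
      using repr = std::string;
      static repr lit(int n)          { return std::to_string(n); }
      static repr add(repr a, repr b) { return "(" + a + " + " + b + ")"; }
      static repr mul(repr a, repr b) { return "(" + a + " * " + b + ")"; }
  };

  // One term, (1 + 2) * 3, written against an abstract semantics S.
  template <typename S>
  typename S::repr term() {
      return S::mul(S::add(S::lit(1), S::lit(2)), S::lit(3));
  }

  int main() {
      std::cout << term<Eval>() << '\n';  // 9
      std::cout << term<Show>() << '\n';  // (1 + 2) * 3
  }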

perihelions•7mo ago
How is this different from a syntactic macro?
burakemir•7mo ago
Two big differences:

  - it is typed, and

  - multi-stage programming can also describe runtime-code generation.
gsliepen•7mo ago
> For example, instead of a power function that uses a loop, you could generate specialized code like x * x * x * x * x directly. This eliminates runtime overhead and creates highly optimized code.

This is misguided. For decades now, there has been no reason to assume that hand-unrolled code is faster than a for-loop. Compilers optimize this stuff, and they do it even better than mindlessly multiplying x by itself. For example, raising x to the power 6 only needs 3 multiplications; see for example: https://godbolt.org/z/Edz4jjqvv

While there are definitely use cases for meta-programming, optimization is not one of them.
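
To make the three-multiplication claim concrete, here is one possible addition chain for x^6 (an illustrative hand-written version; not necessarily the exact sequence the compiler in the godbolt link emits, but the same idea):

  // Three multiplications instead of the five in x * x * x * x * x * x.
  constexpr double pow6(double x) {
      double x2 = x * x;    // 1st multiplication: x^2
      double x3 = x2 * x;   // 2nd: x^3
      return x3 * x3;       // 3rd: x^6
  }
  static_assert(pow6(2.0) == 64.0);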

naasking•7mo ago
Optimization is absolutely one of them, once you start dealing with higher order programs or programs with sophisticated control flow. Compiler optimizations are not enough to infer the Futamura projections in all cases, for instance.

https://okmij.org/ftp/tagless-final/index.html#tagless-final

gsliepen•7mo ago
Interesting, I had never heard of Futamura projections before. Looking at the definition, it seems like the first projection (specializing an interpreter for given source code) is already handled pretty well by today's compilers, just by unrolling and inlining. And they can go even further and evaluate parts at compile time; see for example https://github.com/IoanThomas/constexpr-chip8. I can see how the second and third projections are not handled by compiler optimizations though.
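
A much smaller toy in the same spirit as the linked constexpr-chip8 project (not code from that repo, and far simpler): a tiny stack-machine interpreter that the compiler can run entirely at compile time when the interpreted program is a constant, assuming C++17:

  #include <array>
  #include <cstddef>

  enum Op { Push, Add, Mul };
  struct Instr { Op op; int arg; };

  // A constexpr interpreter: when `prog` is a compile-time constant, the
  // whole interpretation can be folded away before the program ever runs.
  template <std::size_t N>
  constexpr int run(const std::array<Instr, N>& prog) {
      std::array<int, 16> stack{};
      std::size_t sp = 0;
      for (const Instr& i : prog) {
          switch (i.op) {
              case Push: stack[sp++] = i.arg; break;
              case Add:  --sp; stack[sp - 1] += stack[sp]; break;
              case Mul:  --sp; stack[sp - 1] *= stack[sp]; break;
          }
      }
      return stack[0];
  }

  // (1 + 2) * 3, evaluated entirely at compile time.
  constexpr std::array<Instr, 5> prog{{
      {Push, 1}, {Push, 2}, {Add, 0}, {Push, 3}, {Mul, 0}
  }};
  static_assert(run(prog) == 9);
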
finiteparadox•7mo ago
The point is that compiler optimisations are a black box and not guaranteed. They can be very brittle with respect to seemingly harmless source changes (even something as simple as making an extra intermediate assignment). You are at the mercy of the 'fuel' of the optimisation passes. With staging you get to control exactly what gets inlined/partially evaluated. Of course, to get good results you need to know what to optimise for.
gsliepen•7mo ago
> With staging you get to control exactly what gets inlined/partially evaluated.

I want to stress that this is not true. Sure, sometimes it might work, but compilers can also uninline, as well as reorder the way things are evaluated. Compilers don't do a 1:1 mapping of lines of code to assembly instructions anymore; instead they are designed to take your program as input and generate the best executable that has the same observable effect as your code. So whatever optimization you perform in the source code is going to be just as brittle with respect to seemingly harmless compiler changes (like changing compiler flags, updating the compiler to a new version, and so on).

While indeed nothing is guaranteed, at this point in time the compiler is vastly better at optimizing code than humans are. If you want to make a point that multi-stage programming helps optimize code, you have to do much better than an example of raising x to some power.

finiteparadox•7mo ago
I think you are missing the point a bit. With staging you can build up arbitrary levels of compile time abstractions and be sure that they will not appear in the final executable. Of course, an optimising compiler will reorder/rearrange code regardless. But it won't reintroduce all the abstraction layers that have been staged away. After enough abstraction layers, without staging even a compiler that optimises aggressively won't know to evaluate them away.

Let's put it another way: do you think there is utility in macros at all? And do you think that type-safe code is better than untyped code? If you say yes to both, you must also think that staging is useful, since it basically gives you type-safe macros. Now lots more things can be macros instead of runtime functions, and you don't need to deal with the ergonomic issues that macros have in other languages. For a more real-world example, see Jeremy Yallop's work on fused lexing and parsing.
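
The closest mainstream C++ gets to that "guaranteed to disappear" property is probably consteval (C++20), which forces evaluation at compile time; a loose analogue only, since it folds values but cannot generate residual code the way staging can (names below are illustrative):

  // consteval functions must be evaluated at compile time, so these
  // abstraction layers cannot survive into the executable.
  consteval int square(int x)       { return x * x; }
  consteval int scale(int x, int k) { return k * square(x); }
  consteval int pipeline(int x)     { return scale(x, 3) + 1; }

  constexpr int answer = pipeline(4);   // folded to 49 before codegen
  static_assert(answer == 49);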

captainbland•7mo ago
Does this have practical advantages over the JIT runtime optimisation used in e.g. the JVM? My assumption is that you can generate optimised code without as much of the JVM's analysis overhead, or apply more specific optimisations (controlled by the programmer), but honestly the details of how this might compare in practice are a bit over my head.