frontpage.

GrapheneOS and Forensic Extraction of Data

https://discuss.grapheneos.org/d/13107-grapheneos-and-forensic-extraction-of-data
113•SoKamil•1h ago•12 comments

Gregg Kellogg has passed away

https://lists.w3.org/Archives/Public/public-json-ld-wg/2025Sep/0012.html
102•daenney•2h ago•8 comments

Behind the Scenes of Bun Install

https://bun.com/blog/behind-the-scenes-of-bun-install
57•Bogdanp•1h ago•22 comments

Reshaped is now open source

https://reshaped.so/blog/reshaped-oss
120•michaelmior•4h ago•28 comments

An Engineering History of the Manhattan Project

https://www.construction-physics.com/p/an-engineering-history-of-the-manhattan
14•rbanffy•1h ago•4 comments

DeepCodeBench: Real-World Codebase Understanding by Q&A Benchmarking

https://www.qodo.ai/blog/deepcodebench-real-world-codebase-understanding-by-qa-benchmarking/
52•blazercohen•4h ago•2 comments

I Solved PyTorch's Cross-Platform Nightmare

https://svana.name/2025/09/how-i-solved-pytorchs-cross-platform-nightmare/
14•msvana•3d ago•2 comments

Mapping to the PICO-8 palette, perceptually

https://30fps.net/pages/perceptual-pico8-pixel-mapping/
25•ibobev•3d ago•11 comments

KDE launches its own distribution

https://lwn.net/SubscriberLink/1037166/caa6979c16a99c9e/
587•Bogdanp•16h ago•386 comments

Piramidal (YC W24) Is Hiring Back End Engineer

https://www.ycombinator.com/companies/piramidal/jobs/1HvdaXs-full-stack-engineer-platform
1•dsacellarius•2h ago

C++20 Modules: Practical Insights, Status and TODOs

https://chuanqixu9.github.io/c++/2025/08/14/C++20-Modules.en.html
34•ashvardanian•3d ago•26 comments

Show HN: Term.everything – Run any GUI app in the terminal

https://github.com/mmulet/term.everything
975•mmulet•2d ago•131 comments

PgEdge Goes Open Source

https://www.pgedge.com/blog/pgedge-goes-open-source
46•Bogdanp•6h ago•8 comments

DOOMscrolling: The Game

https://ironicsans.ghost.io/doomscrolling-the-game/
354•jfil•15h ago•83 comments

Germany is not supporting ChatControl – blocking minority secured

https://digitalcourage.social/@echo_pbreyer/115184350819592476
804•xyzal•5h ago•245 comments

Hashed sorting is typically faster than hash tables

https://reiner.org/hashed-sorting
130•Bogdanp•3d ago•20 comments

ChatGPT Developer Mode: Full MCP client access

https://platform.openai.com/docs/guides/developer-mode
479•meetpateltech•22h ago•258 comments

How the tz database works (2020)

https://yatsushi.com/blog/tz-database/
44•jumbosushi•3d ago•6 comments

GrapheneOS accessed Android security patches but not allowed to publish sources

https://grapheneos.social/@GrapheneOS/115164133992525834
43•uneven9434•6h ago•6 comments

Brussels faces privacy crossroads over encryption backdoors

https://www.theregister.com/2025/09/11/eu_chat_control/
35•jjgreen•2h ago•9 comments

Where did the Smurfs get their hats (2018)

https://www.pipelinecomics.com/beginning-bd-smurfs-hats-origin/
109•andsoitis•13h ago•40 comments

Court rejects Verizon claim that selling location data without consent is legal

https://arstechnica.com/tech-policy/2025/09/court-rejects-verizon-claim-that-selling-location-dat...
529•nobody9999•12h ago•64 comments

Show HN: I built a minimal Forth-like stack interpreter library in C

6•Forgret•1h ago•3 comments

Teens are adjusting to the smartphone ban

https://gothamist.com/news/from-burner-phones-to-decks-of-cards-nyc-teens-are-adjusting-to-the-sm...
20•geox•33m ago•25 comments

A desktop environment without graphics (tmux-like)

https://github.com/Julien-cpsn/desktop-tui
126•mustaphah•3d ago•39 comments

The HackberryPi CM5 handheld computer

https://github.com/ZitaoTech/HackberryPiCM5
227•kristianpaul•2d ago•77 comments

Jiratui – A Textual UI for interacting with Atlassian Jira from your shell

https://jiratui.sh/
274•gjvc•23h ago•68 comments

Intel's E2200 "Mount Morgan" IPU at Hot Chips 2025

https://chipsandcheese.com/p/intels-e2200-mount-morgan-ipu-at
81•ingve•15h ago•29 comments

Rewriting Dataframes for MicroHaskell

https://mchav.github.io/rewriting-dataframes-for-microhs/
54•internet_points•3d ago•4 comments

Defeating Nondeterminism in LLM Inference

https://thinkingmachines.ai/blog/defeating-nondeterminism-in-llm-inference/
287•jxmorris12•20h ago•117 comments

C++20 Modules: Practical Insights, Status and TODOs

https://chuanqixu9.github.io/c++/2025/08/14/C++20-Modules.en.html
33•ashvardanian•3d ago

Comments

triknomeister•2h ago
> Yes, we can. C++20 Modules are usable in a Linux + Clang environment. There are also examples showing that C++20 Modules are usable in a Windows environment with MSVC. I have not yet heard of GCC’s C++20 Modules being used in non-trivial projects.

People keep saying this, and yet I do not know of a good example of a real-life project that did this and that I can test. This still seems very much an experimental thing.

Kelteseth•1h ago
For me it is simply because Qt's moc does not support modules yet. This is my last comment from the tracking issue: https://bugreports.qt.io/browse/QTBUG-86697

> C++26 reflection has now been voted in. This would get rid of moc entirely, but I really do not see how it will become widely available in the next 5-10+ years. It would require Qt to move to C++26, but only once compiler support is complete for all 3 compilers AND for the older Linux distros that ship those compilers. For example, MSVC still has no native C++23 flag (in CMake it gets internally altered to C++latest, i.e. C++26), because they told me that they will only enable it once it is considered 100% stable. So I guess we need to add modules support to moc now; waiting another 10 years is not an option for me.

pjmlp•55m ago
Here: Raytracing in a Weekend, using modules:

https://github.com/pjmlp/RaytracingWeekend-CPP

Also shows how to use static libraries alongside modules.

wild_pointer•44m ago
lol

> can haz real life project?

> sure, here's X in a Weekend

pjmlp•41m ago
Unfortunately I lack the Office source code to share with you.

bluGill•32m ago
It is just beyond experimental now, and finally in the early-adopter phase. Those early adopters are trying things and trying to develop best practices - which is to say, as always: they will be trying things that future us will laugh at for how stupid they were.

There are still some features missing from compilers, but enough is there that you can target all 3 major compilers and still get most of modules and benefit from them. However, if you do this, remember you are an early adopter: you need to be prepared to figure out the right way to do things - including fixing things that you got wrong once you figure out what is right.

Also, if you are writing a library, you cannot benefit from modules unless you are willing to force all your consumers to adopt modules. This is not reasonable for major libraries used by many, so they will be waiting until more projects adopt modules.

Still, modules need early adopters, and they show great promise. If you write C++ you should spend a little time playing with them in your current project, even if you can't commit anything.

juliangmp•19m ago
I think it still is in a "well, technically it's possible" state. And I fear it'll remain that way for a while longer.

A while ago I made a small example to test how it would work in an actual project that uses CMake (https://codeberg.org/JulianGmp/cpp-modules-cmake-example). And while it works™, you can't use any compiler-provided modules or header units. Which means that 1) you'll need includes for anything from the standard library (no import std), and 2) you'll also need includes for any third-party library you want to use (see the sketch below).

When I started a new project recently I considered going with modules, but in the end I decided against it, because I don't want to mix modules and includes in one project.
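
For reference, the mixing in question looks roughly like this: until import std and header units are broadly usable, a module unit pulls standard and third-party headers in through the global module fragment. A minimal sketch - fmt stands in for any third-party library, and all file and function names here are invented:

    // widgets.cppm - hypothetical module interface unit
    module;                  // global module fragment starts here
    #include <string>        // standard library still arrives via #include
    #include <vector>
    #include <fmt/core.h>    // ...and so do third-party libraries

    export module widgets;   // the module proper begins here

    export struct Widget {
        std::string name;
    };

    export std::string describe(const std::vector<Widget>& ws) {
        return fmt::format("{} widgets", ws.size());
    }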

menaerus•2h ago
> The data I have obtained from practice ranges from 25% to 45%, excluding the build time of third-party libraries, including the standard library.

> Online, this number varies widely. The most exaggerated figure I recall is a 26x improvement in project compilation speed after a module-based refactoring.

> Furthermore, if a project uses extensive template metaprogramming and stores constexpr variable values in Modules, the compilation speed can easily increase by thousands of times, though we generally do not discuss such cases.

> Apart from these more extreme claims, most reports on C++20 Modules compilation speed improvements are between 10% and 50%.

I'd like to see references to those claims and experiments, size of the codebase, etc. I find it hard to believe the figures, since the bottleneck in large codebases is not compute (e.g. header preprocessing) but memory bandwidth.

skeezyboy•1h ago
> I find it hard to believe the figures, since the bottleneck in large codebases is not compute (e.g. header preprocessing) but memory bandwidth.

source? language? what exactly does memory bandwidth have to do with compilation times in your example?

menaerus•1h ago
Chill out. A compiler is a heavily multithreaded program that utilizes all of the cores in the C and C++ compilation model. Since each thread is doing work, it will obviously also consume memory, no? Computing 101. The total amount of data being touched for reads and writes we call a dataset. In larger codebases the dataset does not fit into the cache. When the dataset does not fit into the cache, the data starts to live in main memory. Accessing data in main memory consumes the memory bandwidth of the system. Try running 64 threads on a 64-core system touching data in memory and you will see for yourself.
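
A rough sketch of that experiment, assuming each thread streams a private buffer much larger than any cache (the thread count, sizes, and names are arbitrary choices, not something from this discussion):

    // bandwidth_probe.cpp - aggregate read bandwidth vs. thread count.
    // If memory bandwidth is the ceiling, GB/s stops scaling well before
    // the core count does. Build with e.g.: clang++ -O2 -std=c++20 ...
    #include <chrono>
    #include <cstdint>
    #include <cstdio>
    #include <cstdlib>
    #include <numeric>
    #include <thread>
    #include <vector>

    int main(int argc, char** argv) {
        const int threads = argc > 1 ? std::atoi(argv[1]) : 8;
        const std::size_t bytes = 1ull << 28;   // 256 MiB per thread
        std::vector<std::uint64_t> sums(threads);
        std::vector<std::thread> pool;

        auto t0 = std::chrono::steady_clock::now();
        for (int t = 0; t < threads; ++t) {
            pool.emplace_back([&sums, t, bytes] {
                std::vector<std::uint64_t> buf(bytes / 8, 1);
                std::uint64_t s = 0;
                for (int pass = 0; pass < 4; ++pass)   // four full sweeps
                    for (auto v : buf) s += v;
                sums[t] = s;                           // defeat dead-code elimination
            });
        }
        for (auto& th : pool) th.join();
        const double secs = std::chrono::duration<double>(
            std::chrono::steady_clock::now() - t0).count();

        const double gb = 4.0 * threads * bytes / 1e9; // bytes swept, in GB
        std::printf("%d threads: ~%.1f GB/s (check %llu)\n", threads, gb / secs,
                    (unsigned long long)std::accumulate(sums.begin(), sums.end(), 0ull));
    }

Vary the thread count and watch whether the aggregate GB/s keeps rising with it.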

compiler-guy•34m ago
Compilers are typically not multithreaded. LLVM certainly isn't, although its linker is. C++ builds are usually many single-threaded compilation processes running in parallel.

menaerus•27m ago
You're nitpicking; that's what I meant. Many processes in parallel or many threads in parallel - the former will achieve better utilization of memory. Regardless, it doesn't invalidate what I said.

bluGill•18m ago
It isn't memory utilization, it is bandwidth. The CPU can only get so many bytes in and out of main memory and only has so much cache. Eventually the cores are fighting each other for access to the main memory they need. There is plenty of memory in the system; the CPU just can't get at enough of it.

NUMA (non-uniform memory access - basically, give each CPU a separate bank of RAM, and if you need something that is in the other bank you ask the other CPU for it) exists because of this. I don't have access to a NUMA machine to see how they compare. My understanding (which could be wrong) is that OS designers are still trying to figure out how to use them well, and they are not expected to do well for all problems.

thechao•17m ago
I was going to reply directly to you, but the re-reply is fine. I don't think your conclusion is wrong, but your analysis is bogus AF. Compiler transforms are usually strongly superpolynomial (quadratic, cubic, or some NP-hard demon); a Knuth-fast pass is going to traverse the entire IR tree under observation. The thing is, the IR tree under observation is usually pretty small; while it won't fit in the local-est cache, it's almost certainly not in main memory after the first sweep. Subsequent trees will be somewhere in the far reaches of memory... but there's an awful lot of work between fetching trees.
unddoch•1h ago
> I'd like to see references to those claims and experiments, size of the codebase, etc. I find it hard to believe the figures, since the bottleneck in large codebases is not compute (e.g. header preprocessing) but memory bandwidth.

Edit: I think I misunderstood what you meant by memory bandwidth at first? Modules reduce the amount of work the compiler does parsing and interpreting C++ code (think constexpr). Even if your compilation infrastructure is constrained by RAM access, modules replace a compute- and RAM-heavy part with the trivial work of loading a module into compiler memory, so it's a win.
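
The constexpr case is the sharpest version of this: a constexpr variable defined in a header is re-evaluated by every translation unit that includes it, while a module stores the finished value in the BMI - presumably where the article's "thousands of times" figure comes from. A hypothetical sketch, with invented names and numbers:

    // tables.cppm - the expensive constexpr table is evaluated once, when
    // the module itself is built; importers load the result from the BMI.
    module;
    #include <array>
    #include <cstddef>

    export module tables;

    constexpr std::array<double, 256> make_table() {
        std::array<double, 256> t{};
        for (std::size_t i = 0; i < t.size(); ++i) {
            double x = static_cast<double>(i) / t.size();
            for (int k = 0; k < 200; ++k)   // nontrivial per-entry work
                x = 0.5 * x * x + 0.25;
            t[i] = x;
        }
        return t;
    }

    export constexpr auto kTable = make_table();  // paid once per module build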

lingolango•1h ago
> since the bottleneck in large codebases is not compute, e.g. header preprocessing, but memory bandwidth.

SSD bandwidth: 4-10 GB/s. RAM bandwidth: 5-10x that, say 40 GB/s.

If compute were not a bottleneck, the entire Linux kernel should compile in less than a second.

menaerus•50m ago
On a 40-core or 64-core machine there's more compute than you will ever need for a compilation process. Compilation is a heavy I/O workload, not a heavy compute workload, in most cases where it actually matters.
lingolango•36m ago
Linux is ~1.5 GB of source text and the output is typically a binary of less than 100 MB. That should take a few hundred milliseconds to read in from an SSD, or be basically instant from the RAM cache, and then a few hundred ms to write out the binary.

So why does it take minutes to compile?

Compilation is entirely compute bound; the inputs and outputs are minuscule data sizes, on the order of megabytes for typical projects - maybe gigabytes for multi-million-line projects, but that is still only a second or two from an SSD.
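
Spelling out that back-of-envelope arithmetic, with the ~5 GB/s SSD figure assumed above:

    read sources:   1.5 GB / 5 GB/s ≈ 0.3 s
    write binary:   0.1 GB / 5 GB/s ≈ 0.02 s
    observed build: minutes of wall clock

so raw file I/O would account for well under 1% of the build time.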

menaerus•23m ago
The output as a result is 100 MB; the process of compilation accumulates magnitudes more data. Evidence is the constant memory pressure you see on 32 GB, 64 GB, or even 128 GB systems. Now, given that the process of compilation on even such high-end systems takes a non-trivial amount of time - tens of minutes - how much data do you think bounces in and out of memory? It accumulates to a lot more than what you suggest.

bluGill•23m ago
I don't build Linux from source, but in my tests on large machines (and my C++ work project has more than 10 million lines of code), somewhere between 40 and 50 cores compile speed starts decreasing as you add more cores. When I moved my source files to a ramdisk the speed got even worse, so I know disk IO isn't the issue (there was a lot of RAM on this machine, so I don't expect we ran low on RAM even with that many cores in use). I don't know how to find the truth, but all signs point to memory bandwidth being the issue.

Of course, the above is specific to the machines I did my testing on; a different machine may differ from my setup. Still, my experience matches the claim: at 40 cores, memory bandwidth is the bottleneck, not CPU speed.

Most people don't have 40+ core machines to play with, and so will not see those results. The machines I tested on cost > $10,000, so most would argue that is not affordable.

menaerus•17m ago
One of the biggest reasons people see so much compilation speed improvement on Apple M chips is the massive memory bandwidth improvement in contrast to other machines, even some older servers: 100 GB/s of main-memory bandwidth from a single core. It doesn't scale linearly - the per-core rate starts to drop as you add more and more cores to the workload, due to L3 contention I'd say - but it goes up to 200 GB/s IIRC.

Someone•21m ago
> So why does it take minutes to compile?

I’m not claiming anything about it being I/O or compute bound, but you are missing some sources of I/O:

- the compiler reads many source files (e.g. headers) multiple times

- the compiler writes and then reads lots of intermediate data

- the OS may have to swap out memory

Also, there may be resource contention that makes the system do neither I/O nor compute for part of the build.

ho_schi•21m ago
Looking at the examples for using Clang (I use GCC, btw):

    clang++ -std=c++20 Hello.cppm --precompile -o Hello.pcm
    clang++ -std=c++20 use.cpp -fmodule-file=Hello=Hello.pcm Hello.pcm -o Hello.out
    ./Hello.out

Why is something that is supposed to make things easy and secure so complicated?

I'm used to:

    g++ -o hello hello.cpp

It can use headers, or not use headers; it doesn't matter. That's the decision of the source file. To be fair, the option -std=c++20 probably won't be necessary in the future.
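
For context, a minimal pair of files that those two clang++ invocations could build might look like the following; only the file and module names come from the commands above, the contents are assumed:

    // Hello.cppm - module interface unit
    module;
    #include <iostream>

    export module Hello;

    export void hello() {
        std::cout << "Hello, modules!\n";
    }

    // use.cpp - consumer translation unit
    import Hello;

    int main() {
        hello();
    }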

I recommend skimming over this issue from Meson:

https://github.com/mesonbuild/meson/issues/5024

The last few blog posts from a Meson developer provide some insight into why Meson doesn't support modules yet:

https://nibblestew.blogspot.com/

bluGill•14m ago
Because hello is so simple, you don't need complicated. When you are doing something complicated, though, you have to accept that it is complicated. I could concatenate all 20 million lines of C++ (round number) that I work with into one file, and building would be as simple as your hello example - but that simple build comes at great cost (you try working with a 20-million-line file, and then merging it with changes someone else made), so I'm willing to accept more complex builds.

ho_schi•2m ago
Thank you. That's right, and that's usually where issues arise and tools are challenged. If the hello-world case already starts out this complicated, I'm going to be careful.

I'm eager to gather info, but the weak spots of headers (and macros) are obvious. I'll probably hold a waiting position for an undefined time - at least as long as Meson doesn't support them.

Wikipedia contains false info about tooling: https://en.wikipedia.org/wiki/Modules_(C%2B%2B)#Tooling_supp...

Meson doesn't support modules as of 2025-09-11.

PS: I'm into new stuff when it looks stable and the benefits are obvious. But this looks complicated, and backing out of complicated stuff is painful when it becomes necessary.

delta_p_delta_x•8m ago
> Why is something that is supposed to make things easy and secure so complicated?
>
> I'm used to:
>
>     g++ -o hello hello.cpp

That is simple because C++ inherited C's simplistic, primitive, and unsafe compilation and abstraction model of brute-force textual inclusion. When you scale this to a large project with hundreds of thousands of translation units, every command-line invocation becomes a huge soup of flags, and plain Makefiles become inscrutably complicated.

Every other reasonably recent programming language and ecosystem that isn't a direct superset of C has some description of a directed acyclic graph of dependencies, whether it be requirements.txt, Cargo.toml, Maven, dotnet and NuGet .csproj files, Go modules, OPAM, the PowerShell Gallery, and more.

C++20 modules are a very good thing. There are two problems with them: first, we didn't have a working and correct compiler implementation before the paper was accepted into C++20; and second, the built/binary module interface specification is not fixed, so BMIs aren't (yet) portable across compilers.

The Meson developer is notorious for stirring the pot with respect to both build-system competition and C++20 modules. The Reddit thread on his latest blog post provides a searing criticism of why he is badly mistaken: https://www.reddit.com/r/cpp/comments/1n53mpl/we_need_to_ser...