frontpage.

Same-day upstream Linux support for Snapdragon 8 Elite Gen 5

https://www.qualcomm.com/developer/blog/2025/10/same-day-snapdragon-8-elite-gen-5-upstream-linux-...
93•mfilion•2h ago•61 comments

The Input Stack on Linux: An End-to-End Architecture Overview

https://venam.net/blog/unix/2025/11/27/input_devices_linux.html
18•venamresm__•1h ago•0 comments

Arthur Conan Doyle explored men’s mental health through Sherlock Holmes

https://scienceclock.com/arthur-conan-doyle-delved-into-mens-mental-health-through-his-sherlock-h...
170•PikelEmi•7h ago•205 comments

The VanDersarl Blériot: a 1911 airplane homebuilt by teenage brothers

https://www.historynet.com/vandersarl-bleriot/
12•ForHackernews•1h ago•2 comments

Quake Engine Indicators

https://fabiensanglard.net/quake_indicators/index.html
55•liquid_x•3d ago•6 comments

We're Losing Our Voice to LLMs

https://tonyalicea.dev/blog/were-losing-our-voice-to-llms/
256•TonyAlicea10•3h ago•253 comments

Linux Kernel Explorer

https://reverser.dev/linux-kernel-explorer
433•tanelpoder•12h ago•64 comments

Penpot: The Open-Source Figma

https://github.com/penpot/penpot
580•selvan•16h ago•137 comments

Show HN: Runprompt – run .prompt files from the command line

https://github.com/chr15m/runprompt
56•chr15m•3h ago•22 comments

Pakistan says rooftop solar output to exceed grid demand in some hubs next year

https://www.reuters.com/sustainability/boards-policy-regulation/pakistan-says-rooftop-solar-outpu...
36•toomuchtodo•1h ago•12 comments

Abuse of the nullish coalescing operator in JS/TS

https://fredrikmalmo.com/blog/js-ts-nullish-empty-string-coalescing
9•fred_•6d ago•1 comment

Show HN: MkSlides – Markdown to slides with a similar workflow to MkDocs

https://github.com/MartenBE/mkslides
42•MartenBE•5h ago•6 comments

Tell HN: Happy Thanksgiving

34•prodigycorp•12h ago•6 comments

DIY NAS: 2026 Edition

https://blog.briancmoses.com/2025/11/diy-nas-2026-edition.html
329•sashk•15h ago•196 comments

Ray Marching Soft Shadows in 2D (2020)

https://www.rykap.com/2020/09/23/distance-fields/
146•memalign•10h ago•24 comments

Mixpanel Security Breach

https://mixpanel.com/blog/sms-security-incident/
155•jaredwiener•11h ago•95 comments

Seagate achieves 6.9TB storage capacity per platter

https://www.tomshardware.com/pc-components/hdds/seagate-achieves-a-whopping-6-9tb-storage-capacit...
26•elorant•1h ago•16 comments

The Concrete Pontoons of Bristol

https://thecretefleet.com/blog/f/the-concrete-pontoons-of-bristol
29•surprisetalk•6d ago•1 comment

Show HN: SyncKit – Offline-first sync engine (Rust/WASM and TypeScript)

https://github.com/Dancode-188/synckit
36•danbitengo•3h ago•11 comments

Interactive λ-Reduction

https://deltanets.org/
93•jy14898•2d ago•21 comments

Music eases surgery and speeds recovery, study finds

https://www.bbc.com/news/articles/c231dv9zpz3o
159•1659447091•13h ago•74 comments

G0-G3 corners, visualised: learn what "Apple corners" are

https://www.printables.com/model/1490911-g0-g3-corners-visualised-learn-what-apple-corners
108•dgroshev•4d ago•53 comments

Willis Whitfield: Creator of clean room technology still in use today (2024)

https://www.sandia.gov/labnews/2024/04/04/willis-whitfield-a-simple-man-with-a-simple-solution-th...
133•rbanffy•2d ago•50 comments

Gemini CLI Tips and Tricks for Agentic Coding

https://github.com/addyosmani/gemini-cli-tips
362•ayoisaiah•1d ago•127 comments

Protect Public School Students from Surveillance of Off-Campus Speech

https://www.eff.org/deeplinks/2025/11/eff-arizona-federal-court-protect-public-school-students-su...
32•hn_acker•2h ago•9 comments

S&box is now an open source game engine

https://sbox.game/news/update-25-11-26
388•MaximilianEmel•22h ago•133 comments

Running Unsupported iOS on Deprecated Devices

https://nyansatan.github.io/run-unsupported-ios/
197•OuterVale•19h ago•97 comments

Coq: The World's Best Macro Assembler? (2013) [pdf]

https://nickbenton.name/coqasm.pdf
116•addaon•13h ago•45 comments

Functional Data Structures and Algorithms: a Proof Assistant Approach

https://fdsa-book.net/
100•SchwKatze•16h ago•14 comments

Voyager 1 is about to reach one light-day from Earth

https://scienceclock.com/voyager-1-is-about-to-reach-one-light-day-from-earth/
1019•ashishgupta2209•1d ago•353 comments

Link Time Optimizations: New Way to Do Compiler Optimizations

https://johnnysswlab.com/link-time-optimizations-new-way-to-do-compiler-optimizations/
39•signa11•6mo ago

Comments

sakex•6mo ago
Maybe add the date to the title, because it's hardly new at this point
vsl•6mo ago
...or in 2020 (the year of the article).
Deukhoofd•6mo ago
What do you mean, new? LTO has been in GCC since 2011. It's old enough to have a social media account in most jurisdictions.
jeffbee•6mo ago
Pretty sure MSVC ".NET" was doing link-time whole-program optimization in 2001.
andyayers•6mo ago
HP-UX compilers were doing this back in 1993.
jeffbee•6mo ago
Oh yeah, well ... actually I got nothin'. You win.

I will just throw in some nostalgia for how good that compiler was. My college roommate brought an HP pizza box that his dad secured from HP, and the way the C compiler quoted chapter and verse from ISO C in its error messages was impressive.

abainbridge•6mo ago
Or academics in 1986: https://dl.acm.org/doi/abs/10.1145/13310.13338

The idea of optimizations running at different stages in the build, with different visibility of the whole program, was discussed in 1979, but the world was so different back then that the discussion seems foreign. https://dl.acm.org/doi/pdf/10.1145/872732.806974

srean•6mo ago
Yes, and if I remember correctly there used to be Linux distros that shipped all their binaries LTO'd.
phkahler•6mo ago
I tried LTO with Solvespace 4 years ago and got about 15 percent better performance:

https://github.com/solvespace/solvespace/issues/972

Build time was terrible, taking a few minutes vs. 30-40 seconds for a full build. Have they done anything to use multiple cores for LTO? It only used one core for that.

I also tested OpenMP, which was obviously a bigger win. More recently I ran the same test after upgrading from an AMD 2400G to a 5700G, which has double the cores and about 1.5x the IPC. The result was a solid 3x improvement, so we scale well with cores, going from 4 to 8.

wahern•6mo ago
Both clang and GCC support multi-core LTO, as does Rust. However, you have to partition the code, so the more cores you use, the less benefit you get from LTO. Rust partitions by crate by default, but it can increase parallelism by partitioning within each crate. I think "fat LTO" is the term typically used for whole-program LTO (or, in Rust's case, whole-crate LTO), whereas "thin LTO" is what you get when you LTO the partitions and then link those together normally. For clang and GCC, you can either have them automatically partition the code for thin LTO, or do it explicitly via your Makefile rules[1].

[1] Interestingly, GCC actually invokes Make internally to implement thin LTO, which lets it play nice with GNU Make's job control and obey the -j switch.
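
A minimal sketch of what this looks like in practice (flag spellings are per the GCC and Clang docs; exact defaults vary by version, and ThinLTO may want a plugin-aware linker such as lld):

    /* lib.c */
    int add(int a, int b) { return a + b; }

    /* main.c */
    extern int add(int a, int b);
    int main(void) { return add(1, 2); }

    /* Fat LTO: one partition, whole-program view, serial optimization:
     *   gcc -O2 -flto -flto-partition=one lib.c main.c -o demo
     *
     * Partitioned/parallel LTO:
     *   gcc   -O2 -flto=auto lib.c main.c -o demo    (GCC, parallel partitions)
     *   clang -O2 -flto=thin lib.c main.c -o demo    (Clang ThinLTO)
     */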

WalterBright•6mo ago
Link time optimizations were done in the 1980s if I recall correctly.

I never tried to implement them, finding it easier and more effective for the compiler to simply compile all the source files at the same time.

The D compiler is designed to be able to build either one object file per source file, or a single object file that combines all of the source files. Most people choose the single object file.

srean•6mo ago
I think MLton does it this way.

http://mlton.org/WholeProgramOptimization

Dynamically linked and dynamically loaded libraries are useful, though (paid for with their own problems, of course).

tester756•6mo ago
Yeah, generating many object files seems like a weird thing. Maybe it was a good thing decades ago, but now?

Because then you need to link them, and thus you need some kind of linker.

Just generate one output file and skip the linker.

WalterBright•6mo ago
I've considered many times doing just that.
tester756•6mo ago
And what was the result/conclusion of such considerations?
WalterBright•6mo ago
Not worth the effort.

1. linkers have increased enormously in complexity

2. little commonality between linkers for different platforms

3. compatibility with the standalone linkers

4. trying to keep up with constant enhancement of existing linkers

yencabulator•6mo ago
Not maybe. Sufficient RAM for compilation was a serious issue back in the day.
kazinator•6mo ago
Sure, and if any file is touched, just process them all.
adrian_b•6mo ago
Some compilers had incremental compilation to handle this during development builds.

Then only the functions touched inside some file would be recompiled, not the remainder of the file or other files.

Obviously, choosing incremental compilation inhibited some optimizations.

adrian_b•6mo ago
Generating many object files is pointless for building an executable or a dynamic library, but it remains the desired behavior for building a static library.

Many software projects that must generate multiple executables are better structured as a static library plus one source file with the "main" function for each executable.

WalterBright•6mo ago
One thing the D compiler can do is generate a library in one step (no need to use the librarian). Give it a bunch of source files and object files on the command line, specify a library as the output, and boom! The library is created directly (the source files are compiled and the object files added).

I haven't used a librarian program in maybe a decade.

senkora•6mo ago
In C++, there is a trick to get this behavior called "unity builds", where you include all of your source files into a single file and then invoke the compiler on that file.

Of course, being C++, this subtly changes behavior and must be done carefully. I like this article that explains the ins and outs of using unity builds: https://austinmorlan.com/posts/unity_jumbo_build/
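
A minimal sketch of the mechanism (file names illustrative):

    /* unity.c -- the only file handed to the compiler:
     *   cc -O2 unity.c -o app
     * All three .c files now share one translation unit, so file-scope
     * statics and leftover macros can collide across files. */
    #include "input.c"
    #include "render.c"
    #include "main.c"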

WalterBright•6mo ago
> this subtly changes behavior

The D module design ensures that module imports are independent of each other and are independent of the importer.

YorickPeterse•6mo ago
For Inko (https://inko-lang.org/) I went a step further: it generates an object file for each type, instead of per source file or per project. The idea is that if e.g. a generic type is specialized into a new instance (or has some methods added to it), only the object file for that type needs to be re-generated. This in turn should allow for much more fine-grained incremental compilation.

The downside is that you can end up with thousands of object files, but for modern linkers that isn't a problem.

dooglius•6mo ago
It sounds like this would prevent the inherent concurrency you would get out of handling files separately?
WalterBright•6mo ago
It's complicated and not at all clear. For example, most modules import other modules. With separate compilation, most of the modules need to be compiled multiple times; with all-together compilation, each is compiled only once.

On the other hand, the optimizer and code generator can be run concurrently in multiple processes/threads.

Remnant44•6mo ago
Link time optimization is definitely not new, but it is incredibly powerful. I have personally hit situations where the inability to inline functions from a static library without LTO cut performance in half.

It's easy to dismiss a basic article like this, but it's basically a discovery that every junior engineer will make, and it's useful to talk about those too!
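
For the static-library case, the cure (short of restructuring) is to put LTO bitcode in the archive and link with LTO enabled; a sketch using GCC's plugin-aware wrappers (file names illustrative):

    /* util.c */
    int square(int x) { return x * x; }

    /* Compile with LTO bitcode, archive with the wrapper, link with -flto
     * so square() can be inlined into callers in app.c:
     *   gcc -O2 -flto -c util.c
     *   gcc-ar rcs libutil.a util.o    (plain ar may not index LTO-only objects)
     *   gcc -O2 -flto app.c -L. -lutil -o app
     */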

srean•6mo ago
The inline keyword should really have applied to call sites rather than definitions.

Perhaps the language designers thought that annotating every call site of a function that needs to be inlined everywhere would lead to verbose code. In any case, it's a weak hint that compilers generally treat with much disdain.
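
Some compilers have since grown call-site control, for what it's worth; a sketch (Clang's statement attribute needs a recent Clang and C23/C++ attribute syntax, and flatten is a GCC/Clang extension):

    static int hot(int x) { return x * x; }

    int caller(int x) {
        int r;
        [[clang::always_inline]] r = hot(x);  /* Clang: force-inline this call */
        return r;
    }

    __attribute__((flatten))  /* GCC/Clang: inline every call inside caller2 */
    int caller2(int x) {
        return hot(x);
    }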

lilyball•6mo ago
ffmpeg has a lot of assembly code in it, so it's a very odd choice of program for this kind of test, as LTO is presumably not going to do anything to the assembly.
mcdeltat•6mo ago
Different .c/.cpp files being a barrier to optimisation always struck me as an oddly low bar for the 21st century. Yes, I know the history of compilation units, but these days that's not how we use the system. We don't split code into source files for memory reasons; we do it for organisation. On a small or medium codebase and a decent computer you could probably fit dozens of source files into memory to compile and optimise together. The memory-constraint problem has largely disappeared.

So why do we still use the old way? LTO seems effectively like a hack to compensate for the fact that the compilation model doesn't fit our modern needs. Obviously this will never change in C/C++ due to momentum and backwards compatibility. But a man can dream.

kazinator•6mo ago
LTO breaks code that assumes the compiler has no idea what is behind an external function call, and therefore cannot assume anything about the values of objects the callee might have access to:

  void handle_secret(void) {
      struct key obj;
      /* ... use obj for sensitive work ... */
      securely_wipe_memory(&obj, sizeof obj);
      return;
  }
Under LTO, the compiler peeks into securely_wipe_memory and sees that the call has no effect: obj is a local variable with no "next use" in the data-flow graph, so the stores are dead. Thus the call is removed.

Another example:

  void use_object(obj_t *object) {
      /* ... work that must keep object alive ... */
      gc_protect(object);
      return;
  }
Here, gc_protect is an empty function. Without LTO, the compiler must assume that the value of object is required for the gc_protect call and so the generated code has to hang on to that value until that call is made. With LTO, the compiler peeks at the definition of gc_protect and sees the ruse: the function is empty! Therefore, that line of code does not represent a use of the variable. The generated code can use the register or memory location for something else long before that line. If the garbage collector goes off in that part of the code, the object is prematurely collected (if what was lost happens to be the last reference to it).

Some distros have played with turning on LTO as a default compiler option for building packages. It's a very, very bad idea.
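
The usual defence for the wiping case is to route the call through something the optimizer cannot see through even with LTO, e.g. a volatile function pointer (a sketch; C11's optional memset_s, where available, gives the same guarantee by definition):

    #include <string.h>

    /* A volatile pointer must be reloaded at each use, so the compiler
     * cannot resolve, inline, or discard the call made through it. */
    static void *(*const volatile wipe_fn)(void *, int, size_t) = memset;

    void securely_wipe_memory(void *p, size_t n) {
        wipe_fn(p, 0, n);
    }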

djmips•6mo ago
So slow
jordiburgos•6mo ago
Any idea of the typical performance improvements from LTO?