frontpage.

Black Sabbath's Ozzy Osbourne dies aged 76

https://www.bbc.co.uk/news/live/cn0qq5nyxn0t
194•fantunes•44m ago•32 comments

Tiny Code Reader: a $7 QR code sensor

https://excamera.substack.com/p/tiny-code-reader-a-7-qr-code-sensor
64•jamesbowman•2h ago•18 comments

Show HN: Any-LLM – lightweight and open-source router to access any LLM Provider

https://github.com/mozilla-ai/any-llm
28•AMeckes•1h ago•12 comments

First Hubble Telescope Images of Interstellar Comet 3I/Atlas

https://bsky.app/profile/astrafoxen.bsky.social/post/3luiwnar3j22o
38•jandrewrogers•2h ago•5 comments

Gemini North telescope discovers long-predicted stellar companion of Betelgeuse

https://www.science.org/content/article/betelgeuse-s-long-predicted-stellar-companion-may-have-been-found-last
47•layer8•2h ago•12 comments

Fun with Gzip Bombs and Email Clients

https://www.grepular.com/Fun_with_Gzip_Bombs_and_Email_Clients
6•bundie•7m ago•0 comments

Show HN: Compass CNC – Open-Source Handheld CNC Router

https://www.compassrouter.com
10•camchaney•3d ago•1 comment

Go allocation probe

https://www.scattered-thoughts.net/writing/go-allocation-probe/
62•blenderob•4h ago•19 comments

Swift-Erlang-Actor-System

https://forums.swift.org/t/introducing-swift-erlang-actor-system/81248
3•todsacerdoti•6m ago•0 comments

Better Auth (YC X25) Is Hiring

https://www.ycombinator.com/companies/better-auth/jobs/N0CtN58-staff-engineer
1•bekacru•2h ago

Font Comparison: Atkinson Hyperlegible Mono vs. JetBrains Mono and Fira Code

https://www.anthes.is/font-comparison-review-atkinson-hyperlegible-mono.html
111•maybebyte•4h ago•90 comments

Facts don't change minds, structure does

https://vasily.cc/blog/facts-dont-change-minds/
149•staph•3h ago•101 comments

OSS Rebuild: Open Source, Rebuilt to Last

https://security.googleblog.com/2025/07/introducing-oss-rebuild-open-source.html
80•tasn•5h ago•30 comments

TODOs Aren't for Doing

https://sophiebits.com/2025/07/21/todos-arent-for-doing
158•todsacerdoti•5h ago•121 comments

Bypassing Watermark Implementations

https://blog.kulkan.com/bypassing-watermark-implementations-fe39e98ca22b
19•laserspeed•3h ago•5 comments

Cosmic Dawn: The Untold Story of the James Webb Space Telescope

https://plus.nasa.gov/video/cosmic-dawn-the-untold-story-of-the-james-webb-space-telescope/
15•baal80spam•2d ago•1 comment

Don't animate height

https://www.granola.ai/blog/dont-animate-height
112•birdculture•2d ago•58 comments

Blip: Peer-to-Peer Massive File Sharing by Former Dropbox Engineers

https://blip.net/
103•miles•3h ago•77 comments

Launch HN: Promi (YC S24) – Personalize e-commerce discounts and retail offers

7•pmoot•2h ago•4 comments

Yt-transcriber – Give a YouTube URL and get a transcription

https://github.com/pmarreck/yt-transcriber
122•Bluestein•5h ago•31 comments

1KB JavaScript Numbers Station

https://shkspr.mobi/blog/2025/07/1kb-js-numbers-station/
36•blenderob•4h ago•19 comments

Reverse Proxy Deep Dive: Why HTTP Parsing at the Edge Is Harder Than It Looks

https://startwithawhy.com/reverseproxy/2025/07/20/ReverseProxy-Deep-Dive-Part2.html
30•miggy•4h ago•4 comments

Show HN: The Magic of Code – book about the wonders and weirdness of computation

https://themagicofcode.com/sample/
60•arbesman•6h ago•17 comments

Subliminal Learning: Models Transmit Behaviors via Hidden Signals in Data

https://alignment.anthropic.com/2025/subliminal-learning/
4•treebrained•1h ago•1 comment

DaisyUI: Tailwind CSS Components

https://daisyui.com/
146•a_bored_husky•5h ago•121 comments

Stop Pretending LLMs Have Feelings: Media's Dangerous AI Anthropomorphism Problem

https://www.readtpa.com/p/stop-pretending-chatbots-have-feelings
22•labrador•1h ago•10 comments

An unprecedented window into how diseases take hold years before symptoms appear

https://www.bloomberg.com/news/articles/2025-07-18/what-scientists-learned-scanning-the-bodies-of-100-000-brits
164•helsinkiandrew•4d ago•81 comments

How to Firefox

https://kau.sh/blog/how-to-firefox/
615•Vinnl•8h ago•361 comments

CSS's problems are Tailwind's problems

https://colton.dev/blog/tailwind-is-the-worst-of-all-worlds/
83•coltonv•2h ago•116 comments

MakeShift: Security Analysis of Shimano Di2 Wireless Gear Shifting in Bicycles

https://www.usenix.org/conference/woot24/presentation/motallebighomi
24•motorest•3d ago•33 comments

The .a file is a relic: Why static archives were a bad idea all along

https://medium.com/@eyal.itkin/the-a-file-is-a-relic-why-static-archives-were-a-bad-idea-all-along-8cd1cf6310c5
61•eyalitki•3d ago

Comments

tux3•8h ago
I actually wrote a tool to fix exactly this asymmetry between dynamic libraries (a single object file) and static libraries (actually a bag of loose objects).

I never really advertised it, but what it does is take all the objects inside your static library and tell the linker to make a static library that contains a single merged object.

https://github.com/tux3/armerge

The huge advantage is that with a single object, everything works just like it would for a dynamic library. You can keep a set of public symbols and hide your private symbols, so you don't have pollution issues.

Objects that aren't needed by any public symbol (recursively) are discarded properly, so unlike --whole-archive you still get the size benefits of static linking.

And all your users don't need to handle anything new or to know about a new format, at the end of the day you still just ship a regular .a static library. It just happens to contain a single object.

I think the article's suggestion of a new ET_STAT is a good idea, actually. But in the meantime the closest to that is probably to use ET_REL, a single relocatable object in a traditional ar archive.

stabbles•7h ago
It sounds interesting, but I think it's better if a linker could resolve dependencies of static libraries like it's done with shared libraries. Then you can update individual files without having to worry about outdated symbols in these merged files.
tux3•7h ago
If you mean updating some dependency without recompiling the final binary, that's not possible with static linking.

However the ELF format does support complex symbol resolution, even for static objects. You can have weak and optional symbols, ELF interposition to override a symbol, and so forth.

But I feel like for most libraries it's best to keep it simple, unless you really need the complexity.

amluto•7h ago
Is there any actual functional difference between the author’s proposed ET_STAT and an appropriately prepared ET_REL file?

For that matter, I’ve occasionally wondered if there’s any real reason you can’t statically link an ET_DYN (.so) file other than lack of linker support.

tux3•7h ago
I think everything that you would want to do with an ET_STAT file is possible today, but it is a little off the beaten path, and the toolchain command-line options today aren't as simple as for dynamic libraries (e.g. figuring out how to hide symbols in a relocatable object is completely different on the GNU toolchain, on LLVM on Linux, and on Apple's LLVM, which also supports relocatable objects but uses a whole different object file format).

I would also be very happy to have one less use of the legacy ar archive format. A little-known fact is that this format is actually not standard at all; there are several variants floating around that are sometimes incompatible (Debian ar, BSD ar, GNU ar, ...)

harryvederci•7h ago
Minor suggestion: the article refers to a RHEL 6 developer guide section about static linking. Maybe a more recent article can be used (if their viewpoint hasn't changed).
dzaima•7h ago
How possible would it be to have a utility that merges multiple .o files (or equivalently a .a file) into one .o file, via changing all hidden symbols to local ones (i.e. like C's "static")? Would solve the private symbols leaking out, and give a single object file that's guaranteed to link as a whole. Or would that break too many assumptions made by other things?
Joker_vD•7h ago
Like, a linker, with "objcopy --strip-symbols" run as the post-step? I believe you can do this even today.
dzaima•7h ago
--localize-hidden seems to be more what I was thinking of. So this works:

    ld --relocatable --whole-archive crappy-regular-static-archive.a -o merged.o
    objcopy --localize-hidden merged.o merged.o
This should (?) then solve most issues in the article, except that including the same library twice still results in an error.
reactordev•7h ago
I did this with my dependencies for my game engine. Built them all as libs and used the linker to merge them all together. Makes building my codebase as easy as -llibutils
benreesman•7h ago
I routinely tear apart badly laid-out .a files and re-ar them into something useful. It's a few lines of bash.
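A minimal sketch of that kind of repacking, assuming GNU binutils and hypothetical library names (and ignoring the duplicate-member edge case raised below): extract everything, then re-archive with a fresh symbol index.

    mkdir tmp && cd tmp
    ar x ../libbadly-packed.a          # extract every member
    ar rcs ../libuseful.a *.o          # rebuild with a symbol index (the s modifier)
    cd .. && rm -r tmp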
tux3•7h ago
This works, but scripting with the ar tool is annoying because it doesn't handle all the edge cases of the .a format.

For instance if two libraries have a source file foo.c with the same name, you can end up with two foo.o, and when you extract them they overwrite each other. So you might think to rename them, but actually this nonsense can happen with two foo.o objects in the same archive.

The errors you get when running into these are not fun to debug.
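For what it's worth, GNU ar does have a little-known escape hatch for the duplicate-name case: the N count modifier, which selects the Nth member with a given name. A sketch, assuming GNU binutils and a hypothetical libdup.a:

    ar xN 1 libdup.a foo.o && mv foo.o foo-1.o   # first foo.o
    ar xN 2 libdup.a foo.o && mv foo.o foo-2.o   # second foo.o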

benreesman•7h ago
This is the nastiest one in my `libmodern-cpp` suite: https://gist.github.com/b7r6/31a055e890eaaa9e09b260358da897b....

It took a few minutes, probably has a few edge cases I haven't banged out yet, and now I get to `-l` and I can deploy with `rsync` instead of fucking Docker or something.

I take that deal.

benreesman•6h ago
`boost` is a little sticky too: https://gist.github.com/b7r6/e9d56c0f6d55bc0620b2ce190e15d44...

but for your trouble: https://gist.github.com/b7r6/0cc4248e24288551bcc06281c831148...

If there's interest in this I can make a priority out of trying to get it open-sourced.

tux3•5h ago
Yes, boost is one of those that gave me the biggest trouble as well.

I feel like we really need better toolchains in the first place. None of this intrinsically needs to be made complex; it's all a lack of proper support in the standard tools.

benreesman•52m ago
It's not, though. As you can see from building two of the most notoriously nasty libraries on God's earth, emitting reasonable `pkg-config` is trivial: it's string concatenation.

The problem is misaligned incentives: CMake is bad, but it was sort of in the right place at the right time and became a semi-standard, and it's not in the interests of people who work in the CMake ecosystem to emit correct standard artifact manifests.

Dynamic linking by default is bad, but the gravy train on that runs from Docker to AWS to insecure-by-design TLS libraries.

The fix is for a few people who care more about good computing than money or fame to do simple shit like I'm doing above and make it available. CMake will be very useful in destroying CMake: it already encodes the same information that correct `pkg-config` needs.

amiga386•7h ago
> Yet, what if the logger’s ctor function is implemented in a different object file?

This is a contrived example akin to "what if I only know the name of the function at runtime and have to dlsym()"?

Have a macro that "enables use of" the logger that the API user must place in global scope, so it can write "extern ctor_name;". Or have library-specific additions for LDFLAGS to add --undefined=ctor_name

There are workarounds for this niche case, and it doesn't add up to ".a files were a bad idea"; that's just clickbait. You'll appreciate static linkage more on the day after your program survives a dynamic linker exploit

> Every non-static function in the SDK is suddenly a possible cause of naming conflict

Has this person never written a C library before? Step 1: make all globals/functions static unless they're for export. Step 2: give all exported symbols and public header definitions a prefix, like "mylibname_", because linkage has a global namespace. C++ namespaces are just a formalisation of this
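A sketch of the --undefined workaround with hypothetical names; the flag plants an undefined reference so the linker pulls the ctor's object out of the archive even though nothing calls it directly:

    # mylib_logger_ctor lives in an archive member nothing references
    cc main.c -Wl,--undefined=mylib_logger_ctor -L. -lmylib -o app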

Joker_vD•4h ago
> This is a contrived example akin to "what if I only know the name of the function at runtime and have to dlsym()"?

Well, you just do what the standard Linux loader does: iterate through the .so's in your library path, loading them one by one and doing dlsym() until it succeeds :)

Okay, the dynamic loader actually only tries the .so's whose names are explicitly mentioned as DT_NEEDED in the .dynamic section, but it still is an interesting design choice that the functions being imported are not actually bound to the libraries; you just have a list of shared objects, and a list of functions that those shared objects, in totality, should provide you with.

lokar•3h ago
Also, don’t use automatic module init, make the user call an init function at startup.

And prefix everything in your library with a unique string.

layer8•2h ago
What if you use two libraries A and B that both happen to use library C under the hood? Is the application expected to initialize all dependencies in the right order at the top level? Or is library initialization supposed to be idempotent?

This all works as long as libraries are “flat”, but doesn’t scale very well once libraries are built on top of each other and want to hide implementation details.

lokar•1h ago
The call to init should be idempotent
layer8•32m ago
That can be difficult in a multi-threaded environment with dynamically loaded shared libraries. Or at least it isn’t something that’s generally expected to be guaranteed to work.
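For what it's worth, the usual building block for the in-process part is pthread_once, which is guaranteed to run the init exactly once even under concurrent callers. A minimal sketch with hypothetical names (it doesn't address dlopen/dlclose lifetimes):

    cat > mylib_init.c <<'EOF'
    #include <pthread.h>

    static pthread_once_t mylib_once = PTHREAD_ONCE_INIT;

    static void mylib_do_init(void) {
        /* one-time setup goes here */
    }

    /* safe to call from any thread, any number of times */
    void mylib_init(void) {
        pthread_once(&mylib_once, mylib_do_init);
    }
    EOF
    cc -c -pthread mylib_init.c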
eyalitki•1h ago
Agree, there should be a prefix. But if 2 of my dependencies didn't use a prefix, why is it my fault when I fail to link against them?

Also, some managers object to a prefix within non-API functions, and frankly I can understand them.

benreesman•7h ago
It is unclear to me what the author's point is. It seems to center on the example of DPDK being difficult to link (and it is a bear; I've done it recently).

But it's full of strawmen and falsehoods, the most notable being the claims about the deficiencies of pkg-config. pkg-config works great, it is just very rarely produced correctly by CMake.

I have tooling and a growing set of libraries that I'll probably open source at some point for producing correct pkg-config from packages that only do lazy CMake. It's glorious. Want abseil? -labsl.

Static libraries have lots of game-changing advantages, but performance, security, and portability are the biggest ones.

People with the will and/or resources (FAANGs, HFT) would laugh in your face if you proposed DLL hell as standard operating procedure. That shit is for the plebs.

It's like symbol stripping: do you think maintainers trip an assert and see a wall of inscrutable hex? They do not.

Vendors like things good for vendors. They market these things as being good for users.

throwawayffffas•7h ago
Couldn't agree more with you. The whole reason Docker exists is to avoid having to deal with dynamic libraries: we package the whole userland and ship it, just to avoid dealing with different dynamic-link libraries across systems.
benreesman•7h ago
Right, the popularity of Docker is proof of what users want.

The implementation of Docker is proof of how much money you're expected to pay Bezos to run anything in 2025.

ethin•7h ago
The only exception to this general rule (which, to be clear, I agree with) is when your code for whatever reason links to LGPL-licensed code. A project I'm a major contributor to does this (we have no choice but to use these libraries, due to the requirements we have, though we do it via implib.so (well, okay, the plan is to do that)), and so dynamic linking/DLL hell is the only path we are able to take. If we link statically to the libraries, the LGPL pretty much becomes the GPL.
benreesman•6h ago
Sure, there are use cases. Extensions to e.g. Python are a perfectly reasonable use case for `dlopen` (hooking DNS on all modern Linux is...probably not for our benefit).

There are use cases for dynamic linking. It's just user-hostile as a mandatory default, for a bunch of boring and banal reasons: Kitware doesn't want `pkg-config` to work, because who would use CMake if they had straightforward alternatives? The Docker Industrial Complex has no reason to exist in a world where Linus has been holding the line on ABI compatibility for 30 years.

Dynamic linking is fine as an option, I think it's very reasonable to ship a `.so` alongside `.a` and other artifacts.

Forcing it on everyone by keeping `pkg-config` and `musl` broken is a more costly own goal for computing than Tony Hoare's famous billion-dollar mistake.

sp1rit•7h ago
> Static libraries have lots of game-changing advantages, but performance, security, and portability are the biggest ones.

No idea how you come to that conclusion, as they are definitely no more secure than shared libraries. Rather the opposite is true, given that you (as end user) are usually able to replace a shared library with a newer version in order to fix security issues. Better portability is also questionable, but I guess it depends on your definition of portable.

benreesman•6h ago
Knowing what code runs when I invoke an executable or grant it permissions is a fucking prerequisite for any kind of fucking security.

Portability is to any fucking kernel in a decade at the ABI level. You don't sound stupid, which means you're being dishonest. Take it somewhere else before this gets old-school Linus.

I have no fucking patience when it comes to either Drepper and his goons or the useful idiots parroting that tripe at the expense of less technical people.

edit: I don't like losing my temper anywhere, especially in a community where I go way back. I'd like to clarify that I see this very much in terms of people with power (technical sophistication) and their relationship to people who are more vulnerable (those lacking that sophistication) in matters of extremely high stakes. The stakes at the low end are the cost and availability of computing. The high end is as much oppressive regime warrantless wiretap Gestapo shit as you want to think about.

Hackers have a responsibility to those less technical.

l72•6h ago
I think from a security point of view, if a program is linked to its library dynamically, a malicious actor could replace the original library without the user noticing, by just setting the LD_LIBRARY_PATH to point to the malicious library. That wouldn't be possible with a program that is statically linked.
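A sketch of the substitution with hypothetical names; any attacker-writable directory that ends up in LD_LIBRARY_PATH shadows the system copy (the loader does ignore it for setuid binaries, but ordinary programs are fair game):

    # put a doctored libfoo.so in an attacker-controlled directory,
    # then run the dynamically linked victim:
    LD_LIBRARY_PATH=/tmp/evil ./victim-app
    # the loader searches /tmp/evil before the system paths, so the
    # doctored copy wins; `ldd ./victim-app` shows which copy is used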
benreesman•6h ago
And unless you're in one of those happy jurisdictions where digital rights are respected, that malicious threat actor could range from a mundane cyber criminal to an advanced persistent threat, and that advanced persistent threat could trivially be your own government. Witness: the only part of `glibc` that really throws a fit if you yank its ability to get silently replaced via `soname` is DNS resolution.
Brian_K_White•4h ago
You act as though the sales pitch for dynamically loaded shared libs is the whole story.

Obviously everything has some reason it was ever invented, and so there is a reason dynamic linking was invented too, and so congratulations, you have recited that reason.

A trivial and immediate counterexample, though, is that a hacker is able to replace your awesome updated library just as easily with their own holed one, because it is loaded on the fly at run time and the loading mechanism has lots of configurability and lots of attack surface. It actually enables attacks that wouldn't otherwise exist.

And a self contained object is inherently more portable than one with dependencies that might be either missing or incorrect at run time.

There is no simple single best idea for anything. There are various ideas with their various advantages and disadvantages, and you use whichever best serves your priorities of the moment. The advantages of dynamic libs and the advantages of static both exist, and sometimes you want one and sometimes you want the other.

KWxIUElW8Xt0tD9•4h ago
yes DLL hell is the issue with dynamic linking -- how many versions of given libraries are required for the various apps you want to install? -- and then you want to upgrade something and it requires yet another version of some library -- there is really no perfect solution to all this
benreesman•39m ago
You reconcile a library set for your application. That's happening whether you realize it or not, and whether you want to or not.

The question is, do you want it to happen under your control in an organized way that produces fast, secure, portable artifacts, or do you want it to happen in some random way controlled by other people at some later date, in a way that will probably break or be insecure or both?

There's an analogy here to systems like `pip` and systems with solvers in them like `uv`: yeah, sometimes you can call `pip` repeatedly and get something that runs in that directory on that day. And neat, if you only have to run it once, fine.

But if you ship that, you're externalizing the costs to someone else, which is a dick move. `uv` tells you on the spot that there's no solution, and so you have to bump a version bound to get a "works here and everywhere and pretty much forever" guarantee that's respectful of other people.

Orphis•3h ago
pkg-config works great in limited scenarios. If you try to do anything more complex, you'll probably run into issues that require modifying the supplied .pc files from your vendor.

There is a new standard called CPS being developed by some industry experts that aims to address this. You can read the documentation on the website: https://cps-org.github.io/cps/ . There's a section with some examples of what they are trying to fix and how.

benreesman•45m ago
`pkg-config` works great in just about any standard scenario: it puts flags on a compile and link line that have been understood by every C compiler and linker since the 1970s.

Here's Bazel consuming it with zero problems, and if you have a nastier problem than a low-latency network system calling `liburing` on specific versions of the kernel built with Bazel? Stop playing.

The last thing we need is another failed standard further balkanizing an ecosystem that has worked fine, if used correctly, for 40+ years. I don't know what industry expert means, but I've done polyglot distributed builds at FAANG scale for a living, so my appeal to authority is as good as anyone's, and I say `pkg-config` as a base for the vast majority of use cases, with some special path for, like, compiling `nginx` with its zany extension mechanism, is just fine.

https://gist.github.com/b7r6/316d18949ad508e15243ed4aa98c80d...
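For anyone who hasn't looked inside one, that really is the whole trick: a .pc file is a few variables and flag lists. A sketch with a hypothetical library:

    cat > foo.pc <<'EOF'
    prefix=/usr/local
    libdir=${prefix}/lib
    includedir=${prefix}/include

    Name: foo
    Description: Example library
    Version: 1.2.3
    Cflags: -I${includedir}
    Libs: -L${libdir} -lfoo
    Libs.private: -lm
    EOF
    PKG_CONFIG_PATH=. pkg-config --cflags --libs --static foo
    # prints roughly: -I/usr/local/include -L/usr/local/lib -lfoo -lm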

eyalitki•30m ago
If someone needs a wrapper for a technology that modifies the output it provides (like Meson and Bazel do), maybe there is an issue with said technology.

If pkg-config was never meant to be consumed directly, and was always meant to be post-processed, then we are missing this post-processing tool. Reinventing it in every compilation technology again and again is suboptimal, and at least Make and CMake do not have this post-processing support.

stabbles•7h ago
Much of the dynamic section of shared libraries could just be translated to a metadata file as part of a static library. It's not breaking: the linker skips files in archives that are not object files.

binutils implemented this with `libdep`; it's just that it's done poorly. You can put a few flags like `-L /foo -lbar` in a file `__.LIBDEP` as part of your static library, and the linker will use this to resolve dependencies of static archives when linking (i.e. extend the link line). This is much like DT_RPATH and DT_NEEDED in shared libraries.

It's just that it feels a bit half-baked. With dynamic linking, symbols are resolved and dependencies recorded as you create the shared object. That's not the case when creating static libraries.

But even if tooling for static libraries with the equivalent of DT_RPATH and DT_NEEDED was improved, there are still the limitations of static archives mentioned in the article, in particular related to symbol visibility.
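Concretely, on a binutils new enough to ship libdep, the recording side is a single ar option (a sketch with hypothetical names; ld then consumes the __.LIBDEP member through the libdep linker plugin):

    # embed "when linking against libfoo.a, also use -L/opt/bar/lib -lbar"
    ar --record-libdeps='-L/opt/bar/lib -lbar' rcs libfoo.a foo.o
    ar t libfoo.a     # lists __.LIBDEP alongside foo.o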

TuxSH•7h ago
> This design decision at the source level, means that in our linked binary we might not have the logic for the 3DES building block, but we would still have unused decryption functions for AES256.

Do people really not know about `-ffunction-sections -fdata-sections` & `-Wl,--gc-sections` (doesn't require LTO)? Why is it used so little when doing statically-linked builds?
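A minimal sketch for anyone who hasn't used them, with hypothetical file names: compile so every function and data item gets its own section, then let the linker drop whatever is unreferenced.

    cc -O2 -ffunction-sections -fdata-sections -c aes.c des.c
    ar rcs libcrypto_demo.a aes.o des.o
    cc -O2 main.c libcrypto_demo.a -Wl,--gc-sections -o app
    # add -Wl,--print-gc-sections to see what was discarded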

> Let’s say someone in our library designed the following logging module: (...)

Relying on static initialization order, and on runtime static initialization at all, is never a good idea IMHO

astrobe_•6h ago
Yes, these are really esoteric options, and IIRC GCC's docs say they can be counter-productive.
jeffbee•6h ago
-ffunction-sections has 750k hits on GitHub. It is among the default flags for opt-mode builds in Bazel. There are probably people who consider them defaults, in practice.
astrobe_•4h ago
Well, C and C++ together have around 7M repos, so about 10%. Actually not entirely esoteric, but GitHub is only a fraction of the world's codebase, and users of these repos probably never looked in the makefile, so I'd say 10% of C/C++ developers knowing about this is a very optimistic estimate.
readmodifywrite•6h ago
One engineer's esoteric is another's daily driver. All 3 of those options are borderline mandatory in embedded firmware development.
TuxSH•4h ago
These options can easily be found by a Google search or via an LLM, whichever one prefers.

> they can be counter-productive

Rarely[1]. The only side effect this can have is the constant pools (for ldr rX, [pc, #off] kind of stuff) not being merged, but the negative impact is absolutely minimal (different functions usually use different constants after all!)

([1] assuming elf file format or elf+objcopy output)

There are many other upsides too: you can combine these options with -Wl,-wrap to e.g. prune exception symbols from already-compiled libraries and make the resulting binaries even smaller (depending on platform)

The question is, why are function-sections and data-sections not the default?

It is quite annoying to have to deal with static libs (including standard libraries themselves) that were compiled with neither these flags nor LTO.

dmitrygr•3m ago
Esoteric? In embedded, people know of these from BEFORE they stop wearing diapers
pjmlp•6h ago
How can they be expected to learn this, when it is now fashionable to treat C and C++ as if they were scripting languages, shipping header-only files?

We already had scripting engines for those languages in the 1990s, and the fact that they are hardly available nowadays kind of tells of their commercial success, with the exception of ROOT.

TuxSH•5h ago
> How can they be expected to learn this

It's the first thing Google and LLMs 'tell' you when you ask about reducing binary size with static libraries. Also LTO does most of the same.

pjmlp•4h ago
To learn, first one needs to want to learn, which was my whole point.
TuxSH•4h ago
Agreed, but the article's author mentioned this as an issue; I would have expected him to find out about and mention these flags as well.
asveikau•4h ago
It makes more sense for C++ due to templates, but the header-only C library trend is indeed very strange. It's not surprising that people are now coming up who write articles about being confused by static linking behavior.
pjmlp•2h ago
Even with C++ templates, if you want faster builds, header files aren't the place to store extern templates, i.e. instantiations for common type parameters.
TuxSH•2h ago
Header-only is simpler to integrate, so it makes sense for simple stuff, or stuff that is only going to be used by one TU.

However, the semantics of inline are different between C and C++. To put it simply, C is restricted to static inline and, for variables, static const, whereas C++ has no such limitations (making them a superset); and static inline/const can sometimes lead to binary size bloat

CyberDildonics•1h ago
It's not strange at all. You only have one file to keep track of and it does everything, you put the functions in any compilation unit you want, C compilation is basically instant, and putting a bunch of single file libraries into one compilation unit simplifies things further.
flohofwoe•2h ago
An STB-style header-only library is actually quite perfect for eliminating dead code if the implementation and all code using that library is in the same compilation unit (since the compiler will not include static functions into the build that are not called).

...or build with -flto for the 'modern' catch-all feature to eliminate any dead code.

...apart from that, none of the problems outlined in the blog post apply to header-only libraries anyway, since they are not distributed as precompiled binaries.
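A minimal sketch of the STB pattern for reference, with hypothetical names: exactly one TU defines the implementation macro, every other TU just includes the header.

    cat > foo.h <<'EOF'
    int foo_add(int a, int b);                    /* interface part */

    #ifdef FOO_IMPLEMENTATION                     /* implementation part */
    int foo_add(int a, int b) { return a + b; }
    #endif
    EOF

    cat > main.c <<'EOF'
    #define FOO_IMPLEMENTATION   /* this TU carries the implementation */
    #include "foo.h"
    int main(void) { return foo_add(2, 2) == 4 ? 0 : 1; }
    EOF
    cc main.c -o demo && ./demo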

greenavocado•4h ago
> Do people really not know about $OBSCURE_GCC_FLAG?

Do you know what you sound like?

flohofwoe•3h ago
There's also the other 'old-school' method of compiling each function into its own object file; I guess that's why musl has each function in its own source file:

https://github.com/kraj/musl/tree/kraj/master/src/stdio

...but these days -flto is simply the better option to get rid of unused code and data - and enable more optimizations on top. LTO is also exactly why static linking is strictly better than dynamic linking, unless dynamic linking is absolutely required (for instance at the operating system boundary).
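The -flto version of such a build, as a sketch with hypothetical names (with GCC, -ffat-lto-objects keeps the archive usable for non-LTO consumers too):

    cc -O2 -flto -ffat-lto-objects -c fprintf.c fscanf.c
    ar rcs libstdio_demo.a fprintf.o fscanf.o
    cc -O2 -flto main.c libstdio_demo.a -o app   # unused code dropped at link time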

EE84M3i•7h ago
Something I've never quite understood is why you can't statically link against an .so file. What specific information was lost during the linking phase to create the shared object that prevents that machine code from being placed into a PIE executable?
sherincall•7h ago
wcc can do that for you: https://github.com/endrazine/wcc
EE84M3i•7h ago
Woah this is really awesome! Thanks for sharing, this made my day.
LtWorf•7h ago
You can, but why?
EE84M3i•7h ago
At a fundamental level I don't understand why we have two separate file types for static and dynamic libraries. It seems primarily for historical reasons?

The author proposes introducing a new kind of file that solves some of the problems with .a files - but we already have a perfectly good compiled library format for shared libraries! So why can't we make gcc sufficiently smart to allow linking against those statically and drop this distinction?

alexvitkov•6h ago
Because with the current compilation model shared libraries (.so/.dll) are the output of the linker, but static libraries are input for the linker. It is historical baggage, but as it currently stands they're fairly different beasts.
Brian_K_White•4h ago
This is succinct, thank you.
Joker_vD•6h ago
Oh, it's even better on the Windows side of things, at least how the MSVC toolchain does it. You can only link a statically-linked .lib library, period. So if you want to statically link against a dynamic library (what a phrase!), you need to have a special version of that .lib library that essentially is just a collection of thunks (in MSVC-specific format) that basically say "oh, you actually want to add symbol Bar@8 from LIBFOO.DLL to your import section" [0]. So yeah, you'd see three binaries distributed as a result of building a library: libfoo_static.lib (statically-linked library), libfoo.dll (dynamic library), libfoo.lib (the shim library to link against when you want to link to libfoo.dll).

Amusingly, other (even MSVC-compatible) toolchains never had such a problem; e.g. Delphi could straight up link against a DLL you tell it to use.

[0] https://learn.microsoft.com/en-us/cpp/build/reference/using-...

Brian_K_White•4h ago
"So if you want to statically link against a dynamic library (what a phrase!)"

Yes but like an artificially created remarkableness. "dynamic library" should just be "library", and then it's not remarkable at all.

It does seem obvious, as your Delphi example and the other commenter's wcc example show, that if an executable can be assembled from .so at run time, then the same thing can also be done at any other time. All the pieces are just sitting there wondering why we're not using them.

convolvatron•3h ago
you could say historical reasons, in that dynamic libraries are generated using relocatable position-independent code (-fPIC), which incurs some performance penalty vs code where the linker fills in all the relocations. my guess is that's somewhere around 10%? historical in the sense that that used to be enough to matter? idk that it still is

personally I think leaving the binding of libraries to runtime opens up a lot of room for problems, and maybe the savings of having a single copy of a library loaded into memory vs N specialized copies aren't important anymore either.

alexvitkov•7h ago
Because I want my program to run on other people's computers.
tempay•6h ago
I think the question isn't why statically link, but rather why bother with .a files instead of using shared libraries all the time (even if only to build a statically linked executable).
alexvitkov•6h ago
Yeah, I misunderstood the question. Although if you could statically link .so/.dlls and have it work reliably, it would still be a great convenience, as some libraries are really hard to build statically without rewriting half their build system.
accelbred•3h ago
.so files require PIC code, which brings along symbol interposition.
krackers•2h ago
I had this exact question a few months back - https://news.ycombinator.com/item?id=44084781
high_na_euv•7h ago
.so .o .a .pc holy shit, what a mess

Why are things that are solved in other programming ecosystems, like a sane build system, impossible in the C/C++ world?

sparkie•6h ago
Because those other ecosystems assume that someone has already done the work on the base system and libraries that they don't have to worry about them, and can focus purely on their own little islands.
pjmlp•6h ago
More like, in other ecosystems, especially compiled languages that weren't born as part of UNIX the way C and C++ were, the whole infrastructure treats building and linking as part of the language.

Note that ISO C and ISO C++ ignore the existence of compilers, linkers and build tools; as per the legalese, there is some magic way the code gets turned into machine code. The standards don't even consider the existence of filesystems for locating header files and translation units; those are talked about in the abstract, and could, in a fully standard-compliant way, be stored in an SQL database.

adev_•6h ago
> Why things that are solved in other programming ecosystems are impossible in c cpp world, like sane building system

This is such an ignorant comment.

Most other natively compiled languages have exactly the same concepts behind them: object files, shared libraries, collections of objects, and some kind of configuration describing the compilation pipeline.

Even high-level languages like Rust have that (to some extent).

The fact it is buried and hidden under 10 layers of abstraction and fancy tooling for your language does not mean it does not exist. Most languages currently rely on the LLVM infrastructure (C++) for the linker and their object model anyway.

The fact you (probably) never had to manipulate it directly just means your higher-level, superficial work never brought you deep enough for it to become a problem.

high_na_euv•5h ago
>The fact you (probably) never had to manipulate it directly just means your higher-level work never brought you deep enough for it to become a problem.

Did you just agree with me that other programming ecosystems solved the build-system challenge?

trinix912•4h ago
They solved it by building on top of what C (or LLVM, to be precise) does, not by avoiding or replacing it.

What should C do to solve it? Add another layer of abstraction on top of it? CMake does that and people complain about the extra complexity.

adev_•4h ago
> Did you just agree with me that other prog. ecosystems solved the building system challenge?

Putting the crap in a box with a user-friendly handle on it to make it look 'friendlier' is never 'solving a problem'.

It is merely hiding the dust under the carpet.

uecker•1h ago
I think people today often do not understand any more that C, like many other things in the UNIX world, is a tool, not a complete framework. But somehow people expect a complete, convenient framework with batteries included. They see it as a deficiency that C by itself does not provide many things. I see it as one of its major strengths, and this is one of the reasons why I prefer it.
rixed•6h ago
Do people who write these kinds of pieces with such peremptory titles really believe that they have finally come to understand everything better after decades of ignorance?

Chesterton’s Fence yada yada?

cap11235•3h ago
Well, it is on medium.com, so probably yes?
dale_glass•4h ago
Oh, static linking can be lots of "fun". I ran into this interesting issue once.

1. We have libshared. It's got logging and other general stuff. libshared has static "Foo foo;" somewhere.

2. We link libshared into libfoo and libbar.

3. libfoo and libbar then go into application.

If you do this statically, what happens is that the Foo constructor gets invoked twice, once from libfoo and once from libbar. And also gets destroyed twice.
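A reproduction sketch with hypothetical names; the constructor is file-local, so the two embedded copies don't collide at link time, and both .init_array entries run:

    cat > shared.c <<'EOF'
    #include <stdio.h>
    /* stand-in for libshared's static "Foo foo;" */
    __attribute__((constructor)) static void foo_ctor(void) {
        puts("Foo constructed");
    }
    EOF
    cc -c shared.c
    ld -r shared.o -o libfoo_part.o   # libshared baked into "libfoo"
    ld -r shared.o -o libbar_part.o   # ...and again into "libbar"
    echo 'int main(void){return 0;}' > app.c
    cc app.c libfoo_part.o libbar_part.o -o app
    ./app                             # prints "Foo constructed" twice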

layer8•4h ago
> Something like a “Static Bundle Object” (.sbo) file, that will be closer to a Shared Object (.so) file, than to the existing Static Archive (.a) file.

Is there something missing from .so files that wouldn’t allow them to be used as a basis for static linking? Ideally, you’d only distribute one version of the library that third parties can decide to either link statically or dynamically.

kazinator•4h ago
.a archives can speed up linking of very large software. This is because of assumptions as to the dependencies and the way the traditional Unix-style linker deals with .a files (by default).

When a bunch of .o files are presented to the linker, it has to consider references in every direction. The last .o file could have references to the first one, and the reverse could be true.

This is not so for .a files. Every successive .a archive presented on the linker command line in left-to-right order is assumed to satisfy references only in material to the left of it. There cannot be circular dependencies among .a files and they have to be presented in topologically sorted order. If libfoo.a depends on libbar.a then libfoo.a must be first, then libbar.a.

(The GNU Linker has options to override this: you can demarcate a sequence of archives as a group in which mutual references are considered.)

This property of archives (or of the way they are treated by linking) is useful enough that at some point when the Linux kernel reached a certain size and complexity, its build was broken into archive files. This reduced the memory and time needed for linking it.

Before that, Linux was linked as a list of .o files, same as most programs.
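A quick illustration with hypothetical archives, where main.o calls into libfoo.a and libfoo.a calls into libbar.a:

    cc main.o libbar.a libfoo.a -o app   # fails: libbar.a is scanned too early
    cc main.o libfoo.a libbar.a -o app   # works: dependents before dependencies
    # mutual references: make ld rescan the group until it converges
    cc main.o -Wl,--start-group libfoo.a libbar.a -Wl,--end-group -o app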

parpfish•4h ago
"Relic" isn't the right word.

Relics are really old things that are revered and honored.

I think they just want "archaic", which describes old things that are likely obsolete.

cryptonector•3h ago
It's not that .a files and static linking are a relic, but that static linking never evolved like dynamic linking did. Static linking is stuck with 1978 semantics, while dynamic linking has grown features that prevent the mess that static linking made. There are legit reasons for wanting static linking in 2025, so we really ought to evolve static linking like we did dynamic linking.

Namely we should:

  - make the -l and -rpath options do something
    during .a generation: record that metadata
    in the .a

  - make link-edits use the metadata recorded
    in .a files by the previous item

I.e., start recording dependency metadata in .a files so that we can stop flattening dependency trees onto the final link-edit.

This will allow static linking to have the same symbol conflict resolution behaviors as dynamic linking.

flohofwoe•3h ago
Library files are not the problem; deploying an SDK as precompiled binary blobs is ;)

(I bet that .a/.lib files were originally never really meant for software distribution, but only as an intermediate file format between a compiler and a linker, both running as part of the same build process)

eyalitki•1h ago
Yeah, but when the product is an SDK, and customers develop on top of it (using their own toolchains), there isn't a lot left for me to play with.
triknomeister•15m ago
SDK could ship the source, lol, stop kneecapping your consumers.
jhallenworld•2h ago
On the private symbol issue... there is probably a solution to this already. You can partially link a bunch of object files into a single object file (see ld -r). After this is done, 'strip' the file except for those symbols marked with non-hidden visibility. I've not tried to do this; maybe 'strip -x' does the right thing? Not sure.
eyalitki•1h ago
1. "Advanced" compilation environments (meson) probably limit this ability to some extent. 2. Package managers (rpmbuild for instance) mandate build with debug symbols and they do the strip on their own so to create the debug packages. This limits our control of these steps.
kazinator•1h ago
> Yet, what if the logger’s ctor function is implemented in a different object file? Well, tough luck. No one requested this file, and the linker will never know it needs to link it to our static program. The result? crash at runtime.

If you have spontaneously called initialization functions as part of an initialization system, then you need to ensure that the symbols are referenced somehow. For instance, a linker script which puts them into a table that is in its own section. Some start-up code walks through the table and calls the functions.

This problem has been solved; take a look at how U-boot and similar projects do it.

This is not an archive problem because the linker will remove unused .o files even if you give it nothing but a list of .o files on the command line, no archives at all.

uecker•1h ago
Isn't this what partial linking is for, combining object files into a larger one?