frontpage.

AirPods libreated from Apple's ecosystem

https://github.com/kavishdevar/librepods
411•moonleay•5h ago•90 comments

Our investigation into the suspicious pressure on Archive.today

https://adguard-dns.io/en/blog/archive-today-adguard-dns-block-demand.html
1434•immibis•19h ago•373 comments

IDEmacs: A Visual Studio Code clone for Emacs

https://codeberg.org/IDEmacs/IDEmacs
136•nogajun•4h ago•20 comments

libwifi: an 802.11 frame parsing and generation library written in C

https://libwifi.so/
84•vitalnodo•7h ago•6 comments

Things that aren't doing the thing

https://strangestloop.io/essays/things-that-arent-doing-the-thing
184•downboots•11h ago•95 comments

The inconceivable types of Rust: How to make self-borrows safe (2024)

https://blog.polybdenum.com/2024/06/07/the-inconceivable-types-of-rust-how-to-make-self-borrows-s...
48•birdculture•6h ago•6 comments

When UPS charged me a $684 tariff on $355 of vintage computer parts

http://oldvcr.blogspot.com/2025/11/when-ups-charged-me-684-tariff-on-355.html
163•goldenskye•5h ago•126 comments

When did people favor composition over inheritance?

https://www.sicpers.info/2025/11/when-did-people-favor-composition-over-inheritance/
122•ingve•1w ago•79 comments

Blocking LLM crawlers without JavaScript

https://www.owl.is/blogg/blocking-crawlers-without-javascript/
89•todsacerdoti•6h ago•41 comments

Boa: A standard-conforming embeddable JavaScript engine written in Rust

https://github.com/boa-dev/boa
192•maxloh•1w ago•63 comments

AsciiMath

https://asciimath.org/
65•smartmic•8h ago•16 comments

Transgenerational Epigenetic Inheritance: the story of learned avoidance

https://elifesciences.org/articles/109427
132•nabla9•11h ago•79 comments

Show HN: Unflip – a puzzle game about XOR patterns of squares

https://unflipgame.com/
101•bogdanoff_2•4d ago•28 comments

Linux on the Fujitsu Lifebook U729

https://borretti.me/article/linux-on-the-fujitsu-lifebook-u729
178•ibobev•14h ago•127 comments

Archimedes – A Python toolkit for hardware engineering

https://pinetreelabs.github.io/archimedes/blog/2025/introduction.html
63•i_don_t_know•10h ago•10 comments

Computing Across America (1983-1985)

https://microship.com/winnebiko/
15•austinallegro•1w ago•2 comments

JVM exceptions are weird: a decompiler perspective

https://purplesyringa.moe/blog/jvm-exceptions-are-weird-a-decompiler-perspective/
66•birdculture•1w ago•3 comments

Report: Tim Cook could step down as Apple CEO 'as soon as next year'

https://9to5mac.com/2025/11/14/tim-cook-step-down-as-apple-ceo-as-soon-as-next-year-report/
120•achow•8h ago•237 comments

I made a better DOM morphing algorithm

https://joel.drapper.me/p/morphlex/
73•joeldrapper•1w ago•38 comments

TCP, the workhorse of the internet

https://cefboud.com/posts/tcp-deep-dive-internals/
294•signa11•23h ago•140 comments

EyesOff: How I built a screen contact detection model

https://ym2132.github.io/building_EyesOff_part2_model_training
13•Two_hands•21h ago•2 comments

Why export templates would be useful in C++ (2010)

http://warp.povusers.org/programming/export_templates.html
10•PaulHoule•1w ago•0 comments

The computer poetry of J. M. Coetzee's early programming career (2017)

https://sites.utexas.edu/ransomcentermagazine/2017/06/28/the-computer-poetry-of-j-m-coetzees-earl...
51•bluejay2•11h ago•10 comments

Nevada Governor's office covered up Boring Co safety violations

https://fortune.com/2025/11/12/elon-musk-boring-company-tunnels-injuries-osha-citations-fines-res...
227•Chinjut•10h ago•41 comments

Weighting an average to minimize variance

https://www.johndcook.com/blog/2025/11/12/minimum-variance/
82•ibobev•14h ago•39 comments

Mag Wealth (2024)

https://saul.pw/mag/wealth/
129•andsoitis•13h ago•153 comments

How the Spoils of an Infamous Heist Traveled the World

https://nautil.us/how-the-spoils-of-an-infamous-heist-traveled-the-world-1247307/
3•curtistyr•4d ago•0 comments

AMD continues to chip away at Intel's x86 market share

https://www.tomshardware.com/pc-components/cpus/amd-continues-to-chip-away-at-intels-x86-market-s...
150•speckx•9h ago•71 comments

A new Google model is nearly perfect on automated handwriting recognition

https://generativehistory.substack.com/p/has-google-quietly-solved-two-of
516•scrlk•4d ago•292 comments

Trellis AI (YC W24) Is Hiring: Streamline access to life-saving therapies

https://www.ycombinator.com/companies/trellis-ai/jobs/f4GWvH0-forward-deployed-engineer-full-time
1•macklinkachorn•12h ago

Comparison of C/POSIX standard library implementations for Linux

https://www.etalabs.net/compare_libcs.html
142•smartmic•6mo ago

Comments

ObscureScience•6mo ago
That table is unfortunately quite old. I can't personally say what has changed, but it's hard to put much confidence in the relevance of the information.
lifthrasiir•6mo ago
Yeah, also it doesn't compare actual implementations, just plain checkboxes. I'm aware of two specific substantial performance regressions for musl: exact floating point printing (it uses Dragon4, but implemented it way slower than it could have been) and the memory allocator (for a long time it didn't have any sort of arena, unlike pretty much every modern allocator---now it does, with mallocng).
snickerer•6mo ago
Fun libc comparison by the author of musl.

My takeaway is: glibc is bloated but fast. Quite an unexpected combination. Am I right?

kstrauser•6mo ago
It’s not shocking. More complex implementations using more sophisticated algorithms can be faster. That’s not always true, but it often is. For example, look at some of the string search algorithms used by things like ripgrep. They’re way more complex than just looping across the input and matching character by character, and they pay off.

Something like glibc has had decades to swap in complex, fast code for simple-looking functions.
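[To make the contrast concrete, here is an illustrative sketch — not code from glibc or ripgrep — of a naive character-by-character substring search next to a variant that leans on the libc's often-vectorized memchr to find candidate positions:]

```c
#include <stddef.h>
#include <string.h>

/* Naive baseline: test the needle at every haystack position. */
static const char *naive_search(const char *hay, const char *needle) {
    size_t n = strlen(needle);
    if (n == 0)
        return hay;
    for (; *hay; hay++)
        if (strncmp(hay, needle, n) == 0)
            return hay;
    return NULL;
}

/* Slightly smarter: let memchr (often SIMD-accelerated inside the
 * libc) skip to candidate positions for the needle's first byte.
 * Real implementations (glibc's Two-Way memmem, ripgrep's SIMD
 * searchers) are far more elaborate still. */
static const char *memchr_search(const char *hay, size_t hn,
                                 const char *needle, size_t nn) {
    if (nn == 0)
        return hay;
    while (hn >= nn) {
        const char *c = memchr(hay, needle[0], hn - nn + 1);
        if (c == NULL)
            return NULL;
        if (memcmp(c, needle, nn) == 0)
            return c;
        hn -= (size_t)(c - hay) + 1;
        hay = c + 1;
    }
    return NULL;
}
```

[Both return the same positions; the second just moves the inner scan into a routine the libc has spent decades optimizing.]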

weinzierl•6mo ago
In case of glibc I think what you said is orthogonal to its bloat. Yes, it has complex implementations but since they are for a good reason I'd hardly call them bloat.

Independently from that glibc implements a lot of stuff that could be considered bloat:

- Extensive internationalization support

- Extensive backward compatibility

- Support for numerous architectures and platforms

- Comprehensive implementations of optional standards

kstrauser•6mo ago
Ok, fair points, although internationalization seems like a reasonable thing to include at first glance.

Is there a fork of glibc that strips ancient or bizarre platforms?

dima55•6mo ago
What problem are you trying to solve? glibc works just fine for most use cases. If you have some niche requirements, you have alternative libraries you can use (listed in the article). Forking glibc in the way you describe is literally pointless
kstrauser•6mo ago
Nothing really. I was just curious and this isn’t something I know much about, but would like to learn more of.
SAI_Peregrinus•6mo ago
It's called glibc. Essentially all that "bloat" is conditionally compiled, if your target isn't an ancient or bizarre platform it won't get included in the runtime.
kstrauser•6mo ago
That’s mostly true, but not quite. For instance, suppose you aim to support all of 32/64-bit and little/big-endian. You’ll likely end up factoring straightforward math operations out into standalone functions. Granted, those will probably get inlined, but it may mean your structure is more abstracted than it would be otherwise. Just supporting the options has implications.

That’s not the strongest example. I just meant it to be illustrative of the idea.

jcranmer•6mo ago
The way glibc's source works (for something like math functions) is that essentially every function is implemented in its own file, and various config knobs can provide extra directories to compile and provide function definitions. This can make it genuinely hard to find the implementation that's actually going to be used: a naive search for the function name can turn up some 20 different definitions, and working out which one is in play is difficult (especially since it depends on more than just the architecture name).

Math functions aren't going to be strongly impacted by diverse hardware support. In practice, you largely care about 32-bit and 64-bit IEEE 754 types, which means your macros to decompose floating-point types to their constituent sign/exponent/significand fields are already going to be pretty portable even across different endianness (just bitcast to a uint32_t/uint64_t, and all of the shift logic will remain the same). And there's not much reason to vary the implementation except to take advantage of hardware instructions that implement the math functions directly... which are generally better handled by the compiler anyways.
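[A hedged sketch of the decomposition described above, using the standard IEEE 754 bit layout (illustrative, not glibc's actual macros):]

```c
#include <stdint.h>
#include <string.h>

/* Decompose an IEEE 754 double into sign / biased exponent /
 * significand by bitcasting to uint64_t. memcpy is the portable
 * bitcast; the shift logic is the same on any endianness because it
 * operates on the integer value, not on raw bytes. */
struct f64_parts {
    uint64_t sign;        /* 1 bit */
    uint64_t exponent;    /* 11 bits, biased by 1023 */
    uint64_t significand; /* 52 fraction bits */
};

static struct f64_parts f64_decompose(double d) {
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);
    struct f64_parts p;
    p.sign = bits >> 63;
    p.exponent = (bits >> 52) & 0x7FF;
    p.significand = bits & ((1ULL << 52) - 1);
    return p;
}
```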

saagarjha•6mo ago
People don't typically implement math functions by pulling bits out of a reinterpreted floating point number. If you rely on the compiler, you get whatever it decides for you, which might be something dumb like float80.
int_19h•6mo ago
"Internationalization" is a very broad item that can include e.g. support for non-UTF-8 locales, which is something few Linux distros need today.
ape4•6mo ago
Yeah look at even strlen()

https://github.com/lattera/glibc/blob/master/string/strlen.c
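[The linked generic version is built around a word-at-a-time zero-byte test — roughly this idea (a simplified sketch, not glibc's code; real libcs also rely on aligned word reads never crossing a page boundary):]

```c
#include <stddef.h>
#include <stdint.h>

static size_t strlen_wordwise(const char *s) {
    const char *p = s;
    /* Walk byte-by-byte until 8-byte aligned. */
    while ((uintptr_t)p % sizeof(uint64_t) != 0) {
        if (*p == '\0')
            return (size_t)(p - s);
        p++;
    }
    /* A word w contains a zero byte iff
     * (w - 0x01...01) & ~w & 0x80...80 is nonzero. */
    const uint64_t ones  = 0x0101010101010101ULL;
    const uint64_t highs = 0x8080808080808080ULL;
    const uint64_t *w = (const uint64_t *)p;
    while (((*w - ones) & ~*w & highs) == 0)
        w++;
    /* Locate the exact zero byte inside the word. */
    for (p = (const char *)w; *p != '\0'; p++)
        ;
    return (size_t)(p - s);
}
```

[Note that reading a whole word may inspect bytes past the terminator — fine for a libc that controls alignment, but something sanitizers would flag in ordinary application code.]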

GabrielTFS•6mo ago
That's the generic implementation - it's not used on most popular architectures (I think the most popular architecture it's used on would be RISC-V or MIPS) because they all have architecture-specific implementations. The implementation running on the average (x86) computer is likely to be https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/x86... (if you have AVX512), https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/x86... (if you have AVX2 and not AVX512) or https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/x86... (if you have neither AVX2 nor AVX512 - rather rare these days)
timeinput•6mo ago
My takeaway is that it's not a meaningful chart? Just in the first row musl looks bloated at 426k compared to dietlibc at 120k. Why were those colors chosen? It's arbitrary and up to the author of the chart.

The author of musl made a chart, that focused on the things they cared about and benchmarked them, and found that for the things they prioritized they were better than other standard library implementations (at least from counting green rows)? neat.

I mean I'm glad they made the library, that it's useful, and that it's meeting the goals they set out to solve, but what would the same chart created by the other library authors look like?

LeFantome•6mo ago
A lot of the “slowness” of MUSL is the default allocator. It can be swapped out.

For example, Chimera Linux uses MUSL with mimalloc and it is quite snappy.

jeffbee•6mo ago
That's a great combo. I like LLVM libc in overlay mode with musl beneath and mimalloc. Performance is excellent.
cyberax•6mo ago
Not quite correct. glibc is slow if you need to be able to fork quickly.

However, it does have super-optimized string/memory functions. There are highly optimized assembly language implementations of them that use SIMD for dozens of different CPUs.

userbinator•6mo ago
Microbenchmarks tend to favour extreme unrolling and other "speed at any cost" tricks that often show up as negatives in macrobenchmarks.
flohofwoe•6mo ago
Choice still matters IMHO. E.g. a very small but slow malloc/free may be preferable if your code only allocates infrequently. Also linking musl statically avoids the whole glibc dll version mess, admittedly only useful for cmdline tools though.
thrtythreeforty•6mo ago
It really ought to lead with the license of each library. I was considering dietlibc until I got to the bottom - GPLv2. I am a GPL apologist and even I can appreciate that this is a nonstarter; even GNU's libc is only LGPL!
LeFantome•6mo ago
musl seems to have displaced dietLibc. Much more complete yet fairly small and light.
yusina•6mo ago
Note that dietlibc is the project of a sole coder in the CCC sphere from Berlin (Fefe). His main objective was to learn how low-level infra is implemented, and he started using it in some of his other projects after realizing that there is a lot of bloat he can skip by implementing just the bare essentials. Musl has a different set of objectives.
projektfu•6mo ago
I follow diet but it is definitely not ready for general use like musl and probably never will be. There aren't a lot of eyeballs on it.
yusina•6mo ago
That's what I'm saying. It's not Fefe's objective to make it fit for everybody...
jay-barronville•6mo ago
Please note that the linked comparison table has been unmaintained for a while. This is even explicitly stated on the legacy musl libc website[0] (i.e., “The (mostly unmaintained) libc comparison is still available on etalabs.net.”).

[0]: https://www.musl-libc.org

ethan_smith•6mo ago
This comparison was last updated around 2016-2017. Since then, glibc has improved its size efficiency (particularly with link-time optimization), musl has enhanced its POSIX compliance, and several performance optimizations have landed in both projects.
pizlonator•6mo ago
My own perf comparison: when I switched from Fil-C running on my system’s libc (recent glibc) for yololand to my own build of musl, I got a 1-2% perf regression. My best guess is that it’s because glibc’s memcpy/memmove/memset are better. Couldn’t have been the allocator since Fil-C’s runtime has its own allocator.
abnercoimbre•6mo ago
Interesting! Will you stick around with the musl build? And if so, why?
pizlonator•6mo ago
Not sure, but I'm likely to, because right now I use the same libc in userland (the Fil-C compiled part) and yololand (the part compiled by normal C that is below the runtime), and the userland libc is musl.

Having them be the same means that if there is any libc function that is best implemented by having userland call a Fil-C runtime wrapper for the yololand implementation (say because what it’s doing requires platform specific assembly) then I can be sure that the yololand libc really implements that function the same way with all the same corner cases.

But there aren’t many cases of that and they’re hacks that I might someday remove. So I probably won’t have this “libc sandwich” forever

LukeShu•6mo ago
When I was working with Envoy Proxy, it was known that perf was worse with musl than with glibc. We went through silly hoops to have a glibc Envoy running in an Alpine (musl) container.
skissane•6mo ago
What's Fil-C? Okay, found it myself, looks cool: https://github.com/pizlonator/llvm-project-deluge/

What's yoyoland? All I can find is an amusement park in Bangkok, and some 1990s-era communication software for Classic Mac OS: https://www.macintoshrepository.org/39495-yoyo-2-1

pizlonator•6mo ago
The Fil-C stack is composed of:

- Userland: the place where you C code lives. Like the normal userland you're familiar with, but everything is compiled with Fil-C, so it's memory safe.

- Yololand: the place where Fil-C's runtime lives. Fil-C's runtime is about 100,000 lines of C code (almost entirely written by me), which currently has libc as a dependency (because the runtime makes syscalls using the normal C functions for syscalls rather than using assembly directly; also the runtime relies on a handful of libc utility functions that aren't syscalls, like memcpy).

So Fil-C has two libc's. The yololand libc (compiled with a normal C compiler, only there to support the runtime) and the userland libc (compiled with the Fil-C compiler like everything else in Fil-C userland, and this is what your C code calls into).

skissane•6mo ago
Why does yoyoland need to use libc’s memcpy? Can’t you just use __builtin_memcpy?

On Linux, if all you need is syscalls, you can just write your own syscall wrapper, like Go does.

Doesn’t work on some other operating systems (e.g. Solaris/Illumos, OpenBSD, macOS, Windows) where the system call interface is private to the system shared libraries
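[For the Linux case, a hand-rolled wrapper really is only a few lines of inline assembly. An illustrative x86-64 sketch (assumes the Linux syscall ABI: number in rax, args in rdi/rsi/rdx, rcx and r11 clobbered):]

```c
#include <sys/syscall.h>
#include <unistd.h>

/* Three-argument raw syscall wrapper for Linux x86-64. */
static long raw_syscall3(long nr, long a1, long a2, long a3) {
    long ret;
    __asm__ volatile ("syscall"
                      : "=a"(ret)
                      : "a"(nr), "D"(a1), "S"(a2), "d"(a3)
                      : "rcx", "r11", "memory");
    return ret;
}

/* Example: getpid without touching the libc wrapper. */
static long my_getpid(void) {
    return raw_syscall3(SYS_getpid, 0, 0, 0);
}
```

[Go's runtime does essentially this on Linux; as the comment notes, it can't be done supportably on OSes where the syscall boundary is private to the system libraries.]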

pizlonator•6mo ago
> Why does yoyoland need to use libc’s memcpy? Can’t you just use __builtin_memcpy?

Unless you do special things, the compiler turns __builtin_memcpy into a call to memcpy. :-)

There is __builtin_memcpy_inline, but then you're at the compiler's whims. I don't think I want that.

A faithful implementation of what you're proposing would have the Fil-C runtime provide a memcpy function so that whenever the compiler wants to call memcpy, it will call that function.

> On Linux, if all you need is syscalls, you can just write your own syscall wrapper, like Go does.

I could do that. I just don't, right now.

You're totally right that I could remove the yolo libc. This is one of like 1,000 reasons why Fil-C is slower than it needs to be right now. It's a young project so it has lots of this kind of "expedient engineering".

imcritic•6mo ago
You keep repeating the name wrong: yololand, not yoyoland.
cryptonector•6mo ago
GP was saying 'yoyoland', when it's 'yololand' (as in YOLO?).
pizlonator•6mo ago
Yeah YOLO.

I needed a fun term to refer to the C that isn’t Fil-C. I call it Yolo-C.

Hence yololand - the part of the Fil-C process that contains a bit of Yolo-C code for the Fil-C runtime.

cryptonector•6mo ago
Thanks. I went looking and saw this in the Fil-C manifesto:

> It's even possible to allocate memory using malloc from within a signal handler (which is necessary because Fil-C heap-allocates stack allocations).

Hmm, really? All stack allocations are heap-allocated? Doesn't that make Fil-C super slow? Is there no way to do stack allocation? Or did I misread what you meant by 'stack allocations'?

pizlonator•6mo ago
It’s a GC allocation, not a traditional malloc allocation. So slower than stack allocation but substantially faster than a malloc call.

And that GC allocation only happens if the compiler can’t prove that it’s nonescaping. The overwhelming majority of what look like stack allocations in C are proved nonescaping.

Consequently, while Fil-C does have overheads, this isn’t the one I worry about.

cryptonector•6mo ago
I see! Thanks for that answer. I'm sure I'll have lots of questions, like these:

You say you don't have to instrument malloc(), but somehow you must learn of the allocation size. How?

Are aliasing bugs detected?

I assume that Fil-C is a whole-program-only option. That is, that you can't mix libraries not compiled with Fil-C and ones compiled with Fil-C. Is that right?

So one might want a whole distro built with Fil-C.

How much are you living with Fil-C? How painful is it, performance-wise?

BTW, I think your approach is remarkable and remarkably interesting. Of course, to some degree this just highlights how bad C (and C++) is (are) at being memory-safe.

pizlonator•6mo ago
Malloc is just a wrapper for zgc_alloc and passes the size through. "Not instrumenting malloc" just means that the compiler doesn't have to detect that you're calling malloc and treat it specially (this is important as many past attempts to make C memory safe did require malloc instrumentation, which meant that if you called malloc via a wrapper, those implementations would just break; Fil-C handles that just fine).

Not sure exactly what you mean by aliasing bugs. I'm assuming strict aliasing violations. Fil-C allows a limited and safe set of strict aliasing optimizations, which end up having the effect of loads/stores moving according to a memory model that is weaker than maybe you'd want. So, Fil-C doesn't detect those. Like in any clang-based compiler, Fil-C allows you to pass `-fno-strict-aliasing` if you don't want those optimizations.

That's right, you have to go all in on Fil-C. All libs have to be compiled with Fil-C. That said, separate compilation of those modules and libraries just works. Dynamic linking just works. So long as everything is Fil-C.

Yes you could build a distro that is 100% Fil-C. I think that's possible today. I just haven't had the time to do that.

All of the software I've ported to Fil-C is fast enough to be usable. You don't notice the perf "problem" unless you deliberately benchmark compute workloads (which I do - I have a large and ever-growing benchmark suite). I wrote up my thoughts about this in a recent twitter discussion: https://x.com/filpizlo/status/1920848334429810751

A bunch of us PL implementers have long "joked" that the only thing unsafe about C is the implementations of C. The language itself is fine. Fil-C sort of proves that joke true.

cryptonector•6mo ago
> Not sure exactly what you mean by aliasing bugs.

I meant that if the same allocation were accessed as different kinds of objects, as if through a union, ... I guess what I really meant to ask is: does Fil-C know the types of objects being pointed to by a pointer, and therefore also the number of elements in arrays?

pizlonator•6mo ago
It’s a dynamically typed capability system.

So, if you store a pointer to a location in memory and then load from that location using pointer type, then you get the capability that was last stored. But if the thing stored at the location was an integer, you get an invalid capability.

So Fil-C’s “type” for an object is ever-evolving. The memory returned from malloc will be nothing but invalid capabilities for each pointer-width word in that allocation, but as soon as you store pointers to it, the locations you stored those pointers to will be understood as being pointer locations. This makes unions and weird pointer casts just work. But you can never type-confuse an int with a pointer, or different pointer types, in a manner that would let you violate the capability model (i.e., achieve the kind of weird state where you can access any memory you like).

Lots of tricks under the hood to make this thread safe and not too expensive.

cryptonector•6mo ago
So Fil-C has two types: pointers, and everything else. Clever.
pjmlp•6mo ago
Are you sure they were being used at all?

GCC replaces memcpy/memmove/memset with its own intrisics, if compiling in high optimization levels.

pizlonator•6mo ago
Yes they were being used.
edam•6mo ago
Pretty obviously made by the musl authors.
deaddodo•6mo ago
> "I have tried to be fair and objective, but as I am the author of musl"

Yeah, pretty obvious when they state as much in the first paragraph.

moomin•6mo ago
No cosmopolitan, pity.
casey2•6mo ago
Where is the "# of regressions caused" box?
josephg•6mo ago
It’s amazing how much code gets pulled in for printf. Using musl, printf apparently adds 13kb of code to your binary. Given format strings are almost always static, it’s so weird to me that they still get parsed at runtime in all cases. Modern compilers even parse printf format strings anyway to check your types match.

This sort of thing makes me really appreciate zig’s comptime. Even rust uses a macro for println!().
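[The runtime cost is easy to see in a toy formatter — even a stripped-down printf must walk the format string and carry conversion logic for every specifier it supports (illustrative sketch, nothing like musl's actual implementation):]

```c
#include <stdarg.h>
#include <stddef.h>

/* Tiny %d/%s-only formatter into a bounded buffer. */
static void mini_format(char *out, size_t cap, const char *fmt, ...) {
    va_list ap;
    va_start(ap, fmt);
    size_t n = 0;
    for (const char *p = fmt; *p != '\0' && n + 1 < cap; p++) {
        if (*p != '%') {
            out[n++] = *p;
            continue;
        }
        p++;
        if (*p == '\0')
            break;  /* stray trailing '%' */
        switch (*p) {
        case 'd': { /* decimal integer */
            int v = va_arg(ap, int);
            unsigned long u = (unsigned long)v;
            char tmp[24];
            int t = 0;
            if (v < 0) {
                if (n + 1 < cap)
                    out[n++] = '-';
                u = (unsigned long)-(long)v;
            }
            do {
                tmp[t++] = (char)('0' + u % 10);
                u /= 10;
            } while (u != 0);
            while (t > 0 && n + 1 < cap)
                out[n++] = tmp[--t];
            break;
        }
        case 's': { /* string */
            for (const char *s = va_arg(ap, const char *);
                 *s != '\0' && n + 1 < cap; s++)
                out[n++] = *s;
            break;
        }
        default: /* unknown specifier: emit verbatim */
            out[n++] = *p;
            break;
        }
    }
    out[n] = '\0';
    va_end(ap);
}
```

[Zig's comptime and Rust's println! move exactly this dispatch to compile time — at the per-call-site code-size cost the next comment describes.]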

messe•6mo ago
In larger programs, that compile-time parsing can lead to even more code, as the formatting function is essentially instantiated and compiled separately for each and every invocation. The type erasure provided by printf can be a blessing in _some circumstances_.

That being said, in those larger programs, it's still likely going to be a negligible part of the binary size, and the additional code paths are unlikely to affect performance unless you're doing string formatting in multiple hot-paths which is generally a poor choice anyway.

jcelerier•6mo ago
If you use any level of compiler optimisation, both clang and GCC will convert calls to printf into calls to puts (which is much simpler) if they detect there's no formatting done.
weiwenhao•6mo ago
The static compilation of musl libc is a huge help for alpine linux and many system programming languages. My programming language https://github.com/nature-lang/nature is also built on musl libc.