frontpage.

Tiny C Compiler

https://bellard.org/tcc/
102•guerrilla•3h ago•44 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
186•valyala•7h ago•34 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
110•surprisetalk•7h ago•116 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
43•gnufx•6h ago•45 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
130•mellosouls•10h ago•280 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
880•klaussilveira•1d ago•269 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
129•vinhnx•10h ago•15 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
166•AlexeyBrin•12h ago•29 comments

The F Word

http://muratbuffalo.blogspot.com/2026/02/friction.html
97•zdw•3d ago•46 comments

FDA intends to take action against non-FDA-approved GLP-1 drugs

https://www.fda.gov/news-events/press-announcements/fda-intends-take-action-against-non-fda-appro...
60•randycupertino•2h ago•90 comments

First Proof

https://arxiv.org/abs/2602.05192
96•samasblack•9h ago•63 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
265•jesperordrup•17h ago•86 comments

I write games in C (yes, C) (2016)

https://jonathanwhiting.com/writing/blog/games_in_c/
167•valyala•7h ago•148 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
85•thelok•9h ago•18 comments

Eigen: Building a Workspace

https://reindernijhoff.net/2025/10/eigen-building-a-workspace/
4•todsacerdoti•4d ago•1 comment

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
549•theblazehen•3d ago•203 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
49•momciloo•7h ago•9 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
26•mbitsnbites•3d ago•2 comments

The silent death of Good Code

https://amit.prasad.me/blog/rip-good-code
48•amitprasad•1h ago•47 comments

Selection rather than prediction

https://voratiq.com/blog/selection-rather-than-prediction/
24•languid-photic•4d ago•6 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
246•1vuio0pswjnm7•13h ago•388 comments

Microsoft account bugs locked me out of Notepad – Are thin clients ruining PCs?

https://www.windowscentral.com/microsoft/windows-11/windows-locked-me-out-of-notepad-is-the-thin-...
80•josephcsible•5h ago•107 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
108•onurkanbkrc•12h ago•5 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
138•videotopia•4d ago•44 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
57•rbanffy•4d ago•17 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
215•limoce•4d ago•123 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
303•alainrk•12h ago•482 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
48•marklit•5d ago•9 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
121•speckx•4d ago•185 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
294•isitcontent•1d ago•39 comments

Measuring Latency (2015)

https://bravenewgeek.com/everything-you-know-about-latency-is-wrong/
46•dempedempe•2mo ago
https://archive.md/D8E5W

Comments

tomhow•2mo ago
One previous discussion at time of publication:

A summary of how not to measure latency - https://news.ycombinator.com/item?id=10732469 - Dec 2015 (3 comments)

Fripplebubby•2mo ago
> This is partly a tooling problem. Many of the tools we use do not do a good job of capturing and representing this data. For example, the majority of latency graphs produced by Grafana, such as the one below, are basically worthless. We like to look at pretty charts, and by plotting what’s convenient we get a nice colorful graph which is quite readable. Only looking at the 95th percentile is what you do when you want to hide all the bad stuff. As Gil describes, it’s a “marketing system.” Whether it’s the CTO, potential customers, or engineers—someone’s getting duped. Furthermore, averaging percentiles is mathematically absurd. To conserve space, we often keep the summaries and throw away the data, but the “average of the 95th percentile” is a meaningless statement. You cannot average percentiles, yet note the labels in most of your Grafana charts. Unfortunately, it only gets worse from here.

I think this is getting a bit carried away. I don't have any argument against the observation that the average of a p95 is not something that makes sense mathematically, but if you actually understand what it is, it is absolutely still meaningful. With time series data, there is always some time denominator, so it really means (say) "the p95 per minute, averaged over the last hour", which can be meaningful (and useful at a glance).
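
To make that concrete, here is a minimal sketch with synthetic latencies (all numbers assumed, chosen only to separate the two quantities), comparing the average of per-minute p95s with the p95 of the full hour:

    import random
    import statistics

    random.seed(1)

    # Synthetic hour of request latencies (ms): 60 one-minute windows,
    # with a 5-minute incident that adds a heavy tail of slow requests.
    minutes = []
    for m in range(60):
        window = [random.gauss(20, 5) for _ in range(1000)]
        if m < 5:  # the incident: 200 extra slow requests per minute
            window += [random.uniform(500, 2000) for _ in range(200)]
        minutes.append(window)

    def p95(xs):
        return statistics.quantiles(xs, n=100)[94]

    avg_of_p95 = statistics.mean(p95(w) for w in minutes)
    true_p95 = p95([x for w in minutes for x in w])

    print(f"average of per-minute p95s: {avg_of_p95:7.1f} ms")
    print(f"p95 of the whole hour:      {true_p95:7.1f} ms")

On this data the averaged per-minute p95 lands several times higher than the p95 of the hour, because the incident minutes dominate the average; with other traffic shapes it can just as easily land lower. The two numbers answer different questions, which is exactly why knowing "what it actually is" matters.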

Also, the claim that "[o]nly looking at the 95th percentile is what you do when you want to hide all the bad stuff" is very context dependent. As long as you understand what it actually means, I don't see the harm in it. The author's point is that, because a single page load results in 40 or so requests, you are much more likely to hit a p99, so you should really care about p99 and up. If that is contextually appropriate, then it is absolutely right, but it really only applies to a webserver serving webpage assets, which is just one kind of software you might be writing. It is definitely important to know, for one given "eyeball" waiting on your service to respond, what the actual flow is - whether it's just one request, multiple concurrent requests, or some dependency graph of calls to your service all needed in sequence - but I don't really think that challenges the commonsense notion of latency, does it?

camel_gopher•2mo ago
Nearly all time-series databases store single-value aggregations (think p95) over a time period. A select few store actual serialized distributions (Atlas from Netflix, Apica IronDB, some bespoke implementations). Latency tooling is sorely overlooked, mostly because the good tooling is complex and requires corresponding visualization tooling. Most of the vendors have some implementation of heat-map or histogram visualization, but either the math is wrong or the UI can’t handle a non-trivial volume of samples. Unfortunately it’s been a race to the bottom for latency measurement tooling, with the users losing.

Source: I’ve done this a lot
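
For a sense of what storing actual distributions buys you, here is a minimal sketch of mergeable histogram buckets (the bucket layout is assumed, not any particular vendor's wire format). Two stored p95 values cannot be combined into the p95 of the union, but two histograms can:

    import bisect

    # Fixed-bucket latency histogram: bucket i counts samples <= BOUNDS_MS[i].
    BOUNDS_MS = [1, 2, 5, 10, 20, 50, 100, 200, 500, 1000, float("inf")]

    def record(hist, latency_ms):
        hist[bisect.bisect_left(BOUNDS_MS, latency_ms)] += 1

    def merge(a, b):
        # Histograms merge losslessly: just add the bucket counts.
        return [x + y for x, y in zip(a, b)]

    def percentile(hist, p):
        rank, seen = p / 100 * sum(hist), 0
        for i, count in enumerate(hist):
            seen += count
            if seen >= rank:
                return BOUNDS_MS[i]  # upper bound of the matching bucket

    minute1, minute2 = [0] * len(BOUNDS_MS), [0] * len(BOUNDS_MS)
    for ms in (3, 4, 8, 15, 30):
        record(minute1, ms)
    for ms in (5, 7, 250, 400, 900):
        record(minute2, ms)
    print("p95 over both minutes: <=", percentile(merge(minute1, minute2), 95), "ms")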

Fripplebubby•2mo ago
I take it as a given that what is stored and graphed is an information-destroying aggregate, but I think that aggregate is actually still useful + meaningful.

camel_gopher•2mo ago

Someone smart I know coined it as “wrong but useful”.

rdtsc•2mo ago

10 years old and still relevant. Gil created a wrk fork https://github.com/giltene/wrk2 to handle coordinated omission better. I used his fork for many years, but I think he stopped updating it after a while.

Good load testing tools will have modes to send in data at a fixed rate, regardless of other requests, to handle coordinated omission. k6, for instance, defines these modes as "open" and "closed": https://grafana.com/docs/k6/latest/using-k6/scenarios/concep.... They mention the term "coordinated omission" on the page, but I feel they could have given a nod to Gil for inventing the term.
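
The difference between the two modes is easy to show in miniature. A toy sketch (numbers assumed; the stalling service is an in-process stand-in, not a real benchmark target) that measures the same run both ways - naively, and against the intended schedule in the spirit of wrk2's correction:

    import time

    def flaky_service(n):
        # Stand-in workload: every 200th call stalls for 1 s, the rest ~1 ms.
        time.sleep(1.0 if n % 200 == 0 else 0.001)

    INTERVAL = 0.01  # intended open-model rate: 100 requests/second
    naive, corrected = [], []
    next_send = time.monotonic()
    for n in range(1, 501):
        start = time.monotonic()
        flaky_service(n)
        end = time.monotonic()
        naive.append(end - start)          # closed model: stall stops the clock
        corrected.append(end - next_send)  # open model: measure vs. the schedule
        next_send += INTERVAL
        time.sleep(max(0.0, next_send - time.monotonic()))

    for name, xs in (("naive", naive), ("schedule-corrected", corrected)):
        xs.sort()
        print(f"{name} p99: {xs[int(0.99 * len(xs))] * 1000:.1f} ms")

The naive, closed-loop numbers barely register the stalls, because the blocked client simply stops issuing the requests that would have observed them; measured against the schedule, the backlog shows up in the tail.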

10000truths•2mo ago
The table is a bit misleading. Most of the resources of a website are loaded concurrently and are not on the critical path of the "first contentful paint", so latency does not compound as quickly as the table implies. For web apps, much of the end-to-end latency hides lower in the networking stack. Here's the worst-case latency for a modern Chrome browser performing a cold load of an SPA website:

DNS-over-HTTPS-over-QUIC resolution: 2 RTTs

TCP handshake: 1 RTT

TLS v1.2 handshake: 2 RTTs

HTTP request/response (HTML): 1 RTT

HTTP request/response (bundled JS that actually renders the content): 1 RTT

That's 7 round trips. If your connection crosses a continent, that's easily a 1-2 second time-to-first-byte for the content you actually care about. And no amount of bandwidth will decrease that, since the bottlenecks are the speed of light and router hop latencies. Weak 4G/WiFi signal and/or network congestion will worsen that latency even further.
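
As a back-of-envelope check, multiplying those 7 round trips by assumed, purely illustrative RTTs reproduces that figure before the server has spent any time at all:

    # 7 round trips at assumed, illustrative round-trip times
    for label, rtt_ms in [("same metro", 10), ("cross-continent", 80),
                          ("intercontinental", 150), ("weak mobile link", 300)]:
        print(f"{label:16s}: 7 x {rtt_ms:3d} ms = {7 * rtt_ms:4d} ms")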

jiggawatts•2mo ago
Using a CDN is so effective at improving the perceived performance of a web site because it shortens the network path (and hence the speed-of-light delay) for those first 7 round trips, by moving the static parts of the web app (HTML+JS) to the "edge", which is just a bunch of cache boxes scattered around the world.

The user no longer has to connect to the central app server; they can connect to the nearest edge cache box, which is probably a lot closer to them (1-10 ms is typical).

Note that stateful API calls will still need to go back to the central app server, potentially an intercontinental hop.

10000truths•2mo ago
Indeed, at some point, you can't lower tail latencies any further without moving closer to your users. But of the 7 round trips that I mentioned above, you have control over 3 of them: 2 round trips can be eliminated by supporting HTTP/3 over QUIC (and adding HTTPS DNS records to your zone file), and 1 round trip can be eliminated by server-side rendering. That's a 40-50% reduction before you even need to consider a CDN setup, and depending on your business requirements, it may very well be enough.
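
On the DNS piece: the HTTPS resource record (RFC 9460) is what lets a browser discover h3 support up front, rather than connecting over TCP first and learning about QUIC via Alt-Svc. A sketch of a zone-file entry, with placeholder name and address:

    example.com.  300  IN  HTTPS  1 . alpn="h3,h2" ipv4hint=192.0.2.1
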
pianom4n•2mo ago

For context, this article was written when 95%+ of websites used HTTP/1.1 (and <50% used HTTPS).

hakkikonu•2mo ago

"How NOT to Measure Latency" by Gil Tene: https://www.youtube.com/watch?v=lJ8ydIuPFeU