frontpage.

BranchEducation

https://www.youtube.com/channel/UCdp4_l1vPmpN-gDbUwhaRUQ
1•casey2•3m ago•0 comments

Call Center Agoda Reschedule

https://sites.google.com/view/nomor-call-center-agoda/bio
1•Outreach_Refer•5m ago•3 comments

Auditing JDBC Drivers at Scale with AI led to 85000 bounty

https://www.hacktron.ai/blog/jdbc-audit-at-scale
2•Mohansrk•6m ago•0 comments

Gemini 3 Tools System Prompt

https://gist.github.com/sshh12/ec2c7eb1ae5f156a9cdc8e7f8fef512f
1•sshh12•16m ago•0 comments

Show HN: I made yet another AI headshot app because the world needed one more

https://apps.apple.com/pt/app/ai-headshot-editor-glowtap/id6754992714
1•CarlosArthurr•19m ago•0 comments

Is Apple Intelligence Smart? We Tested Every Feature

https://www.steaktek.com/tech/is-apple-intelligence-actually-smart-we-tested-every-feature/
2•genuser•26m ago•1 comments

Microsoft Will Preload Windows 11 File Explorer to Fix Bad Performance

https://blogs.windows.com/windows-insider/2025/11/21/announcing-windows-11-insider-preview-build-...
2•ksec•36m ago•0 comments

Show HN: 12K Reddit posts scraped and AI-scored for startup ideas

https://search.reddit-business-ideas.workers.dev/
1•shadowjones•37m ago•0 comments

React Suite v6: A Steady Step Toward Modernization

https://medium.com/rsuite/react-suite-v6-a-steady-step-toward-modernization-2af78029978d
2•simonguo•37m ago•1 comments

Ask HN: Where can you find old NetBSD packages?

2•GaryBluto•38m ago•1 comments

Show HN: MoodLens – Provide insights about your emotional state

https://moodlens.aiwith.me/
1•struy•40m ago•0 comments

Cryptographers Held an Election. They Can't Decrypt the Results

https://www.nytimes.com/2025/11/21/world/cryptography-group-lost-election-results.html
3•ceejayoz•41m ago•0 comments

Empathetic, Available, Cheap: When A.I. Offers What Doctors Don't

https://www.nytimes.com/2025/11/16/well/ai-chatbot-doctors-health-care-advice.html
1•uxhacker•41m ago•0 comments

On the Death of Tech Idealism (and Rise of the Homeless) in Northern California

https://lithub.com/on-the-death-of-tech-idealism-and-rise-of-the-homeless-in-northern-california/
26•pseudolus•46m ago•1 comments

MCP Apps: Extending servers with interactive user interfaces

http://blog.modelcontextprotocol.io/posts/2025-11-21-mcp-apps/
2•raykyri•58m ago•0 comments

GrapheneOS accuses competitors of sabotage, exits France over police threats

https://piunikaweb.com/2025/11/21/grapheneos-accuses-murena-iode-of-sabotage-pulls-servers-from-f...
5•RachelF•59m ago•0 comments

Preamble to a Psychofauna Bestiary

https://www.hopefulmons.com/p/preamble-to-a-psychofauna-bestiary
1•Anon84•1h ago•0 comments

Run0 –Empower

https://mastodon.social/@daandemeyer/115565105032166177
2•GalaxyNova•1h ago•0 comments

International Cloud Atlas

https://cloudatlas.wmo.int/en/home.html
3•bookofjoe•1h ago•0 comments

Hacked iPhone running iPadOS + a Mac-like experience on an external monitor

https://old.reddit.com/r/iphone/comments/1p3e2bf/my_hacked_iphone_running_ipados_and_running_a/
1•TechExpert2910•1h ago•0 comments

Infinibay LXD Container

https://github.com/Infinibay/lxd
3•angaroshi•1h ago•0 comments

Beyond Earnings Premia: Debt-Adjusted Returns to Postsecondary Education

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5375794
4•PaulHoule•1h ago•0 comments

A Tap-to-Pay Society Is Leaving New Yorkers Behind

https://www.nytimes.com/2025/11/21/nyregion/cashless-economy.html
5•perihelions•1h ago•1 comments

Graphics API is irrelevant [video]

https://www.youtube.com/watch?v=xNX9H_ZkfNE
2•ibobev•1h ago•0 comments

Nano Banana Pro Can Generate an Image from Lat Long

https://chat.vlm.run/c/8ff868bb-e188-4677-b38e-46301d30aac9
1•visioninmyblood•1h ago•1 comments

How to get started with the ed text editor (2022)

https://www.redhat.com/en/blog/introduction-ed-editor
2•todsacerdoti•1h ago•1 comments

C64 gets Doom-style shooter

https://hackaday.com/2025/11/21/commodores-most-popular-computer-gets-doom-style-shooter/
2•amichail•1h ago•0 comments

Wikidive: A tool to deep dive into Wikipedia rabbitholes

https://wikidive.net/dive?topic=Cognition
2•atulvi•1h ago•0 comments

How to use the internet, 1995 [video]

https://www.youtube.com/watch?v=Y0EXga2hEIs
2•monkey34•1h ago•0 comments

Bret Taylor's Sierra Reaches $100M ARR in Under Two Years

https://techcrunch.com/2025/11/21/bret-taylors-sierra-reaches-100m-arr-in-under-two-years/
1•karakoram•1h ago•0 comments

Measuring Latency (2015)

https://bravenewgeek.com/everything-you-know-about-latency-is-wrong/
43•dempedempe•1d ago
https://archive.md/D8E5W

Comments

tomhow•1d ago
One previous discussion at time of publication:

A summary of how not to measure latency - https://news.ycombinator.com/item?id=10732469 - Dec 2015 (3 comments)

Fripplebubby•1d ago
> This is partly a tooling problem. Many of the tools we use do not do a good job of capturing and representing this data. For example, the majority of latency graphs produced by Grafana, such as the one below, are basically worthless. We like to look at pretty charts, and by plotting what’s convenient we get a nice colorful graph which is quite readable. Only looking at the 95th percentile is what you do when you want to hide all the bad stuff. As Gil describes, it’s a “marketing system.” Whether it’s the CTO, potential customers, or engineers—someone’s getting duped. Furthermore, averaging percentiles is mathematically absurd. To conserve space, we often keep the summaries and throw away the data, but the “average of the 95th percentile” is a meaningless statement. You cannot average percentiles, yet note the labels in most of your Grafana charts. Unfortunately, it only gets worse from here.

I think this is getting a bit carried away. I don't have any argument against the observation that the average of a p95 is not something that makes sense mathematically, but if you actually understand what it is, it is absolutely still meaningful. With time series data, there is always some time denominator, so it really means (say) "the p95 per minute averaged over the last hour", which is, or can be, meaningful (and useful at a glance).
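
To make the distinction concrete, here is a minimal sketch (Python with numpy; the latency distributions are synthetic and made up for illustration) comparing "average of per-minute p95s" with the true p95 of the raw hour:

    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic latencies (ms): 60 one-minute windows, a few with a fat tail.
    minutes = [rng.lognormal(mean=3.0, sigma=0.3, size=1000) for _ in range(55)]
    minutes += [rng.lognormal(mean=3.0, sigma=1.5, size=1000) for _ in range(5)]

    per_minute_p95 = [np.percentile(m, 95) for m in minutes]
    avg_of_p95 = np.mean(per_minute_p95)                    # what the dashboard label often means
    true_p95 = np.percentile(np.concatenate(minutes), 95)   # p95 of the whole hour's raw samples

    print(f"average of per-minute p95s: {avg_of_p95:.1f} ms")
    print(f"true p95 over the hour:     {true_p95:.1f} ms")
    # The two disagree: "p95 per minute, averaged over the hour" is a different
    # (but still well-defined) quantity from the hour's true p95.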

Also, the claim that "[o]nly looking at the 95th percentile is what you do when you want to hide all the bad stuff" is very context-dependent. As long as you understand what it actually means, I don't see the harm in it. The author makes the point that, because a single webpage load results in 40 requests or so, you are much more likely to hit a p99, and so you should really care about p99 and up. More power to you: if that's contextually appropriate, then it is absolutely right, but it really only applies to a webserver serving webpage assets, which is only one kind of software you might be writing. I think it is definitely important to know, for one given "eyeball" waiting on your service to respond, what the actual flow is - whether it's just one request, or multiple concurrent requests, or some kind of dependency graph of calls to your service all needed in sequence - but I don't really think that challenges the commonsense notion of latency, does it?
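
A rough illustration of the 40-requests point (Python; the independence assumption is a simplification, since real request latencies are often correlated):

    # Chance that at least one of n independent requests exceeds the p99 latency.
    def p_at_least_one_slow(n: int, percentile: float = 0.99) -> float:
        return 1.0 - percentile ** n

    for n in (1, 10, 40, 100):
        print(f"{n:3d} requests -> {p_at_least_one_slow(n):.0%} chance of seeing a >p99 latency")
    # 40 requests -> roughly a one-in-three chance that the page load hits the tail.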

camel_gopher•23h ago
Nearly all time series databases store single-value aggregations (think p95) over a time period. A select few store actual serialized distributions (Atlas from Netflix, Apica IronDB, some bespoke implementations). Latency tooling is sorely overlooked, mostly because the good tooling is complex and requires corresponding visualization tooling. Most of the vendors have some implementation of heat map or histogram visualization, but either the math is wrong or the UI can't handle a non-trivial volume of samples. Unfortunately it's been a race to the bottom for latency measurement tooling, with the users losing.

Source: I’ve done this a lot
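
A hedged sketch of why storing distributions matters (Python; the bucket boundaries and helper names are made up for illustration, not any particular vendor's format): bucketed counts can be merged across time windows and re-queried for any percentile, whereas stored p95s cannot be combined correctly.

    from collections import Counter

    # Hypothetical fixed bucket upper bounds, in milliseconds.
    BUCKETS = [1, 2, 5, 10, 25, 50, 100, 250, 500, 1000, float("inf")]

    def to_histogram(samples):
        """Count each sample into the first bucket whose upper bound covers it."""
        h = Counter()
        for s in samples:
            for upper in BUCKETS:
                if s <= upper:
                    h[upper] += 1
                    break
        return h

    def merge(histograms):
        """Histograms merge by adding counts -- this is the step percentiles can't do."""
        total = Counter()
        for h in histograms:
            total.update(h)
        return total

    def approx_percentile(hist, q):
        """Walk buckets until q of the total count is covered; return that bucket's bound."""
        total = sum(hist.values())
        seen = 0
        for upper in BUCKETS:
            seen += hist.get(upper, 0)
            if seen >= q * total:
                return upper
        return BUCKETS[-1]

    # Per-minute histograms roll up into an hourly one, which can then be queried for p95/p99.
    minute_histograms = [to_histogram([3, 7, 9, 480]), to_histogram([4, 6, 8, 11])]
    hourly = merge(minute_histograms)
    print("hourly p95 falls in the bucket ending at", approx_percentile(hourly, 0.95), "ms")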

Fripplebubby•22h ago
I take it as a given that what is stored and graphed is an information-destroying aggregate, but I think that aggregate is actually still useful + meaningful
camel_gopher•21h ago
Someone smart I know coined it as “wrong but useful”
rdtsc•1d ago
10 years old and still relevant. Gil created a wrk fork https://github.com/giltene/wrk2 to handle coordinated omission better. I used his fork for many years, but I think he stopped updating it after a while.

Good load testing tools will have modes to send in data at a fixed rate regardless of other requests to handle coordinated omission. k6, for instance, defines these modes as "open" and "closed": https://grafana.com/docs/k6/latest/using-k6/scenarios/concep.... They mention the term "coordinated omission" on the page, but I feel like they could have given a nod to Gil for inventing the term.
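
A minimal sketch of the open-model idea in Python (the target function and names are hypothetical; real tools like wrk2 and k6 do far more, including issuing requests concurrently so a stall doesn't block the schedule): requests are scheduled on a fixed cadence, and latency is measured from the intended start time, so time spent waiting behind a stalled server is counted rather than omitted.

    import time

    def open_model_load(target, rate_per_sec: float, duration_sec: float):
        """Fire `target()` on a fixed schedule; measure from the *intended* start time
        so delays caused by a slow server are recorded instead of silently omitted."""
        interval = 1.0 / rate_per_sec
        start = time.monotonic()
        latencies = []
        n = 0
        while n * interval < duration_sec:
            intended = start + n * interval        # when this request should have started
            now = time.monotonic()
            if now < intended:
                time.sleep(intended - now)         # hold the fixed cadence (open model)
            target()                               # hypothetical request function
            latencies.append(time.monotonic() - intended)  # includes any queueing delay
            n += 1
        return latencies

    # Usage sketch: a fake 10 ms "request" at 100 req/s for 5 seconds.
    # latencies = open_model_load(lambda: time.sleep(0.01), rate_per_sec=100, duration_sec=5)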

10000truths•21h ago
The table is a bit misleading. Most of the resources of a website are loaded concurrently and are not on the critical path of the "first contentful paint", so latency does not compound as quickly as the table implies. For web apps, much of the end-to-end latency hides lower in the networking stack. Here's the worst-case latency for a modern Chrome browser performing a cold load of an SPA website:

DNS-over-HTTPS-over-QUIC resolution: 2 RTTs

TCP handshake: 1 RTT

TLS v1.2 handshake: 2 RTTs

HTTP request/response (HTML): 1 RTT

HTTP request/response (bundled JS that actually renders the content): 1 RTT

That's 7 round trips. If your connection crosses a continent, that's easily a 1-2 second time-to-first-byte for the content you actually care about. And no amount of bandwidth will decrease that, since the bottlenecks are the speed of light and router hop latencies. Weak 4G/WiFi signal and/or network congestion will worsen that latency even further.
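
Back-of-the-envelope arithmetic for that claim (the 75 ms cross-continent round-trip time is an assumed figure, not from the comment):

    # Round trips on the critical path of a cold SPA load, per the breakdown above.
    round_trips = {
        "DNS-over-HTTPS-over-QUIC resolution": 2,
        "TCP handshake": 1,
        "TLS 1.2 handshake": 2,
        "HTTP request/response (HTML)": 1,
        "HTTP request/response (bundled JS)": 1,
    }

    rtt_ms = 75  # assumed cross-continent round-trip time
    total_rtts = sum(round_trips.values())
    print(f"{total_rtts} round trips x {rtt_ms} ms = {total_rtts * rtt_ms} ms before the content renders")
    # Add retransmits, congestion, or a weak radio link and 1-2 s is easy to reach.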

jiggawatts•20h ago
The reason a CDN is so effective at improving the perceived performance of a web site is that it reduces the length (and hence the speed-of-light delay) of these first 7 round trips by moving the static parts of the web app (HTML+JS) to the "edge", which is just a bunch of cache boxes scattered around the world.

The user no longer has to connect to the central app server; they can connect to their nearest cache edge box, which is probably a lot closer to them (1-10 ms is typical).

Note that stateful API calls will still need to go back to the central app server, potentially an intercontinental hop.

10000truths•18h ago
Indeed, at some point, you can't lower tail latencies any further without moving closer to your users. But of the 7 round trips that I mentioned above, you have control over 3 of them: 2 round trips can be eliminated by supporting HTTP/3 over QUIC (and adding HTTPS DNS records to your zone file), and 1 round trip can be eliminated by server-side rendering. That's a 40-50% reduction before you even need to consider a CDN setup, and depending on your business requirements, it may very well be enough.
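
The arithmetic behind that 40-50% figure, assuming each round trip costs about the same:

    baseline_rtts = 7      # from the breakdown upthread
    saved = 2 + 1          # HTTP/3 over QUIC (+ HTTPS DNS records) saves 2; server-side rendering saves 1
    print(f"{baseline_rtts} -> {baseline_rtts - saved} round trips "
          f"({saved / baseline_rtts:.0%} reduction) before touching a CDN")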
pianom4n•7h ago
For context, this article was written when 95%+ of websites used HTTP/1.1 (and <50% used HTTPS).
hakkikonu•18h ago
"How NOT to Measure Latency" by Gil Tene https://www.youtube.com/watch?v=lJ8ydIuPFeU