frontpage.


Raspberry Pi: More memory-driven price rises

https://www.raspberrypi.com/news/more-memory-driven-price-rises/
1•calcifer•4m ago•0 comments

Level Up Your Gaming

https://d4.h5go.life/
1•LinkLens•8m ago•1 comments

Di.day is a movement to encourage people to ditch Big Tech

https://itsfoss.com/news/di-day-celebration/
1•MilnerRoute•10m ago•0 comments

Show HN: AI generated personal affirmations playing when your phone is locked

https://MyAffirmations.Guru
1•alaserm•11m ago•1 comments

Show HN: GTM MCP Server- Let AI Manage Your Google Tag Manager Containers

https://github.com/paolobietolini/gtm-mcp-server
1•paolobietolini•12m ago•0 comments

Launch of X (Twitter) API Pay-per-Use Pricing

https://devcommunity.x.com/t/announcing-the-launch-of-x-api-pay-per-use-pricing/256476
1•thinkingemote•12m ago•0 comments

Facebook seemingly randomly bans tons of users

https://old.reddit.com/r/facebookdisabledme/
1•dirteater_•13m ago•1 comments

Global Bird Count

https://www.birdcount.org/
1•downboots•14m ago•0 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
2•soheilpro•16m ago•0 comments

Jon Stewart – One of My Favorite People – What Now? With Trevor Noah Podcast [video]

https://www.youtube.com/watch?v=44uC12g9ZVk
2•consumer451•18m ago•0 comments

P2P crypto exchange development company

1•sonniya•31m ago•0 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
1•jesperordrup•36m ago•0 comments

Write for Your Readers Even If They Are Agents

https://commonsware.com/blog/2026/02/06/write-for-your-readers-even-if-they-are-agents.html
1•ingve•37m ago•0 comments

Knowledge-Creating LLMs

https://tecunningham.github.io/posts/2026-01-29-knowledge-creating-llms.html
1•salkahfi•37m ago•0 comments

Maple Mono: Smooth your coding flow

https://font.subf.dev/en/
1•signa11•44m ago•0 comments

Sid Meier's System for Real-Time Music Composition and Synthesis

https://patents.google.com/patent/US5496962A/en
1•GaryBluto•52m ago•1 comments

Show HN: Slop News – HN front page now, but it's all slop

https://dosaygo-studio.github.io/hn-front-page-2035/slop-news
6•keepamovin•53m ago•1 comments

Show HN: Empusa – Visual debugger to catch and resume AI agent retry loops

https://github.com/justin55afdfdsf5ds45f4ds5f45ds4/EmpusaAI
1•justinlord•55m ago•0 comments

Show HN: Bitcoin wallet on NXP SE050 secure element, Tor-only open source

https://github.com/0xdeadbeefnetwork/sigil-web
2•sickthecat•58m ago•1 comments

White House Explores Opening Antitrust Probe on Homebuilders

https://www.bloomberg.com/news/articles/2026-02-06/white-house-explores-opening-antitrust-probe-i...
1•petethomas•58m ago•0 comments

Show HN: MindDraft – AI task app with smart actions and auto expense tracking

https://minddraft.ai
2•imthepk•1h ago•0 comments

How do you estimate AI app development costs accurately?

1•insights123•1h ago•0 comments

Going Through Snowden Documents, Part 5

https://libroot.org/posts/going-through-snowden-documents-part-5/
1•goto1•1h ago•0 comments

Show HN: MCP Server for TradeStation

https://github.com/theelderwand/tradestation-mcp
1•theelderwand•1h ago•0 comments

Canada unveils auto industry plan in latest pivot away from US

https://www.bbc.com/news/articles/cvgd2j80klmo
3•breve•1h ago•1 comments

The essential Reinhold Niebuhr: selected essays and addresses

https://archive.org/details/essentialreinhol0000nieb
1•baxtr•1h ago•0 comments

Rentahuman.ai Turns Humans into On-Demand Labor for AI Agents

https://www.forbes.com/sites/ronschmelzer/2026/02/05/when-ai-agents-start-hiring-humans-rentahuma...
1•tempodox•1h ago•0 comments

StovexGlobal – Compliance Gaps to Note

1•ReviewShield•1h ago•1 comments

Show HN: Afelyon – Turns Jira tickets into production-ready PRs (multi-repo)

https://afelyon.com/
1•AbduNebu•1h ago•0 comments

Trump says America should move on from Epstein – it may not be that easy

https://www.bbc.com/news/articles/cy4gj71z0m0o
7•tempodox•1h ago•4 comments

HTTP3 Explained

https://http3-explained.haxx.se
192•weinzierl•3mo ago

Comments

ahoka•3mo ago
Anyone else block UDP 80/443 due to privacy concerns?
detaro•3mo ago
What privacy concern do you have that does not apply to TCP 80/443?
ahoka•3mo ago
Tracking sessions across different physical connections has some non-trivial privacy implications:

https://http3-explained.haxx.se/en/quic/quic-connections#con...

NavinF•3mo ago
How do you imagine other protocols handle switching physical connections? With HTTP 1, you send your session ID as a cookie after wasting time creating a new TCP connection
ahoka•3mo ago
Yes, obviously, but we already know how that is used. This is a more complex protocol that might enable attack vectors that were not possible before and that we do not think about when accessing websites.

But see the notes taken from the HTTP/3 RFC itself, written by the authors:

10.11. Privacy Considerations

   Several characteristics of HTTP/3 provide an observer an opportunity
   to correlate actions of a single client or server over time.  These
   include the value of settings, the timing of reactions to stimulus,
   and the handling of any features that are controlled by settings.

   As far as these create observable differences in behavior, they could
   be used as a basis for fingerprinting a specific client.

   HTTP/3's preference for using a single QUIC connection allows
   correlation of a user's activity on a site.  Reusing connections for
   different origins allows for correlation of activity across those
   origins.

   Several features of QUIC solicit immediate responses and can be used
   by an endpoint to measure latency to their peer; this might have
   privacy implications in certain scenarios.
NavinF•3mo ago
HTTP 2 also does connection coalescing. Why are you quoting text instead of a specific concern? QUIC has been around for over a decade now
MallocVoidstar•3mo ago
No.
frmdstryr•3mo ago
Yes, no performance difference either.
ckbkr10•3mo ago
Sounds overly complicated; I doubt this will see widespread adoption
ofrzeta•3mo ago
"As of September 2024, HTTP/3 is supported by more than 95% of major web browsers in use and 34% of the top 10 million websites."

https://en.wikipedia.org/wiki/HTTP/3

karel-3d•3mo ago
A lot of servers still don't support that.

Go http webserver doesn't support http 3 without external libraries. Nginx doesn't support http 3. Apache doesn't support http 3. node.js doesn't support http 3. Kubernetes ingress doesn't support http 3.

should I go on?

edit: even curl itself - which created the original document linked above - has http 3 just in an experimental build.

miyuru•3mo ago
>Nginx doesn't support http 3

nginx does support it.

https://nginx.org/en/docs/quic.html

karel-3d•3mo ago
Ah okay, I was wrong there, mea culpa
dotancohen•3mo ago
The guy's point still stands - lots of popular software still does not support HTTP/3.
karel-3d•3mo ago
And I see I was not that wrong; the module is still marked as "experimental" and not built by default.

https://nginx.org/en/docs/http/ngx_http_v3_module.html

aleks_me2•3mo ago
Well, this statement has to be made more precise.

caddyserver v2 supports HTTP/3 and it's a web server written in Go: https://caddyserver.com/features

FYI: There is also a Rust web server which supports HTTP/3: https://v2.ferronweb.org/

karel-3d•3mo ago
I meant Go's built-in web server.
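
To make the "external libraries" point concrete, here is a minimal sketch of serving HTTP/3 from Go, assuming the third-party quic-go package (github.com/quic-go/quic-go/http3) and its http3.Server type; nothing below comes from standard-library HTTP/3 support, because there is none, and the certificate paths are placeholders.

    package main

    import (
        "log"
        "net/http"

        "github.com/quic-go/quic-go/http3" // third-party; the stdlib speaks only HTTP/1.1 and HTTP/2
    )

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            // Tell clients arriving over TCP that HTTP/3 is available on UDP 443.
            w.Header().Set("Alt-Svc", `h3=":443"; ma=86400`)
            w.Write([]byte("hello over " + r.Proto + "\n"))
        })

        // HTTP/3 over QUIC on UDP 443 (cert.pem / key.pem are placeholder paths).
        go func() {
            h3 := &http3.Server{Addr: ":443", Handler: mux}
            log.Fatal(h3.ListenAndServeTLS("cert.pem", "key.pem"))
        }()

        // Keep serving HTTP/1.1 and HTTP/2 on TCP 443 alongside it (UDP 443 and TCP 443 do not clash).
        log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", mux))
    }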
samueloph•3mo ago
> edit: even curl itself - which created the original document linked above - has http 3 just in an experimental build.

It's not experimental when built with ngtcp2, which is what you will get on distros like Debian 13-backports (plain Debian 13 uses OpenSSL-QUIC), Debian 14 and onward, Arch Linux and Gentoo.

Reference: https://curl.se/docs/http3.html

pimterry•3mo ago
Yes, and at the same time practical support within programming language standard libraries & common tooling lags way behind: https://httptoolkit.com/blog/http3-quic-open-source-support-...
taffer•3mo ago
You will get most of the benefits of HTTP 3 even if your app libraries run HTTP 1.1, as long as the app is behind a reverse proxy that speaks HTTP 3.
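
That pattern is simple enough to sketch even without a dedicated proxy product. Here is a rough Go illustration of the idea (again assuming the third-party quic-go http3 package; the backend address and certificate paths are made up): the proxy terminates HTTP/3 from clients while the application behind it keeps speaking plain HTTP/1.1.

    package main

    import (
        "log"
        "net/http/httputil"
        "net/url"

        "github.com/quic-go/quic-go/http3" // third-party QUIC/HTTP3 implementation
    )

    func main() {
        // The app behind the proxy only ever sees HTTP/1.1 requests.
        backend, err := url.Parse("http://127.0.0.1:8080")
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(backend)

        // The proxy terminates QUIC + TLS and speaks HTTP/3 to clients.
        srv := &http3.Server{Addr: ":443", Handler: proxy}
        log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem")) // placeholder cert paths
    }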
theandrewbailey•3mo ago
I use HAproxy to get HTTP/3.

https://www.haproxy.org/

https://haproxy.debian.net/

https://www.haproxy.com/blog/how-to-enable-quic-load-balanci...

kunley•3mo ago
Yep, for example, Caddy (zero special configuration to enable HTTP 3)
gucci-on-fleek•3mo ago
About 30% of traffic to Cloudflare uses HTTP/3 [0], so it seems pretty popular already. For comparison, this is 3× as much traffic as HTTP/1.1.

[0]: https://radar.cloudflare.com/adoption-and-usage#http1x-vs-ht...

mgaunard•3mo ago
and then cloudflare converts that to http/2 or even 1.1 for the backend
vanviegen•3mo ago
So? Those protocols work fine within the reliable low latency network of a datacenter.
wongarsu•3mo ago
I'd even go as far as claiming that on reliable wired connections (like between cloudflare and your backend) HTTP/2 is superior to HTTP/3. Choosing HTTP/3 for that part of the journey would be a downgrade
klempner•3mo ago
At the very least, the benefits of QUIC are very very dubious for low RTT connections like inside a datacenter, especially when you're losing a bunch of hardware support and moving a fair bit of actual work to userspace where threads need to be scheduled etc. On the other hand Cloudflare to backend is not necessarily low RTT and likely has nonzero congestion.

With that said, I am 100% in agreement that the primary benefits of QUIC in most cases would be between client and CDN, whereas the costs are comparable at every hop.

hshdhdhehd•3mo ago
Is CF typically serving from the edge, or from the location nearest to the server? I imagine it would be from the edge so that it can CDN what it can. So... most of the time it won't be a low-latency connection from CF to the backend. Unless your backend is globally distributed too.
immibis•3mo ago
Also, within a single server, you should not use HTTP between your frontend nginx and your application server - use FastCGI or SCGI instead, as they preserve metadata (like client IP) much better. You can also use them over the network within a datacenter, in theory.
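
For reference, Go's standard library even ships this: a handler can be served over FastCGI behind nginx via net/http/fcgi, as in the rough sketch below (the socket path is a placeholder).

    package main

    import (
        "log"
        "net"
        "net/http"
        "net/http/fcgi"
    )

    func main() {
        // Placeholder socket path; nginx would point fastcgi_pass at it.
        l, err := net.Listen("unix", "/run/app.sock")
        if err != nil {
            log.Fatal(err)
        }
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            // Client metadata arrives as FastCGI params rather than trusted HTTP headers.
            w.Write([]byte("hello from FastCGI\n"))
        })
        log.Fatal(fcgi.Serve(l, nil)) // nil handler means use http.DefaultServeMux
    }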
OskarS•3mo ago
Is the protocol inherently inferior in situations like that, or is this because we've spent decades optimizing for TCP and building it into kernels and hardware? If we imagine a future where QUIC gets that kind of support, will it still be a downgrade?
Veserv•3mo ago
There is no performance disadvantage at the normal speed of most implementations. With a good QUIC implementation and a good network stack you can drive ~100 Gb/s per core on a regular processor from userspace with 1500-byte MTU with no segmentation offload if you use an unencrypted QUIC configuration. If you use encryption, then you will bottleneck on the encryption/decryption bandwidth of ~20-50 Gb/s depending on your processor.

On the Linux kernel [1], for some benchmark they average ~24 Gb/s for unencrypted TCP from kernel space with 1500-byte MTU using segmentation offload. For encrypted transport, they average ~11 Gb/s. Even using 9000-byte MTU for unencrypted TCP they only average ~39 Gb/s. So there is no inherent disadvantage when considering implementations of this performance level.

And yes, that is a link to a Linux kernel QUIC vs Linux kernel TCP comparison. And yes, the Linux kernel QUIC implementation is only driving ~5 Gb/s which is 20x slower than what I stated is possible for a QUIC implementation above. Every QUIC implementation in the wild is dreadfully slow compared to what you could actually achieve with a proper implementation.

Theoretically, there is a small fundamental advantage to TCP due to not having multiple streams which could allow it maybe a ~2x performance advantage when comparing perfectly optimal implementations. But, you are comparing a per-core control plane throughput using 1500-byte MTU of, by my estimation, ~300 Gb/s on QUIC vs ~600 Gb/s on TCP at which point both are probably bottlenecking on your per-core memory bandwidth anyways.

[1] https://lwn.net/ml/all/cover.1751743914.git.lucien.xin@gmail...

frmdstryr•3mo ago
Also apparently slower over fast connections https://arxiv.org/pdf/2310.09423
DaSHacka•3mo ago
A decrease in throughput is a small price to pay for progress
code_martial•3mo ago
Here’s some conceptual background on how and why HTTP/3 came to be (recollected from memory):

HTTP/1.0 was built primarily as a textual request-response protocol over the very suitable TCP protocol which guaranteed reliable byte stream semantics. The usual pattern was to use a TCP connection to exchange a request and response pair.

As websites grew more complex, a web page was no longer just one document but a collection of resources stitched together into a main document. Many of these resources came from the same source, so HTTP/1.1 came along with one main optimisation — the ability to reuse a connection for multiple resources using Keep Alive semantics.

This was important because TCP connections and TLS (née SSL) took many round-trips to get established and to reach optimal transmission speed. Latency is one thing that cannot be optimised by adding more hardware because it’s a function of physical distance and network topology.

HTTP/2 came along as a way to improve performance for dynamic applications that were relying more and more on continuous bi-directional data exchange and not just one-and-done resource downloads. Two of its biggest advancements were faster (fewer round-trips) TLS negotiation and the concept of multiple streams over the same TCP connection.

HTTP/2 fixed pretty much everything that could be fixed with HTTP performance and semantics for contemporary connected applications but it was still a protocol that worked over TCP. TCP is really good when you have a generally stable physical network (think wired connections) but it performs really badly with frequent interruptions (think Wi-Fi with handoffs and mobile networks).

Besides the issues with connection reestablishment, there was also the challenge of “head of the line blocking” — since TCP has no awareness of multiplexed HTTP/2 streams, it blocks everything if a packet is dropped, instead of blocking only the stream to which the packet belonged. This renders HTTP/2 multiplexing a lot less effective.

In parallel with HTTP/2, work was also being done to optimise the network connection experience for devices on mobile and wireless networks. The outcome was QUIC — another L4 protocol over UDP (which itself is barebones enough to be nicknamed “the null protocol”). Unlike TCP, UDP just tosses data packets between endpoints without much consideration of their fate or the connection state.

QUIC’s main innovation is to integrate encryption into the transport layer and elevate connection semantics to the application space, and allow for the connection state to live at the endpoints rather than in the transport components. This allows retaining context as devices migrate between access points and cellular towers.

So HTTP/3? Well, one way to think about it is that it is HTTP/2 semantics over QUIC transport. So you get excellent latency characteristics over frequently interrupted networks and you get true stream multiplexing semantics because QUIC doesn’t try to enforce delivery order or any such thing.

Is HTTP/3 the default option going forward? Maybe not until we get the level of support that TCP enjoys at the hardware level. Currently, managing connection state in application software means that in controlled environments (like E-W communications within a data centre), HTTP/3 may not have as good a throughput as HTTP/2.
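
To make the "HTTP/2 semantics over QUIC transport" framing concrete, here is a minimal client sketch in Go. It assumes the third-party quic-go library, whose HTTP/3 round-tripper plugs into the standard http.Client (older releases call the type http3.RoundTripper, newer ones http3.Transport), and it targets a public HTTP/3 test page.

    package main

    import (
        "fmt"
        "io"
        "log"
        "net/http"

        "github.com/quic-go/quic-go/http3" // third-party; see the note above about the type name
    )

    func main() {
        // Everything below http.Client is QUIC: a single UDP-based connection carrying
        // independent streams, with TLS 1.3 folded into the transport handshake.
        rt := &http3.RoundTripper{}
        defer rt.Close()

        client := &http.Client{Transport: rt}
        resp, err := client.Get("https://cloudflare-quic.com/") // a public HTTP/3 test endpoint
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()

        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Proto, "-", len(body), "bytes") // should report HTTP/3.0
    }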

NeutralForest•3mo ago
Thanks for taking the time to make this, that was helpful!
code_martial•3mo ago
Glad you found it helpful! Most of it is distilled from High Performance Browser Networking (https://hpbn.co/). It’s a very well organised, easy to follow book. Highly recommended!

Unfortunately, it’s not updated to include QUIC and HTTP/3 so I had to piece together the info from various sources.

newpavlov•3mo ago
Thank you for a great overview! I wish HTTP3/QUIC was the "default option" and had much wider adoption.

Unfortunately, software implementations of QUIC suffer from dealing with UDP directly. Every UDP packet involves one syscall, which is relatively expensive in modern times. And accounting for the MTU makes the situation roughly 64 times worse, since each datagram carries at most ~1500 bytes versus the tens of kilobytes you can hand to the kernel in a single TCP write.

In-kernel implementations and/or io-uring may improve this unfortunate situation, but today in practice it's hard to achieve the same throughput as with plain TCP. I also vaguely remember that QUIC makes load-balancing more challenging for ISPs, since they can not distinguish individual streams as with TCP.

Finally, QUIC arrived a bit too late and it gets blocked in some jurisdictions (e.g. Russia) and corporate environments similarly to ESNI.

lelanthran•3mo ago
> In-kernel implementations and/or io-uring may improve this unfortunate situation, but today in practice it's hard to achieve the same throughput as with plain TCP.

This would depend on how the server application is written, no? Using io-uring and similar should minimise context-switches from userspace to kernel space.

> I also vaguely remember that QUIC makes load-balancing more challenging for ISPs, since they can not distinguish individual streams as with TCP.

Not just for ISPs; IIRC (and I may be recalling incorrectly) reverse proxies can't currently distinguish either, so you can't easily put an application behind Nginx and use it as a load-balancer.

The server application itself has to be the proxy if you want to scale out. OTOH, if your proxy for UDP is able to inspect the packet and determine the corresponding instance to send a UDP packet to, it's going to require far fewer resources on the reverse proxy/load balancer, as it doesn't have to maintain open connections at all.

It will also make some things easier: a machine that is getting overloaded can hand off (in userspace) existing streams to a freshly created instance of the server on a different machine, because the "stream" is simply a set of related UDP packets. TCP is much harder to hand off, and even if you can, it requires either networking changes or kernel functions to do the hand-off.

kccqzy•3mo ago
Why would every UDP packet involve one syscall when you can use sendmmsg(2) instead of sendmsg(2)? And similarly recvmmsg(2) instead of recvmsg(2).

EDIT: I found https://news.ycombinator.com/item?id=45387462 which is a way better discussion than what I wrote.
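
For anyone curious what that batching looks like in practice, here is a rough Go sketch using golang.org/x/net/ipv4, whose ReadBatch maps to recvmmsg(2) on Linux (WriteBatch likewise maps to sendmmsg(2)); the port number and batch size are arbitrary, and the actual QUIC packet handling is elided.

    package main

    import (
        "log"
        "net"

        "golang.org/x/net/ipv4" // ReadBatch/WriteBatch wrap recvmmsg/sendmmsg on Linux
    )

    func main() {
        c, err := net.ListenPacket("udp4", ":8443")
        if err != nil {
            log.Fatal(err)
        }
        pc := ipv4.NewPacketConn(c)

        // Pre-allocate room for up to 64 datagrams per syscall instead of one.
        msgs := make([]ipv4.Message, 64)
        for i := range msgs {
            msgs[i].Buffers = [][]byte{make([]byte, 1500)} // one MTU-sized buffer each
        }

        for {
            n, err := pc.ReadBatch(msgs, 0) // a single recvmmsg call may return many packets
            if err != nil {
                log.Fatal(err)
            }
            for _, m := range msgs[:n] {
                _ = m.Buffers[0][:m.N] // hand the datagram (m.N bytes, from m.Addr) to the QUIC stack
            }
        }
    }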

vivzkestrel•3mo ago
stupid question: why do we need QUIC? why not just switch HTTP to UDP instead of TCP?
kevincox•3mo ago
That's basically what QUIC is? It is a UDP based protocol over which HTTP can be run.

How else would you consider "just" switching HTTP to UDP? There are minimum required features such as (1) congestion control, (2) multiplexed streams, (3) encryption, and probably a few others that I forgot about.

GuB-42•3mo ago
QUIC is actually a level 4 protocol, on the same level as UDP and TCP; it could work on IP directly, making it QUIC/IP.

They chose to keep the UDP layer because of its minimal overhead over raw IP and for better adoption and anti-ossification reasons, but conceptually, forget about UDP: QUIC is a TCP replacement that happens to be built on top of UDP.

Now for the answers:

- Why not HTTP over UDP? UDP is an unreliable protocol unsuitable for HTTP. HTTP by itself cannot deal with packet loss, among other things.

- Why not keep HTTP/2? HTTP/2 is designed to work with TCP and work around some of its limitations. It could probably work over QUIC too, but you would lose most of the advantages of QUIC

- Why not go back to HTTP/1? It could turn out to be a better choice than HTTP/2, but it is not a drop-in replacement either, and you would lose all the interesting features introduced since HTTP/2
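
As a tiny illustration of how bare UDP really is, here is a Go sketch (the address is from the reserved documentation range and the payload is deliberately pointless): datagrams go out with no handshake, no retransmission, no ordering and no encryption, which is precisely the machinery QUIC adds on top.

    package main

    import (
        "fmt"
        "log"
        "net"
        "time"
    )

    func main() {
        // "Connecting" a UDP socket only sets a default destination; no handshake takes place.
        conn, err := net.Dial("udp", "192.0.2.1:443")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Fire-and-forget: no delivery guarantee, no ordering, no encryption.
        if _, err := conn.Write([]byte("GET / HTTP/1.1\r\n\r\n")); err != nil {
            log.Fatal(err)
        }

        // Whether anything comes back is entirely up to the network and the peer.
        conn.SetReadDeadline(time.Now().Add(2 * time.Second))
        buf := make([]byte, 1500)
        n, err := conn.Read(buf)
        if err != nil {
            log.Fatal(err) // a timeout is the likely outcome for plain HTTP sent over UDP
        }
        fmt.Printf("%d bytes: %q\n", n, buf[:n])
    }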

sebazzz•3mo ago
I also have em-dashes in memory.
akdor1154•3mo ago
Damn it's nice reading a simple static site like this. Links open instantly to the next fully laid out page of content. If only the rest of the web could be like this..
INTPenis•3mo ago
Agreed, but where is the actual git repo? I see text saying "contents get updated automatically on every commit to this git repository", but where is "this git repository"?

I can't find a link to the source anywhere.

Cthulhu_•3mo ago
After a quick google: https://github.com/bagder/http3-explained

(using a search engine is faster than asking for a link on HN)

Zambyte•3mo ago
I found it on HN faster than I could have with a search engine because they asked :)
rchard2scout•3mo ago
The introduction has a "help out" section which links to the github repo: https://github.com/bagder/http3-explained
dobladov•3mo ago
https://github.com/bagder/http3-explained
madeofpalk•3mo ago
Worth noting, that's a React application (using React Server Components?)! If you have JavaScript enabled, it renders as a single-page app, fetching each additional page via an API. If you disable JS, it renders it all on the server.
tomalbrc•3mo ago
Wow almost as good as handwritten HTML!
mb2100•3mo ago
Yes, that's why the performance metrics on low-powered phones are so terrible. Look at that: https://pagespeed.web.dev/analysis/https-http3-explained-hax...
flykespice•3mo ago
That is a striking difference between mobile and desktop; why is that? (Also, that is a very interesting site.)
mb2100•3mo ago
That's because on mobile, PageSpeed (which is a hosted version of the Lighthouse dev tools you also have in Chrome) simulates a low-end Android device on a slow 3G network, which is what a lot of website visitors actually use (as opposed to the web developer using the newest iPhone on great WiFi).

That's why content-driven websites should not be an SPA, and why I built https://mastrojs.github.io

cwillu•3mo ago
Ugh, that explains why it hangs for a quarter second any time I scroll with the mousewheel.
fkyoureadthedoc•3mo ago
Damn, it's nice to log onto Hacker News and see yet another top comment on an interesting article be bike-shedding about webshit. And it's also wrong, because if you crack open your React dev tools and have a peek inside the 2 MB of JavaScript, you'll see that this site is still everything you despise.
Razengan•3mo ago
But how will the author know the last 500 websites you visited and where your eyes are looking right now and what you ate last Tuesday? They should put some AnAlYtIcS in.
fny•3mo ago
Gitbook is not a simple static site generator.

There are also a ton of outbound requests for JS on first load.

[0]: view-source:https://http3-explained.haxx.se/

Nifty3929•3mo ago
+1000

I need fancy javascript crap like I need a hole in my head.

thegrim33•3mo ago
I see literally two dozen JS scripts run when I open the page.
gramakri2•3mo ago
Where can I download the pdf? It seems the link points to itself
panki27•3mo ago
It's hidden in the "Copy" drop down at the top right.

https://http3-explained.haxx.se/~gitbook/pdf?limit=100

sedatk•3mo ago
The document is now five years old and full of statements like “we’ll see that in the upcoming years”. I think it’s due for an update.
esnard•3mo ago
Link for anyone willing to contribute: https://github.com/bagder/http3-explained

Looks unmaintained, though.

lsaferite•3mo ago
I was personally bugged by it claiming that QUIC wasn't an acronym.
bmicraft•3mo ago
Well, it seems like it was originally, but it isn't now and wasn't at the date of publication.

Edit:

> The initial QUIC protocol was designed by Jim Roskind at Google and was initially implemented in 2012, announced publicly to the world in 2013 when Google's experimentation broadened.

> Back then, QUIC was still claimed to be an acronym for "Quick UDP Internet Connections", but that has been dropped since then.

from https://http3-explained.haxx.se/en/proc

derelicta•3mo ago
It's still crazy how quickly http3 got adopted by web actors. Can't wait til we do the same for IMAP and SMTP
immibis•3mo ago
Email is mostly dead - we use Gmail (or Microsoft 365) now. It is to email what Slack is to IRC. With only one or two vendors, the need for widely interoperable protocols is gone - they only need to interoperate between a few large service providers, and that can be done by private agreement.
lsaferite•3mo ago
You realize those ESPs use and support the industry standard open protocols under the hood, right? Slack is 100% proprietary and does not use industry standard protocols for interchange or federation. These are not even remotely comparable. Slack would need to use industry-standard, open protocols (e.g. XMPP) to allow federation with products like Teams and Discord for the situations to be comparable.
immibis•3mo ago
The Slack API is an industry standard open protocol.
lsaferite•3mo ago
Can you name one other chat service that uses it? Does it allow interop between the chat services?

Publishing the spec for your proprietary API does not make it an industry standard.

immibis•3mo ago
It does if your proprietary API is the industry standard. Everything Microsoft puts out is both industry standard and proprietary. So is everything TSMC puts out (their processes). Most of the interconnects in any computer system as well. I actually bought a legal copy of the SATA standard, for $30ish.
stock_toaster•3mo ago
Well, there is jmap[1].

[1]: https://en.wikipedia.org/wiki/JSON_Meta_Application_Protocol

kevg123•3mo ago
> As the packet loss rate increases, HTTP/2 performs less and less well. At 2% packet loss (which is a terrible network quality, mind you), tests have proven that HTTP/1 users are usually better off - because they typically have up to six TCP connections to distribute lost packets over. This means for every lost packet the other connections can still continue.

Why doesn't HTTP/2 use more than one socket?

thwarted•3mo ago
Because one thing it tries to optimize for is avoiding TLS session negotiation.
kevg123•3mo ago
Makes sense. One idea would be for the browser to detect packet loss (e.g. run netstat -s and look for TCP retransmissions, or the equivalent on other OSes) and open more sockets if there is any.
sharts•3mo ago
Will there be HTTP/4?