Jesus, that's bad. Does anyone know if userspace QUIC implementations are also this slow?
OTOH, TCP is like a quiet guy at the gym who always wears baggy clothes but does 4 plates on the bench when nobody is looking. Don't underestimate it. I wasted months learning that lesson.
QUIC getting hardware acceleration should close this gap while keeping all the benefits. But a kernel (software) implementation is basically a prerequisite before future hardware can properly accelerate it (that's my current understanding).
It does save 2 round-trips during connection setup compared to TLS-over-TCP, if Wikipedia's diagram is accurate: https://en.wikipedia.org/wiki/QUIC#Characteristics That is a decent latency win on every single connection (on a 100 ms RTT path, two fewer round trips is 200 ms off every fresh connection), and with 0-RTT you can go further, but 0-RTT is stateful and hard to deploy, so I expect it will see very little use.
Secondarily, you have the reduced RTT, multiple streams (prevents HOL blocking), datagrams (realtime video on the same connection), and the ability to scale buffers (in userspace) to avoid the BDP limits imposed by the kernel. However, I think in practice those haven't gotten as much visibility and traction, so the original reason is still the main one from what I can tell.
- having a lower latency handshake
- avoiding some badly behaved ‘middleware’ boxes between users and servers
- avoiding resetting connections when user IP addresses change
- avoiding head of line blocking / the increased cost of many connections ramping up
- avoiding poor congestion control algorithms
- probably other things too
And those are all things about working better with the kind of network situations you tend to see between users (often on mobile devices) and servers. I don’t think QUIC was meant to be fast by reducing OS overhead on sending data, and one should generally expect it to be slower for a long time until operating systems become better optimised for this flow and hardware supports offloading more of the work. If you are Google then presumably you are willing to invest in specialised network cards/drivers/software for that.
- bandwidth of the network
- how fast the nic on the server is
- how fast the nic on your device is
- whether the server response fits in the amount of data that can be sent given the client’s initial receive window or whether several round trips are required to scale the window up such that the server can use the available bandwidth
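To put rough numbers on that last point: the window needed to keep a path full is the bandwidth-delay product, so e.g. a 100 Mbps path at 50 ms RTT needs 100e6 × 0.05 / 8 ≈ 625 KB in flight, far more than a typical initial receive window of a few tens of KB, hence the round trips spent scaling the window up.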
It could still be faster in real world situations where the client is a mobile device with a high latency, lossy connection.
This could be achieved by encapsulating TCP in UDP and running a custom TCP stack in userspace on the client. That would allow protocol innovation without throwing away 3 decades of optimizations in TCP that make it 4x as efficient on the server side.
Surely badly behaving middleboxes won't just ignore UDP traffic? If anything, they'd get confused about udp/443 and act up, forcing clients to fall back to normal TCP.
QUIC is basically about taking all of the information middleboxes like to fuck with in TCP, putting it under the encryption layer, and packaging it back up in a UDP packet precisely so it's either just dropped or forwarded. In practice this (i.e. QUIC either being just dropped or left alone) has actually worked quite well.
To be fair, the Linux kernel TCP implementation only gets ~4.5 Gbps at normal packet sizes and still only achieves ~24 Gbps with large segmentation offload [2]. Both of which are ridiculously slow. It is straightforward to achieve ~100 Gbps/core at normal packet sizes without segmentation offload, with the same features as QUIC, given a properly designed protocol and implementation.
[1] https://microsoft.github.io/msquic/
[2] https://lwn.net/ml/all/cover.1751743914.git.lucien.xin@gmail...
Without seeing actual benchmark code, it's hard to tell if you should even care about that specific result.
If your goal is to pipe lots of bytes from A to B over an internal network or the public internet, there probably aren't many things, if any, that can outperform TCP. Decades were spent optimizing TCP for that. If HOL blocking isn't an issue for you, then you can keep using HTTP over TCP.
Layer 4 TCP is pretty much just slapped on top of Layer 3 IPv4 or IPv6 in exactly the same way for both of them.
Outside of some little nitpicky things like details on how TCP MSS clamping works, it is basically the same.
To prove it:
1. Add a new OSI Layer 4 protocol called "QUIC" and give it a new protocol number, and just for fun, change the UDP frame header semantics so it can't be confused for UDP.
2. Then release kernel updates to support the new protocol.
Nobody's going to use it, right? Because internet routers, home wireless routers, servers, shared libraries, etc would all need their TCP/IP stacks updated to support the new protocol. If we can't ship it over a weekend, it takes too long!
But wait. What if ChatGPT/Claude/Gemini/etc only supported communication over that protocol? You know what would happen: every vendor in the world would backport firmware patches overnight, bending over backwards to support it. Because they can smell the money.
OTOH, you want to be in user land on the client, because modifying the kernel on clients is hard. If you were Google, maybe you could work towards a model where Android clients could get their in-kernel protocol handling to be something that could be updated regularly, but that doesn't seem to be something Google is willing or able to do; Apple and Microsoft can get priority kernel updates out to most of their users quickly; Apple also can influence networks to support things they want their clients to use (IPv6, MP-TCP). </rant>
If you were happy with congestion control on both sides of TCP, and were willing to open multiple TCP connections like http/1, instead of multiplexing requests on a single connection like http/2, (and maybe transfer a non-pessimistic bandwidth estimate between TCP connections to the same peer), QUIC still gives you control over retransmission that TCP doesn't, but I don't think that would be compelling enough by itself.
Yes, there's still ossification in middle boxes doing TCP optimization. My information may be old, but I was under the impression that nobody does that in IPv6, so the push for v6 is both a way to avoid NAT and especially CGNAT, but also a way to avoid optimizer boxes as a benefit for both network providers (less expense) and services (less frustration).
Ossification doesn't apply (or it shouldn't, IMHO; the point of open source software is that you can change it to fit your needs... if you don't like what upstream is doing, you should be running a local fork that does what you want; yeah, it's nicer if it's upstreamed, but try running a local fork of Windows or macOS); you can make congestion control work for you when you control both sides; and enterprise switches and routers aren't messing with TCP flows. If you're pushing enough traffic that this is an issue, the cost of QUIC seems way too high to justify, even if it helps with some issues.
As very few server administrators bother turning on features like MPTCP, QUIC has an advantage on mobile phones with moderate to bad reception. That's not a huge issue for me most of the time, but billions of people are using mobile phones as their only access to the internet, especially in developing countries that are practically skipping widespread copper and fiber infrastructure and moving directly to 5G instead. Any service those people are using should probably consider implementing QUIC, and if they use it, they'd benefit from an in-kernel server.
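For reference, on reasonably recent Linux kernels (5.6+) turning MPTCP on is mostly a one-liner plus the application opting in (a sketch; defaults vary by distro):

    # enable MPTCP system-wide (already the default on some distros)
    sysctl -w net.mptcp.enabled=1

and in the application, asking for IPPROTO_MPTCP instead of plain TCP:

    int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_MPTCP);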
All the data center operators can stick to (MP)TCP, the telco people can stick to SCTP, but the consumer facing side of the internet would do well to keep QUIC as an option.
For what it's worth: Romania, one of the piss-poorest countries in Europe, has a perfectly fine mobile phone network, and even small outback villages have XGPON fiber rollouts everywhere. Germany? As soon as you cross into the country from Austria, your phone signal instantly drops; barely any decent coverage outside of the cities. And forget about PON, much less GPON or even XGPON.
Germany should be considered a developing country when it comes to expectations around telecommunication.
It is mostly achieved by using encryption, which is one reason encryption is such an important and mandatory part of the protocol. The idea is to expose as little as possible of the protocol between the endpoints; the rest is encrypted, so that "middleboxes" can't look at the packet and do funny things based on their own interpretation of the protocol stack.
Endpoints can still do whatever they want, and ossification can still happen, but it helps against ossification at the infrastructure level, which is the worst kind. Updating the Linux kernel on your server is easier than changing the proprietary hardware that makes up the network backbone.
The use of UDP instead of doing straight QUIC/IP is also an anti-ossification technique, as your app can just use UDP and a userland library regardless of the QUIC kernel implementation. In theory you could do that with raw sockets too, but that's much more problematic: since you don't have ports, you need the entire interface for yourself, and often root access.
The seL4 kernel is 10k lines of code. OKL4 is 13k. QNX is ~30k.
"It's fun having access to everything."
— Terry A. Davis
I think this would get messy quickly in an OS designed by more than one person.
The Jevons Paradox is applicable in a lot of contexts.
More efficient use of compute and communications resources will lead to higher demand.
In games this is fine. We want more, prettier, smoother, pixels.
In scientific computing this is fine. We need to know those simulation results.
On the web this is not great. We don’t want more ads, tracking, JavaScript.
I'm benefiting from WebP, JS JITs, Flexbox, zstd, Wasm, QUIC, etc, etc
I would hope for something more explicit, where you get a connection object and then open streams from it, but I guess that is fine for now.
https://github.com/microsoft/msquic/discussions/4257 ah but look at this --- unless this is an extension, the server side can also create new streams, once a connection is established. The client creating new "connections" (actually streams) cannot abstract over this. Something fundamentally new is needed.
My guess is recvmsg to get a new file descriptor for each new stream.
- Send specifies which stream by ordinal number? (Can't have different parts of a concurrent app independently open new streams)
- Receive doesn't specify which stream at all?!
How are socket APIs always such garbage....
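For what it's worth, here's a hypothetical sketch of the fd-per-stream guess above; it's just the SCM_RIGHTS ancillary-data pattern applied to streams. IPPROTO_QUIC and QUIC_STREAM_FD are made-up names, not a real kernel ABI, and conn_fd / handle_new_stream are placeholders:

    #include <sys/socket.h>

    /* hypothetical: a server-initiated stream arrives as a new fd in
       the ancillary data of a recvmsg() on the connection socket */
    char ctrl[CMSG_SPACE(sizeof(int))], buf[2048];
    struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
    struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                          .msg_control = ctrl, .msg_controllen = sizeof(ctrl) };
    ssize_t n = recvmsg(conn_fd, &msg, 0);
    for (struct cmsghdr *c = CMSG_FIRSTHDR(&msg); c; c = CMSG_NXTHDR(&msg, c))
        if (c->cmsg_level == IPPROTO_QUIC && c->cmsg_type == QUIC_STREAM_FD)
            handle_new_stream(*(int *)CMSG_DATA(c));  /* new stream fd */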
still a draft though.
1. Kernel bypass, combined with DMA and techniques like dedicating a CPU to packet processing, improves performance.
2. What I think of as "removing userspace from the data plane" improves performance for things like sendfile and ktls.
To your point, QUIC in the kernel seems to have neither advantage.
RDMA is directly from bus-to-bus, bypassing all the software.
In theory the likes of io_uring would bring these benefits across the board, but we haven't seen that delivered (yet, I remain optimistic).
Network stacks were moved to userspace because Google wanted to replace TCP itself (and upgrade TLS), but it only cared about the browser, so they just put the stack in the browser, and problem solved.
The copy itself is going at 200-400 Gbps, so writing out a standard 1,500 byte (12,000 bit) packet takes 30-60 ns (in steady state with caches being prefetched). Of course you get slaughtered if you stupidly do a syscall (~100 ns hardware overhead) per packet, since that is something like 300% overhead. You just batch like 32 packets, so the write time is ~1,000-2,000 ns and your overhead drops from ~300% to ~10%.
At a 1 Gbps throughput, that is ~80,000 packets per second, or one packet per ~12.5 us. So waiting for a 32-packet batch only adds an additional ~400 us to your end-to-end latency in return for 4x efficiency (assuming that was your bottleneck; which it is not for these implementations, as they are nowhere near the actual limits). If you go up to 10 Gbps, that is only ~40 us of added latency, and at 100 Gbps you are only looking at ~4 us of added latency for a literal 4x efficiency improvement.
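The batching doesn't need anything exotic, either; Linux has had sendmmsg(2) for exactly this since 3.0. A minimal sketch (udp_fd and the packet buffers are assumed to be set up already):

    #include <sys/socket.h>

    /* one syscall for 32 packets amortizes the ~100 ns syscall cost
       from ~300% overhead per packet down to ~10% for the batch */
    struct mmsghdr batch[32];
    /* ... point each batch[i].msg_hdr at a prepared packet ... */
    int sent = sendmmsg(udp_fd, batch, 32, 0);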
You can work that angle by moving networking into user space... setting up the NIC queues so that user space can access them directly, without needing to context switch into the kernel.
Or you can work the angle by moving networking into kernel space... things like sendfile, which lets a TCP application instruct the kernel to send a file to the peer without copying the content into userspace and then back into kernel space and finally into device memory. If you have in-kernel TLS with sendfile, you can keep skipping the copy to userspace; if you have NIC-based TLS, the kernel doesn't need to touch the data at all; and if the disk can DMA straight into the NIC buffers, the data never even hits main memory. Etc.
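For contrast with the QUIC case below, a minimal sketch of that in-kernel TCP path: the payload goes from the page cache to the socket without ever visiting userspace (kTLS setup omitted; tcp_fd, file_fd and file_size are assumed):

    #include <sys/sendfile.h>

    /* the kernel moves data straight from the page cache to the socket;
       with kTLS enabled on tcp_fd it is also encrypted in-kernel */
    off_t off = 0;
    ssize_t n = sendfile(tcp_fd, file_fd, &off, file_size);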
But most QUIC stacks get no benefit from either side of that. They're reading and writing packets via syscalls, and they're doing all the packetization in user space. No chance to sendfile and skip a context switch and a copy. Batching I/O via io_uring or similar helps with context switches, but probably doesn't prevent copies.
It just offers people choice for the right solution at the right moment.
This approach works well when implementing a failover mechanism: if the default path to a server goes down, you can update DNS A records to point to a fallback machine running NGINX. That fallback instance can then route requests for specific domains to the original backend over an alternate path without needing to replicate the full TLS configuration locally.
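Roughly, that looks like this in nginx's stream module (hostnames and addresses are placeholders):

    stream {
        map $ssl_preread_server_name $backend {
            app.example.com  192.0.2.10:443;
            default          192.0.2.20:443;
        }
        server {
            listen 443;
            ssl_preread on;
            proxy_pass $backend;  # raw TLS passthrough, no local certs
        }
    }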
However, this method won't work with HTTP/3. Since HTTP/3 uses QUIC over UDP and encrypts the SNI during the handshake, `ssl_preread_server_name` can no longer be used to route based on domain name.
What alternatives exist to support this kind of SNI-based routing with HTTP/3? Is the recommended solution to continue using HTTP/1.1 or HTTP/2 over TLS for setups requiring this behavior?
* I do actually consider it a feature, but do acknowledge https://xkcd.com/1172/
PS. HAProxy can proxy raw TLS, but can't direct based on hostname. Cloudflare tunnel I think has some special sauce that can proxy on hostname without terminating TLS but requires using them as your DNS provider.
PS: HAProxy definitely can do this too, using req.ssl_sni something like this:
frontend tcp-https-plain
mode tcp
tcp-request inspect-delay 10s
bind [::]:443 v4v6 tfo
acl clienthello req.ssl_hello_type 1
acl example.com req.ssl_sni,lower,word(-1,.,2) example.com
tcp-request content accept if clienthello
tcp-request content reject if !clienthello
default_backend tcp-https-default-proxy
use_backend tcp-https-example-proxy if example.com
Then tcp-https-example-proxy is a backend which forwards to a server listening for HTTPS (and using send-proxy-v2, so the client IP is kept). Cloudflare really isn't doing anything special here; there are also other tools like sniproxy which can intercept based on SNI (a common thing commercial proxies do for filtering reasons).

That's the theory anyway. You can't always rely on clients to do that (see how much of the HTTPS record Chromium actually supports[1]), but in general if QUIC fails for any reason clients will transparently fall back, as well as respecting the Alt-Svc[2] header. If this is a planned failover you could stop sending an Alt-Svc record and wait for the alternative to time out, although it isn't strictly necessary.
If you do really want to route QUIC however, one nice property is the SNI is always in the first packet, so you can route flows by inspecting the first packet. See cloudflare's udpgrm[3] (this on its own isn't enough to proxy to another machine, but the building block is there).
Without Encrypted Client Hello (ECH), the client hello (including SNI) is encrypted with a known key (this is to stop middleboxes which don't know about the version of QUIC from breaking it), so it is possible to decrypt it; see the code in udpgrm[4]. With ECH, the "router" would need a key to decrypt the ECH, which it can then decrypt inline and make a decision on (this is a different key from the TLS key, and fallback HTTPS records can use a different key than the non-fallback route; whether browsers currently support that is a different issue, but it is possible in the protocol). This is similar to how fallback with ECH could be supported with HTTP/2 and a TCP connection.
[1]: https://issues.chromium.org/issues/40257146
[2]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/...
[3]: https://blog.cloudflare.com/quic-restarts-slow-problems-udpg...
[4]: https://github.com/cloudflare/udpgrm/blob/main/ebpf/ebpf_qui...
Won't you need to "replicate the TLS config" on the backend servers then? And how hard is it to configure TLS on the nginx side anyway? Can't you just use ACME?
Otherwise, the TLS handshake would run into the same chicken/egg problem that you have: To derive the keys, it needs the certificate, but to select the certificate, it needs the domain name.
So you only need to replicate the ECH (eSNI) key, not the entire cert store.
TLS terminating at your edge (which is presumably where the IP addresses attach) isn't any particular risk in a world of letsencrypt where an attacker (who gained access to that box) could simply request a new SSL certificate, so you might as well do it yourself and move on with life.
Also: I've been unable to reproduce performance and reliability claims of quic. I keep trying a couple times a year to see if anything's gotten better, but I mostly leave it disabled for monetary reasons.
> This approach works well when implementing a failover mechanism: if the default path to a server goes down...
I'm not sure I agree: DNS can take minutes for updates to be reflected, and dumb clients (like web browsers) don't failover.
So I use an onerror handler to load the second path. For my ad tracking, that looks something like this:
<img src="patha.domain1?tracking"
     onerror="this.src='pathb.domain2?tracking'; this.onerror=function(){};">
but with the more complex APIs, fetch() is wrapped up similarly in the APIs I deliver to users. This works much better than anything else I've tried.

If speed is touted as the advantage of QUIC and it is in fact slower, why bother with this protocol? The author of the PR itself attributes some of the speed issues to the protocol design. Are there other problems in TCP that need fixing?
Those queues operate on a purely head-of-queue basis. If whatever is at the head of the queue is blocked in any way, the whole queue behind it gets stuck, regardless of whether it is talking to the same destination or a completely different one.
I've seen situations where a glitching network card caused some serious knock on impacts across a whole cluster, because the card would hang or packets would drop, and that would end up blocking the qdisc on a completely healthy host that was in the middle of talking to it, which would have impacts on any other host that happened to be talking to that healthy host. A tiny glitch caused much wider impacts than you'd expect.
The same kind of effect would happen from a VM that went through live migration. The tiny, brief pause would cause a spike of latency all over the place.
There are smarter alternatives like fq_codel that can mitigate some of this, but you do have to pay a small amount of processing overhead on every packet, because now you have a queuing discipline that actually needs to track some semblance of state.
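Trying that is cheap if you want to see the difference (interface name is a placeholder):

    # replace the default root qdisc with fq_codel on eth0
    tc qdisc replace dev eth0 root fq_codel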
As it always has been, and always will be.
> Long offers some potential reasons for this difference, including the lack of segmentation offload support on the QUIC side, an extra data copy in transmission path, and the encryption required for the QUIC headers.
All of these three reasons seem potentially very addressable.
It's worth noting that the benchmark here is on pristine network conditions, a drag race if you will. If you are on mobile, your network will have a lot more variability, and there TCP's design limits are going to become much more apparent.
TCP itself often has protocols run on top of it to do QUIC-like things; HTTP/2 is an example of this. So when you compare QUIC and TCP, it's kind of like comparing how fast a car goes with how fast an engine bolted to a frame with wheels on it goes. QUIC goes significantly up the OSI network stack, layer 5+, whereas TCP+TLS is layer 3. That's less system design.
QUIC also has wins for connecting faster, and especially for reconnecting faster. It also has IP mobility: if you're on mobile and your IP address changes (happens!) QUIC can keep the session going without rebuilding it once the client sends the next packet.
It's a fantastically well thought out & awesome advancement, radically better in so many ways. The advantages of having multiple non-blocking streams (alike SCTP) massively reduces the scope that higher level protocol design has to take on. And all that multi-streaming stuff being in the kernel means it's deeply optimizable in a way TCP can never enjoy.
Time to stop driving the old rust bucket jalopy of TCP around everywhere, crafting weird elaborate handmade shit atop it. We need a somewhat better starting place for higher level protocols and man oh man is QUIC alluring.
IP is layer 3 - network (ensures packets are routed to the correct host). TCP is layer 4 - transport (some people argue that TCP also has functions from layer 5, e.g. establishing sessions between apps), while TLS adds a few functions from layer 6 (e.g. encryption), which QUIC also has.
You can't reuse a connection that doesn't exist yet. A lot of this is about reducing latency not overall speed.
Probably. According to Google, IPv6 has a measly 46% of internet traffic now [0], and growing at about 5% per year. QUIC is 40% of Chrome traffic, and is growing at 5% every two years [1]. So yeah, their fates do look similar, which is to say both are headed for world domination in a couple of decades.
[0] https://dnsmadeeasy.com/resources/the-state-of-ipv6-adoption...
[1] https://www.cellstream.com/2025/02/14/an-update-on-quic-adop...
To paraphrase: "when you remove all the new stuff being added, you will see all the old stuff is still using the old protocols". Sounds reasonable, but I don't believe it. These IoT devices usually have the simplest stack imaginable, many of them implemented straight from the main loop. IPv6 isn't so bad, but QUIC/http2/http3 is a long, long way from simple.
A major driver of IPv6 is phones, which I would not classify as IoT. Where I live they all receive an IPv6 address now. When I hotspot, they hand out a routable IPv6 address to the laptop / desktop. Modern Windows / Linux installations will use the IPv6 address in preference to the double-NAT'ed IPv4 address they also hand out. The funny thing is you don't even notice, or at least I didn't. I only twigged when I happened to be looking at a packet capture from my tethered laptop and saw all this IPv6 traffic, and wondered what the heck was going on. It could have been happening for years without me noticing. Maybe it was.
It wasn't a surprise that I didn't notice. I set up WiFi access for a conference of 100's of computing nerds and professionals many years ago. Partly for kicks, partly as a learning exercise, I made it IPv6-only. As a backup plan I had an IPv4 network (behind a NAT sadly, which the IPv6 wasn't) ready to go on a different SSID. To my utter disbelief there were no complaints, literally not a single one. Again, no one noticed.
Similarly, splitting the networking/etc stacks out from the kernel into userspace can also be a performance improvement for some use cases.
At specific workloads (think: load balancers / proxy servers / etc), these things become extremely expensive.
If everything above IP was in userland, only one program at a time could use TCP.
TCP and UDP being intermediated by the kernel allow multiple programs to use the protocols at the same time because the kernel routes based on port to each socket.
QUIC sits a layer even higher because it cruises on UDP, so I think your point still stands, but it’s stuff on top of TCP/UDP, not IP.
The kernel should be as minimal as possible and everything that can be moved to userspace should be moved there. If you are afraid of performance issues then maybe you should stop using legacy processors with slow context switch timing.
By the same logic, we should never improve performance in software and just tell everyone to buy new hardware instead. A bit ridiculous.
WASDx•19h ago
Seems like this is a step in the right direction to resole some of those issues. I suppose nothing is preventing it from getting hardware support in future network cards as well.
miohtama•19h ago
For other use cases we can keep using TCP.
unethical_ban•17h ago
Example: Janky way to get return routing for traffic when you don't control enterprise routes.
Source: FW engineer
hdgvhicv•5h ago
No doubt you think I should simply renumber all my VMs every time that happens, breaking internal connections. Or perhaps run completely separate addressing in each VM in parallel and make sure each VM knows which connection to use. Perhaps the VMs peer with my laptop and then the laptop decides what to push out which way via local-prefs, AS paths, etc. That sounds so much simpler than a simple masquerade.
What happens when I want vm1 out of connection A, vm3 out of connection B, and vms 4-7 out of connection C? Then I want to change them quickly and easily. I'm balancing outbound and inbound rules, reaching for communities, and causing BGP dampening all over the place.
What about when they aren't VMs but physical devices? My $40 MiFi is now processing the entire DFZ routing table?
What happens when I want a single physical device like a TV to contact one service via connection 1 and another via connection 2, but the device doesn't support multiple routing tables or selecting between them? What if it does support it, but I just want to be able to shift my ssh sessions to a low-latency, higher-loss link while keeping my streaming on the high-latency, no-loss link?
All of this is trivial with NAT. Now sure, I can use NAT66 and do 1:1 NAT (no PAT here), but then I'm using NAT, and that breaks the IPv6 cult's belief that translating network addresses is useless.
paulddraper•15h ago
What do you have in mind?
skissane•13h ago
And if 10.0.0.0/8 is not enough, there is always the old Class E space, 240.0.0.0/4 - likely never going to be acceptable for use on the public Internet, but seeing growing use as an additional private IPv4 address range - which gives you over 200 million more IPv4 addresses.
lmm•7h ago
How is it "less hassle"? You've got to use a second, fiddlier protocol and you've got to worry about collisions and translations. Why not just use normal IPv6 and normal addresses for your whole network, how is that more hassle?
> You can always support IPv6 at the perimeter for ingress/egress. If your cluster is so big it can’t fit in 10.0.0.0/8, maybe the right answer is multiple smaller clusters-your service mesh (e.g. istio) can route inter-cluster traffic just based on names, not IPs.
You can work around the problems, sure. But why not just avoid them in the first place?
skissane•6h ago
Because, while less common than it used to be, software that has weird bugs with IPv6 is still a thing-especially if we are talking about internally developed software as opposed to just open source and major proprietary packages. And as long as IPv6 remains the minority in data centre environments, that’s likely to remain true - it is easy for bugs to linger (or even new ones to be introduced) when they are only triggered by a less popular configuration
viraptor•16h ago
It doesn't have to be one or the other. We've known for over a decade that the traffic between DCs was tapped https://www.theguardian.com/technology/2013/oct/30/google-re... Extending that to intra-DC wouldn't be surprising at all.
Meanwhile backdoored chips and firmware attacks are a constant worry and shouldn't be discounted regardless of the first point.
heavyset_go•8h ago
Only a handful of people need to know what happens in Room 641A, and they're compelled or otherwise incentivized not to let anyone else know.
codedokode•7h ago
It might not be able to, if you use secure boot and your server is locked in a cage.
subscribed•27m ago
Any communication between the cage and the outside world is through the cross-connects.
Unless it's some state-adversary, no one taps us like this. This is not a shared hosting. No one runs serious workloads like this.
"Unserious"? Sure, everything is encrypted p2p.
kldx•5h ago
ssh3, based on QUIC, is quicker at dropping into a shell than ssh. The latency difference was clearly visible.
QUIC with the unreliable dgram extension is also a great way to implement port forwarding over ssh. Tunneling one reliable transport over another hides the packet losses in the upper layer.
szszrk•4h ago
Machine-to-machine usually means traffic where neither side is a client device (desktop, mobile, etc). Often not initiated by a user, but that's debatable.
I would say a server syncing a database to a passive node is machine-to-machine, while a user connecting from their browser to a webserver is not.