These are my kind of people!
https://github.com/n0-computer/iroh/blob/main/iroh/docs/loca...
https://github.com/n0-computer/iroh/blob/main/iroh-relay/src...
If you use iroh as a library, you can specify your own relays.
It is important to mention that relays are interoperable, so you don't have isolated bubbles of nodes using certain relay networks. I can have the n0 relays specified and still talk to another node that is using a different set of relays.
we can definitely add a config argument to skip the hardcoded relays & provide custom ones!
* all connections are always e2ee (even when traffic flows through a relay)
* relays are both for connection negotiation, and as a fallback when a direct connection isn't possible
* initial packet is always sent through the relay to keep a fast time-to-first-byte, while a direct connection is negotiated in parallel. typical connections send a few hundred bytes over the relay & the rest of the connection lifetime is direct
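In code, using iroh as a library, that flow looks roughly like this. This is a hedged sketch against the iroh Rust crate as documented on docs.rs; builder and method names shift between versions, and the relay URL and ALPN string here are placeholders:
use iroh::{Endpoint, NodeAddr, RelayMap, RelayMode, RelayUrl};

const ALPN: &[u8] = b"example/dumb-pipe/0"; // placeholder protocol id

async fn dial(remote: NodeAddr) -> anyhow::Result<()> {
    // Swap the default n0 relays for your own (hypothetical URL).
    let relay: RelayUrl = "https://relay.example.com".parse()?;
    let ep = Endpoint::builder()
        .relay_mode(RelayMode::Custom(RelayMap::from_url(relay)))
        .bind()
        .await?;

    // First packets go via the relay for fast time-to-first-byte while
    // hole punching runs in parallel; traffic then migrates to the
    // direct path. Everything is end-to-end encrypted either way.
    let conn = ep.connect(remote, ALPN).await?;
    let (mut send, _recv) = conn.open_bi().await?;
    send.write_all(b"hello").await?;
    Ok(())
}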
relays are the only thing among the things you listed that even has a chance of solving this problem
Wireguard doesn't, which is why tailscale took off so much, since it offers basically that at its core (with a bunch of auxiliary features on top).
Show me some wireguard discovery/relay servers if I'm wrong.
Also, QUIC is more language-agnostic. The canonical user-space implementation of WireGuard is in Go, which can't easily expose C FFI bindings, and its abstractions are about dealing with "wireguard devices", not "a single dumb pipe", so WireGuard's userspace library also makes it surprisingly difficult to implement this simple thing without bringing a ton of baggage (like tun devices, gateways, IP address management, etc.) along for the ride.
If you already have a robust wireguard setup, then of course you don't need this and can just use socat or whatever.
QUIC runs entirely over UDP, handling the encryption, resending lost packets, and reordering packets that arrive out of order. The whole point of QUIC is to make it so you can get files transferred quickly.
WireGuard doesn't know anything about the data you're sending, and netcat+TCP is stuck with TCP's head-of-line blocking: a single lost packet stalls the entire stream until it's retransmitted.
In fact, it's one of the main reasons I use Wireguard. I can transition between mobile network and wifi without any of the applications noticing.
QUIC is a transport protocol that provides a stream abstraction (like TCP), with some improvements over TCP (like built-in support for multiplexing streams on the same connection, without head-of-line blocking issues).
Wireguard provides a network interface abstraction that acts as a NIC. You can run TCP on top of a wireguard NIC (or QUIC, for that matter).
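To make the multiplexing point concrete, here's a hedged sketch of two independent streams on one QUIC connection, using the quinn-style API that iroh exposes (method names per the docs, not verified against any particular version):
use iroh::endpoint::Connection;

async fn multiplex(conn: &Connection) -> anyhow::Result<()> {
    // Each open_bi() yields an independent, ordered byte stream.
    let (mut control_tx, _control_rx) = conn.open_bi().await?;
    let (mut bulk_tx, _bulk_rx) = conn.open_bi().await?;

    // A lost packet on the bulk stream stalls only the bulk stream;
    // the control stream keeps flowing (no cross-stream head-of-line
    // blocking, unlike two logical channels sharing one TCP socket).
    control_tx.write_all(b"keepalive").await?;
    bulk_tx.write_all(&vec![0u8; 1 << 20]).await?;
    Ok(())
}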
https://github.com/samyk/pwnat
It has more rough edges and doesn't handle all cases, but it also avoids the need for any kind of intermediary.
My tool of choice is https://github.com/hyprspace/hyprspace
Too bad the broken nature of NAT means this approach will just ignore any firewall rules you have configured and any malicious device or program can leverage it to open inbound connections.
I attended Rüdiger's (N0) workshop 2 weeks ago at the web3 summit in Berlin and was left super inspired. The code for building something like this is available here https://github.com/rklaehn/iroh-workshop-web3summit2025 and I highly recommend checking out the slides too :)
I would love to see what people would build if they had a little bit more time with help from the n0 team. A one hour or even three hour workshop is too short.
That's a huge assumption I wouldn't make after reading "dumb".
And from the article:
> Easy, direct connections that punch through NATs & stay connected as network conditions change.
This sounds more like a pipe that is trying to be smart. According to your principle, not something to build a secure system with.
# receiver
socat UNIX-RECV:/tmp/foobar - | my-command
# sender
my-command | ssh host socat - UNIX-SENDTO:/tmp/foobar
You can relay through any other SSH server if your target is behind a firewall or subject to NAT (for example the public service ssh-j.com). This is end-to-end encrypted (SSH inside SSH):
# receiver
ssh top-secret@ssh-j.com -N -R ssh:22:localhost:22
socat UNIX-RECV:/tmp/foobar - | my-command
# sender
my-command | ssh -J top-secret@ssh-j.com ssh socat - UNIX-SENDTO:/tmp/foobar
(originally posted on the thread for "beam": https://news.ycombinator.com/item?id=42593135)

WireGuard is more similar. Dumb pipe punches through NATs, using on-the-fly node identifiers. It even keeps your machines connected as network conditions change.

I want less magic, not more impenetrable iptables rulesets (on Linux, at least).
I've been using it (fairly simply, mind you) and it's been pretty solid for a number of years, and it was an administrative relief in comparison to OpenVPN, which I'd been using before WireGuard existed. Single-UDP-port usage makes me question your comment about impenetrable iptables rulesets.
(OpenVPN was great for its time too; the sales reps at the company where I introduced it loved the ability to work from the road, way back in the early 2000s.)
Speaking just for myself, I expected it to be as easy to set up as Tailscale. Not to be set up in exactly the same manner as Tailscale, I understand they are not identical technologies, but I expected the difficulty to be within spitting distance of each other.
Instead I fussed with Wireguard for a few days without it ever working for even the simplest case and had Tailscale up and running in 5 minutes.
I think I recognize the pattern; it's one that has plagued Linux networking in general for decades. The internet is full of "this guy's configuration file that worked once", and of people banging on that without understanding, 80% of it for obsolete versions of obsolete features in obsolete kernels. The search engines are so flooded with these things that if a perfect and beautiful guide existed, one that explained exactly how it all works together and gave you the understanding to fix the problems yourself, it would be too buried to ever find. It also doesn't help that these networking technologies have some of the worst error messages and diagnostics. Was I one character away from functionality, or was my entire approach fundamentally flawed and miles from working? Who's to say; it all equally silently fails in the end.
Tailscale changes your dns lookups, adds a bunch of iptables, and then unfortunately broke features without adding them to the changelog (because security I guess).
While wireguard has more of a maintenance overhead tracking public and private keys and ip addresses, it does less magic -- and I really just want things to work these days.
I still get the chills at the deep and arcane configuration litanies you have to dictate over calls to get a tunnel configured. And sometimes, if you had to integrate different implementations of IPSec with each other, it just wouldn't work and eventually you'd figure out that one or two parameters on one side are just wrong.
And if you don't want to manage iptables/nftables manually to firewall the traffic from the VPN (which is ugly, I agree), ufw and firewalld recently introduced forwarding rule management (routes and policies).
Wireguard is a damn simple breath of fresh air. There's so little to configure and go wrong. The mental model took a little bit of time to click for me (every endpoint is a peer; it's not client/server), but after that it was a breeze.
It doesn't even assume ssh.
I'm struggling to remember the name, but there's a simple HTTP service, called something like patchbay, that's a store-and-forward pattern. This idea of very simple, very generic HTTP-powered services has a high appeal to me.
Looking forward to a future version that can do WebTransport
[1]: https://gist.github.com/SMUsamaShah/fd6e275e44009b72f64d0570...
If LocalSend is running on iOS and Windows does LocalSend have the ability to send photos?
Yes, I use it all the time.
Both devices need to be on the same network (LAN / WiFi), however. LocalSend does not use Bluetooth.
The old Skype, the one that was a real p2p app before it got bought by Microsoft, was very good at slicing through firewalls and NATs, and it offered a plugin API, so it was easy to implement a TCP tunnel with it.
Or whatever ftp thing they mentioned on the Dropbox show HN ;)
Remote:
$ socat TCP-LISTEN:4321,reuseaddr,fork EXEC:"bash -li",pty,stderr,setsid,sigint,rawer&
$ dumbpipe listen-tcp --host 127.0.0.1:4321
using secret key fe82...7efd
Forwarding incoming requests to '127.0.0.1:4321'.
To connect, use e.g.:
dumbpipe connect-tcp nodeabj...wkqay
Local:
$ dumbpipe connect-tcp --addr 127.0.0.1:4321 nodeabj...wkqay&
using secret key fe82...7efd
$ nc 127.0.0.1 4321
root@localhost:~#

I know there's something about USB A to USB A cables not existing in theory, but this would have been a good reason to have them exist, and USB C of course can do this.
Also, Android to PC can sort of do it, and is arguably two computers in some form (but this was easier when Android still acted like a mass storage device). But e.g. two laptops can't do it with each other.
Or using Bluetooth? Or using local WiFi (direct or not).
If both machines have an Ethernet port.
> Or using Bluetooth?
Half the time I need a dumb pipe, it's from personal to work. Regrettably, work forces me to use macOS, and macOS's bluetooth implementation is just an utter tire fire, and doesn't work 90% of the time. I usually fall back to networks, for that reason.
Of course, MBPs also have the "no port" problem above.
> Or using local WiFi (direct or not)
If I'm home, yeah. But TFA is advertising the ability to hole-punch, and if I'm traveling, that'd be an advantage.
It looks like MS also had one, but only on Windows CE for some reason https://www.microsoft.com/en-us/download/details.aspx?id=933...
This dumb pipe thing is certainly interesting but it will run into the same problem as the myriad other solutions that already exist. If you're trying to give a 50MB file to a Windows user they have no way to receive it via any method a Linux user would have to send it unless the Windows user has gone out of their way to install something most people have never heard of.
If this was a real thing you needed to do, and it is too much work to get them to install WSL, you could probably just send them the link to install Git and use Git Bash to run that curl install script for dumbpipe.
And if this seemed like a very useful thing, it couldn’t be too hard to package this all up into a little utility that gets windows to do it.
But alas, it remains “easier” to do this with email or a cloud service or a usb stick/sd card.
I guess now you can find the solution that you need by telling the requirements to LLMs who have now indexed a lot of the tradeoffs
If we put the requirements of,
1. E2EE
2. Does not rely on Google. (Or ideally, any other for profit corporation.)
That eliminates like 90% of the recent trend of WebRTC P2P file transfer things that have graced HN over the last decade, as all WebRTC code seems to just copy Google's STUN/TURN servers between each other.

But as you say,
> but certain people don't want that problem to ever be solved without cloud services involved.
ISPs seem to be in that set. IPv6 would obsolete NAT, but my ISP was kind enough to ship an IPv6 firewall that by default drops incoming packets. It has four modes: drop everything, drop all inbound, a weird intermediate mode that is useless¹, and allow everything.
(¹this is Verizon fios; they claim, "This feature enables "outside-to-inside" access for IPv6 services so that an "outside" Internet service (gaming, video, etc.) can access a specific "inside" home client device & port in your local area network."; but the feature, AFAICT, requires the external peer's address. I.e., I need to know what my roaming IP will be before I leave the house, somehow, and that's obviously impossible. It seems utterly clearly slapped on to say "it comes with a firewall" but was never used by anyone at Verizon in the real world prior to shipping…)
My Starlink is such that I cannot install/set up things like pfSense/OPNsense, because the connection drops sometimes, and when either of those installers fails, it fails all the way back to "format the drive y/n?". Also, things like IPCop and m0n0wall et al. don't seem to support IPv6.
I looked into managing IPv6 from an "I am making my own router" angle, and no OS makes this simple. I tried with Debian and could not get it to route any packets. I literally wrote the guide for using a VM for IPCop and one of the "wall" distros, but something about IPv6 just evades me.
If you've not got an Internet[-routable] address, are you truly connected to the Internet?
> I looked into managing IPv6 from an "I am making my own router" angle, and no OS makes this simple. I tried with Debian and could not get it to route any packets. I literally wrote the guide for using a VM for IPCop and one of the "wall" distros, but something about IPv6 just evades me.
TBH, I would think that this is just enabling v6 forwarding. That wouldn't do RA or DHCP, I don't think, but I don't think you'd want that, either. (That would be the responsibility of the upstream network.)
# On the upstream network.
[Network]
DHCP=yes
[DHCPv6]
PrefixDelegationHint=::/56
# On each downstream network.
[Network]
IPv6SendRA=yes
DHCPPrefixDelegation=yes
If you don't want systemd-networkd, look at https://wiki.debian.org/IPv6PrefixDelegation#Using_ifupdown_.... Firewalling is the same as v4, just without the NAT.

One frustrating part is that as far as I can tell nothing supports easy downstream DHCPv6-PD delegation, so machines on the downstream network that want their own prefix won't be able to get one automatically. OpenWRT's network config daemon supports it, but nothing on regular Linux does.
> however many /64s that is (at least 8);
256! (a /56 contains 2⁸ = 256 /64s)
The use-case of a wired connection between two PCs was already solved years before USB --- with Ethernet.
You only get Link-Local addresses by default, which I recall as somewhat annoying if you want to use SSH or whatever, but if you have something that does network discovery it should probably work pretty seamlessly.
See https://christian.kellner.me/2018/05/24/thunderbolt-networki... or https://superuser.com/a/1784608
The same thing happens with two machines connected via an Ethernet cable, which appears to be what this USB4 network feature does - an Ethernet NIC to software, but with different lower layer protocols.
https://en.wikipedia.org/wiki/Medium-dependent_interface#Aut...
ssh fe80::2%eth0
where fe80::2 is the peer's address, and eth0 is the local name of the interface they're on.

Unfortunately browsers have decided that link-local is pointless and refuse to support it, so HTTP is much more difficult.
After TCP/IP became standard on personal computers, I used an Ethernet crossover cable to transfer large files between computers. I always have some non-networked computers. USB sticks were not yet available.
Today the Ethernet port is removed from many personal computers perhaps in hopes computer owners will send ("sync") their files to third party computers on the internet (renamed "the cloud") as a means of transferring files between the owner's computers.
Much has changed over the years. Expect replies about those changes. There are many, many different ways to transfer files today. Expect comments advocating those other methods. But the crossover cable method still works. With a USB-to-Ethernet adapter it can work even on computers with no Ethernet port. No special software is needed. No router is needed. No internet is needed. Certainly no third party is needed. Just TCP/IP which is still a standard.
Pretty sure one can set up an ad hoc wifi network for this.
Ad hoc requires the machines be in "WiFi shouting range".
macOS does not have any offline documentation like pretty much every OS used to. When I turn off my WiFi and then open "Mac User Guide" or "Tips for your Mac", they both tell me they require an internet connection.
When I re-enable my internet connection, neither of those apps have information about how to set up an ad-hoc wifi network.
When I looked up how to create an ad-hoc network in other sources, I discovered that the ability to create an ad-hoc network was apparently removed from the GUI in macOS 11, and now requires CLI commands.
I hate how modern tech companies assume that everybody always has access to a high speed internet connection.
I suspect it's deliberate, especially when said company also sells cloud services.
Perhaps that's mea culpa, and I suppose I should try NM again, but I also sort of thought this wouldn't be rocket science, until I tried to do it and failed.
I’m not buying it.
They have normal, consumer OSes on them. Whatever one might reasonably already have preinstalled.
I'm sitting at a macOS machine presently. If I poke around the Wi-Fi menu, and the Wi-Fi settings … IDK, I come up empty-handed.
So let's cheat, and Google it. But the entire point of my post above is that needing to Google it defeats the point; if I have an Internet connection (which would be required to Google something) — I can just network the various machines using that Internet connection. In every situation I've wanted to form an ad hoc network, it is because I do not have any access to the Internet, period, but I still have the need to network two machines together.
Anyways, Gemini's answer:
> To set up an ad-hoc Wi-Fi network on macOS, you can use the "Create Network" option in the Wi-Fi menu.
Apparent hallucination, since there is no such menu item.
The first result says the same thing:
> 1. Click the wifi icon on the menu bar. 2. Click “Create network. . .”
(… I suppose I see where the training data came from).
The next result is a reddit thread; the thread is specifically about ad hoc WiFi. The only answer is a link to a macOS support article; that article tells us to go to General → Sharing, and use "Internet Sharing". But AFAICT, that's for sharing an existing WiFi connection over a secondary medium: i.e., if you have WiFi, you could share that connection over a TB cable, or some other wired medium. And "To Devices Using" conspicuously lacks "also over WiFi", or similar. I.e., this also isn't what we're looking for.
The rest of the results are mostly all similarly confused, and I've given up.
So even if I had Internet, … I still can't do it. So if I'm actually in a situation where I need an ad hoc, it definitely isn't happening.
Wow, tell me you don’t know how computer networks work without telling me you don’t know how computer networks work.
I think we are done here.
I think there must be some misunderstanding? I think deathanatos just wants an easy way to send files between computers when the internet is down, which seems decently reasonable.
Oh come on, this isn't a conspiracy. For the last decade, every single laptop computer I've used has been thinner than an ethernet port, and every desktop has shipped with an ethernet port. I think the last few generations of MacBook Pros (which were famously thicker than prior generations) are roughly as thick as an ethernet port, but I'm not sure it'd practically fit.
And I know hacker news hates thin laptops, but most people prefer thin laptops over laptops with ethernet. My MacBook Air is thin and powerful and portable and can be charged with a USB-C phone charger. It's totally worth it for 99% of people to not have an ethernet port.
The XJACK and similar designs have been around long enough they can vote
Up to 2MB/s effective throughput, better than 10M Ethernet. Likely it was slower for you due to other limitations.
This could be done on Amiga too, using parnet https://crossconnect.tripod.com/PARNET.HTML
I recall it being easier to set up than a dialup modem (since the latter also required installing a TCP/IP stack)
usb probably works too if you google a bit
Brought to you by the same people that made "peer-to-peer" a dirty word.
Receiver (listening to port 31337):
`nc -l -p 31337`
Sender (connecting to receiver IP):
`nc <receiver_ip> 31337`
Want to send a message to the receiver:
`echo "Hello from Kocial" | nc <receiver_ip> 31337`
== if you want to send a file ==
Receiver:
`nc -l -p 31337 > hackernews.pdf`
Sender:
`nc <receiver_ip> 31337 < hackernews.pdf`
Edit: after digging a little, Iroh uses QUIC, which looks like a reliable, ordered protocol as opposed to the unreliable, unordered nature of UDP, which is what many games need.
Now what I'd love to figure out is if there's a way to use their relay hopping and connection management but send/receive data through a dumb UDP pipe.
QUIC can do both reliable streams & unreliable datagrams, as can iroh
This isn't right, as a sibling comment mentions. QUIC is a UDP-based protocol that handles stream multiplexing and encryption, but you can send individual, unordered, unreliable datagrams over the QUIC connection, which effectively boils down to UDP with a bit of overhead for the QUIC header. The relevant method in Iroh is send_datagram: https://docs.rs/iroh-net/latest/iroh_net/endpoint/struct.Con...
A better solution would be to expose the iroh send_datagram and read_datagram calls somehow. Maybe if dumbpipe accepted a datagram flag like -d, then a second connection to a peer could be opened. It would recognize that the peer has already been found and maybe reuse the iroh instance. Then the app could send over either stream when it needs to be reliable or best effort.
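For illustration, the datagram calls themselves look something like this (a sketch; send_datagram/read_datagram are from the quinn-style API linked above, while the -d flag and connection reuse are the hypothetical parts):
use bytes::Bytes;
use iroh::endpoint::Connection;

async fn best_effort(conn: &Connection) -> anyhow::Result<()> {
    // Fire-and-forget: no retransmission, no ordering, at most one MTU.
    conn.send_datagram(Bytes::from_static(b"ping"))?;

    // Await a single datagram from the peer (it may never arrive).
    let dgram = conn.read_datagram().await?;
    println!("got {} best-effort bytes", dgram.len());
    Ok(())
}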
This missing datagram feature was the first thing I thought of too when I read the post, so it's disappointing that it doesn't discuss it. Most proof-of-concept tools like this are MVPs that don't attempt to be feature-complete, which forces the user to either learn the entirety of the library just to use it, or fork it and build their own.
IMHO that's really disappointing and defeats the purpose of most software today, since developers are programmed to think that the "do one thing and do it well" unix philosophy is the only philosophy. It's a pet peeve of mine because nearly the entirety of the labor I'm forced to perform is about working around these artificial and unintentional limitations.
Ok I just looked at https://www.dumbpipe.dev/install.sh
if [ "$OS" = "Windows_NT" ]; then
echo "Error: this installer only works on linux & macOS." 1>&2
exit 1
else
So it appears to be Linux and macOS only, which is of little use for games. I'm shocked, just shocked, that I'll have to write my own.

I believe this would be even more reliable than UDP, since Iroh is also capable of using a relay server for when hole punching can't be performed, and Iroh also handles IP migration.
> it appears to be linux and macOS only
Iroh should work on Windows, IIUC, just the installer and possibly prebuilt binaries aren't provided. But dumbpipe isn't designed for UDP anyways, it's closer to a competitor for socat/nc.
arecord - | openssl aes-128-cbc -pass pass:'secretstring' | nc <dest ip> <dest port>
on the receiving end: nc -l <dest port> | openssl aes-128-cbc -d -pass pass:'secretstring' | aplay -
I don't remember exactly which audio device I used back then. It worked okay-ish, but there was definitely lag from somewhere. Just kind of neat that you can build something so useful without a bloated app, just chaining a few commands together.

Projects or companies that use iroh can either run their own relays or use our service https://n0des.iroh.computer/ , which among many other things allows spinning up a set of dedicated relays.
ok but my network stack doesn't speak nodeID, it speaks tcp/ip -- so something has to resolve your public keys to a host and port that I can actually connect to.
this is roughly the same use case that DNS solves, except that domain names are generally human-compatible, and DNS servers are maintained by an enormous number of globally-distributed network engineers
it seems like this system rolls its own public key string to actual IP address and port mapping/discovery system, and offers a default implementation based on dns which the authors own and operate, which is fine. but the authors kind of hand-wave that part of the system away, saying hey you don't need to use this infra, you can use your own, or do whatever you want!
but like, for systems like this, discovery is basically the entire ball game and the only difficult problem that needs to be solved! if you ignore the details of node discovery and name mapping/resolution like this, then of course you can build any kind p2p network with content-addressable identifiers or whatever. it's so easy a cave man can do it, just look at ipfs
$ ./dumbpipe listen
...
To connect use: ./dumbpipe connect nodeecsxraxj...
that `nodeecsxraxj...` is a serialized form of some data type that includes the IP address(es) that the client needs to connect to?

forgive me for what is maybe a dumb question, but if this is the case, then what is the value proposition here? is it just the smushing together of some IPs with a public key in a single identifier?
We have a tool https://ticket.iroh.computer/ that allows you to see exactly what's in a ticket.
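The same inspection can also be done locally in a few lines; a sketch assuming the NodeTicket type from the iroh-base crate (check the exact path against the version you use):
use iroh_base::ticket::NodeTicket;

fn inspect(ticket_str: &str) -> anyhow::Result<()> {
    let ticket: NodeTicket = ticket_str.parse()?;
    // A ticket wraps a NodeAddr: the node id (public key), an optional
    // home relay URL, and any known direct socket addresses.
    println!("{:#?}", ticket.node_addr());
    Ok(())
}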
if you need to go thru a relay to do resolution, and relays are specified in terms of DNS names, then that's not much different than just a plain URL
if the string embeds direct IPs then that's great, but IPs are ephemeral, so the string isn't gonna be stable (for users) over time, and therefore isn't really useful as an identifier for end users
if the string represents some value that resolves to different IPs over time (like a DNS entry) but can be resolved via different channels (like thru a relay, or via a blockchain, or over mdns, or whatever) then that string only has meaning in the context of how (and when) it was resolved -- if you share "abcd" with alice and bob, but alice resolves it according to one relay system, and bob resolves it according to mdns, they will get totally different results. so then what purpose does that string serve?
And, as somebody else remarked, the ticket contains the direct IP addresses for the case where the two nodes are either in the same private subnet or publicly reachable. It also contains the relay URL of the listener, so as long as the listener remains in the same geographic region, dumbpipe won't have to use node discovery at all even if the listener ip changes or is behind a NAT.
we also have an option for node discovery that uses pkarr.org, which is using the bittorrent mainline DHT and therefore is fully decentralised
if users access that bittorrent mainline DHT thru a third party server then it's obviously not decentralized, right? that server is the central point to which clients delegate trust

See https://docs.rs/iroh/latest/iroh/discovery/pkarr/dht/struct....
If you use this discovery mechanism the iroh node will directly publish to and resolve from the DHT.
I've been writing raw POSIX networking code today. A lot of variables shorten "socket" to "sock". And my brain was like.. um, bad news! This is trying to sell us on their special sock(et)s!
E.g. you've got a local development webserver running on 127.0.0.1:3000. You can expose it via dumbpipe using
dumbpipe listen-tcp --host 127.0.0.1:3000
You get a node ticket that contains details on how to connect. Put it into https://ticket.iroh.computer/ if you want to know what's in it.
Then on the other side, e.g. on a small box in the cloud, you can do this:
dumbpipe connect-tcp --addr 0.0.0.0:80 <ticket>
Any TCP request to the cloud box on port 80 will be forwarded to the dev webserver.
[maintainer of iroh here]
It's not. Use tailscale.
Once connected, the connection is encrypted using TLS with the Raw Public Keys TLS extension (https://datatracker.ietf.org/doc/html/rfc7250).
Let me know if my understanding is incorrect, I don't have much experience with QUIC :)
QUIC mandates TLS, specifically TLS 1.3 or newer. From RFC 9001 (Using TLS to Secure QUIC): "Clients MUST NOT offer TLS versions older than 1.3."
For the first request, brute forcing would mean guessing a 32 byte Ed25519 public key. That is not realistically possible.
For subsequent requests, even eavesdropping on the first request does not allow you to guess the public key, since the part of the handshake that contains the public key is already encrypted in TLS 1.3.
With all that being said, if you want to have a long running dumbpipe listen, you might want to restrict the set of nodes that are allowed to connect to it. We got a PR for this, but it is not yet merged.
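Until that lands, here's a sketch of what such a restriction could look like at accept time (remote_node_id naming per recent iroh docs; older versions expose the same thing as a free function, so verify against your version):
use std::collections::HashSet;
use iroh::{Endpoint, NodeId};

async fn accept_allowlisted(ep: Endpoint, allowed: HashSet<NodeId>) -> anyhow::Result<()> {
    while let Some(incoming) = ep.accept().await {
        let conn = incoming.await?;
        // The node id is authenticated by the TLS handshake, so a
        // connecting peer cannot spoof it.
        let peer = conn.remote_node_id()?;
        if !allowed.contains(&peer) {
            conn.close(0u32.into(), b"not on the allowlist");
            continue;
        }
        // ... hand the authenticated connection to the pipe logic ...
    }
    Ok(())
}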
How is multiplexing used here? On the surface it looks like a single stream. Is the file broken into chunks and the chunks streamed separately?
In other iroh based protocols the ability to have many cheap QUIC streams without head-of-line blocking is very useful. E.g. we got various request/response style protocols where a large number of requests can be in flight concurrently, and each request just maps to a single QUIC stream.
What I wonder is this, is there a clever and simple way to share the secret phrase between two devices? The example is pretty long to manually enter "nodeecsxraxjtqtneathgplh6d5nb2rsnxpfulmkec2rvhwv3hh6m4rdgaibamaeqwjaegplgayaycueiom6wmbqcjqaibavg5hiaaaaaaaaaaabaau7wmbq"
https://trog.qgl.org/20081217/the-why-your-anti-spam-idea-wo...
I'll start:
Your solution..
( ) Can't punch through NAT
( ) Isn't fully cross-platform
( ) Must be installed at the OS level and can't be used standalone by an executable
( ) Only provides reliable or best-effort streams but not both
( ) Can't handle when the host or peer IP address changes
( ) Doesn't checksum data
( ) Doesn't automatically use encryption or default to using it
( ) Doesn't allow multiple connections to the same peer for channels or load balancing
( ) Doesn't contain window logic to emulate best-effort datagrams over about 1500 bytes
( ) Uses a restrictive license like GPL instead of MIT
Please add more and/or list solutions that pass the whole checklist!

I think iroh checks all the boxes but one.
( ) Doesn't contain window logic to emulate best-effort datagrams over about 1500 bytes
So you want a way to send unreliable datagrams larger than one MTU. We don't have that, since we only support datagrams via https://datatracker.ietf.org/doc/html/rfc9221 .
You could just use streams - they are extremely lightweight. But those would then be reliable datagrams, which comes with some overhead you might not want.
So how hard would it be to implement window logic on top of RFC9221 datagrams?
There are some limitations regarding some double NATs or very strictly configured corporate firewalls. This is why there is always the relay path as a fallback.
If you have a specific situation in mind and want to know if hole punching works, we got a tool iroh-doctor to measure connection speed and connection status (relay, direct, mixed):
https://crates.io/crates/iroh-doctor , can be installed using cargo install iroh-doctor if you have rust installed.
The use case I have in mind is for realtime data synchronization. Say we want to share a state larger than 1500 bytes, then we have to come up with a clever scheme to compress the state or do partial state transfer, which could require knowledge of atomic updates or even database concepts like ACID, which feels over-engineered.
I'd prefer it if the protocol batched datagrams for me. For example, if we send a state of 3000 bytes, that's 2 datagrams at an MTU of 1500. Maybe 1 of those 2 fails so the message gets dropped. When we send a state again, for example in a game that sends updates 10 times per second, maybe the next 2 datagrams make it. So we get the most recent state in 3 datagrams instead of 4, and that's fine.
I'm thinking that a large unreliable message protocol should add a monotonically increasing message number and index id to each datagram. So sending 3000 bytes twice might look like [0][0],[0][1] and [1][0],[1][1]. For each complete message, the receiver could inspect the message number metadata and ignore any previous ones, even if they happen to arrive later.
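As a sketch (names and header sizes are illustrative, not any existing protocol), the sender side of that scheme could look like:
use bytes::{BufMut, Bytes, BytesMut};

const CHUNK: usize = 1100; // conservative payload size under a ~1500-byte MTU

fn split_message(msg_no: u32, payload: &[u8]) -> Vec<Bytes> {
    let total = payload.chunks(CHUNK).count() as u16;
    payload
        .chunks(CHUNK)
        .enumerate()
        .map(|(i, chunk)| {
            let mut buf = BytesMut::with_capacity(8 + chunk.len());
            buf.put_u32(msg_no); // monotonically increasing message number
            buf.put_u16(i as u16); // index of this chunk
            buf.put_u16(total); // chunks expected for this message
            buf.put_slice(chunk);
            buf.freeze() // one datagram per chunk
        })
        .collect()
}
// Receiver side: buffer chunks keyed by msg_no; once all `total` chunks
// of some msg_no arrive, deliver that message and drop state for any
// lower message numbers, even if their chunks show up later.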
Looks like UDP datagram loss on the internet is generally less than 1%:
https://stackoverflow.com/questions/15060180/what-are-the-ch...
So I think this scheme would generally "just work" and hiccup every 5 seconds or so when sending 10 messages per second at 2 datagrams each and a 99% success rate, and the outage would only last 100 ms.
We might need more checklist items:
( ) Doesn't provide a way to get the last known Maximum Transmission Unit (MTU)
And optionally:
( ) Doesn't provide a way to get large unreliable message number metadata
The real feature of Tailscale is being able to connect to devices without worrying about where they are.
Prior to Tailscale there were companies -- ZeroTier and before it Hamachi -- and as you say many FOSS projects and academic efforts. Overlay networks aren't new. VPNs aren't new. Automated P2P with relay fallback isn't new. Cryptographic addressing isn't new. They just put a good UX in front of it, somewhat easier to onboard than their competitors, and as you say had a really big marketing budget due to raising a lot when money was cheap.
Very few things are totally new. In the past ten years LLMs are the only actually new thing I've seen.
Shill disclosure: I'm the founder of ZeroTier, and we've pivoted a bit more into the industrial space, but we still exist as a free thing you can use to build overlays. Still growing too. Don't have any ill will toward Tailscale. As I said nobody "owns" P2P and they're doing something a bit different from us in terms of UX and target market.
These "dumb pipe" tools -- CLI tooling for P2P pipes -- are cool and useful and IMHO aren't exactly the same thing as ZT or TS etc. They're for a different set of use cases.
The worst thing about the Internet is that it evolved into a client-server architecture. I remain very cautiously optimistic that we might fix this eventually, or at least enable the other paradigm to a much greater extent.
NoIP was also the recommended "easy" option for configuring RAT (Trojan) host addresses at the time IIRC.
It's good as long as everything works out of the box, but it's a nightmare when something doesn't work. Or at least that has been my experience. I'm used to always troubleshoot first when I have any issue, but with Tailscale I decided I'm done trying to fight it, next time something doesn't work I'll just open a ticket and make it the ops team problem.
I love the fact we can make different tools learning from each other and approaching making p2p usable in different ways.
Tailscale makes it even more convenient and adds some goodies on top. I'm a happy (free tier) user.
[0] I also managed an OpenVPN setup with a few hundred nodes a few decades back. Boy do we have it easy now...
It doesn't solve problems on my personal infrastructure that I couldn't solve myself, but it solves my work problem of getting real networking accepted by a diverse audience with competing priorities. And it's like 20 bucks a seat with all the trimmings. Idk, maybe it's 50; I don't really check, because it's the cheapest thing on my list of cloud stuff by an order of magnitude or so.
It's getting more enterprise and less hackerish with time, big surprise, and I'm glad there's younger stuff in the pipe like TFA to keep it honest. But of all the necessary evils in The Cloud? I feel rather fondly towards Tailscale, rather than the cold rage I feel for most everything else on the Mercury card.
It’s a bit like gRPC except you control each byte stream and can use one for, say, a voice call while you use another for file transfer and yet another for simple RPC. It’s probably most similar to WebRTC but you have more options than SCTP and RTMP(?).
Dumbpipe is meant to be a useful standalone tool, but also a very simple showcase for what you can do with iroh.