Is there a way to be selective about what ports are exposed from the host to the target? The target could handle it, but fine-grained control is nice.
Certain UDP-based protocols may be hairier, though.
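I'm guessing the DNAT rule from the article could simply be restricted per port, something like this (untested sketch, reusing the post's placeholder addresses):

    # forward only HTTP/HTTPS for this public IP; everything else never leaves the VPS
    iptables -t nat -A PREROUTING -p tcp -d 321.985.520.309 -m multiport --dports 80,443 -j DNAT --to-destination 867.420.696.005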
> you now have multiple real-world IP addresses that people can get to
In your new situation that is not the one in the article, you can just use different ports.
Works like magic :)
On the other hand, Cloudflare is a pretty easy solution against spam bots and scrapers. Probably a better choice if that's something you need protection against.
Maybe if you're on a limited data plan (like in Belgium or on mobile data), you'd want to prevent unnecessary pageloads? Afaik that doesn't apply to most home connections
Or if you want to absolutely prevent that LLMs eat your content for moral/copyright reasons, then it can't be on the open internet no matter who your gateway is
I also selectively expose via the Cloudflare Tunnel. Most things are tailscale only.
For a while I also thought that regular SSH tunnels would be enough but they kept failing occasionally even with autossh.
Oh and I got bitten by Docker default MTU settings when trying to add everything to the same Swarm cluster.
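For anyone hitting the same thing: Docker defaults to an MTU of 1500, which doesn't fit inside a typical WireGuard tunnel (default MTU 1420). Lowering the default bridge MTU in /etc/docker/daemon.json is the usual fix; a sketch, the right value depends on your tunnel overhead:

    {
      "mtu": 1420
    }

For Swarm overlay networks you may instead need to create the network with -o com.docker.network.driver.mtu=1420.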
Then you do straightforward iptables (or L7 / reverse proxying via Caddy, Nginx, etc.) directly to the routable IP address.
The outcome is the ~same, bonus is not having to handle the lower level component, negative is an extra "thing" to manage.
But this is how I do the same thing, and I'm quite happy with the result. I can also trivially add additional devices, and even use it for egress, giving me a good pool of exit-IP addresses.
(Note: I was going to add this as a comment on the blog, but it seems their captcha service is broken and would not display, so my comment was blocked.)
What I've done is that the VPS Nginx can talk over Tailscale to the server in question and the Cloudflare Tunnel lets those not on Tailscale (which is me sometimes) access the VPS.
I think I've seen some scripts floating around to automate this process but can't remember where. There are lots of good related tools listed here: https://github.com/anderspitman/awesome-tunneling
One of the biggest ISPs in my country has been promising IPv6 since 2016. Another, smaller, competitor, advertised on "World IPv6 Day" in 2011 that it was way ahead of the competition on supplying IPv6; but in fact does not supply it today.
One of the answers I see given a lot over the years is: Yes, I know that I could do this simply with IPv6. But ISPs around here don't route IPv6, or even formally provide statically-assigned IPv4 to non-business customers. So I have had to build this Heath Robinson contraption instead.
I use a static HE (Hurricane Electric) IPv6 tunnel there, and it works great.
The only issue is that YouTube thinks the IPv6 block is commercial or an AI dev scraping their content, so I can't look at videos unless I'm logged in to YouTube.
Still, it let me tear down the HE IPv6 tunnel I was also running, since the sole reason I needed IPv6 was so our household game consoles could all play online without cursed firewall rules and IP reservations. I’m pretty chuffed with the present status quo, even if it’s far from perfect.
One other thing I’d note about OPs article (for folks considering it as a way to work around shitty ISP policies) is that once you have this up and running, you also have a perfect setup for a reverse proxy deployment for your public services. Just make sure you’re watching your bandwidth so you don’t get a surprise bill.
Ah, I see you also watched that video yesterday on manufacturing a tiny electric rotor.
IPv4 is getting CGNAT'd more and more, on the other hand. One national ISP basically lets you pick between IPv4 CGNAT and IPv6 support (with IPv6 being the default). Another has been rolling out CGNAT IPv4 for new customers (at first without even offering IPv6, took them a few months to correct that).
This isn't even an "America and Western Europe" thing. It's a "whatever batshit insane approach the local ISP took" thing. And it's not just affecting IPv6 either.
Even if you decide to try to make it a Frankenstein's monster of a protocol, making it two IPv4 packets wrapped in each other to create a v4+v4=v8 address space, you'll need a whole new routing solution for the Internet, as those encapsulations would have issues with NATs. And that'll be way more error-prone (and thus less secure), because it'll be theoretically possible to accidentally mix up v4 and inner-half-of-v8 traffic.
Nah, if we can't get enough people to adopt IPv6, there's no chance we'll get even more people to adopt some other IPvX (unless something truly extraordinary happens that would trigger such adoption, of course).
It also can't be backwards compatible with IPv4 networking and software. The network gear will drop the extra address bits, the OS will ignore them, and software will blow up.
It would be much better to make a new version. But if you're going to make a new protocol, you might as well make the address big enough to not need expansion again.
Then you have to update every networking device to support the new standard. And update all the protocols (DHCP, etc) for more address space. That part is what took a lot of the time for IPv6. Then you have to update all of the software to support 64-bit addresses. Luckily, most of the work was already done for IPv6.
Then you have to support a transition mechanism to talk to IPv4. Except there isn't enough space in the new address. IPv6, on the other hand, has enough address space to stuff the IPv4 host and port into an IPv6 address for stateless NAT.
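For illustration: RFC 6052 embeds an IPv4 address in the low 32 bits of an IPv6 address under the 64:ff9b::/96 well-known prefix (the schemes that also stuff the port in work along the same lines):

    192.0.2.1  ->  0xC0000201  ->  64:ff9b::c000:201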
> Where are you going to put the extra address bits in the IPv4 header?
The optional part. EIP proposed using 16 bits (minimum) to bump the address space to 40 bits (the EIP extension portion is variable-sized, so it can go higher until you reach header option limits): https://archive.org/details/rfc1385/page/4/mode/2up

Also, with the extra octet added we'd get ~254 current-IPv4-sized clusters of addresses. If a unit inside one of these doesn't really care about the others, it can skip supplying this information entirely, i.e. not all participants need to understand the extension. LANs, internal networks and residential use come to mind as examples, in which case only the gateway has to be updated, just like the RFC says.
With IPv6 participation is all or nothing or dual stack, but then this is ~1.1 stack :)
Unless you're planning on doing all IP communications in user space (or within your network "cluster"), the OS and IP stack still needs to be updated, you need a new addressing format, applications need to be aware of it, etc. If you want to actually make use of the new address space, it all needs to be updated... just like IPv6.
So no, good attempt but it's pretty much still a 'upgrade all the routers and switches' kind of issue just like IPv6.
There's no straightforward way of getting old hosts to be able to address anything in the new expanded space, or older routers to be able to route to it.
So you have to basically dual-stack it, and, oops, you've created the exact same situation with IPv6...
I frequently see this claim made, but it simply isn't true. NAT isn't inherent to a protocol; it's something the user does on top of it. You can NAT IPv6 just fine, there just isn't the same pressure to do so.
You can NAT IPv6 but it is rarely done since there is simply no need.
Your IPv8 is what IPv6 should have been. Instead, IPv6 decided to re-invent way too much, and that is why we can't have nice things and are stuck with IPv4 and NAT. Just doubling the address width would have given us 90% of the benefit of V6 with far less complexity, and it would have been adopted much, much, much faster.
I just ported some (BSD) kernel code from V4 to V6. If the address width was just twice as big, and not 4x as big, a lot of the fugly stuff you have to deal with in C would never have happened. A sockaddr could have been expanded by 2 bytes to handle V6. There would not be all these oddball casts between V4/V6, and entirely different IP packet handling routines because of data size differences and differences in stuff like route and mac address lookup.
Another pet peeve of mine from my days working on hardware is IPv6 extension headers. There is no limit in the protocol to how many extension headers a packet can have. This makes verifying ASICs hard, and leads to very poor support for them. I remember when we were implementing them: we had a nice way to do it, but when we looked at what our competitors did, we found most of them just disabled any advanced features when more than one extension header was present.
The main disagreements have been about what to do with the new addresses, e.g. some platforms insist on SLAAC. (Which is good because it forces your ISP to give you a /64.)
Devices operating at the IP layer aren't allowed to care about extension headers other than hop-by-hop, which must be the first header for this reason. Breaking your stupid middlebox is considered a good thing because these middleboxes are constantly breaking everyone's connections.
Your sockaddr complaints WOULD apply at double address length on platforms other than your favorite one. The IETF shouldn't be in charge of making BSD's API slightly more convenient at the expense of literally everything else. And with addresses twice as long, they wouldn't be as effectively infinite. You'd still need to be assigned one from your ISP. They'd still probably only give you one, or worse, charge you based on your number of devices. You'd still probably have NAT.
There's nothing nicer than being able to BGP peer and just handle everything yourself. I really miss old Level 3 (before the Lumen/CenturyLink buyout).
Kind of kicking myself for selling my netblock but it was a decent amount of money ($6000).
What if you will your netblock to me? I'll will you my Camaro and my collection of Amiga parts.
(I really want your netblock)
At the time, my FiOS was about $80/month, but they wanted $300/month for a static IP. I used a VPS (at the time with CrystalTech), which was less than $50/month. Net savings: $170/month.
So ridiculous.
It's fast, far quicker than I can use, and the static IP was a one-off $10 or similar.
"Factors leading to a successful installation: Safe access to the roof without need for a helicopter."
[1] https://www.monkeybrains.net/residential.php#residential
The VPS and each host are Nebula nodes. I can put the nodes wherever I want. Some are on an additional VPS, some are running on Proxmox locally. I even have one application running as a geo-isolated and redundant application on a small computer at my friend's house in another state.
Setting up Pangolin on the VPS and Newt on your LAN, connecting them, and adding e.g. a small demo website as a resource on Pangolin will take you about half an hour (unless your DNS propagation is slow, so always start by defining the name in DNS and pointing it at your VPS IP. You can use a wildcard if you do not want to manually make a new DNS entry each time).
You can also run Pangolin on the LAN, but then you will need to open a few ports on your LAN firewall, and manage your DDNS etc. if you do not have a fixed IP at home.
For less than 4€/month I opted for the VPS route.
What is going on here with these addresses? I'm used to seeing stuff like this in movies – where it always destroys my immersion because now I have to think about the clueless person who did the computer visuals – but surely this author knows about IPv4 addresses?
I think I will be done with sites that require me to solve captchas to visit for simple reading, just as I am done with sites that require me to run javascript to read their text.
Now, after doing the captcha ~5 times and getting nothing, a different captcha pops up that actually works and lets one in.
It's not good but it's a hell of a lot better than their old system.
I keep waiting to hear from someone trying to use the web on an older computer who's sitting there for 30 seconds every time they click another search result. Or on a battery-powered device that now burns a lot of inefficient high-frequency clock cycles to get these computations out of the way.
But so far I've heard nobody! And they've been fast on my phone. Does this really keep bots out? I'm quite surprised in both directions (how little computation apparently already helps and how few people run into significant issues)
When this came up 15 years ago (when PoW was hyping due to Bitcoin and Litecoin) the conversation was certainly different than how people regard this today. Now we just need an email version of it and I'm curious if mass spam becomes a thing of the past as well
I'm lucky enough to have a weird niche ISP available to me, so I'm paying $35 a month for around 600MBit symmetric data. Unfortunately they don't offer static IP addresses to residential customers, and nor do they allow multiple IP addresses per connection, and I'm the sort of person who'd like to run a bunch of stuff myself, so I've been looking for ways to manage this.
What I've ended up doing is renting a cheap VPS from a vendor that lets me add multiple IP addresses for minimal extra cost. The precise nature of the VPS isn't relevant - you just want a machine (it doesn't need much CPU, RAM, or storage) that has multiple world routeable IPv4 addresses associated with it and has no port blocks on incoming traffic. Ideally it's geographically local and peers with your ISP in order to reduce additional latency, but that's a nice to have rather than a requirement.
By setting that up you now have multiple real-world IP addresses that people can get to. How do we get them to the machine in your house you want to be accessible? First we need a connection between that machine and your VPS, and the easiest approach here is Wireguard. We only need a point-to-point link, nothing routable, and none of the IP addresses involved need to have anything to do with any of the rest of your network. So, on your local machine you want something like:
    [Interface]
    PrivateKey = privkeyhere
    ListenPort = 51820
    Address = localaddr/32

    [Peer]
    Endpoint = VPS:51820
    PublicKey = pubkeyhere
    AllowedIPs = VPS/32
And on your VPS, something like:
    [Interface]
    Address = vpswgaddr/32
    SaveConfig = true
    ListenPort = 51820
    PrivateKey = privkeyhere

    [Peer]
    PublicKey = pubkeyhere
    AllowedIPs = localaddr/32
The addresses here are (other than the VPS address) arbitrary - but they do need to be consistent, otherwise Wireguard is going to be unhappy and your packets will not have a fun time. Bring that interface up with wg-quick and make sure the devices can ping each other. Hurrah! That's the easy bit.
Now you want packets from the outside world to get to your internal machine. Let's say the external IP address you're going to use for that machine is 321.985.520.309 and the wireguard address of your local system is 867.420.696.005. On the VPS, you're going to want to do:
    iptables -t nat -A PREROUTING -p tcp -d 321.985.520.309 -j DNAT --to-destination 867.420.696.005
Now, all incoming packets for 321.985.520.309 will be rewritten to head towards 867.420.696.005 instead (make sure you've set net.ipv4.ip_forward to 1 via sysctl!). Victory! Or is it? Well, no.
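For reference, that sysctl is a one-liner (drop it in /etc/sysctl.d/ to persist across reboots):

    sysctl -w net.ipv4.ip_forward=1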
What we're doing here is rewriting the destination address of the packets so instead of heading to an address associated with the VPS, they're now going to head to your internal system over the Wireguard link. Which is then going to ignore them, because the AllowedIPs statement in the config only allows packets coming from your VPS, and these packets still have their original source IP. We could rewrite the source IP to match the VPS IP, but then you'd have no idea where any of these packets were coming from, and that sucks. Let's do something better. On the local machine, in the peer, let's update AllowedIPs to 0.0.0.0/0 to permit packets from any source to appear over our Wireguard link. But if we bring the interface up now, it'll try to route all traffic over the Wireguard link, which isn't what we want. So we'll add table = off to the interface stanza of the config to disable that, and now we can bring the interface up without breaking everything but still allowing packets to reach us. However, we do still need to tell the kernel how to reach the remote VPN endpoint, which we can do with ip route add vpswgaddr dev wg0. Add this to the interface stanza as:
    PostUp = ip route add vpswgaddr dev wg0
    PreDown = ip route del vpswgaddr dev wg0
That's half the battle. The problem is that they're going to show up there with the source address still set to the original source IP, and your internal system is (because Linux) going to notice it has the ability to just send replies to the outside world via your ISP rather than via Wireguard, and nothing is going to work. Thanks, Linux.
But there's a way to solve this - policy routing. Linux allows you to have multiple separate routing tables, and define policy that controls which routing table will be used for a given packet. First, let's define a new table reference. On the local machine, edit /etc/iproute2/rt_tables and add a new entry that's something like:
    1 wireguard
where "1" is just a standin for a number not otherwise used there. Now edit your wireguard config and replace table=off with table=wireguard - Wireguard will now update the wireguard routing table rather than the global one. Now all we need to do is to tell the kernel to push packets into the appropriate routing table - we can do that with ip rule add from localaddr lookup wireguard, which tells the kernel to take any packet coming from our Wireguard address and push it via the Wireguard routing table. Add that to your Wireguard interface config as:
    PostUp = ip rule add from localaddr lookup wireguard
    PreDown = ip rule del from localaddr lookup wireguard

and now your local system is effectively on the internet.
You can do this for multiple systems - just configure additional Wireguard interfaces on the VPS and make sure they're all listening on different ports. If your local IP changes then your local machines will end up reconnecting to the VPS, but to the outside world their accessible IP address will remain the same. It's like having a real IP without the pain of convincing your ISP to give it to you.
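A hypothetical sketch of the second-host case (wg1, secondpublicip and secondlocaladdr are placeholders, not values from the post): give the VPS a second interface with ListenPort = 51821, and add a matching DNAT rule:

    iptables -t nat -A PREROUTING -p tcp -d secondpublicip -j DNAT --to-destination secondlocaladdr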
Source: https://web.archive.org/web/20250618061131/https://mjg59.dre...
just host it on the VPS directly
Your local machine holds and processes the raw data, while your VPS forwards the much smaller processed data to the Internet.
Saying this as someone who's hosted from home for like 15 years.
Also realise that you're sending the IP address to every website you visit, and in most VoIP software, to those you call. Or if you use a VPN 24/7 on all devices, then it's the VPN's IP address in place of the ISP's IP address...
Visiting sites and sending your IP address is not the problem; the router has a firewall and blocks unwanted attention. But exposing something without protection, allowing someone to burn your CPU or, worse, figure out the password for an improperly secured service, is another issue entirely.
I saw people setting up honeypot SSH servers and there were so many unauthorized access attempts that I got scared. I think exposing an entire machine to the network is like driving a car without insurance. Sure, you might be OK, but when trouble comes, it will be a lot of trouble.
... sure. You'd think I'd have noticed that in nearly two decades of hosting all different kinds of services if this were a thing
Run unattended upgrades, or the equivalent for whatever update mechanism you use, and you'll be fine. I've seen banks with more outdated running services than me at home... (I do security consulting, hence the comparison.)
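On Debian-based systems, for example, that's a one-time setup (a sketch; package and tool names vary by distro):

    apt install unattended-upgrades
    dpkg-reconfigure -plow unattended-upgrades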
In addition to myself, I know some people who self host but not any who ever had a meaningful DDoS. If you're hosting an unpopular website or NAS, nobody is going to be interested in wasting their capacity on bothering you for no reason
Anything that requires custom effort (not just sending the same packets to every host) doesn't scale either. You can host an SQL injection vulnerability pretty much indefinitely with nobody discovering it, so long as it's not in standard software that someone might scan for, and if it is, then there'll be automatic updates for it. Not that I'd recommend hosting custom vulnerable software, but either way: in terms of risk = chance × impact, the added risk of self-hosting compared to datacentre hosting is absolutely negligible, so long as you apply the same apt upgrade policy in either situation.
Online voting has nothing to do with these phantom risks of self hosting
The IPs they're talking about exposing are ones which are on a VPS, not their home router, or the internal IPs identifying a device in Wireguard.
But I still have so much to consider when doing local hosting. Redundant electricity? IP connectivity? What if some hardware dies? What if I get DDoS'ed? How do I get all relevant security fixes applied asap? How do I properly isolate the server from other home networking like kid's laptops and TV with Netflix? ...?
All solvable of course, but that's what I'd have expected in such an article.
Am I just seeing ipv6 in an unusually familiar format? Or is it an intentionally malformed format used by wireguard for internal routing?
E.g. I have a PiZero attached to my router and it’s exposed to the internet via Cloudflare tunnels.
I want to love them, but sadly I only get an unreliable 80mbps/40mbps connection from them. With occasional latency spikes that make it much worse. To make up for this I run a multi-WAN gateway connecting to my neighbor/friend's Comcast as well. Here's the monkeybrains (https://i.imgur.com/FaByZbw.jpeg) vs comcast (https://i.imgur.com/jTa6Ldk.jpeg) latency log.
Curious if the author had to do anything special to get a symmetric 600mbps from Monkeybrains. They make no guarantees about speed at all, but are quite cheap, wholesome, and have great support, though support hasn't been able to get me anywhere close to the author's speeds.
Interesting you're getting such slow speeds. Ask them if a tech can stop by and troubleshoot with you.
On principle, I prefer to poke a hole in my firewall than allow surveillance of the plaintext traffic.
I think in theory you could get it without it but that would be a lot more work on the recipient side.
Me, I'm less worried about that, so I accept the convenience of not having to set up a reverse proxy and poke a hole in my router.
1: https://stosb.com/blog/using-an-external-server-and-a-vpn-to...
It's pretty rare that you would need more than one.
If you're running different types of services (e.g. http, mail, ftp) then they each use their own ports and the ports can be mapped to different local machines from the same public IP address.
The most common one where you're likely to have multiple public services using the same protocol is http[s], and for that you can use a reverse proxy. This is only a few lines of config for nginx or haproxy and then you're doing yourself a favor because adding a new one is just adding a single line to the reverse proxy's config instead of having to configure and pay for another IPv4 address.
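For instance, a minimal nginx sketch of that idea (hostnames and upstream addresses are made-up placeholders; TLS left out for brevity):

    server {
        listen 80;
        server_name app1.example.com;
        location / {
            proxy_pass http://10.0.0.11:8080;  # first internal service
        }
    }

    server {
        listen 80;
        server_name app2.example.com;
        location / {
            proxy_pass http://10.0.0.12:3000;  # second internal service
        }
    }

Adding a new service really is just one more server (or location) block pointing at a different internal host and port.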
And if you want to expose multiple private services then have your clients use a VPN and then it's only the VPN that needs a public IP because the clients just use the private IPs over the VPN.
To actually need multiple public IPs you'd have to be doing something like running multiple independent public FTP servers while needing them all to use the official port. Don't contribute to the IPv4 address shortage. :)
What kind of IP addresses are these?
Put the wg interface in a new vrf, and spawn your self-hosted server in that vrf (ip vrf exec xxx command).
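A rough iproute2 sketch (the VRF name and table number are arbitrary, and the server path is a placeholder):

    ip link add vrf-wg type vrf table 100        # create the VRF, bound to routing table 100
    ip link set vrf-wg up
    ip link set wg0 master vrf-wg                # move the WireGuard interface into the VRF
    ip vrf exec vrf-wg /usr/local/bin/my-server  # run the service with its sockets scoped to the VRF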
I've got a similar setup to what you've done here, with the policy routing and wireguard tunnels being part of a larger scheme that lets me granularly choose which Internet horizon each particular host sees. So I can have a browsing VM that goes out a rotating VPS IP, torrent traffic out a commercial VPN, Internet of Trash out a static VPS IP (why not separate from my infrastructure IP), visitors' devices going out a different rotating VPS IP (avoid associating with me), Windows VMs that can only access the local network (they have personal data), etc.
I'm currently hosting email/etc on a VPS, but the plan is to bring those services back on-prem using VPS IPs with DNAT just like you're doing. Any day now...
Seriously thinking about switching to a setup similar to the article. I mean, my setup works for now, but it's un-pretty.
It's really not onerous or complicated at all. It's about as simple as it gets. I'm hosting a dozen web services behind a single IP4 address. Adding a new service is even easier than without the proxy setup. Instead of dicking around with my firewall and port forwarding, I just add an entry to my reverse proxy. I don't even use IPs, I just let my local DNS resolve hostnames for me. Easy as.
That's part of the reason why countries like India are getting so many CAPTCHAs: websites don't care for the reason behind lackluster IP plans from CGNAT ISPs. If the ISP offered IPv6 support, people wouldn't have so many issues, but alas, apparently there's money for shitty CGNAT boxes but not IPv6 routers.
Actually, all it does is get everyone behind the CGNAT banned. I've lost access to the WSJ and NYT recently, and to other websites over time. For every Cloudflare-backed website, I have to pass a captcha on every access.
Fuck those people doing "adversarial interoperability as a client", AI scraping, et al, who take away from thousands of people for profit, then move on to the next pool of victims.
It also messes a bit with geolocation, we frequently teleport to different places within the country.
(inb4 but the internet was made to receive connections! Well yes, decades ago maybe. But that’s not the way things have evolved. Get with the times.)
Full IPv6 support should be a requirement for both ISPs as well as websites and other servers.
They would be, but thankfully CGNAT doesn’t cause that.
Putting CF aside, anyone who has tried to edit Wikipedia anonymously should understand the pain of CGNAT.
(now n=2)
You can ask your ISP for your own IPv6 subnet if you don't want to be lumped in with the people whose computers and phones are part of a scraping/spamming botnet.
Unless they're playing video games:
* https://steamcommunity.com/sharedfiles/filedetails/?id=27339...
* https://www.checkmynat.com/posts/optimizing-nat-settings-for...
The video game industry is bigger than movies, television, and music combined:
* https://www.marketing-beat.co.uk/2024/10/22/dentsu-gaming-da...
So I think CGNAT / double-NAT can hit a lot of folks.
> Well yes, decades ago maybe. But that’s not the way things have evolved. Get with the times.
Why? Why should I accept the enshittification of the Internet that has evolved to this point? Why can't people push for something better?
* https://store.steampowered.com/curator/41339173-Self-Hosted-...
At the very least if a game publisher wants to power down their own servers because they don't feel it's "worth" supporting their customers, they should post the server code so that the customers can continue to use the product they 'bought'.
High quality IPv4 + a whole /56 IPv6 for $8/month
I also don't need to worry about the additional latency of a VPN, and have symmetric gigabit speeds, rather than 100Mbps up/down.