frontpage.

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
624•klaussilveira•12h ago•182 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
926•xnx•18h ago•548 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
32•helloplanets•4d ago•24 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
109•matheusalmeida•1d ago•27 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
9•kaonwarb•3d ago•7 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
40•videotopia•4d ago•1 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
219•isitcontent•13h ago•25 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
210•dmpetrov•13h ago•103 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
322•vecti•15h ago•143 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
370•ostacke•18h ago•94 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
358•aktau•19h ago•181 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
477•todsacerdoti•20h ago•232 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
272•eljojo•15h ago•160 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
402•lstoll•19h ago•271 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
85•quibono•4d ago•20 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
14•jesperordrup•2h ago•6 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
25•romes•4d ago•3 comments

Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
3•theblazehen•2d ago•0 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
56•kmm•5d ago•3 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
12•bikenaga•3d ago•2 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
244•i5heu•15h ago•188 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
52•gfortaine•10h ago•21 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
140•vmatsiiako•17h ago•62 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
280•surprisetalk•3d ago•37 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1058•cdrnsf•22h ago•433 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
132•SerCe•8h ago•117 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
70•phreda4•12h ago•14 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
28•gmays•7h ago•11 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
176•limoce•3d ago•96 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
63•rescrv•20h ago•22 comments

Locally hosting an internet-connected server

https://mjg59.dreamwidth.org/72095.html
182•pabs3•7mo ago

Comments

DougN7•7mo ago
Why not use a dynamic DNS service instead? I’ve been using dyn.com (now oci.dyn.com) for years and it has worked great. A bonus is many home routers have support built in.
mjg59•7mo ago
I have multiple devices on my internal network that I want to exist outside, and dynamic DNS is only going to let me expose one of them
rkagerer•7mo ago
If they don't all need distinct external IP addresses of their own, port forwarding is a typical approach.
mjg59•7mo ago
That doesn't work well if you want to run the same service on multiple machines. For some you can proxy that (eg, for web you can just run nginx to proxy everything based on either the host header or SNI data), but for others you can't - you're only going to be able to have one machine accepting port 22 traffic for ssh.
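For the web case, a minimal sketch of SNI-based routing with nginx's stream module and ssl_preread might look like this (the hostnames and backend addresses are placeholders):

stream {
    # route on the SNI name without terminating TLS on the proxy
    map $ssl_preread_server_name $backend {
        git.example.com     10.0.0.2:443;
        photos.example.com  10.0.0.3:443;
        default             10.0.0.2:443;
    }
    server {
        listen 443;
        ssl_preread on;
        proxy_pass $backend;
    }
}

SSH has no equivalent of that hostname field, which is the whole problem.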
chgs•7mo ago
Select an isp that gives you multiple ip v4 addresses. Or host on ipv6.
mjg59•7mo ago
Yes, if I had multiple IPv4 addresses already it wouldn't be necessary to tunnel in additional IPv4 addresses, but since I don't and since there are no ISPs who will provide that to me at this physical address, tunneling is where I am.
v5v3•7mo ago
In many countries, unless you buy a business broadband package (more expensive),residential internet does not come with such options.
herbst•7mo ago
You can port forward SSH to other internal machines, just like nginx + web.
mjg59•7mo ago
I can port forward port 22 to a single machine. I can't proxy port 22 in a way that directs the incoming connection to the correct machine, at least not without client configuration.
koolba•7mo ago
You only need one inbound machine as your bastion. Then hop from there to the rest using local address. Once you set up the proxy config in ssh it’s completely transparent.
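A minimal sketch, assuming a bastion reachable as bastion.example.com and an internal host whose name resolves from the bastion (both hypothetical):

# one-off jump through the bastion
ssh -J bastion.example.com internal-host

# or make it transparent via ~/.ssh/config
Host internal-host
    ProxyJump bastion.example.com

After that, a plain "ssh internal-host" goes via the bastion automatically.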
mjg59•7mo ago
Right, yes, but I (for various reasons) end up using a lot of different client systems and I don't want to have to configure all of them to transparently jumphost or use different port numbers. And why are people spending so much time trying to tell me that I should make my life complicated in a different way to the one I've chosen?
mindslight•7mo ago
It's weird how much pushback you're getting for a few simple firewall rules, but I guess it's just another bikeshed. Basically all of the options for doing this are simple if you already know them, and have some annoying complexity otherwise. So everyone has a favorite.

I've got a similar setup to what you've done here, with the policy routing and wireguard tunnels being part of a larger scheme that lets me granularly choose which Internet horizon each particular host sees. So I can have a browsing VM that goes out a rotating VPS IP, torrent traffic out a commercial VPN, Internet of Trash out a static VPS IP (why not separate from my infrastructure IP), visitors' devices going out a different rotating VPS IP (avoid associating with me), Windows VMs that can only access the local network (they have personal data), etc.

I'm currently hosting email/etc on a VPS, but the plan is to bring those services back on-prem using VPS IPs with DNAT just like you're doing. Any day now...

mnw21cam•7mo ago
Yeah, I currently have a VPS with various SSH port forwards allowing me to direct incoming connections of various types to my home computer which is behind NAT. It's evil and horrible and nasty for various reasons, not least of which that all your incoming connections look to your inner server like they come from the same IP address, preventing you from logging or filtering the source of any request. And you need to make sure if you forward incoming connections to your SMTP server that it doesn't think they are local trusted connections that it can relay onwards, turning your setup into an open relay.

Seriously thinking about switching to a setup similar to the article. I mean, my setup works for now, but it's un-pretty.

mvanbaak•7mo ago
ipv6 has solved this. Too bad it's not yet a common thing.
tialaramex•7mo ago
The Google data strongly suggests that at this point it's probably available to a majority of home users. Corporate remains significantly worse. My employer, which paid me to do IPv6 stuff last century in a very different role, today has IPv6 for random outsiders, but if you have a corporate-issued laptop IPv6 is disabled. They cheerfully explained that it's "difficult" in a call this week, right before I pointed out what I was paid to do, and where, a quarter century ago. Embarrassing for them.
mvanbaak•7mo ago
A lot of consumer connections do indeed provide ipv6. But some are unstable, some change addresses every X days, some have weird routing etc etc.
tialaramex•7mo ago
Meta IIRC is one of several outfits which unsurprisingly discovered that (as a corporation) the cure is just purchasing policy. When your new Doodad vendor sells you a product that is IPv4 only instead of saying "Oh, shame, OK, set all corporate systems to IPv4-only" you point them to the line in your purchase contract which says you require IPv6 and it's not your problem it's their problem, do they want to fix it or refund you ?
mystified5016•7mo ago
Yes, that's how it works when you only have a single IP. The standard way to deal with this is a reverse proxy for web requests. Other services require different workarounds. I have a port 22 SSH server for git activities, and another on a different port that acts as a gateway. From that machine I can SSH again to anywhere within my local network.

It's really not onerous or complicated at all. It's about as simple as it gets. I'm hosting a dozen web services behind a single IP4 address. Adding a new service is even easier than without the proxy setup. Instead of dicking around with my firewall and port forwarding, I just add an entry to my reverse proxy. I don't even use IPs, I just let my local DNS resolve hostnames for me. Easy as.

mjg59•7mo ago
The entire point of this is that I don't want to deal with non-standard port numbers or bouncing through hosts. I want to be able to host services in the normal boring way, and this approach lets me do that without needing to worry about dynamic DNS updates whenever my public IP changes.
mysteria•7mo ago
Same for me, I actually like having a reverse proxy as a single point of entry for all my web services. I also run OpenVPN on 443 using the port share feature and as a result I only need one IP address and one open port for everything.
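For reference, the port-share bit is roughly this in the OpenVPN server config (addresses and the backend port are placeholders; port-share only works with proto tcp):

proto tcp
port 443
# anything arriving on 443 that isn't OpenVPN gets handed to the local HTTPS server
port-share 127.0.0.1 8443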
messe•7mo ago
Only works if you're not behind CGNAT, which has problems in and of itself. I pay my ISP an extra 29 DKK (about 4.50 USD at the moment) for a static address; my IPv4 connections and downloads in-general became way more stable after getting out from behind CGNAT.
neepi•7mo ago
CGNAT is hell. Here I had to choose between crap bandwidth or CGNAT. I chose crap bandwidth.
immibis•7mo ago
Hell for hosting, but if you're doing adversarial interoperability as a client, it does help you avoid being IP-banned. (At least in Western countries. I hear that Africa and Latin America tend to just get their CGNAT gateways banned because site operators don't give a shit about whether users from those regions can use their sites)
neepi•7mo ago
Not quite. I'm in the UK and some of our customers get blocked by overzealous CDNs and they're all on CGNAT.
dkjaudyeqooe•7mo ago
It's not really overzealous since not banning the CGNAT IPs just gives the abusers safe harbor.
immibis•7mo ago
And banning them makes an entire country unable to use your site. That might be tolerable (to the site owner, but not in general) if the country is Argentina. Not if the country is France. Which is why Argentina gets blocked a lot more than France, and if you want to scrape things you'd do better on a CGNAT network in France.
jeroenhd•7mo ago
The client feature only works for websites that care about making exceptions for CGNAT users. Plenty of them simply ban the shared addresses.

That's part of the reason why countries like India are getting so many CAPTCHAs: websites don't care for the reason behind lackluster IP plans from CGNAT ISPs. If the ISP offered IPv6 support, people wouldn't have so many issues, but alas, apparently there's money for shitty CGNAT boxes but not IPv6 routers.

dkjaudyeqooe•7mo ago
> it does help you avoid being IP-banned

Actually all it does is get everyone behind the CGNAT banned. I've lost access to the WSJ and NYT recently, and other websites over time. For every Cloudflare backed website, I have to pass a captcha on every access.

Fuck those people doing "adversarial interoperability as a client", AI scraping, et al, who take away from thousands of people for profit, then move on to the next pool of victims.

rzzzt•7mo ago
YouTube showed me a "this household watches suspiciously many videos" warning once when the provider moved us behind a CGNAT (because 100 households were suddenly watching from the same address, not just one).

It also messes a bit with geolocation, we frequently teleport to different places within the country.

immibis•7mo ago
Yeah, but in this case, the negotiation between the ISP and YouTube typically ends with the IP being unblocked. It's not like the case where one household actually is watching as much as 100 people would, where the IP stays blocked.
jaoane•7mo ago
CGNAT is completely irrelevant to the average person. It’s only an issue if you expect others to connect to you, which is something that almost all people don’t need.

(inb4 but the internet was made to receive connections! Well yes, decades ago maybe. But that’s not the way things have evolved. Get with the times.)

juergbi•7mo ago
Cloudflare sometimes preventing access to some sites and annoying CAPTCHA challenges due to CGNAT are relevant to the average person.

Full IPv6 support should be a requirement for both ISPs as well as websites and other servers.

jaoane•7mo ago
> Cloudflare sometimes preventing access to some sites and annoying CAPTCHA challenges due to CGNAT are relevant to the average person.

They would be, but thankfully CGNAT doesn’t cause that.

messe•7mo ago
It contributes to it, because now you're behind the same public IP address as X other people. You're then X-times more likely to get flagged as suspicious and need to enter a CAPTCHA X-times more frequently.
jaoane•7mo ago
Cloudflare easily detects that using your discrete external port range and knows better than to show you a CAPTCHA.
orangeboats•7mo ago
Anecdotal experience (I know, of course... this is sample size n=1) tells me that this couldn't be further from the truth.

Putting CF aside, anyone who has tried to edit Wikipedia anonymously should understand the pain of CGNAT.

dkjaudyeqooe•7mo ago
Someone should tell Cloudflare that because it's not been my experience at all.

(now n=2)

jeroenhd•7mo ago
It's not a direct cause, but if an IP is hitting my website with spam, I don't care if it's a spam bot or a CGNAT exit point. The only way to stop the spam is to take action against the IP address. For CGNAT customers, that means extra CAPTCHAs or worse.

You can ask your ISP for your own IPv6 subnet if you don't want to be lumped in with the people whose computers and phones are part of a scraping/spamming botnet.

throw0101d•7mo ago
> It’s only an issue if you expect others to connect to you, which is something that almost all people don’t need.

Unless they're playing video games:

* https://steamcommunity.com/sharedfiles/filedetails/?id=27339...

* https://www.checkmynat.com/posts/optimizing-nat-settings-for...

The video game industry is bigger than movies, television, and music combined:

* https://www.marketing-beat.co.uk/2024/10/22/dentsu-gaming-da...

So I think CGNAT / double-NAT can hit a lot of folks.

> Well yes, decades ago maybe. But that’s not the way things have evolved. Get with the times.

Why? Why should I accept the enshittification of the Internet that has evolved to this point? Why cannot people push for something better?

jaoane•7mo ago
Pathetic that in 2025 there still are games that rely on p2p connections, to the detriment of the experience because cheating can’t be detected server-side. GTA 5 is one of them.
throw0101d•7mo ago
If I've purchased a video game, why should I have to be reliant on the publisher's servers being up? Self-hosting should be a thing:

* https://store.steampowered.com/curator/41339173-Self-Hosted-...

At the very least if a game publisher wants to power down their own servers because they don't feel it's "worth" supporting their customers, they should post the server code so that the customers can continue to use the product they 'bought'.

jaoane•7mo ago
Completely agree with the last paragraph.
rubatuga•7mo ago
If you're behind a CGNAT - check out hoppy.network

High quality IPv4 + a whole /56 IPv6 for $8/month

messe•7mo ago
That's way more expensive than what I already have. My ISP, by default, provides me a /56, of which I'm only using two /64 subnets at the moment. For an extra 29 DKK (4.50 USD), I get a static IPv4 as well.

I also don't need to worry about the additional latency of a VPN, and have symmetric gigabit speeds, rather than 100Mbps up/down.

thedanbob•7mo ago
This is what I do, except the dynamic DNS service is just a script on my server that updates Cloudflare DNS with my current external IP. In practice my address is almost static, I've never seen it change except when my router is reset/reconfigured.
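A rough sketch of such a script, assuming a Cloudflare API token plus zone and record IDs (all placeholders here) and any IP-echo service:

#!/bin/sh
IP=$(curl -s https://ifconfig.me)
curl -s -X PUT \
  "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data "{\"type\":\"A\",\"name\":\"home.example.com\",\"content\":\"$IP\",\"ttl\":300}"

Run it from cron every few minutes; it only matters on the rare occasions the address actually changes.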
globular-toast•7mo ago
Many DNS registrars support updates via API these days. I use Porkbun and ddclient to update it. Slight rub is I couldn't get it to work for the apex domain. Not sure where the limitation lies.
kinduff•7mo ago
This is an interesting solution, and I wouldn't mind using one of my existing servers as a gateway or proxy (?).

Is there a way to be selective about which ports are exposed from the host to the target? The target could handle it, but fine-grained control is nice.

mjg59•7mo ago
You could just set a default deny iptables policy for forwarding to that host, and then explicitly open the ports you want
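One way to sketch that, using an explicit drop rule for just that host rather than a chain-wide policy (the wireguard address 10.0.0.2 and the ports are placeholders):

# allow return traffic and the ports you actually want
iptables -A FORWARD -d 10.0.0.2 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -d 10.0.0.2 -p tcp -m multiport --dports 80,443 -j ACCEPT
# drop everything else forwarded to that host
iptables -A FORWARD -d 10.0.0.2 -j DROP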
baobun•7mo ago
iptables is legacy now, and if you're not already well-versed in it, it's better to go straight to nftables (which should be easier to get started with anyway). On modern systems, iptables commands are translated to nftables equivalents by a transitional package.
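The same default-deny sketch in nftables, with the 10.0.0.2 address and ports again being placeholders:

nft add table inet filter
nft add chain inet filter forward '{ type filter hook forward priority 0; policy accept; }'
nft add rule inet filter forward ct state established,related accept
nft add rule inet filter forward ip daddr 10.0.0.2 tcp dport '{ 80, 443 }' accept
nft add rule inet filter forward ip daddr 10.0.0.2 drop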
lazylizard•7mo ago
you can also run a proxy on the vps instead of the nat.
mjg59•7mo ago
Depends on the protocol. For web, sure - for ssh, nope, since the protocol doesn't indicate which machine it's trying to connect to and so you don't know where to proxy it to.
baobun•7mo ago
You can still TCP proxy SSH just fine (one port per target host obv)

Certain UDP-based protocols may be hairier, though.
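A sketch of the one-port-per-host approach on the VPS, with the wireguard addresses and ports invented for illustration:

# ssh to VPS port 2201 reaches host A, 2202 reaches host B
iptables -t nat -A PREROUTING -p tcp --dport 2201 -j DNAT --to-destination 10.0.0.2:22
iptables -t nat -A PREROUTING -p tcp --dport 2202 -j DNAT --to-destination 10.0.0.3:22

Each internal host still needs the policy-routing treatment from the article so that replies go back out over the tunnel.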

PhilipRoman•7mo ago
Socket based proxying is better for this, since you eliminate one point from your attack surface (if your proxy server gets compromised, it's just encrypted ssh/TLS)
remram•7mo ago
I don't know what you mean by "the protocol". There is a destination IP address on every packet... getsockname() will tell the proxy which local IP the client dialed, allowing it to create "virtual hosts" (or you can actually run multiple proxies bound on different local addresses).
mjg59•7mo ago
I have one public IP address. I have three machines behind it that I want to SSH into. How does the machine with the public address know where to route an incoming port 22 packet? For HTTPS this is easy - browsers send the desired site in the SNI field of the TLS handshake, so the frontend can look at that and route appropriately. For SSH there's no indication of which host the packet is intended for.
remram•7mo ago
Well you can't, but that wouldn't work with routing either, and it is not the situation at hand: in the article there are multiple IPs on the VPS:

> you now have multiple real-world IP addresses that people can get to

In your new situation, which is not the one in the article, you can just use different ports.

zzo38computer•7mo ago
HTTPS and any other protocol that uses TLS has virtual hosting (because TLS has virtual hosting), and so does unencrypted HTTP (with the "Host" header), and some "small web" protocols such as Spartan and Scorpion. (In the case of Spartan, the domain name is the first thing the client sends to the server, which should make it easy to implement.) Like you mention, SSH does not. IRC and NNTP also do not have virtual hosting as far as I can tell, although I had suggested adding a HOST command to these protocols to implement virtual hosting.
remram•7mo ago
Note that this is not the only meaning of "virtual hosting". It is very commonly used with different addresses or ports. For example, the Apache `<VirtualHost addr:port>` block. It gets confusing because this is the same block that was used for "named-based virtualhost" (different `ServerName` in the same `<VirtualHost>`). See https://en.wikipedia.org/wiki/Virtual_hosting
v5v3•7mo ago
I would suggest putting a disclaimer on the article to warn any noobs that prior to opening up a server on the internet basic security needs to be in place.
politelemon•7mo ago
Another alternative could be a cloudflare tunnel. It requires installing their Daemon on the server and setting up DNS in their control panel. No ports need opening from the outside in.
troupo•7mo ago
I used to expose a site hosted on my home NAS through it, and now I do the same from a server at Hetzner.

Works like magic :)

jeroenhd•7mo ago
The downside of the Cloudflare approach is that yet more websites are behind Cloudflare's control. The VPS approach works pretty much the same way Cloudflare does, but without the centralized control.

On the other hand, Cloudflare is a pretty easy solution against spam bots and scrapers. Probably a better choice if that's something you need protection against.

PaulKeeble•7mo ago
Everyone does these days, although it's really the AI scrapers you need defence from, and Cloudflare isn't doing so well at that yet.
Aachen•7mo ago
As someone who actually hosts stuff at home, I'm not sure everyone does. I don't, for one

Maybe if you're on a limited data plan (like in Belgium or on mobile data), you'd want to prevent unnecessary pageloads? Afaik that doesn't apply to most home connections

Or if you want to absolutely prevent that LLMs eat your content for moral/copyright reasons, then it can't be on the open internet no matter who your gateway is

areyourllySorry•7mo ago
ai scrapers are truly this year's boogeyman
0xCMP•7mo ago
I think both are great options. Personally I do split-dns so I can access things "directly" while using Tailscale and via Cloudflare Tunnel when I am not.

I also selectively expose via the Cloudflare Tunnel. Most things are tailscale only.

KronisLV•7mo ago
Lovely write up! Personally, I just settled on Tailscale so I don’t have to manage WireGuard and iptables myself.

For a while I also thought that regular SSH tunnels would be enough but they kept failing occasionally even with autossh.

Oh and I got bitten by Docker default MTU settings when trying to add everything to the same Swarm cluster.
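For anyone hitting the same MTU issue, one workaround is creating the overlay network with an MTU that fits inside the WireGuard tunnel. The 1380 value and network name here are assumptions, tune them for your own setup:

docker network create -d overlay --opt com.docker.network.driver.mtu=1380 my-overlay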

Daviey•7mo ago
The comments suggest Tailscale, and the author assumes this could only mean Funnel, but you could use Tailscale/Headscale just for handling the WireGuard and the low-level networking / IP allocation.

Then do straightforward iptables or L7 reverse proxying via Caddy, Nginx, etc., directly to the routable IP address.

The outcome is the ~same, bonus is not having to handle the lower level component, negative is an extra "thing" to manage.

But this is how I do the same thing, and i'm quite happy with the result. I can also trivially add additional devices, and even use it for egress, giving me a good pool of exit-IP addresses.

(Note: I was going to add this as a comment on the blog, but it seems their captcha service is broken and would not display, so the comment was blocked.)

0xCMP•7mo ago
I haven't actually used Funnel, but I do use Cloudflare Tunnels + a VPS.

What I've done is that the VPS Nginx can talk over Tailscale to the server in question and the Cloudflare Tunnel lets those not on Tailscale (which is me sometimes) access the VPS.

zokier•7mo ago
Yeah, this is the way to do this. I'm pretty sure that if you for some reason do not want to run wireguard on all your servers you could fairly easily adjust this recipe to have a centralized wg gateway on your local network instead.

I think I've seen some scripts floating around to automate this process but can't remember where. There are lots of good related tools listed here: https://github.com/anderspitman/awesome-tunneling

eqvinox•7mo ago
I would highly recommend reading up on VRFs and slotting that into the policy routing bits. It's really almost the same thing (same "ip route" commands with 'table' even), but better encapsulated.
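A rough sketch of the VRF flavour of the same thing, with the interface name and table number being arbitrary:

# bind the wireguard interface into a VRF backed by routing table 100
ip link add vrf-wg type vrf table 100
ip link set dev vrf-wg up
ip link set dev wg0 master vrf-wg
# routes and rules for wg0 now live inside the VRF
ip route show vrf vrf-wg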
JdeBP•7mo ago
This and the comments highlight how bad many ISPs in North America and Western Europe are at IPv6, still, in 2025, and the lengths to which people will go to treat that as damage and literally route around it.

One of the biggest ISPs in my country has been promising IPv6 since 2016. Another, smaller, competitor, advertised on "World IPv6 Day" in 2011 that it was way ahead of the competition on supplying IPv6; but in fact does not supply it today.

One of the answers I see given a lot over the years is: Yes, I know that I could do this simply with IPv6. But ISPs around here don't route IPv6, or even formally provide statically-assigned IPv4 to non-business customers. So I have had to build this Heath Robinson contraption instead.

mjg59•7mo ago
Pretty much! My ISP was founded by https://en.wikipedia.org/wiki/Rudy_Rucker and is somewhat cheap and delightful and happily routes me a good amount of IPv6, and every 48 hours or so it RAs me an entirely different range even though I still have validity on the lease for the old one and everything breaks, so I've had to turn IPv6 off entirely (I sent dumps of the relevant lease traffic to support, they said they'd look into it, and then the ticket auto closed after being inactive for two years). I spent a while trying to make things work with IPv6 but the combination of it being broken at my end and also there still being enough people I want to provide access to who don't have it means it just wasn't a good option.
anonymousiam•7mo ago
One of my places uses Frontier FiOS (soon to become Verizon again). They have zero support for IPv6, and it isn't even on their roadmap.

I use a static HE (Hurricane Electric) IPv6 tunnel there, and it works great.

The only issue is that YouTube thinks the IPv6 block is commercial or an AI dev scraping their content, so I can't look at videos unless I'm logged in to YouTube.

stego-tech•7mo ago
I’m also on FiOS, and despite repeated statements to the effect I’d never get IPv6 on my (20 year) old ONT, I’ve got a nice little /56 block assigned on my kit via DHCPv6. Problem is that, as it’s a DHCP block, it changes, and Namecheap presently does not offer any sort of Dynamic DNS for IPv6 addresses.

Still, it let me tear down the HE IPv6 tunnel I was also running, since the sole reason I needed IPv6 was so our household game consoles could all play online without cursed firewall rules and IP reservations. I’m pretty chuffed with the present status quo, even if it’s far from perfect.

One other thing I’d note about OPs article (for folks considering it as a way to work around shitty ISP policies) is that once you have this up and running, you also have a perfect setup for a reverse proxy deployment for your public services. Just make sure you’re watching your bandwidth so you don’t get a surprise bill.

jxjnskkzxxhx•7mo ago
> Heath Robinson contraption

Ah, I see you also watched that video yesterday on manufacturing a tiny electric rotor.

JdeBP•7mo ago
I actually learned the expression when I was a child, via the Professor Branestawm books.
jxjnskkzxxhx•7mo ago
Ok so this is genuinely a case of seeing an expression for the first time, learning it, and then seeing it again immediately after. Fun.
57473m3n7Fur7h3•7mo ago
The Baader–Meinhof phenomenon strikes again!
jxjnskkzxxhx•7mo ago
I just learned about this yesterday.
grndn•7mo ago
Fellow Branestawm enthusiast here. That is the first time anyone has ever mentioned Professor Branestawm on HN, as far as I can tell! It's triggering deep memories.
Joeboy•7mo ago
"Heath Robinson" is British English for "Rube Goldberg".
jxjnskkzxxhx•7mo ago
TIL
algernonramone•7mo ago
Me too, at first I thought it was a takeoff on "Heathkit". Silly me, I guess.
jeroenhd•7mo ago
I'm in western Europe and every ISP but the ultra cheap ones and the niche use case ones has stable IPv6 prefixes. Some do /48, others /56.

IPv4 is getting CGNAT'd more and more, on the other hand. One national ISP basically lets you pick between IPv4 CGNAT and IPv6 support (with IPv6 being the default). Another has been rolling out CGNAT IPv4 for new customers (at first without even offering IPv6, took them a few months to correct that).

This isn't even an "America and Western Europe" thing. It's a "whatever batshit insane approach the local ISP took" thing. And it's not just affecting IPv6 either.

PaulKeeble•7mo ago
Mine officially supports it. However, having configured the prefix as they define and using SLAAC etc., all my devices get their IPv6 addresses and can access the internet; I can even connect from outside the network, so it all "works". But I have a bunch of issues: neither of the DNS servers my ISP defines is reachable, I can't route to one of the OpenDNS resolvers (the other works fine), and I have periods where the entirety of IPv6 routing breaks for about a minute and then recovers. Having done this with two different routers on completely different firmware now, I can't help but think my ISP's official support is garbage and they have major problems with it. I had to turn it off because it causes all sorts of problems.
emilfihlman•7mo ago
Once again I voice the only sane option: Skip IPv6 and the insanity that it is, and do IPv8 and simply double (or quadruple) the address space without introducing other new things.
acdha•7mo ago
This is a pipe dream in the current century. IPv6 adoption has been slow but it’s approaching 50% and absolutely nobody is going to go through the trouble of implementing a new protocol; updating every operating system, network, and security tool; and waiting a decade for users to upgrade without a big advantage. “I don’t want to learn IPv6” is nowhere near that level of advantage.
bigstrat2003•7mo ago
That is not a sane option. IPv6 isn't actually that hard, companies are just lazy and refuse to implement it (or implement it correctly).
drdaeman•7mo ago
It'll be objectively worse. IPv6 is at least sort of supported by a non-negligible number of devices, software and organizations. This IPv8 would be a whole new protocol, that no one out there supports. The fact that version 8 was already defined in [an obsolete] RFC1621 doesn't help either.

Even if you decide to try to make it a Frankenstein's monster of a protocol, making it two IPv4 packets wrapped in each other to create a v4+v4=v8 address space, you'll need a whole new routing solution for the Internet, as those encapsulations would have issues with NATs. And that'll be way more error prone (and thus, less secure), because it'll be theoretically possible to accidentally mix up v4 and inner-half-of-v8 traffic.

Nah, if we can't get enough people to adopt IPv6, there's no chance we'll get even more people to adopt some other IPvX (unless something truly extraordinary happens that would trigger such adoption, of course).

MintPaw•7mo ago
Are you saying you believe it's truly impossible to create a new backwards compatible standard that expands the address space and doesn't require everyone to upgrade for it to work?
hypeatei•7mo ago
If it's possible, why has no one done it? Most of the backwards compatible "solutions" that are presented just run into the same issues as IPv6 but with a more quirky design.
ianburrell•7mo ago
It isn't possible to make backwards compatible standard that expands the address space. Where are you going to put the extra address bits in the IPv4 header?

It also can't be backwards compatible with IPv4 networking and software. The network gear will drop the extra address bits, the OS will ignore them, and software will blow up.

It would be much better to make a new version. But if you're going to make a new protocol, you might as well make the address big enough to not need expansion again.

Then you have to update every networking device to support the new standard. And update all the protocols (DHCP, etc) for more address space. That part is what took a lot of the time for IPv6. Then you have to update all of the software to support 64-bit addresses. Luckily, most of the work was already done for IPv6.

Then you have to support a transition mechanism to talk to IPv4. Except there isn't enough space in new address. IPv6 on the other hand, has enough address space to stuff the IPv4 host and port in the IPv6 address for stateless NAT.

rzzzt•7mo ago

> Where are you going to put the extra address bits in the IPv4 header?

The optional part. EIP proposed using 16 bits (minimum) to bump the address space to 40 bits (the EIP extension portion is variable-sized so it can go higher until you reach header option limits): https://archive.org/details/rfc1385/page/4/mode/2up
icedchai•7mo ago
If you read page 9, phase 1 mentions "update all backbone routers and border routers." This is the same problem as IPv6.
rzzzt•7mo ago
The effort is a bit smaller because existing stacks can already read what's in the EIP part as it disguises itself as an option header. The change is behavioral not structural.

Also with the extra octet added we'd get ~254 current-IPv4-sized clusters of addresses. If a unit inside one of these doesn't really care about the others they can skip supplying this information entirely, i.e. not all participants need to understand the extension. LANs, internal networks and residential use comes to mind as examples in which case only the gateway has to be updated just like the RFC says.

With IPv6 participation is all or nothing or dual stack, but then this is ~1.1 stack :)

icedchai•7mo ago
That RFC glosses over a LOT of details. I'm skeptical the effort would be a bit smaller, once you consider what is required for routing and the "translation service." That's totally glossed over in the RFC, by the way.

Unless you're planning on doing all IP communications in user space (or within your network "cluster"), the OS and IP stack still needs to be updated, you need a new addressing format, applications need to be aware of it, etc. If you want to actually make use of the new address space, it all needs to be updated... just like IPv6.

stephen_g•7mo ago
No, sorry. Very few switches and not all routers do that in software. If all that is in an ASIC then that part just can't be added to the address without new hardware.

So no, good attempt but it's pretty much still a 'upgrade all the routers and switches' kind of issue just like IPv6.

icedchai•7mo ago
That standard was IPv4 with NAT ;) Unfortunately, it doesn't allow for end-to-end connectivity.
stephen_g•7mo ago
I'm not going to say it's truly impossible, but it's practically just-about impossible.

There's no straightforward way of getting old hosts to be able to address anything in the new expanded space, or older routers to be able to route to it.

So you have to basically dual-stack it, and, oops, you've created the exact same situation with IPv6...

Nextgrid•7mo ago
The reason IPv6 adoption is lacking is that there's no business case for it from consumer-grade ISPs, not that there's an inherent problem with IPv6. Your proposed IPv8 standard would have the exact same adoption issues.
icedchai•7mo ago
IPv6 is often simpler to administer than IPv4. Subnetting is simpler for the common cases. SLAAC eliminates the need for DHCP on many local networks. There's no NAT to deal with (a good thing!) Prefix delegation can be annoying if the prefix changes (my /56 hasn't in almost 3 years.) Other than that, it's mostly the same.
fc417fc802•7mo ago
> There's no NAT to deal with

I frequently see this claim made, but it simply isn't true. NAT isn't inherent to a protocol; it's something the user does on top of it. You can NAT IPv6 just fine, there just isn't the same pressure to do so.

icedchai•7mo ago
Technically, you are correct. Practically speaking, NAT is an inherent part of using IPv4 for 99.99% of end users. I haven't seen an end user or business with a public IP on the desktop in nearly 25 years.

You can NAT IPv6 but it is rarely done since there is simply no need.

MartijnBraam•7mo ago
I NAT IPv6 on one of my servers because having separate IPv6 addresses for VMs but the same IPv4 has caused some issues with running mail servers and certificates. If only I could drop IPv4 completely.
drewg123•7mo ago
IPv6 is the reason why we can't have IPv6

Your IPv8 is what IPv6 should have been. Instead, IPv6 decided to re-invent way too much, which is why we can't have nice things and are stuck with IPv4 and NAT. Just doubling the address width would have given us 90% of the benefit of V6 with far less complexity and would have been adopted much, much, much faster.

I just ported some (BSD) kernel code from V4 to V6. If the address width was just twice as big, and not 4x as big, a lot of the fugly stuff you have to deal with in C would never have happened. A sockaddr could have been expanded by 2 bytes to handle V6. There would not be all these oddball casts between V4/V6, and entirely different IP packet handling routines because of data size differences and differences in stuff like route and mac address lookup.

Another pet peeve of mine from my days working on hardware is IPv6 extension headers. There is no limit in the protocol to how many extension headers a packet can have. This makes verifying ASICS hard, and leads to very poor support for them. I remember when we were implementing them and we had a nice way to do it, we looked at what our competitors did. We found most of them just disabled any advanced features when more than 1 extension header was present.

immibis•7mo ago
IPv6 reinvented hardly anything. It's pretty much IPv4, with longer addresses, and a handful of trivial things people wished were in IPv4 by consensus (e.g. fragmentation only at end hosts; less redundant checksums).

The main disagreements have been about what to do with the new addresses, e.g. some platforms insist on SLAAC. (Which is good because it forces your ISP to give you a /64).

Devices operating at the IP layer aren't allowed to care about extension headers other than hop-by-hop, which must be the first header for this reason. Breaking your stupid middlebox is considered a good thing because these middleboxes are constantly breaking everyone's connections.

Your sockaddr complaints WOULD apply at double address length on platforms other than your favorite one. The IETF shouldn't be in charge of making BSD's API slightly more convenient at the expense of literally everything else. And with addresses twice as long, they wouldn't be as effectively infinite. You'd still need to be assigned one from your ISP. They'd still probably only give you one, or worse, charge you based on your number of devices. You'd still probably have NAT.

FuriouslyAdrift•7mo ago
For a long time, I operated from home with an auction-purchased IPv4 /24 just so I could get around all this BS and have my own AS.

There's nothing nicer than being able to BGP peer and just handle everything yourself. I really miss old Level 3 (before the Lumen/CenturyLink buyout).

Kind of kicking myself for selling my netblock but it was a decent amount of money ($6000).

icedchai•7mo ago
I'm doing exactly this. I got my netblock for free in 1993, back in the Internic days before ARIN existed. I have a couple of VPSes running BGP and tunnel traffic back to my home over wireguard.
b112•7mo ago
Hey!

What if you will your netblock to me? I'll will you my Camaro and my collection of Amiga parts.

(I really want your netblock)

icedchai•7mo ago
hah! If I wasn't actively using it, I'd consider renting it out. I bet you could find some early Internet dude that has a /24 they're not using
jekwoooooe•7mo ago
I feel like it’s malicious. They don’t want to support it because it means they can’t charge high prices for static IPs
anonymousiam•7mo ago
I did the same thing 20 years ago, but I used vtun because Wireguard didn't exist yet. It's a cool way to get around the bogus limitations on residential static IP addresses.

At the time, my FiOS was about $80/month, but they wanted $300/month for a static IP. I used a VPS (at the time with CrystalTech), which was less than $50/month. Net savings: $170/month.

lostlogin•7mo ago
> At the time, my FiOS was about $80/month, but they wanted $300/month for a static IP.

So ridiculous.

It’s fast, far quicker than I can use, and the static IP was a one off $10 or similar.

xiconfjs•7mo ago
Quote from OPs ISP [1]:

"Factors leading to a successful installation: Safe access to the roof without need for a helicopter."

[1] https://www.monkeybrains.net/residential.php#residential

uncircle•7mo ago
I wish I had access to a small ISP. It is comforting to know that if something goes wrong, on the other end of the line there is someone with a Cisco shell open ready to run a traceroute.
xiconfjs•7mo ago
For sure… in terms of reaction times and flexibility they are great… until something serious happens outside of their scope.
uncircle•7mo ago
Like what? Their scope is being an ISP, i.e. routing packets.
xiconfjs•7mo ago
Usually it's about standing, resources and options. Serious problems like fibre cuts, local power outages and DDoS attacks are usually not in their scope, and they have to wait for 3rd parties to fix them, with few options to speed up those processes. Bigger ISPs usually have teams/departments with well-established processes/solutions to tackle these problems. That said, I'm totally aware that small and big ISPs each have their pros and cons - as always it mainly depends on your use case and requirements.
ghoshbishakh•7mo ago
There are tools specifically built for hosting stuff without public IP such as https://pinggy.io
crtasm•7mo ago
There are a number of paid services like that yes.
dismalpedigree•7mo ago
I do something similar. I run a Nebula network. The VPS has haproxy and passes the encrypted data to the hosts, using SNI to figure out the specific host. No keys on the VPS.

The VPS and each host are Nebula nodes. I can put the nodes wherever I want. Some are on an additional VPS, some are running on Proxmox locally. I even have one application running, geo-isolated and redundant, on a small computer at my friend's house in another state.
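A minimal sketch of the haproxy side, with hostnames and Nebula addresses invented for illustration:

frontend tls-in
    mode tcp
    bind :443
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend home_box if { req_ssl_sni -i home.example.com }
    default_backend other_box

backend home_box
    mode tcp
    server home 10.42.0.2:443

backend other_box
    mode tcp
    server other 10.42.0.3:443

Because it's mode tcp with SNI peeking, TLS terminates on the Nebula nodes themselves and no private keys live on the VPS.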

remram•7mo ago
This Nebula? https://github.com/slackhq/nebula
dismalpedigree•7mo ago
Yes. That's the one. Works really well. Basically a free version of Tailscale. A bit more of a learning curve.
duskwuff•7mo ago
Headscale [1] has a stronger claim to "free version of Tailscale" - it's literally a self-hosted version of Tailscale's coordination server. It's even compatible with the Tailscale client.

[1]: https://headscale.net/

remram•7mo ago
The problem with Headscale is that it has essentially no standalone documentation. All of it is described in relation to Tailscale, which is what I don't want to use: here are the Tailscale features we have, here are the differences from Tailscale. It's very weird.
lostmsu•7mo ago
I switched from Nebula to Yggdrasil (IPv6, global, but not the same as public IPv6). https://news.ycombinator.com/item?id=43967082
PeterStuer•7mo ago
I run a very small VPS at Hetzner with Pangolin on it that takes care of all the Traefic Wireguard tunneling to my home servers. Very easy to set up and operate.

https://fossorial.io/

thatcherc•7mo ago
Cool! Do you like that approach? I've thought about setting up that exact thing but I wasn't sure how well it would work in practice. Are there any pitfalls you ran into early on? I might give it a shot after your "very easy to set up and operate" review!
PeterStuer•7mo ago
Honestly it was very easy. Their documentation is decent, and the defaults are good.

Setting up Pangolin on the VPS and Newt on your LAN, connecting them, and adding e.g. a small demo website as a resource on Pangolin will take you about half an hour (unless your domain propagation is slow, so always start by defining the name in DNS and pointing it to your VPS IP. You can use a wildcard if you do not want to manually make a new DNS entry each time).

wredcoll•7mo ago
What is the vps for? Just the static ip?
PeterStuer•7mo ago
To have a public front that is outside of the lan firewall. The idea is that you do not have to open your lan to anything. The only communication will be the encrypted wireguard tunnel between the VPS and your Newt instance.

You can run the Pangolin also on the lan, but you will need to open a few ports then on your lan firewall, and manage your ddns etc. if you do not have a fixed IP at home.

For less than 4€/month I opted for the VPS route.

fainpul•7mo ago
> Let's say the external IP address you're going to use for that machine is 321.985.520.309 and the wireguard address of your local system is 867.420.696.005.

What is going on here with these addresses? I'm used to seeing stuff like this in movies – where it always destroys my immersion because now I have to think about the clueless person who did the computer visuals – but surely this author knows about IPv4 addresses?

l-p•7mo ago
The author did not want to use real addresses and was not aware of the 192.0.2.0/24, 198.51.100.0/24, and 203.0.113.0/24 ranges specified in RFC 5737 - IPv4 Address Blocks Reserved for Documentation.
bzmrgonz•7mo ago
TIL!!!
sneak•7mo ago
This article was not worth having to solve a captcha to read.

I think I will be done with sites that require me to solve captchas to visit for simple reading, just as I am done with sites that require me to run javascript to read their text.

superkuh•7mo ago
At least it is technically possible to complete the dreamwidth captchas now. For many years (well before the modern corporate spidering insanity) dreamwidth was just completely inaccessible no matter how many times one completed their captchas, unless you were running a recent version of Chrome or the like.

Now, after doing the captcha ~5 times and getting nothing, a different captcha pops up that actually works and lets one in.

It's not good but it's a hell of a lot better than their old system.

bzmrgonz•7mo ago
how do you feel about proof of work human detection mechanisms? I think those are more tolerable given that it's just a short pause in browsing.
Aachen•7mo ago
(Not the person you asked)

I keep waiting to hear from someone trying to use the web on an older computer who's sitting there for 30 seconds every time they click another search result, or on a battery-powered device that now burns a lot of inefficient high-frequency clock cycles to get these computations out of the way.

But so far I've heard from nobody! And they've been fast on my phone. Does this really keep bots out? I'm quite surprised in both directions (how little computation apparently already helps and how few people run into significant issues)

When this came up 15 years ago (when PoW was hyping due to Bitcoin and Litecoin) the conversation was certainly different than how people regard this today. Now we just need an email version of it and I'm curious if mass spam becomes a thing of the past as well

CaptainFever•7mo ago
I can't even access the article, I get a 403. Here's a text mirror:

I'm lucky enough to have a weird niche ISP available to me, so I'm paying $35 a month for around 600MBit symmetric data. Unfortunately they don't offer static IP addresses to residential customers, and nor do they allow multiple IP addresses per connection, and I'm the sort of person who'd like to run a bunch of stuff myself, so I've been looking for ways to manage this.

What I've ended up doing is renting a cheap VPS from a vendor that lets me add multiple IP addresses for minimal extra cost. The precise nature of the VPS isn't relevant - you just want a machine (it doesn't need much CPU, RAM, or storage) that has multiple world routeable IPv4 addresses associated with it and has no port blocks on incoming traffic. Ideally it's geographically local and peers with your ISP in order to reduce additional latency, but that's a nice to have rather than a requirement.

By setting that up you now have multiple real-world IP addresses that people can get to. How do we get them to the machine in your house you want to be accessible? First we need a connection between that machine and your VPS, and the easiest approach here is Wireguard. We only need a point-to-point link, nothing routable, and none of the IP addresses involved need to have anything to do with any of the rest of your network. So, on your local machine you want something like:

[Interface]
PrivateKey = privkeyhere
ListenPort = 51820
Address = localaddr/32

[Peer]
Endpoint = VPS:51820
PublicKey = pubkeyhere
AllowedIPs = VPS/0

And on your VPS, something like:

[Interface]
Address = vpswgaddr/32
SaveConfig = true
ListenPort = 51820
PrivateKey = privkeyhere

[Peer]
PublicKey = pubkeyhere
AllowedIPs = localaddr/32

The addresses here are (other than the VPS address) arbitrary - but they do need to be consistent, otherwise Wireguard is going to be unhappy and your packets will not have a fun time. Bring that interface up with wg-quick and make sure the devices can ping each other. Hurrah! That's the easy bit.

Now you want packets from the outside world to get to your internal machine. Let's say the external IP address you're going to use for that machine is 321.985.520.309 and the wireguard address of your local system is 867.420.696.005. On the VPS, you're going to want to do:

iptables -t nat -A PREROUTING -p tcp -d 321.985.520.309 -j DNAT --to-destination 867.420.696.005

Now, all incoming packets for 321.985.520.309 will be rewritten to head towards 867.420.696.005 instead (make sure you've set net.ipv4.ip_forward to 1 via sysctl!). Victory! Or is it? Well, no.

What we're doing here is rewriting the destination address of the packets so instead of heading to an address associated with the VPS, they're now going to head to your internal system over the Wireguard link. Which is then going to ignore them, because the AllowedIPs statement in the config only allows packets coming from your VPS, and these packets still have their original source IP. We could rewrite the source IP to match the VPS IP, but then you'd have no idea where any of these packets were coming from, and that sucks. Let's do something better. On the local machine, in the peer, let's update AllowedIps to 0.0.0.0/0 to permit packets from any source to appear over our Wireguard link. But if we bring the interface up now, it'll try to route all traffic over the Wireguard link, which isn't what we want. So we'll add table = off to the interface stanza of the config to disable that, and now we can bring the interface up without breaking everything but still allowing packets to reach us. However, we do still need to tell the kernel how to reach the remote VPN endpoint, which we can do with ip route add vpswgaddr dev wg0. Add this to the interface stanza as:

PostUp = ip route add vpswgaddr dev wg0
PreDown = ip route del vpswgaddr dev wg0

That's half the battle. The problem is that they're going to show up there with the source address still set to the original source IP, and your internal system is (because Linux) going to notice it has the ability to just send replies to the outside world via your ISP rather than via Wireguard and nothing is going to work. Thanks, Linux. Thinux.

But there's a way to solve this - policy routing. Linux allows you to have multiple separate routing tables, and define policy that controls which routing table will be used for a given packet. First, let's define a new table reference. On the local machine, edit /etc/iproute2/rt_tables and add a new entry that's something like:

1 wireguard

where "1" is just a standin for a number not otherwise used there. Now edit your wireguard config and replace table=off with table=wireguard - Wireguard will now update the wireguard routing table rather than the global one. Now all we need to do is to tell the kernel to push packets into the appropriate routing table - we can do that with ip rule add from localaddr lookup wireguard, which tells the kernel to take any packet coming from our Wireguard address and push it via the Wireguard routing table. Add that to your Wireguard interface config as:

PostUp = ip rule add from localaddr lookup wireguard
PreDown = ip rule del from localaddr lookup wireguard

and now your local system is effectively on the internet.

You can do this for multiple systems - just configure additional Wireguard interfaces on the VPS and make sure they're all listening on different ports. If your local IP changes then your local machines will end up reconnecting to the VPS, but to the outside world their accessible IP address will remain the same. It's like having a real IP without the pain of convincing your ISP to give it to you.
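For illustration, a second interface on the VPS might look roughly like this, in the article's placeholder style, with its own port, addresses, and keys:

[Interface]
Address = vpswgaddr2/32
SaveConfig = true
ListenPort = 51821
PrivateKey = otherprivkeyhere

[Peer]
PublicKey = otherpubkeyhere
AllowedIPs = otherlocaladdr/32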

Source: https://web.archive.org/web/20250618061131/https://mjg59.dre...

cesarb•7mo ago
At least you get a CAPTCHA. All I get is a "403 Forbidden" with zero extra information. Tried from two different ISPs and different devices.
dreamcompiler•7mo ago
Putting a privkey on your VPS seems like asking for trouble.
mrbluecoat•7mo ago
A similar simple option: https://github.com/hyprspace/hyprspace
1317•7mo ago
Things like this that go through some external VPS always seem a bit pointless to me.

just host it on the VPS directly

orangeboats•7mo ago
A VPS that relays traffic and a VPS that runs services are very different.
dboreham•7mo ago
I have workloads that need 32T of enterprise nvme that I run on a machine in my garage.
sntran•7mo ago
How much for a VPS with enough bandwidth to access those 32T of data frequently?
orangeboats•7mo ago
Sometimes you don't have to access those 32TB of data raw. You often need only the processed data, which can be orders of magnitude smaller.

Your local machine holds and processes the raw data, while your VPS forwards the much smaller processed data to the Internet.

bzmrgonz•7mo ago
This is an interesting use case for a jumpbox. So what if we install a reverse proxy on the VPS and use WireGuard to redirect to services at home (non-static)? Would that work too? Any risks that you can see?
dboreham•7mo ago
I do something similar but using GRE, since I don't need encryption. Then I have OSPF on the resulting overlay network (there are several sites) to deal with ISP outages. One hop is via Starlink, and that one does use WireGuard because Elon likes to block tunnel packets, but it gets through.
chazeon•7mo ago
Why would you want to expose your IP to the internet? I still feel that's dangerous, susceptible to DDoS attack, and I avoid that as much as possible. I put everything behind a Tailscale for internal use and behind Cloudflare for external use.
Aachen•7mo ago
What the heck? That's like not wanting a street address because people might come to block your front door somehow, or burglars might find your building and steal from it. The big brothers you mention would be like gated/walled communities in this analogy I guess

Saying this as someone who's hosted from at home for like 15 years

Also realise that you're sending the IP address to every website you visit, and in most VoIP software, to those you call. Or if you use a VPN 24/7 on all devices, then it's the VPN's IP address in place of the ISP's IP address...

chazeon•7mo ago
I don't think this is the right analogy. Having someone come to your door and break things would take much more effort, and they'd be easier to catch. But DDoSing or attacking your service has minimal cost.

Visiting sites and sending the IP address is not the problem; the router has a firewall and basically blocks unwanted attention. But when you expose something without protection and allow someone to burn your CPU, or, in a worse case, figure out the password for a not properly secured service, that is a totally different issue.

I saw people setting up honeypot SSH servers and there were so many unauthorized access attempts that I got scared. I think exposing an entire machine to the network is like driving a car without insurance. Sure you might be OK, but when trouble comes, it will be a lot of trouble.

Aachen•7mo ago
> basically blocking unwanted attention. But when you expose something without protection and allow someone to burn your CPU

... sure. You'd think I'd have noticed that in nearly two decades of hosting all different kinds of services if this were a thing

chazeon•7mo ago
Yeah, and of course it depends on your personality and risk model. Compared to other things, I don't want to risk my data, whether leaked or damaged. And I make mistakes, a lot. Maybe it works if you are very meticulous and can ensure you put up all the security measures yourself and won't expose something you don't want to. I am just not that kind of person.
Aachen•7mo ago
I'm not meticulous either. I've had one responsible disclosure and a few cases where I noticed issues myself, but never one where an attacker discovered it first. There aren't that many malicious people. The only scenario where you realistically get pwned is when there's a stable, automated exploit for a widespread service that can be discovered automatically: something like Heartbleed, or maybe a WordPress plugin with an SQL injection.

Run unattended upgrades, or the equivalent for whatever update mechanism you use, and you'll be fine. I've seen banks running more outdated services than I do at home... (I do security consulting, hence the comparison.)

rtkwe•7mo ago
To do that, people have to physically come to my house, and there are solutions to that; people can fuck with my internet from anywhere in the world. It's similar to why remote internet voting is such a Pandora's box of issues.
Aachen•7mo ago
There are 4 billion front doors on the v4 internet. Sending you a DDoS is transient (not like doing something to you physically) and doesn't scale to lots of websites, especially for no gain.

In addition to myself, I know some people who self-host, but none who ever had a meaningful DDoS. If you're hosting an unpopular website or NAS, nobody is going to be interested in wasting their capacity on bothering you for no reason.

Anything that requires custom effort (not just sending the same packets to every host) doesn't scale either. You can host an SQL injection pretty much indefinitely with nobody discovering it, so long as it's not in standard software that someone might scan for; and if it is, there'll be automatic updates for it. Not that I'd recommend hosting custom vulnerable software, but either way: in terms of `risk = chance × impact`, the added risk of self-hosting compared to datacentre hosting is absolutely negligible, so long as you apply the same apt upgrade policy in either situation.

Online voting has nothing to do with these phantom risks of self hosting

0xCMP•7mo ago
In this case they're re-exposing the server(s) to the public internet, but their actual IP address is still very much hidden behind the Wireguard connection to the VPS.

The IPs they're talking about exposing are ones which are on a VPS, not their home router, or the internal IPs identifying a device in Wireguard.

yusina•7mo ago
Um that article is not at all about what I expected. It solves a particular problem, which is not having a static IP address. I happen to have one, so that's not an issue.

But I still have so much to consider when doing local hosting. Redundant electricity? IP connectivity? What if some hardware dies? What if I get DDoS'ed? How do I get all relevant security fixes applied ASAP? How do I properly isolate the server from the rest of the home network, like the kids' laptops and the TV running Netflix? ...?

All solvable of course, but that's what I'd have expected in such an article.

nurettin•7mo ago
Too lazy to set up wireguard. I just use ssh -L. And if there is another server in the way, I hop with ssh -J -L.
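For anyone unfamiliar with those flags, the shape of it is roughly this (hostnames and ports are illustrative):

    # Forward local port 8080 to port 80 on a host reachable from the server.
    ssh -L 8080:internal-host:80 user@server.example.com

    # Same, but hopping through an intermediate jump host first.
    ssh -J user@jump.example.com -L 8080:internal-host:80 user@server.example.com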
jojohohanon•7mo ago
I feel like I missed a pre-read that teaches me about these strange super-numeric IP addresses. E.g. 400.564.987.500

Am I just seeing IPv6 in an unusually familiar format? Or is it an intentionally malformed format used by Wireguard for internal routing?

cameroncooper•7mo ago
Looks like modified placeholder addresses, because the author didn't want to use real addresses. I don't think they could be used for internal routing, since each octet of an IPv4 address is a single byte (0-255), so larger numbers would break the entire IP stack.
Arrowmaster•7mo ago
Yes the author needs to be beaten over the head with RFC 5737.
FlyingSnake•7mo ago
How is it different from self hosting locally with Cloudflare tunnels or Tailscale?

E.g. I have a PiZero attached to my router and it’s exposed to the internet via Cloudflare tunnels.

tehlike•7mo ago
I also use a Cloudflare tunnel.
varenc•7mo ago
How is the author getting a symmetric 600mbps connection with Monkeybrains? They're an awesome local ISP and provide internet via roof-mounted PtM wireless connections.

I want to love them, but sadly I only get an unreliable 80mbps/40mbps connection from them, with occasional latency spikes that make it much worse. To make up for this I run a multi-WAN gateway connecting to my neighbor/friend's Comcast as well. Here's the Monkeybrains (https://i.imgur.com/FaByZbw.jpeg) vs Comcast (https://i.imgur.com/jTa6Ldk.jpeg) latency log.

Curious if the author had to do anything special to get a symmetric 600mbps from Monkeybrains. They make no guarantees about speed at all, but they're quite cheap, wholesome, and have great support. That said, support hasn't been able to get me anywhere close to the author's speeds.

btucker•7mo ago
I love Monkeybrains! I had something in the neighborhood of a 600mbps symmetric connection through them in the late 2010s when I lived in SF. The only issue was that when it rained hard, the speeds would deteriorate.

Interesting you're getting such slow speeds. Ask them if a tech can stop by and troubleshoot with you.

rtkwe•7mo ago
I've taken the easier solution of Cloudflare's free Tunnel service so my IP is less exposed and I don't have to poke holes in my firewall.
saltspork•7mo ago
Last I checked Cloudflare insisted on terminating TLS on the free tier.

On principle, I'd rather poke a hole in my firewall than allow surveillance of the plaintext traffic.

rtkwe•7mo ago
I think they have to for tunnels to work. They rewrite the headers so the target server doesn't have to have any special config to recognize abc.example.xyz as being itself.

I think in theory you could do it without that, but it would be a lot more work on the recipient side.

Me, I'm less worried about that, so I accept the convenience of not having to set up a reverse proxy and poke a hole in my router.

tasn•7mo ago
I wrote about doing the same thing in 2016[1]; crazy to think that we STILL don't have IPv6.

1: https://stosb.com/blog/using-an-external-server-and-a-vpn-to...

zrm•7mo ago
> multiple world routeable IPv4 addresses

It's pretty rare that you would need more than one.

If you're running different types of services (e.g. http, mail, ftp) then they each use their own ports and the ports can be mapped to different local machines from the same public IP address.

The most common case where you're likely to have multiple public services using the same protocol is http[s], and for that you can use a reverse proxy. It's only a few lines of config for nginx or haproxy, and you're doing yourself a favor, because adding a new service just means adding a line to the reverse proxy's config instead of configuring and paying for another IPv4 address.
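As a sketch of how small that config is (hostnames and backend addresses are made up):

    # Drop-in file for nginx's http {} context: two hostnames on one public
    # IP routed to different internal machines. TLS config omitted for brevity.
    cat > /etc/nginx/conf.d/home-services.conf <<'EOF'
    server {
        listen 80;
        server_name blog.example.com;
        location / { proxy_pass http://10.0.0.2:8080; }
    }
    server {
        listen 80;
        server_name git.example.com;
        location / { proxy_pass http://10.0.0.3:3000; }
    }
    EOF
    nginx -s reload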

And if you want to expose multiple private services, have your clients use a VPN; then only the VPN endpoint needs a public IP, because the clients just use the private IPs over the VPN.

To actually need multiple public IPs you'd have to be doing something like running multiple independent public FTP servers while needing them all to use the official port. Don't contribute to the IPv4 address shortage. :)

TacticalCoder•7mo ago
> Let's say the external IP address you're going to use for that machine is 321.985.520.309 and the wireguard address of your local system is 867.420.696.005.

What kind of IP addresses are these?

protocolture•7mo ago
Won't sell you a static IP? Not even IPv6? That's just incompetence.
JXzVB0iA•7mo ago
Recommend trying a VRF (l3mdev) for this setup (dual interface with a 0.0.0.0/0 route).

Put the wg interface in a new vrf, and spawn your self-hosted server in that vrf (ip vrf exec xxx command).
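Roughly like this, assuming wg0 as the tunnel interface, vrf-wg / table 100 as the VRF, and a service binary as a stand-in (all placeholders):

    # Create a VRF and move the WireGuard interface into it, so the tunnel's
    # default route lives in its own table instead of the main one.
    ip link add vrf-wg type vrf table 100
    ip link set vrf-wg up
    ip link set wg0 master vrf-wg
    ip route add default dev wg0 table 100
    # Start the self-hosted service inside that VRF.
    ip vrf exec vrf-wg /usr/local/bin/myservice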

nickzelei•7mo ago
Hm, 600 symmetric with Monkeybrains?? I've had Monkeybrains for over 3 years and have never seen over 200 down. In fact, I reached out to them today because for the last 3 months it's been about 50 down or less; it's so slow I can barely stream content. I'm in a 6-unit building in the Lower Haight, and most of the units also have MB. The hardware is relatively new (2019?). What gives?
neurostimulant•7mo ago
My only concern is logging. Will the webserver on your local machine log the real IP addresses of your visitors, or will it log all traffic as coming from your VPS?