No, your service does not need the extra 0.099% availability for 100x the price...
Make your own VPN while you are at it, wireguard is basically the same config.
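For reference, a minimal WireGuard setup really is only a handful of lines; the keys, addresses, and port below are placeholders:

```ini
# /etc/wireguard/wg0.conf on the server (all values are placeholders)
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

# one [Peer] block per client
[Peer]
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32
```

Bring it up with `wg-quick up wg0`, and put a mirror-image `[Peer]` (pointing at your server's public endpoint) in the client's config.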
For personal use, a cheap VPS will end up costing around the same as something you can do on your own, without the risk of compromising your machine/network through a vulnerable endpoint.
Businesses need to maintain focus and allocate resources toward delivering their core product.
Software is highly profitable and even with inflated cloud computing costs, it makes some level of sense to not over-optimize and spread teams thin reinventing the wheel.
If I can deliver my product or feature to the market 20% faster that’s going to make more money than if I optimize my cloud infrastructure costs to save 50%.
As a business owner I don’t want to have to hire high-paid specialists who understand the deep intricacies of data center infrastructure, I want to be able to pay people with highly common skills who can quickly translate my business logic to working software.
Shoutout to Hacker News for having IPv6 support!
And virtually everything inside AWS still requires IPv4, so even if you have zero need to reach out to the WAN, if you need any number of private AWS endpoints, you're going to be allocating some IPv4 blocks to your VPC :(.
2.) Market segmentation: keeps home users from easily hosting their own services without spending $$$ on an upgraded plan.
3.) Adding on to #2, I've seen claims of providers putting IPv6 behind NAT, so don't think full IPv6 acceptance will solve this problem.
I get annoyed even when what's offered is a single /64 prefix (rather than something like a /56 or even /60), but putting IPv6 behind NAT is just ridiculous.
Port forwarding, external/internal address split, split-horizon DNS, SNI proxies, NAT, hairpin routing - some of the hacks born mostly of the shortage of IPv4 address space.
Using GUA and ULA together solves enough to get by, but it's not ideal.
Instead I played with IPv6 at home to make sure I understood it well enough should it ever come up at work. We'll see!
For someone just getting started with networking and learning things, this seems the best way to go forward.
It is a damn service, which is defined as "you pay someone to do it".
(your second sentence is a bit confusing)
2) Companies will happily pay thousands in recurring fees for the built-in NAT gateway, but if an engineer asks for even half that as a one-off sum to motivate them to learn Linux networking/firewalling, they'd get a hard no, so why should they bother?
i've had to look at my nat gateway zero times since i set it up a couple years ago. i can't say that about any VM host i've got. to me, that's easily worth the few dollars a month that aws charges for it. it's cheaper than hiring somebody, and it's cheaper than me.
That said, the paid NAT gateways do also publish metrics. That can be nice when debugging a legitimate issue, such as when your gateway actually runs out of NAT ports to use.
The market will provide. In this case by increasing prices to the point of maximum value extraction from people who don't want to deal with all that. There's a high initial cost to moving to something else here, with a lot of people dragging along paying more than what the market would otherwise equalize to, out of avoiding that initial hurdle. (And long term commitment of a resource, of course, one with low average but indeterminate excursion cost.)
Why state this as absolute fact? Seems a bit lacking in epistemic humility.
Wait, is "seems lacking in epistemic humility" just coded language for "I disagree, therefore you couldn't possibly be thoughtful"?
Modern devs are helpless in the face of things I taught myself to do in a day or two when I was fourteen, and they’re paralyzed with terror at the thought of running something.
It’s “hard” goes the cliche. Networking is “hard.” Sys admin is “hard.” Everything is “hard” so you’d better pay an expert to do it.
Where do we get these experts? Ever wonder that?
It’s just depressing. Why even bother.
It really makes me worry about who will keep all this stuff running or build anything new in the future if we are losing not only skills but spine and curiosity. Maybe AI.
I think of AI as a kind of floor, a minimum required skill to be able to get a job as a professional anything. If you want to find paid work as a developer, you have to at least be better than AI at the job.
Optimistically AI will filter out all the helpless Devs who can't get anything done from the job market. "Code monkeys" won't be a thing.
Juniors will have to enter unpaid trainee programs I guess, but that might not be such a bad thing
I'm only in my 30s but I was thinking recently "when I'm retired I feel like I'm going to be telling stories about how back in my day we had this thing called the filesystem and you'd just browse it directly..."
What happened is that as a Xennial (young Gen X / old millennial) I know way more about computers than either generation to the side of me. This includes younger devs; I knew way more than them when I was their age. As a teen I was hacking C to get my 386, running Slackware Linux installed from floppies, online: I modded SLIRP to run on the Sun-3 I had dial-up access to so I could pipe serial SLIP through it. I learned all about everything happening under the hood on a network.
I don’t feel self congratulatory about this. I feel depressed. If the kids were all smarter than me it would give me more hope for the future.
And when you got things wrong back in the day, you came home from school, saw a very weirdly behaving computer, grumbled and reinstalled the OS. Nowadays it is a very different story with potentially very severe consequences.
And this is just about getting things wrong at home; in a corporate environment it is 100x more annoying. In corporate work you already spend 80% of development time figuring out how to do things and 20% on actual work - nobody has the time to teach themselves something outside their domain.
OSes are more secure. Isolation is better. Languages are better. Hardware is vastly cheaper and faster and more reliable. Everything is easier and faster and better.
In the corp world we have this absurd embarrassment of riches. There are like ten choices in every category. Half of it is free. It’s easier to set up and run than it was back then. Way easier. Hosting is silly cheap if you compare cost / performance.
People are just incurious and brainwashed with this weird sense of helplessness.
This security phobia is so overblown if you take some basic precautions and don’t run crap service software.
If I were hosting something controversial that might draw the ire of one of the insane political cults out there I’d run it through a free CDN maybe. That’s easy.
Isn't it better anyway for admin and security folks to have developers not get any ideas and stick to the bounds of their box?
I’m sure the list of things that you don’t know that some other developers do know is long.
No one is an "expert" at everything. I know AWS well (trust me on this) and I've used more services than you can imagine in a production capacity. I choose not to know the intricacies of Linux and front-end development, for instance. That's either "someone else's problem" or, in the former case, I just hand over a zip file with my code and run it in Lambda, run it as a Docker container on a managed Kubernetes/ECS cluster, deploy the container to Lambda (yes, you can deploy a Docker container to Lambda), or use Fargate (AWS manages the instances in the Docker cluster).
Unfortunately we have far less time to dig into each technology; the business side is squeezing us like a lemon to ship quickly.
Then I run my stuff locally.
And then I use SSH tunneling to forward the port to localhost on the remote machine. It's a systemd unit file, and it will reconstruct the tunnel every 30s if broken. So at most 30s of downtime.
Then nginx picks it up.
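A sketch of what such a unit file might look like (service name, ports, and host here are hypothetical); `Restart=always` plus `RestartSec=30` is what gives the ~30s worst-case downtime:

```ini
# /etc/systemd/system/jellyfin-tunnel.service (hypothetical example)
[Unit]
Description=Reverse SSH tunnel: local 8096 -> VPS localhost:8096
After=network-online.target

[Service]
# -N: no remote command; -R: reverse-forward a VPS port to the local service.
# ExitOnForwardFailure plus keepalives make ssh exit on a dead tunnel,
# so systemd can restart it.
ExecStart=/usr/bin/ssh -N -R 127.0.0.1:8096:127.0.0.1:8096 \
    -o ExitOnForwardFailure=yes \
    -o ServerAliveInterval=15 -o ServerAliveCountMax=2 \
    tunnel@vps.example.com
Restart=always
RestartSec=30

[Install]
WantedBy=multi-user.target
```

nginx on the VPS then just proxies to 127.0.0.1:8096.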
I use Tailscale myself, but if you want everything totally under your control (and don't want to go to the trouble of setting up headscale or something similar) then that's one of the absolutely simplest, lowest-effort ways of doing it. EDIT: Well, except for the VPS box I suppose, but if that provider went down or you had any reason to suspect they were doing anything suspicious, it would be quite simple to jump to a different provider, so that's pretty darn close to controlling everything yourself.
Particular things: I use a Let's Encrypt wildcard certificate, so my subdomains aren't leaked. If you register a certificate per subdomain, LE publishes every one of your subdomains to the public Certificate Transparency logs. Learned that the hard way and had to burn that domain.
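With certbot, wildcard issuance (which requires a DNS-01 challenge) looks roughly like this; `example.com` is a placeholder:

```shell
# One wildcard cert covers every subdomain, so only "example.com" and
# "*.example.com" (not the individual names) end up in the CT logs
certbot certonly --manual --preferred-challenges dns \
  -d 'example.com' -d '*.example.com'
```

A DNS plugin (e.g. for your provider's API) can automate the TXT-record step that `--manual` makes you do by hand.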
The VPS is from LowEndBox. Something like 2 cores, 20GB storage, 2GB RAM. But it runs perfectly fine.
I run jellyfin, audiobookshelf, Navidrome, and Romm. One SSH tunnel per application.
It would also be trivial to switch providers. But again, it's not a seedbox, I'm not doing torrents, not doing anything that would attract attention. And best of all, there's no evidence on the VPS: it's all SSL and SSH.
Client automatically deals with reconnecting, never have to touch it.
SSH tunnel would have been simpler, just didn’t want it open.
SSH tunnel probably needs the keep alive on, otherwise connection loss may not be detected.
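The keepalive bit is a couple of options in `ssh_config` (or equivalent `-o` flags); with the hypothetical values below, a dead connection is detected within about 30 seconds:

```ini
# ~/.ssh/config - probe every 15s, give up after 2 missed replies
Host tunnel-host
    ServerAliveInterval 15
    ServerAliveCountMax 2
    ExitOnForwardFailure yes
```

`ExitOnForwardFailure` additionally makes ssh exit (rather than idle) if the port forward itself can't be established, which matters if a supervisor is restarting it.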
Only a tiny minority of people ever has to look at those addresses; the majority just types "facebook", hits enter, clicks the first Google result, and gets Facebook (because ".com" is too hard to type).
Yet again, another fundamental misunderstanding (either genuine or not, I'm not sure) about the low-level technologies and their origins that underpin all of this. "Can't we just..."? No.
At best, I remember the prefix of my private network, and a handful of single-number suffixes of important hosts (i.e. my LAN is 192.168.1.x, and I remember that .100 is my local file server...)
This was discussed in the early 1990s. Criteria were laid out for selecting the then-IPng (§5.1: addressing at least 10^12, roughly 2^40, end systems was the minimum):
* https://datatracker.ietf.org/doc/html/rfc1726
The winning proposal, SIPP, was originally 'only' 64 bits, but it was decided to go to 128:
* https://datatracker.ietf.org/doc/html/rfc1752
> I know its stupid, but ipv6 addresses are just so hard to remember and look at that I think its just human nature to gravitate towards the simplicity of ipv4.
If only there was a system that allowed for easy to remember human labels to be translated to a machine-usable sequence of bits that we call "an address"…
Repeat after me: NAT is not a firewall. And we need to stop pretending it is.
Security is not the purpose of a NAT. It's there to give you more IPs than you have. There's all sorts of NAT hole punching techniques. If you want a firewall, you need a firewall.
The last part isn't what adds the security, and you can absolutely NAT without preventing the "outside" subnets from being allowed to route to the "inside" subnet; it's just that NAT is almost always done on the same box providing the stateful firewall, so people tend to think of the three functions as combined in concept as well.
Only under very specific conditions. Technically, if you send a packet with destination 192.168.1.10 directly to the WAN port of the router, then yes, it can route it inside. The problem is how to deliver that packet over the internet: you need to be connected to the exact same network segment to pull it off.
And you don't need a stateful firewall to deny that kind of packet.
You only need state to block inbound-originated sessions (i.e. the one-way door into a private subnet).
Death, taxes, and transfer fees.
This is probably a result of all AWS services being independent teams with their own release schedule. But it would have made sense for AWS to coordinate this better.
It’s a (small) moving part we now have to maintain. But it’s very much worth the massive cost savings in NATGateway-Bytes.
A big part of OpsLevel is we receive all kinds of event and payload data from prod systems, so as we grew, so did our network costs. fck-nat turned that growing variable cost into an adorably small fixed one.
Yes, in a very superficial sense, you can't literally route a packet over the internet backwards to a host behind NAT without matching a state entry or an explicit port forward. But implementing NAT on its own says nothing about the behavior of your router firewall with regard to receiving martians, whether the router firewall itself accepts connections, or whether the router firewall itself is running some service that causes exposure.
To actually protect things behind NAT you still need firewall rules and you can keep those rules even when you are not using NAT. Thus those rules, and by extension the protection, are separable from the concept of NAT.
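For instance, with nftables the protective rules are just a stateful forward policy, and they work identically whether or not any NAT rule exists (the interface names here are placeholders):

```shell
# Minimal stateful forward filter - no NAT involved anywhere
nft add table inet filter
nft add chain inet filter forward '{ type filter hook forward priority 0 ; policy drop ; }'
nft add rule inet filter forward ct state established,related accept
nft add rule inet filter forward iifname "lan0" oifname "wan0" accept
```

Adding masquerading later is a separate nat-type chain; it rewrites addresses but doesn't change this filtering policy at all.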
This is the kind of weird argument that has caused a lot of people who hadn't ever used IPv6 to avoid trying it.
Don’t forget source routing. That said, depending on your threat model, it’s not entirely unreasonable to just rely on your ISP’s configuration to protect you from stuff like this, specifically behind an IANA private range.
The last time I heard about source routing actually being a useful feature or a vulnerability used by hackers was the 1990's.
It's like we've been collectively trained to think of RFC1918 as "safe" and forgotten what a firewall is. It's one of those "a little knowledge is a dangerous thing" things.
The vast, vast majority of people do not know what NAT is: ask your mom, aunt, uncle, grandma, cousin(s), etc. They simply have a 'magic box' (often from the ISP) that "connects to Internet". People connect to it (now mostly via Wifi) and they are "on the Internet".
They do not know about IPv4 or IPv6 (or ARP, or DHCP, or SLAAC).
As long as the magic box is statefully inspecting traffic, which is done for IPv4-NAT, and for IPv6 firewalls, it makes no practical difference which address family you are using from a security perspective.
The rending of garments over having a globally routable (but not globally reachable, because of SPI) IPv6 address on your home network is just silliness.
Thinking NAT addresses are safe because… of any reason whatsoever, really… simply shows a lack of network understanding. You might as well be talking to a Flat Earther about orbital mechanics.
Are internet routers that do IPv4 NAT usually also running an IPv6 firewall (meaning they only let incoming connections in if explicitly allowed by some configuration)? Maybe that's where the insecurity comes from. A home NAT cannot work any other way (it fails "safely"), whereas an absent firewall usually means everything just gets through.
I do wonder how real the problem is, though. How are people going to discover a random IPv6 device on the internet? Even if you knew some /64 was residential, it's still impractical to scan and find anything there (18 quintillion possible addresses). If you scanned an address per millisecond it would take about 6×10^8 years, roughly 1/8 the age of the Earth, to scan a /64.
Are we just not able to think in such big numbers?
Consider the counter-factual: can you list any home routers/CPEs that do not do SPI, regardless of protocol? If someone found such a thing, IMHO there would be a CVE issued quite quickly for it.
And not just residential stuff: $WORK upgraded firewalls earlier in 2025, and in the rules table of the device(s) there is an entry at the bottom that says "Implicit deny all" (for all protocols).
So my question to NAT/IPv6 Truthers is: what are the devices that allow IPv6 connections without SPI?
And even if such a thing exists, a single IPv6 /64 subnet is as large as four billion (2^32) IPv4 Internets of 2^32 addresses each: good luck trying to find a host to hit in that space (RFC 7721).
In practice this has not been true for over 20 years.
IPv6 devices on SLAAC networks (which is to say, almost all of them) regularly rotate their IPv6 address. The protocol also explicitly encourages (actually, requires) hosts to have more than one IPv6 address active at any given time.
You are also making a wrong assumption that the externally visible address and port ranges chosen by the NAT device do not make the identity of internal devices easily guessable.
"Revisiting IoT Fingerprinting behind a NAT":
* https://par.nsf.gov/servlets/purl/10332218
"Study on OS Fingerprinting and NAT/Tethering based on DNS Log Analysis":
* https://www.irtf.org/raim-2015-papers/raim-2015-paper21.pdf
Also:
> […] In this paper, we design an efficient and scalable system via spatial-temporal traffic fingerprinting from an ISP's perspective in consideration of practical issues like learning-testing asymmetry. Our system can accurately identify typical IoT devices in a network, with the additional capability of identifying what devices are hidden behind NAT and the number of each type of device that share the same IP address. […]
* https://www.thucloud.com/zhenhua/papers/TON'22%20Hidden_IoT....
Thinking you're hiding things because you're behind a NAT is security theatre.
NAT, a.k.a. IP masquerading, does not do that: it only figures out that some ingress packets whose destination is the gateway actually map to previous packets coming from a LAN endpoint that were masqueraded before, performs the reverse masquerading, and routes the new packet there.
But plop in a route to the network behind it, and unmatched ingress packets definitely get routed to the internal side. To have that not happen you need to drop those unmatched ingress packets, and that's the firewall doing it.
Fun fact: some decade ago an ISP where I lived screwed that up. A neighbour and I figured out the network was something like that:
192.168.1.x --- 192.168.1.1 --
                              \
                               10.0.0.x ----> WAN
                              /
192.168.2.x --- 192.168.2.1 --
192.168.1 and 192.168.2 would be two ISP subscribers and 10.0.0.x some internal local haul. The 192.168.x.1 boxes would perform NAT but not firewall. You'd normally never see that 10.0.0.x, as things headed towards the WAN would get NAT'd (twice). But 10.0.0.x knew about both of the 192s, so you just had to add the respective routes to each other on the 192.168.x.1 boxes and bam, you'd be able to have packets fly through both ways, NAT be damned.
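In concrete terms, the "fix-up" that defeated the NAT was nothing more than one static route on each box (the 10.0.0.x next-hop addresses here are illustrative, matching the diagram):

```shell
# On 192.168.1.1: reach the neighbour's subnet via the shared 10.0.0.x haul
ip route add 192.168.2.0/24 via 10.0.0.2

# On 192.168.2.1: the reverse
ip route add 192.168.1.0/24 via 10.0.0.1

# With no filtering rules on either box, forwarded packets now flow between
# the two private subnets directly - NAT only ever touched WAN-bound traffic
```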
Network Address Translation is not a firewall and provides no magically imbued protection.
* https://kb.netgear.com/25891/What-is-the-De-Militarized-Zone...
It's just uncommon in consumer or prosumer devices.
A similar analogy is perhaps industrial washing machines vs consumer ones, or how printer/scanner combos are common (even in offices) while print shops and people who actually need a lot of paper have dedicated equipment that does either scanning or copying better.
It's also like a Leatherman: the tools all share some commonality (the need to be gripped), so there's a lot of combination; but a tradie would only use one as a last resort, often preferring a proper screwdriver.
Most NAT necessarily includes a stateful connection table; it's the same thing as the NAT flow table. The point of this whole trope is getting into people's heads not to forget about actually configuring that "free" firewall properly, since it'll just be a poor one otherwise.
However, the fees from AWS are atrocious on the NAT Gateway.
I just can't take articles seriously when they lead with these kinds of claims and then don't back them up, typically to give the article some sort of justification and/or weight. I didn't bother to read the rest.
Mine's in the living room, it says TP Link.
More seriously, NAT is fun and all but it can introduce unexpected behaviors that wouldn't exist in a firewall that doesn't do translation. Less is more.
nodesocket•2mo ago
The bash configuration is literally a few lines:
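For reference, a typical minimal NAT-instance setup is along these lines (a sketch of the common approach, not necessarily the exact script):

```shell
# Let the kernel forward packets between interfaces
sysctl -w net.ipv4.ip_forward=1

# Masquerade everything leaving via the instance's public interface
iptables -t nat -A POSTROUTING -o ens5 -j MASQUERADE
```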
Replace ens5 with your instance's network interface name. Also, VERY IMPORTANT: you must set source_dest_check = false on the EC2 NAT instances. And don't assign an EIP to your EC2 NAT instances (unless you absolutely must persist a given public IP), as that counterintuitively routes traffic through the public network; just use an auto-assigned public IP (no EIP).
unquietwiki•2mo ago
nodesocket•2mo ago
Nextgrid•2mo ago
topspin•2mo ago
That's what you did before AWS had the "NAT Gateway" managed service. It's literally called "NAT Instance" in current AWS documentation, and you can implement it in any way you wish. Of course, you don't have to limit yourself to iptables/nftables etc. OPNsense is a great way to do a NAT instance.
nodesocket•2mo ago
> NAT AMI is built on the last version of the Amazon Linux AMI, 2018.03, which reached the end of standard support on December 31, 2020 and end of maintenance support on December 31, 2023.
sarathyweb•2mo ago
vladvasiliu•2mo ago
Could you point me to somewhere I can read more about this? I didn't know there was an extra charge for using an EIP (other than for the EIP itself).
m1keil•2mo ago