I use OpenBSD as a workstation and it works great, but in a production environment I doubt I would use OpenBSD for critical items, mainly because there is no LTS release.
It is a sad state of affairs, because companies do not want, and will not want, a system you need to upgrade so often, even if its security is very good.
(IMO it's fine for heavier-write cases, too. It's just especially alright for the common deployment case where it's practically read-only anyway.)
You'll definitely want to have it on a UPS to avoid some potentially long and sometimes manual intervention on fscks after a power failure. And of course, backups for anything important.
I see we have a post-syspatch (6.1 - 2017), post-sysupgrade (6.6 - 2019) OpenBSD user in our midst. ;D
You are positively a newbie in the OpenBSD world!
Some of us are old enough to remember when OpenBSD updates were a complete pain in the ass, involving downloading shit to /usr/src and compiling it yourself!
And yes, back then I wasn't using OpenBSD.
According to Wikipedia, Debian has had apt since 1998.
My point is OpenBSD didn't have binary updates until well into the 2000s, as mentioned above: initially in 2017 with syspatch, and then finally full coverage in 2019 when sysupgrade came along.
As you can see in some old OpenBSD mailing list posts[1], there was a high degree of resistance to the very idea of binary updates. People were even called trolls when they brought up the subject[2], or were told they "don't understand the philosophy of the system"[3].
I just felt it was an important point of clarification on your original post. Yes, I agree, OpenBSD updates are painless ... now, today. But until very recent history they were far from painless.
[1] https://misc.openbsd.narkive.com/IOf20unK/openbsd-binary-upd... [2] https://marc.info/?l=openbsd-misc&m=117255609026625&w=2 [3] https://marc.info/?l=openbsd-misc&m=117256318700031&w=2
And there was something with... Postgres? At some point? Both, of course, upstream issues that couldn't be helped.
For Debian, I've managed to do in-place upgrades of a few machines from Debian 5 to 10 at the last place I worked.
OpenBSD will just tell you that maintaining an LTS release is not one of their goals and if that's what you need you'll be better served by running another OS.
One of the reasons the OP is moving to FreeBSD: five-year support cycles for the major release branches.
Damn I wish that they had expanded on this a bit (not to start a flame war, but to give readers a fuller picture, or even to prod the FreeBSD community into "fixing" those things)
edit: typo fix
We are aware that this isn't ideal for some users, but it was a necessary tradeoff. We might be able to improve this in the future (possibly as "security updates for the base system, but no ports support") but no guarantees.
On a 4 core machine I see between 12% to 22% improvement with 10 parallel TCP streams. When testing only with a single TCP stream, throughput increases between 38% to 100%.
I'm not sure that directly translates to better pf performance, and four cores is hardly remarkable these days but might be typical on a small low-power router?
Would be interesting if someone had a recent benchmark comparison of OpenBSD 7.8 PF vs. FreeBSD's latest.
[1] https://undeadly.org/cgi?action=article;sid=20250508122430
For a firewall I guess the critical question is the degree of parallelism supported by OpenBSD's PF stack, especially as it relates to common features like connection statefulness, NAT, etc.
Why would any BSD perform better?
(edit: genuinely curious why BSDs are such popular firewalls)
And if they are already using it on openbsd, it’s almost certainly an easier lift to move from one BSD PF implementation to another versus migrating everything to Linux and iptables.
Borrowing a demonstration from https://srobb.net/pf.html
tcp_pass = "{ 22 25 80 110 123 }"
udp_pass = "{ 110 631 }"
block all
pass out on fxp0 proto tcp to any port $tcp_pass keep state
pass out on fxp0 proto udp to any port $udp_pass keep state
Note that the last matching rule wins, so you put your catch-all, "block all", at the top. Then in this case fxp0 is the network interface. So they're defining where traffic can go to from the machine in question: any destination, as long as it's to port 22, 25, 80, 110, or 123 for TCP, and either 110 or 631 for UDP. The general shape of a rule is:

<action> <direction> on <interface> proto <protocol> to <destination> port <port> <state instructions>
int_if = "fxp0"
The BSDs still tend to use device-specific names versus the generic ethX or location-specific ensNN, so if you have multiple interfaces, knowing about internal and external may help the next person who sees your code to grok it.

I never liked iptables, but nftables is pretty nice to write and use.
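As a rough sketch, the same egress policy from the pf example might look like this in nftables. This is an assumption-laden illustration, not from the original post: the interface name "eth0" and file layout are invented, and the port lists are copied from the pf macros above.

```
# Hypothetical /etc/nftables.conf sketch of the pf example's egress policy
table inet filter {
  chain output {
    type filter hook output priority 0; policy drop;
    # let replies and established flows through
    ct state established,related accept
    # same port lists as the pf macros tcp_pass / udp_pass
    oifname "eth0" tcp dport { 22, 25, 80, 110, 123 } accept
    oifname "eth0" udp dport { 110, 631 } accept
  }
}
```

The anonymous sets in braces play the same role as pf's { } lists, and `policy drop` is the "block all" catch-all. Note that, unlike pf, nftables rules are first-match by default, so the ruleset reads top-down.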
And with one "flowtable" line added to your nftables.conf you can even in theory have faster routing when conntrack is active
https://thermalcircle.de/doku.php?id=blog:linux:flowtables_1...
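For reference, a minimal flowtable sketch looks something like the below; the device names eth0/eth1 are placeholders, and the table/chain layout is one common convention rather than anything mandated:

```
table inet filter {
  flowtable f {
    hook ingress priority 0
    devices = { eth0, eth1 }
  }
  chain forward {
    type filter hook forward priority 0; policy accept;
    # established flows get offloaded into the flowtable
    ip protocol { tcp, udp } flow add @f
  }
}
```

Once a connection is offloaded into the flowtable, subsequent packets bypass much of the usual netfilter traversal, which is where the speedup comes from.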
Compared to working with iptables, PF is like this haiku:
A breath of fresh air,
floating on white rose petals,
eating strawberries.
Now I'm getting carried away:
Hartmeier codes now,
Henning knows not why it fails,
fails only for n00b.
Tables load my lists,
tarpit for the asshole spammer,
death to his mail store.
CARP due to Cisco,
redundant blessed packets,
licensed free for me.
(From https://marc.info/?l=openbsd-pf&m=108507584013046&w=2 )

Nftables has improved the situation on Linux somewhat, but PF is incredibly intuitive and powerful. A league of its own when it comes to firewalling.
Regarding any BSD used for any purpose, BSD has a more consistent logic to how everything works. That said, if you're used to Linux then you're going to be annoyed that everything is very slightly different. I am always glad that multiple BSD projects have survived and still have some real users, I think that's good for computing in general.
But now with nftables I actually am going back to RHEL on Firewalls. I want something ultra-stable and long lived.
During that time the firewall tool du jour on Linux was ipchains, then iptables, and now nftables, and there have been at least some incompatible changes within the lifespan of each tool.
OpenBSD has an additional leg up in that incompatible changes between releases are concisely, clearly, and consistently documented, e.g. https://www.openbsd.org/faq/upgrade78.html. The last incompatible pf.conf syntax change I could find was for 6.9, nearly 5 years ago: https://www.openbsd.org/faq/upgrade69.html
Alternatively you can use nftables which has only been around for the past 12 years.
I realise that one change per quarter century is possibly a little fast paced for BSD but I can cope with it.
Either way, I don't think there is any defense for the strange syntax of IPtables, the chains, the tables. And that's coming from a person who transitioned fully from BSD to Linux 15 years ago, and has designed commercial solutions using IPtables and ipset.
If you search up a problem, you get real documentation, real technical blog posts, and real forum posts with actual useful conversations happening.
It sounds to me like you picked a bad Linux distro for your use case.
I've seen plenty of single-purpose Linux-based network appliances, and none of them have come across as flaky or unreliable because of the OS. In fact they can be easier to use for people who have more operational experience using Linux already.
They switched out ifconfig for some other thing. There have been about 3 different firewall systems that you've had to migrate between. Some of the newer systems (Docker, and I think maybe flatpak/the other one) bypass your firewall rules by default, which is a nasty surprise. A couple of times I did a system upgrade and my system wouldn't boot because drivers or boot systems or what have you had changed. That stuff doesn't happen on FreeBSD.
I'm sure to someone who lives and breathes Linux, or who works on this stuff, it's all trivial. But if it's not something you work on day-to-day, it's something you want to set and forget as an appliance, Linux adds pain.
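For context on the Docker point: Docker manages its own iptables chains for published ports, which sidestep typical INPUT-chain rules. The supported hook for your own policy is the DOCKER-USER chain; a hedged one-line sketch, with an invented interface and address range:

```
# DOCKER-USER is the chain Docker reserves for user policy; rules here
# are evaluated before Docker's own accept rules for published ports.
iptables -I DOCKER-USER -i eth0 ! -s 192.0.2.0/24 -j DROP
```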
> It sounds to me like you picked a bad Linux distro for your use case.
Were there any grounds at all in what I said for thinking that, or did you just make it up out of blind tribalism?
> In fact they can be easier to use for people who have more operational experience using Linux already.
Of course, but that's purely circular logic. Whatever OS you use for most of your systems, systems using that OS will be easier for you to use.
I've run a lot of Linux firewalls over the decades, but FreeBSD's shaping is <chefs kiss>
Linux is enormously higher performance but it is a huge pain in the ass to squeeze the performance out AND retain any level of readability
which is why there are like a dozen vendors selling various solutions that quietly compile their proprietary filter definitions to bpf for use natively in the kernel netfilter code...
Although the article also uses weasel words like "sufficiently good" performance so it sounds like their BSD 10G performance isn't that good either.
(I'm the author of the linked-to article.)
This person seems like they know what they are talking about and have given it serious thought, but I cannot fathom how you could reach such a conclusion today.
Ubuntu/Linux do have reasonable performance, but I think they prefer PF firewalls, so that makes Linux a non-option for firewalls.
Personally, I don't really care for PF, but it offers pfsync, which I do care for, so I use it and ipfw... but I need to check in, I think FreeBSD PF may have added the hooks I use ipfw for (bandwidth limits/shaping/queue discipline).
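For readers unfamiliar with pfsync: on OpenBSD it pairs with CARP so a standby box can take over with the state table intact. A minimal sketch, with invented addresses, interface names, and password:

```
# /etc/hostname.carp0 -- shared virtual IP for the LAN side
inet 192.0.2.1 255.255.255.0 NONE vhid 1 carpdev em0 pass examplepw

# /etc/hostname.pfsync0 -- replicate pf state to the peer over a dedicated link
syncdev em2
up
```

With this in place, established connections can survive a failover because the peer already holds the relevant state entries.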
How is that even possible. What's the excuse?
On Windows, setting process affinity has been around since the Windows NT days.
OpenBSD is a small, niche operating system, and it really only gets support for something if it solves a problem for someone who writes OpenBSD code. In a way, this is nice, because you never get half-assed features that kinda-sorta work sometimes, maybe. Everything either works exactly as you'd expect, or it's just not there.
I love OpenBSD, but there are some tasks it's just not suited for, and that's fine, too.
I am particularly reminded of speculative execution optimizations allowing attacks like Spectre and Meltdown in 2017.
https://en.wikipedia.org/wiki/Transient_execution_CPU_vulner...
Also many IT professionals that have used Linux will be familiar with a Debian or a Debian derivative such as Ubuntu. That simply isn't the case with OpenBSD.
I recently installed OpenBSD on my old laptop to try it out and I found it difficult even though I used to use it at University back in the late 2000s.
I went through the process again just this weekend, because the disk in my firewall died. It's obvious that they continue to put a lot of effort into the OS. It's too bad that I can't use it as my daily driver, because I gladly would.
What I find with the BSDs is that it is difficult to look up how to do something quickly via a web search. Yes, there is a man page that will tell you how to use whatever, but knowing where you're supposed to look to solve "why doesn't two button scroll work" isn't immediately obvious.
I was mucking around with FreeBSD on my old laptop and it works well and it isn't too bad to get stuff going if you are following the handbook, there is still that "how do I get <thing> working". I think the OS is good underneath, but I kinda want two finger scrolling to kinda work when I install cinnamon and X.
Debian is at the stage now of install, you have desktop and most stuff just works at least on a x86-64 system. If I want to install anything, it is download deb / flatpak and I am done.
But as a desktop OS, yes they lack in a lot of areas, mainly hardware support/laptop support.
While there are out-of-date tutorials in Linux land, at least I can find out how I might do something and then figure things out from there. I do know how to use the man page system; simply knowing what to look for is the biggest challenge.
E.g. I was trying to configure two finger scrolling. The FreeBSD wiki itself appeared out of date. So it looks like you use the libinput X driver package (which I forget the name of now) and do some config in X. It would be nice if this were covered in the handbook, as I think a lot of people would like two finger scrolling working on their laptops.
> But as a desktop OS, yes they lack in a lot of areas, mainly hardware support/laptop support.
Actually FreeBSD fares quite well hardware-wise, at least on some of the hardware I have. My laptops are all boring corp business refurbs that I know work well with Linux/BSDs.
The problem is that often I require using software which does not work on FreeBSD/OpenBSD or is difficult to configure.
The other issue is that there are things in pkgs (at least with FreeBSD) that appear to have been broken for quite a while, so configuring a VM with a desktop resolution above something relatively low isn't possible, at least with QEMU.
OpenBSD is very different from FreeBSD in this regard. OpenBSD mostly works out of the box.
I am quite familiar with the BSDs. I've tried NetBSD, OpenBSD and FreeBSD when I used to muck around with this stuff daily.
I'm honestly not sure what its use case is in 2025, beyond as a research OS.
https://utcc.utoronto.ca/~cks/space/blog/sysadmin/UsingBindN...
I am not sure what role these computers that may transition to Ubuntu play; there are probably good reasons. I wish he had expanded on it.
We may at some point switch our remaining OpenBSD DHCP server to Ubuntu (instead of to FreeBSD); like our DNS resolvers, it doesn't use PF, and we already operate a couple of Ubuntu DHCP servers. In general Ubuntu is our default choice for a Unix OS because we already run a lot of Ubuntu servers. But we have lots of PF firewall rules and no interest in trying to convert them to Linux firewall rules, so anything significant involving them is a natural environment for FreeBSD.
(I'm the author of the linked-to article.)
I mean.. It's one pkg_add away. It's a weird constraint to give yourself if that was the problem, considering you absolutely had to install it on your replacement ubuntu servers.
In fact if you asked me to explain the difference between obsd and fbsd it is exactly this. fbsd focuses on performance and obsd focuses on ergonomics.
The state of affairs you described is much less the case now than in the past.
I can't be worried that critical parts of my network won't come back up because the box spontaneously rebooted or the UPS battery ran out (yes it happens — do you load test your batteries — probably not) and their bubblegum-and-string filesystem has corruption and / and /usr won't mount and I gotta visit the console like Sam Jackson in Jurassic Park to fsck the damn thing.
Firewalls are critical infra — by definition they can't be the least reliable device in the network.
I've been working with systems for a long time, too. I've screwed things up.
I once somehow decided that using an a.out kernel would be a good match for a Slackware diskset that used elf binaries. (It didn't go well.)
In terms of filesystems: I've had issues with FAT, FAT32, HPFS, NTFS, EXT2, ReiserFS, EXT3, UFS, EXT4, and exFAT. Most of those filesystems are very old now, but some of these issues have trashed parts of systems beyond comprehension, and those issues are part of my background in life whether I like it or not.
I've also had issues with ZFS. I've only been using ZFS in any form at all for about 9 years so far, but in that time I've always been able to wrest the system back into order, even on the seemingly most-unlikely, least-resilient, garbage-tier hardware -- including after experiencing unlikely problems that I introduced myself by dicking around with stuff in unusual ways.
Can you elaborate upon the two particular unrecoverable issues you experienced?
(And yeah, Google is/was/has been poisoned for a long time as it relates to ZFS. There was a very long streak of people proffering bad mojo about ZFS under an air of presumed authority, and this hasn't been helpful to anyone. The sheer perversity of the popular myths that have popularly surrounded ZFS are profoundly bizarre, and do not help with finding actual solutions to real-world problems.
The timeline is corrupt.)
Cyberjock sends his regards, I'm sure.
I've been using ZFS since it initially debuted in Solaris 10 6/06 (also: zones and DTrace), before then using it on FreeBSD and Linux, and I've never had issues with it. ¯\_(ツ)_/¯
I'm including that. zfs takes more skill to manage properly.
When it is treated as just a filesystem, then it works about like any other modern filesystem does.
ZFS features like scrubs aren't necessary. Multiple datasets aren't necessary -- using the one created by default is fine. RAIDZ, mirrors, slog, l2arc: None of that is necessary. Snapshots, transparent compression? Nope, those aren't necessary functions for proper use, either.
There's a lot of features that a person may elect to use, but it is no worse than, say, ext4 or FFS2 is when those features are ignored completely.
(It can be tricky to get Linux booting properly with a ZFS root filesystem. But that difficulty is not shared at all with FreeBSD, wherein ZFS is a native built-in.)
I will admit though, to truly get zfs you need to change how you think about filesystems.
OpenZFS 2.2 added support for overlays, so you can have the main pool(s) mounted as read-only:
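A rough sketch of what that can look like on Linux; the dataset and mount point names are invented, and this assumes OpenZFS 2.2's overlayfs compatibility so the ZFS dataset can serve as the lower layer:

```
# keep the main dataset read-only
zfs set readonly=on tank/sys

# scratch space for writes, discarded on reboot
mount -t tmpfs tmpfs /run/overlay
mkdir -p /run/overlay/upper /run/overlay/work

# merge: reads fall through to the read-only ZFS, writes land in tmpfs
mount -t overlay overlay \
    -o lowerdir=/tank/sys,upperdir=/run/overlay/upper,workdir=/run/overlay/work \
    /mnt/sys
```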
this makes no sense. firewalling does not touch the filesystem very much if at all.
what FS is being used is essentially orthogonal to firewalling performances.
if anything, having a copy-on-write filesystem like ZFS on your firewall/router means you have better integrity in case of configuration mistakes and OS upgrade (just rollback the dataset to the previous snapshot!)
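As a concrete sketch of that rollback workflow (the dataset name zroot/ROOT/default is a common FreeBSD convention, invented here for illustration):

```
# before touching the config or upgrading the OS
zfs snapshot zroot/ROOT/default@pre-upgrade

# ...upgrade goes sideways...

# put the dataset back exactly as it was
zfs rollback -r zroot/ROOT/default@pre-upgrade
```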
ZFS works just fine with partitions, if that's how a person/company/org wants to use it today.
This is why you have failover for firewalls. The loss of any single device isn't that important.
That said, I can't find fault in the filesystem, haven't personally encountered an issue with it, other than it being slow.
You should be testing failover regularly, just like you're testing backups and recovery, and other things that should not "need" to happen but have to actually work when they do.
A good time would be during your monthly/quarterly/(bi)annual/whatever patch cycle (and if there are no patches, then you should just test failover).
> visit the console like Sam Jackson in Jurassic Park
Consoles aren't so unusual for most server admins, IME. They're the most common tool.
The components and their potential benefits aren't really consequential; performance is. Sometimes, the hi-spec components are technologically interesting and exciting to geeks (me too), but have little practical value, especially maximalist components like ZFS. I've never needed it, for example. Very rarely could a journaling file system provide an actual benefit, though I don't object to them.
Sometimes the value is negative because the complexity of those components adds risk. KISS.
Journaling: write the journal, then write the filesystem. In the event of a sudden power outage, either the journal will be partially corrupt and discarded, or the filesystem will be corrupt and the journal can be replayed to fix it. The problem is that now you are duplicating all metadata writes.
Softupdates: reorder the writes in memory so that as the filesystem is written it is always in a consistent state.
So softupdates was a clever system to reduce metadata writes, perhaps too clever: apparently it had to be implemented chained through every layer of the filesystem, nobody but the original author really understood it, and everyone was avoiding doing filesystem work for fear of accidentally breaking it. And it may not have even worked; there were definitely workloads where softupdates would hose your data. (I am not exactly sure, but I think it was a ton of small metadata rewrites into a full disk.) So when someone wanted to do work on the filesystem but did not want to deal with softupdates, obsd in characteristic fashion said "sure, tear it out". It may come back, I don't know the details, but I doubt it. It sounds like it was a maintenance problem for the team.
Journaling conversely is a sort of inelegant brute force sort of mechanism, but at least it is simple to understand and can be implemented in one layer of the filesystem.
Log message:
Make softdep mounts a no-op
Softdep is a significant impediment to progressing in the vfs layer
so we plan to get it out of the way. It is too clever for us to
continue maintaining as it is.
https://undeadly.org/cgi?action=article;sid=20230706044554

This can happen on any filesystem unless you have a very good power supply which can buffer flushing of buffers to disk, and/or battery-backed caches.
I am not sure there is a more robust, or simple, filesystem in use today. Most networking devices, including, yes, your UPS, use something like FFS to handle writeable media.
I am not accustomed to defending OpenBSD in any context. It is, in every way, a museum, an exhibition of clever works and past curiosities for the modern visitor.
But this is a deeply weird hill to die on. The "Fast File System," fifty years old and audited to no end, is your greatest fear? It ain't fast, and it's barely a "file system," but its robustness? indubitable. It is very reliably boring and slow. It is the cutting edge circa 2BSD
edit: I am mistaken, FFS actually dates to 4.2BSD. It is only about 42 years old, not 50. Pardon me for my error.
The lack of at least a journaling FS is inexcusable in a modern OS. Linux and Windows have had it for 25 years by now, and we could argue softupdates are roughly equivalent (FreeBSD has had SU+J for years now too).
Yes, you can forward 10Gbit/s with linux using VPP, but you cannot forward at that rate with small packets and stateful firewall. And it requires a lot of tuning and a large machine.
A used SRX4200 from juniper runs at around 3k USD and you can even buy support for it and you can forward at like 40Gb/s IMIX with it.
I still prefer PF syntax over everything else though.
OpenBSD is going through a slow fine-grained locking transformation that FreeBSD started over 20 years ago. Eventually they will figure out they need something like epoch reclamation, hazard pointers, or RCU.
sysctl hw.smt=1
under obsd.

This was doable back in 2008 with about $15k of x86 gear and a Linux kernel and a little trickery with pf_ring. The minute AMD K10 and Intel Nehalem dropped, high routing performance was mostly a software problem... Which is cool as hell, compared to the era when it required elaborate dedicated hardware, but it does not make it cheap or easy. Just, commodity. Expensive commodity.
Now you can buy a device off the shelf for $800 that will do it on the CPU, to avoid the cost of Cisco or Juniper, and it has a super simple configuration interface for all the software-based features. Everything you could do in L3/L4 on a Linux platform in 2008, for like, 1/16th the price, with vastly less engineering effort. It is just like, a thing you buy, and it all kinda works outta the box.
No pf_ring trickery, no deep in-house experience, just a box you buy on a web site and it moves 10 gbps with filtering for $800
There's no real magic here: they use absolutely shockingly enormous ARM chips from Amazon/Annapurna. You can build an $800 commodity platform that rivals a $15k commodity platform in 2008, and both of them replace what used to cost $500k.
Is it as good as Cisco or Juniper? oh, certainly not. Will it route and filter traffic at much greater rates, for $800, than anything they have ever been bothered to offer? ABSOLUTELY
Although, their original paper says they used a 2-socket prototype and got some very impressive numbers: https://www.sigops.org/s/conferences/sosp/2009/papers/dobres...
So maybe you could skate by with a slightly cheaper machine ;)
Turns out, the main reason `pf` is non-portable is that half of it runs inside Berkeley-type network stacks, often in kernel space, but the remainder is in user space.
So the miserable single-threaded `pf` on OpenBSD is still, in some part, single-threaded on FreeBSD, but for certain rule-sets, you will get the benefits of FreeBSD's intensively re-entrant and multithreaded TCP/IP, because those parts of `pf` are embedded in the network stack.
So depending on workload, a given `pf` configuration on OpenBSD might be perfectly equal to its FreeBSD counterpart, or hundreds of times slower. I feel like this gives a lot of context to the OP's grousing around "10 gbps"
P.S. To confess my own biases: a port of a `pf` configuration to a platform where some rulesets are high performance and others are not, that would not be very attractive to me. An improvement, but not a solution. I would be looking to move to a Linux stack. Baby steps, I guess. I have done worse things to better people!
P.P.S. I suspect this coupling between a re-entrant TCP/IP stack and a single-threaded firewall process is also why FreeBSD `pf` is never even close to feature parity with its OpenBSD counterpart -- it is just easier to do new stuff with a simpler model