This is indeed a problem now that Google search is next to useless, and AI is further degrading the quality.
I work around it to some extent by keeping my local knowledge base up to date, as much as that is possible; and using a ton of scripts that help me do things. That works. I am also efficient. But some projects are simply underdocumented. A random example is, in the ruby ecosystem, rack. Have a look here:
Now find the documentation ... try it.
You may find it:
Linked from the github page.
Well, have a look at it.
Remain patient.
Now as you have looked at it ... tell me if someone is troll-roflcopter-joking you.
https://rack.github.io/rack/main/index.html
Yes, you can jump to the individual documentation of the classes, but does that really explain anything? It tells you next to nothing about rack itself.
If you are new to ruby, would you waste any time with such a project? Yes, rack is useful; yes, many people don't use it directly but may use sinatra, rails and so forth, I get it. But this is not the point. The point is whether the documentation is good or bad. And that is not the only example. See ruby-webassembly. Ruby-opal. Numerous more projects (I won't even mention the abandoned gems, but this is of course a problem every language faces, some code will become outdated as maintainers disappear.)
So this is really nothing unique to Linux. I bet on BSD you will also find ... a lack of documentation. Probably even more as so few blog about BSD. OpenBSD claims it has great documentation. Well, if I look at what they have, and look at Arch or Gentoo wiki, then sorry but the BSDs don't understand the problem domain.
It really is a general problem. Documentation is simply too crap in general, with a few exceptions.
> if the team behind this OS puts this much care into its documentation, imagine how solid the system itself must be.
Meh. FreeBSD documentation can hardly be called the stand-out role model here either. Not sure what the BSD folks think about that.
> I realized almost immediately that GNU/Linux and FreeBSD were so similar they were completely different.
Not really.
There are some differences but I found they are very similar in their respective niche.
Unfortunately my finding convinced me that Linux is the better choice for my use cases. This ranges from e. g. LFS/BLFS to 500 out of top 500 supercomputers running Linux. Sure, I am not in that use case of having a supercomputer, but the point is about quality. Linux is like chaotic quality. Messy. But it works. New Jersey model versus [insert any high quality here]. https://www.jwz.org/doc/worse-is-better.html
> Not only that: Linux would overheat and produce unpredictable results - errors, sudden shutdowns, fans screaming even after compilation finished.
Well, hardware plays a big factor, I get it. I have issues with some nvidia cards, but other cards worked fine on the same computer. But this apocalypse scenario he writes about ... that's rubbish nonsense. Linux works. For the most part - depending on the hardware. But mostly it really works.
> I could read my email in mutt while compiling, something that was practically impossible on Linux
Ok sorry, I stopped reading there. My current computer was fairly cheap; I deliberately put in 64GB RAM (before the insane AI-driven cost increases) and that computer works super-fast. I compile almost everything from source. I have no real issue with anything being too slow; admittedly a few things take quite a bit of compile-power, e. g. LLVM, or qt, which takes a while to compile from source, yes, even on a fast computer. But nah, the claim that FreeBSD is so much faster than Linux is simply not factual. It is rubbish nonsense. Note that OpenBSD and NetBSD folks never write such strangeness. What's wrong with the FreeBSD guys?
Although, on compatibility: under 9front everything is statically compiled, period. Compile, store, copy back, run.
Docs? man pages and /sys/doc. Easier to understand and set up.
Current Unixes are too bloated. Yes, FreeBSD too; NetBSD and OpenBSD a bit less. GNU/Linux is such a monster that over time I'd guess Guix will just keep Coreutils as a toolbox and everything else will be JIT-ed Guile/Scheme, and for the rest of the distros it will just be a Wayland+Gnome+Flatpak OS. Now try hacking that. Try creating a working 32-bit OS with it. Documentation beyond Guix's info files? Good luck; man will be a legacy tool called from Info under Guix.
SystemD? Over time, Gnome won't run on non-systemd OSes. Forget it under the BSDs and shims; KDE will be the only option (and under Guix too). The irony: GNU Network Object Model Environment, outside of the GNU OS.
Meanwhile, by default GNU/Linux has more proprietary bits in the kernel than GNU bits. Untar it: Radeon depends on nonfree firmware, and so do tons of SoCs, audio devices, and wireless cards. Linux-libre + Guix? Not with Gnome; maybe with a Guile JIT/AOT'ed desktop environment, a la Cosmic but with AOT'ed Guile instead of Rust. Forget cohesiveness: your Redhatware OS will be as native to the rest as Waydroid inside Fedora Silverblue. Seamless, but not native. And with similar issues running some software in Waydroid without hacks faking the existence of some blobs.
And as tons of infra depends on systemd and blobs, guess what will happen to Arch and the rest of the distros. They will just be second-class, Pacman-packaged Fedoras.
Not my idea of love. Maybe that hardware was supported on Linux. Switching from Linux to FreeBSD so that you can later switch to Mac when you get frustrated with unsupported hardware is not a good pitch.
Using macOS meant we got laptop hardware that worked reliably, including Wi-Fi, running a more or less BSD-derived userspace.
The lack of graphics and Wi-Fi driver support on the *BSDs is not Apple’s fault. It has always been a resource issue.
Thanks to the AT&T lawsuit, Linux secured momentum at a critical juncture — and here we are. Path dependence and the complexities of real life mean that “winning” is never just a question of technical merit.
Imagine quitting MacOS because it doesn't support Realtek RTL8188CUS.
I delayed upgrading to 15.0 after it was released, but last weekend I finally did it, and it left me wondering why I hadn't done it sooner, because it went quickly and smoothly.
Is there anything FreeBSD can do that, say, Debian cannot? Probably not (at least I cannot think of anything). When I set up the server, ZFS was a huge selling point, but I heard that it works quite well on Linux, these days. But I appreciate the reliability, the good documentation, the community (when I need help).
The main gripe is probably Docker and/or software depending on Linux-isms that can't run natively without resorting to bhyve or something like it.
And this is the part of the situation that's going to get worse: io_uring will become more popular in language runtimes, and IIRC it's not trivial to emulate via existing FreeBSD mechanisms (kqueue).
IIRC Docker for Mac uses xhyve (a bhyve port / bhyve-inspired) to run containers in a Linux VM, and MS went for a paravirtualized Linux kernel for WSL2, while FreeBSD's Linux emulation has been "good enough" so far.
But I think that for containers it's either time to shape up Linux emulation well (It's ironic that WSL1 ironed out their worst quirks just as WSL2 was introduced, although that was without io_uring) or just add an option for Podman to have a minimal pv-Linux kernel via bhyve to get better compatibility.
I wonder if FreeBSD ought to consider a WSL2-style approach to Linux binary compatibility, too.
Keeping the Linux syscall compatibility layer up-to-date has always been a resource problem, especially when syscalls depend on large, complex Linux kernel subsystems that just don’t map cleanly to FreeBSD kernel facilities.
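For context, the compatibility layer in question is the Linuxulator, which ships in base and is enabled with a few commands. A sketch, assuming a recent FreeBSD with root access (`linux_c7` is the CentOS 7-based userland meta-package from ports):

```shell
# Sketch: enabling FreeBSD's Linux binary compatibility layer (Linuxulator).
# Requires root on a FreeBSD host.
sysrc linux_enable="YES"     # load the linux(4) kernel modules at boot
service linux start          # load modules and mount linprocfs/linsysfs now
pkg install linux_c7         # install a Linux userland under /compat/linux
# After that, many Linux ELF binaries run directly from /compat/linux.
```

The gap the comment describes shows up exactly here: simple syscalls translate fine, but anything leaning on cgroups, namespaces, or io_uring has no clean FreeBSD counterpart.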
Stability of user interface and documentation.
But it probably has to change a lot for every major release, because so many things change. FreeBSD major releases have changes too, but a lot of the user interfaces are very stable and so the documentation can be too. Stable documentation allows time for it to be edited and revised to become better documentation, as well as developing quality translations.
That said, for non-core utilities on Linux it's pretty hit-or-miss. The BSDs are generally pretty consistent in what they do offer, and that's what I love about them. Of course it's a different development model and it shows.
On FreeBSD I know it's always going to work.
ZFS boot environments.
One could install Debian's root on ZFS by following the OpenZFS documentation guide, combine it with ZFSBootMenu (or similar), but there won't be any upstream support from the Debian project itself.
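For comparison, the FreeBSD workflow being contrasted here is short; a sketch using bectl(8) from base, assuming a ZFS-on-root install with root access (the boot environment name and release number are illustrative):

```shell
# Sketch: ZFS boot environments on FreeBSD with bectl(8).
bectl create pre-upgrade            # clone the current root into a new BE
bectl list                          # show boot environments and which is active
freebsd-update -r 15.0-RELEASE upgrade && freebsd-update install
# If the upgraded system misbehaves, fall back and reboot:
bectl activate pre-upgrade
shutdown -r now
```

The point is that the rollback target is a first-class filesystem object, not a distro-specific image format.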
The Nitrux Linux distribution is based on Debian and provides an immutability feature similar to boot environments, but you can't treat your immutable boot images the same way as your mutable data, the way you can with ZFS datasets on FreeBSD.
I have a higher opinion of ZFS than I do of btrfs, but FWIW snapper+btrfs has worked well for me on openSUSE Tumbleweed for ten years now, too.
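The snapper workflow alluded to is similarly compact; a sketch for an openSUSE system with a btrfs root, assuming root access (the snapshot number 42 is illustrative, taken from `snapper list`):

```shell
# Sketch: snapshot/rollback with snapper on a btrfs root (openSUSE).
snapper create -d "before update"   # manual snapshot with a description
zypper dup                          # Tumbleweed upgrade; creates pre/post snapshots itself
snapper list                        # find the snapshot number to return to
snapper rollback 42                 # make a writable copy of snapshot 42 the next root
reboot
```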
If you asked the opposite (what can Debian do that FreeBSD cannot) I would have more to say and it would mostly be preceded by "I know FreeBSD is not Linux but ...". Whenever I need to do any sort of maintenance or inspection I have to look up the equivalent commands for things like `lsblk` and something nested in `/usr/etc/...` when I'm used to finding it in `/etc/` over every other system.
This is a consequence of both FreeBSD's reliability in needing very infrequent attention and my limited use-cases to use it. As a NAS it is great but I can't touch it without full-text search of all my notes on the side! Either way, no regrets about learning and relying on it after ~18 months so far.
I haven't done that yet because I think I'd want to switch to pkgbase but that makes me nervous. Did you go with that option or continued to use the sets?
Yes. Emulate traffic latency using IPFW and dummynet[0]. There is no Linux (or OpenBSD, NetBSD) counterpart.
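For readers who haven't seen it, the dummynet setup is only a few commands; a sketch, requiring root on FreeBSD (delay, bandwidth, and rule numbers are illustrative):

```shell
# Sketch: inject latency and cap bandwidth with ipfw + dummynet on FreeBSD.
kldload dummynet                           # load the module if not in the kernel
ipfw pipe 1 config delay 100ms bw 10Mbit/s # pipe with 100 ms delay, 10 Mbit/s cap
ipfw add 100 pipe 1 ip from any to any     # push all IP traffic through the pipe
# Undo: ipfw delete 100 && ipfw pipe 1 delete
```

Linux can approximate the delay part with tc/netem, but the pipe/queue model here is what the commenter means by "no counterpart."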
The ZFS implementation is less buggy.
Immich assumes you're running Docker and I can't seem to get Linux running in a bhyve VM with Intel Quick Sync acceleration.
Ubuntu could have been the one, but they reversed course after dropping support for Zsys in 2022[1].
If there are others, then please let me know, but as far as I can tell, the closest approximations in Linux are:
- Btrfs with Snapper in OpenSuse Tumbleweed/MicroOS
- Snapshot Manager/Boom in RHEL
- OStree in Fedora Atomic, CarbonOS, EndlessOS
- Bootable container implementations in Fedora CoreOS, RHEL10, CarbonOS, Bazzite, BlendOS, etc.
- Snaps in Ubuntu Core
- Generations in NixOS and Guix
- A/B update mechanism in ChromeOS, IncusOS
- OverlayFS in Nitrux Linux
- Ad-hoc implementations with Arch, Alpine, etc.
Excluding the ad-hoc implementations, only the OpenSuse and Red Hat approaches allow you to treat your system image and system data the same way. They're great, but fundamentally incompatible with each other, and neither has caught on with other distributions. The capabilities of both are limited compared to ZFS.
The strangest part of the Linux situation IMHO is that every time ZFS on Linux is discussed, someone will invariably bring up XFS. For the past decade, XFS on Linux has supported Copy-on-Write (CoW) and snapshot-style copies via reflinks. If this is the preferred path on Linux (for users who don't want the checksumming of ZFS/Btrfs/Bcachefs), then how come no major distro besides Red Hat has embraced it[2] to provide update rollback functionality?
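The CoW feature in question is reflink copies; a sketch you can try yourself (on XFS it requires `reflink=1` at mkfs time; `--reflink=auto` degrades to a plain copy on filesystems without cloning support):

```shell
# Sketch: reflink (CoW) copies with GNU cp. On XFS/btrfs the clone shares
# extents with the original until either file is modified.
dd if=/dev/zero of=orig.img bs=1M count=4 status=none
cp --reflink=auto orig.img clone.img   # CoW clone where supported, else normal copy
cmp orig.img clone.img && echo identical
```

`cp --reflink=always` fails loudly instead of falling back, which is a handy way to check whether a given filesystem actually supports cloning.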
I concede that most of the other approaches do provide a higher level of determinism about what your root system looks like after an upgrade. It's powerful when you can test that system as an OCI container (or as a VM with Nix/Guix). FWIW, FreeBSD can approximate this with the ability to use its boot environments as a jail[3].
[0] https://daemonforums.org/showthread.php?t=7099
[1] https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1968...
[2] https://docs.redhat.com/en/documentation/red_hat_enterprise_...
[3] https://man.freebsd.org/cgi/man.cgi?query=bectl&sektion=8&ma...
I'm in the process of converting and consolidating all my home infra into a mono-compose, for the simple reason I don't want to fiddle with shit, I just want to set-and-forget. The joy of technology was in communications and experiences, not having to dive through abstraction layers to figure out why something was being fiddly. Containers promised to remove the fiddliness (as every virtualization advancement inevitably promises), and now I'm forced to either fiddle with Docker and its root security issues, fiddle with Podman and reconfiguring the OS for lower security so containers don't stop (or worse, converting compose to systemd files to make them services), or fiddle with Kubernetes to make things work with a myriad of ancillary services and CRDs for enterprises, not homelabs.
For two years now, there's been a pretty consistent campaign of love-letters for the BSDs that keep tugging at what I love about technology: that the whole point was to enable you to spend more time living, rather than wrangling what a computer does and how it does it. The concept of jails where I can just run software again, no abstractions needed, and trust it to not misbehave? Amazing, I want to learn more.
So yeah, in lieu of setting up the second NUC as a Debian HA node for Docker/QEMU failover, I think I'm going to slap FreeBSD on it and try porting my workloads to it via Jails. Worst case scenario, I learn something new; best case scenario, I finally get what I want and can finally catch up on my books, movies, shows, and music instead of constantly fiddling with why Plex or Jellyfin or my RSS Aggregator stopped functioning, again.
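For what it's worth, a service jail of the kind described needs very little configuration. A sketch of appending an /etc/jail.conf entry, assuming a FreeBSD host with root access; the jail name, path, address, and NIC are all hypothetical placeholders:

```shell
# Sketch: minimal service-jail config on FreeBSD. "media", the path,
# the IP address, and em0 are hypothetical; adjust for the actual host.
cat >> /etc/jail.conf <<'EOF'
media {
    path = "/usr/local/jails/media";
    host.hostname = "media.example";
    interface = "em0";
    ip4.addr = "192.168.1.50";
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
}
EOF
# Populate the path with a base system (e.g. extract base.txz), then:
service jail start media
```

Inside the jail it's ordinary FreeBSD: `pkg install` the service and it runs with no container image layer in between.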
But there is always pressure for more features, more bloat. In Linux, on the plus side, I can plug in some random gadget and in most cases it just works. And any laptop that's a few years old, you can just install Fedora from its bootable live image, and it will work. Secure boot, suspend, Wifi, the special buttons on the keyboard, and so on. But the downside is enormous bloat and yes, often the kind of tinkering you really don't want to do any more, such as the Brother laser printer drivers still being shipped as 32-bit binaries and the installer silently failing because one particular 32-bit dependency wasn't autoinstalled. Or having to get an Ubuntu-dedicated installer (Displaylink!) to run on Fedora.
But here you have the "mainstream" Unix-ish OS absorbing all the bleeding edge stuff, all the bloat. Allowing FreeBSD free rein to be pure, with a higher average quality of user, which sets the tone of the whole scene. An echo of the old days, like Usenet before "Eternal September" and before Canter & Siegel - for those old enough to remember how it all felt back then.
If you think Linux can have "enormous bloat" then Windows bloat by the same standards is terrifyingly humongous (and slow!).
Anyways had enough of the random downtime, I just switched to Linux which didn't have these issues.
I'd say the best part of FreeBSD though is freebsd-update which was a game changer from the previous make world shenanigans.
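The freebsd-update flow, for anyone who only knows the `make world` days; a sketch requiring root (the target release number is illustrative):

```shell
# Sketch: binary updates with freebsd-update(8) on FreeBSD.
freebsd-update fetch                    # download patches for the running release
freebsd-update install                  # apply them
# Major version upgrade (target release illustrative):
freebsd-update -r 15.0-RELEASE upgrade
freebsd-update install                  # re-run after each reboot until done
```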
I've been through generations of hardware myself, and had zero issues with any of it apart from when the RAID card failed.
Network has been solid. ZFS has just worked. Not sure what your issues were; however, I've been colocating since FreeBSD 8, and now colocate 16-CURRENT on my server. FreeBSD has been rock stable in my book.
2x Dell R630 and 1x Cisco U220 M5
doublerabbit@cookie:~ $ uname -a && uptime
FreeBSD cookie.server 12.2-BETA1 FreeBSD 12.2-BETA1 r365618 GENERIC amd64
10:39PM up 1752 days, 1:31, 1 user, load averages: 0.64, 1.30, 1.31
https://docs.freebsd.org/en/books/handbook/wayland/
If you want something with a graphical environment ready to run, check out GhostBSD, which is based on FreeBSD and features MATE:
Hendrikto•7h ago
I am just not sure it is worth leaving the Linux ecosystem. What if I want to run a Docker container? Do I have to trust random people for ports of software that runs natively on Linux, or port it myself?
FreeBSD seems good so far, but community and ecosystem are important.
vermaden•5h ago
Overhead of the FreeBSD Bhyve hypervisor is about 0.5% (measured in benchmarks) so You lose nothing.
Here You have easy and complete jumpstart into Bhyve in FreeBSD:
- https://vermaden.wordpress.com/2023/08/18/freebsd-bhyve-virt...
Regards, vermaden
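A condensed sketch of the vm-bhyve route those posts cover, assuming root on FreeBSD (the package names are real; the switch, NIC, guest name, and ISO URL are illustrative or hypothetical):

```shell
# Sketch: a Linux guest under bhyve via the vm-bhyve frontend.
pkg install vm-bhyve bhyve-firmware   # frontend plus UEFI firmware for guests
sysrc vm_enable="YES" vm_dir="/vm"
vm init                               # set up the datastore, modules, devfs rules
vm switch create public
vm switch add public em0              # bridge to the host NIC (em0 is host-specific)
vm iso https://example.org/debian.iso # hypothetical URL; downloads into /vm/.iso
vm create -s 20G linuxguest
vm install linuxguest debian.iso      # attach the ISO and boot the installer
```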
sidkshatriya•2h ago
No, it is not reliable enough. Some syscalls are not implemented, there are edge-case issues with procfs, etc. Best to execute in a Linux VM.
shevy-java•4h ago
> Do I have to trust random people for ports of software that runs natively on Linux, or port it myself?
This is a bit problematic in my opinion, because ultimately we have to trust everyone who writes open source code. This works well for the most part, but there are malicious actors too. See the xz backdoor as an example. Or various state actors who want to snoop on people. Age verification is the current attempt to get at people's data, pushing legislation by claiming "this is only to protect children" (while it has some interesting side effects, e. g. becoming a stepping stone for anyone wanting to sniff user data and pass it along).
> FreeBSD seems good so far, but community and ecosystem are important.
Well, there are many more Linux users. Whether that is better or worse ... but it is a fact too.
0x457•2h ago
You already trust random people on Linux; you have to trust even more of them, and more random ones, when you run Docker.
The ports collection is already quite large. If you port something yourself, it's either twenty minutes plus compilation time or a major nightmare. More and more software today assumes it runs on Linux only.
I think FreeBSD is great for setup and forget. If you have to interact with it regularly it's not worth it. Definitely not worth it for desktop.