
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
616•klaussilveira•12h ago•180 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
920•xnx•17h ago•545 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
32•helloplanets•4d ago•22 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
105•matheusalmeida•1d ago•26 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
8•kaonwarb•3d ago•2 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
37•videotopia•4d ago•1 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
214•isitcontent•12h ago•25 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
207•dmpetrov•12h ago•102 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
319•vecti•14h ago•141 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
356•aktau•19h ago•181 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
367•ostacke•18h ago•94 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
474•todsacerdoti•20h ago•232 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
270•eljojo•15h ago•159 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
13•jesperordrup•2h ago•4 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
400•lstoll•18h ago•271 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
25•romes•4d ago•3 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
82•quibono•4d ago•20 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
56•kmm•4d ago•3 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
243•i5heu•15h ago•185 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
10•bikenaga•3d ago•2 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
51•gfortaine•10h ago•17 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
139•vmatsiiako•17h ago•61 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
277•surprisetalk•3d ago•37 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1055•cdrnsf•21h ago•433 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
69•phreda4•12h ago•13 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
128•SerCe•8h ago•113 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
28•gmays•7h ago•10 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
173•limoce•3d ago•94 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
62•rescrv•20h ago•22 comments

WebView performance significantly slower than PWA

https://issues.chromium.org/issues/40817676
30•denysonique•9h ago•6 comments

Longhorn – A Kubernetes-Native Filesystem

https://vegard.blog.engen.priv.no/?p=518
59•jandeboevrie•5mo ago

Comments

d3Xt3r•5mo ago
Longhorn was the codename for Windows Vista... so not a great choice of a name (IMO).
onionisafruit•4mo ago
Longhorn is a fine name, and it doesn't matter if somebody else used it 20+ years ago
weinzierl•4mo ago
By that logic Titanic would be a fine name too.
NewJazz•4mo ago
Hmm, maybe just shorten to Titan?
esafak•4mo ago
Just don't use it to name a database.
bigstrat2003•4mo ago
I mean, I think it would be. Superstition about naming is silly.
ofrzeta•4mo ago
https://www.titanic-magazin.de
selfhoster11•4mo ago
That is false.

Sincerely, a lover of Gemini (the protocol, and the AI) and Gopher (the protocol, and not the language).

tracker1•4mo ago
I remembered the Windows Vista reference as soon as I saw the name. That said, I don't think it's a big deal.
gdbsjjdn•4mo ago
I thought this was going to be about Vista and how some of the FS stuff that got cut was prescient. "This old thing that didn't work was ahead of its time" is a whole genre of post (e.g. Itanium).
antod•4mo ago
Could've been worse, e.g. Cairo or Blackcomb.
pjmlp•4mo ago
Indeed, does it use .NET in its implementation, or are they already rewriting it into COM?
Delphiza•4mo ago
I agree. You have to be a certain age to remember that a big part of Microsoft "Longhorn" was WinFS (Windows File System), which was intended to completely rework storage into a relational file system (or object-oriented depending on your view). "Longhorn" was supposed to do away with NTFS and failed miserably at that objective. I believe that WinFS delayed things considerably and eventually didn't ship with Vista.

Microsoft Longhorn's failure to be the next big thing was largely due to the bad implementation of a storage subsystem. The result was Windows Vista, which was derided as a bad OS (at least until Windows 8). Due to that history, I would not name any file system 'Longhorn'. It may not be the same as naming a cruise ship 'Titanic', but you wouldn't name it 'Iceberg' either.

dilyevsky•4mo ago
Does anyone know what the story is with NVMe-oF/SPDK support these days? A couple of years ago Mayastor/OpenEBS was running laps around Longhorn on every performance metric, big time; not sure if anything has changed there...
coopreme•4mo ago
Go with Ceph… a little more of a learning curve but overall better.
cmeacham98•4mo ago
I tried longhorn on my homelab cluster. I'll admit it's possible that I did something wrong, but I managed to somehow get it into a state where it seemed my volumes got permanently corrupted. At the very least I couldn't figure out how to get my volumes working again.

When restoring from backup I went with Rook (which is a wrapper on ceph) instead and it's been much more stable, even able to recover (albeit with some manual intervention needed) from a total node hardware failure.

nerdjon•4mo ago
It is interesting seeing this article come up, since just yesterday I set up Longhorn in my homelab cluster, needing better performance for some tasks than NFS was providing, so I set up a RAID on my R630 and tried it out.

So far things are running well, but I can't shake this fear that I am in for a rude awakening and I lose everything. I have backups, but the recovery will be painful if I have to do it.

I will have to take a look at Rook since I am not quite committed enough yet (only moved over 2 things) to switch.

master_crab•4mo ago
If the information is truly important, push it off to a database or NAS. I use Rook at home, but really only for long-lived app data (config files, etc.). Anything truly important (media, files, etc.) is served from an NFS share attached to the cluster.
cortesoft•4mo ago
I have a small 4-node home cluster, and Longhorn works great... on smaller volumes.

I have a 15TB volume for video storage, and it can't complete any replica rebuilds. It always fails at some point and then tries to restart.

nerdjon•4mo ago
That is good to know, then; I am really just using this for smaller volumes. My media is sitting at about the same size as yours, and instead of using PVCs I just have it mounting a straight NFS share specifically for that, to avoid any issues there.

I think I am likely keeping most of my storage set up with a StorageClass that uses my NFS server, but Longhorn will be used for the things that need to be faster, like the databases. I moved Jellyfin over to Longhorn and it went from being borderline unusable while metadata was being grabbed to actually working well.

I can't imagine my biggest volume being more than 100GB, and even that is likely a major overestimation on my part.

positisop•4mo ago
Longhorn is a poorly implemented distributed storage layer. You are better off with Ceph.
yupyupyups•4mo ago
I've heard Ceph is expensive to run. But maybe that's not true?
jauntywundrkind•4mo ago
I'm only just wading in, after years of intent. I don't feel like Ceph is particularly demanding. It does want a decent amount of RAM: 1GB each for monitor, manager, and metadata daemons, up to 16GB total for larger clusters, according to the docs. But then each disk's OSD defaults to 4GB, which can add up fast!! And some setups can use more. 10GbE is recommended and more is better here, but that seems not unique to Ceph: syncing storage will want bandwidth. https://docs.ceph.com/en/octopus/start/hardware-recommendati...
westurner•4mo ago
This post from 2023, https://www.redhat.com/en/blog/ceph-cluster-single-machine, says:

> All you need is a machine, virtual or physical, with two CPU cores, 4GB RAM, and at least two or three disks (plus one disk for the operating system).

xyzzy123•4mo ago
For me it was the RAM for the OSDs, 1GB per 1TB but ideally more for SSDs...
keeperofdakeys•4mo ago
Ceph overheads aren't that large for a small cluster, but they grow as you add more hosts, drives, and more storage. Probably the main gotcha is that you're (ideally) writing your data three times on different machines, which is going to lead to a large overhead compared with local storage.

Most resource requirements for Ceph assume you're going for a decently sized cluster, not something homelab sized.
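
To make that overhead concrete, here is a back-of-the-envelope sketch using the figures quoted above (3× replication, roughly 4GB of RAM per OSD, 1GB each for the monitor and manager daemons); the node and disk counts are made-up examples, not a recommendation:

```latex
% Hypothetical homelab: 3 nodes, two 4 TB disks per node (6 OSDs, 24 TB raw).
% Usable capacity with the 3x replication mentioned above:
\[
  \text{usable} \approx \frac{\text{raw}}{\text{replica count}}
                = \frac{6 \times 4\,\text{TB}}{3}
                = 8\,\text{TB}
\]
% Rough RAM per node (one mon, one mgr, two OSDs at the quoted defaults;
% add ~1 GB more per node if you also run a CephFS metadata daemon):
\[
  \text{RAM} \approx 1\,\text{GB} + 1\,\text{GB} + 2 \times 4\,\text{GB}
             = 10\,\text{GB}
\]
```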

master_crab•4mo ago
It's going to do a good job of saturating your LAN while maintaining quorum on the data.
willbeddow•4mo ago
I have not used Longhorn, but we are currently in the process of migrating off of Ceph after an extremely painful relationship with it. Ceph has fundamental design flaws (like the way it handles subtree pinning) that, IMO, make more modern distributed filesystems very useful. SeaweedFS is also cool, and for high-performance use cases, Weka is expensive but good.
q3k•4mo ago
That sounds more like a CephFS issue than a Ceph issue.

(a lot of us distrust distributed 'POSIX-like' filesystems for good reasons)

__turbobrew__•4mo ago
Are there any distributed POSIX filesystems which don't suck? I think part of the issue is that a POSIX-compliant filesystem just doesn't scale, and you are just seeing that?
willbeddow•4mo ago
Weka seems to Just Work from our tests so far, even under pretty extreme load with hundreds of mounts on different machines, lots of small files, etc... Unfortunately it's ungodly expensive.
scheme271•4mo ago
I think Lustre works fairly well. At the very least, it's used in a lot of HPC centers to handle large filesystems that get hammered by lots of nodes concurrently. It's open source, so nominally free, although getting a support contract from a specialized consulting firm might be pricey.
latchkey•4mo ago
https://www.reddit.com/r/AMD_Stock/comments/1nd078i/scaleup_...

You're going to have to open the image and then go to the third image. I thought it was interesting that OCI pegs Lustre at 8Gb/s and their high performance FS at much higher than that... 20-80.

scheme271•4mo ago
That's 8Gb/s per TB of storage. The bandwidth is going to scale up as you add OSTs and OSSs. The OCI FS maxes at 80Gb/s per mount target.
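
To put the two figures on the same footing, a quick worked example; the 10TB size is purely illustrative, and linear scaling of the per-TB spec is an assumption:

```latex
% Assuming the quoted 8 Gb/s per TB scales linearly with provisioned capacity:
\[
  10\,\text{TB} \times 8\,\tfrac{\text{Gb/s}}{\text{TB}} = 80\,\text{Gb/s}
\]
% i.e. a 10 TB Lustre filesystem would already be specced at the 80 Gb/s that
% OCI lists as the per-mount-target maximum for its high-performance option.
```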
huntaub•4mo ago
Basically, we are building this at Archil (https://archil.com). The reason these things are generally super expensive is that they're incredibly hard to build.
hulitu•4mo ago
I thought it was a Windows version. Wait, it is a Windows version. /s
studmuffin650•4mo ago
Where I work, we primarily use Ceph as the K8s-native filesystem, though we still use OpenEBS for block storage and are actively watching OpenEBS Mayastor.
__turbobrew__•4mo ago
I looked into Mayastor and the NVMe-oF stuff is interesting, but it is so, so, so far behind Ceph when it comes to stability and features. Once Ceph has the next-generation Crimson OSD with SeaStore, I believe it should close a lot of the performance gap.
dilyevsky•4mo ago
> Once Ceph has the next-generation Crimson OSD with SeaStore, I believe it should close a lot of the performance gap.

It's only been in development for, what, like 5 years at this point? =) I have no horse in this race, but it seems to me OpenEBS will close the gap sooner.

__turbobrew__•4mo ago
soon™
scubbo•4mo ago
(Copied from[0] when this was posted to lobste.rs) Longhorn was nothing but trouble for me. Issues with mount paths, uneven allocation of volumes, orphaned undeletable data taking up space. It’s entirely possible that this was a skill issue, but still - never touching it again. Democratic-csi[1] has been a breath of fresh air by comparison.

[0] https://lobste.rs/s/vmardk/longhorn_kubernetes_native_filesy... [1] https://github.com/democratic-csi/democratic-csi

dpedu•4mo ago
Kubernetes CSI drivers are surprisingly easy to write. You basically just have to implement a number of gRPC procedures that manipulate your system's storage as the Kubernetes control plane calls them. I wrote one that uses file-level syncing between hosts using Syncthing to "fake" network volumes.

https://kubernetes-csi.github.io/docs/developing.html

There are 4 gRPCs listed in the overview; that literally is all you need.
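
To give a sense of how small that surface is, here is a rough sketch in Go of just the Identity service, using the CSI spec's published gRPC bindings; the driver name and socket path are placeholders, and a real driver would also have to serve the Node (and usually Controller) RPCs that actually provision and mount volumes:

```go
// Minimal sketch of a CSI Identity service: the smallest piece a custom
// driver has to answer before the rest of the plumbing gets exercised.
package main

import (
	"context"
	"log"
	"net"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
)

// identity implements csi.IdentityServer for a hypothetical driver.
type identity struct{}

func (identity) GetPluginInfo(ctx context.Context, req *csi.GetPluginInfoRequest) (*csi.GetPluginInfoResponse, error) {
	return &csi.GetPluginInfoResponse{
		Name:          "syncthing.example.csi", // placeholder driver name
		VendorVersion: "0.1.0",
	}, nil
}

func (identity) GetPluginCapabilities(ctx context.Context, req *csi.GetPluginCapabilitiesRequest) (*csi.GetPluginCapabilitiesResponse, error) {
	// An empty capability list is valid; a driver with a Controller
	// service would advertise it here.
	return &csi.GetPluginCapabilitiesResponse{}, nil
}

func (identity) Probe(ctx context.Context, req *csi.ProbeRequest) (*csi.ProbeResponse, error) {
	return &csi.ProbeResponse{}, nil
}

func main() {
	// Kubelet talks to CSI plugins over a unix socket; this path is a placeholder.
	lis, err := net.Listen("unix", "/var/lib/kubelet/plugins/example/csi.sock")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	csi.RegisterIdentityServer(srv, identity{})
	// A full driver would also call csi.RegisterNodeServer (and optionally
	// csi.RegisterControllerServer) with the RPCs that do the real work.
	log.Fatal(srv.Serve(lis))
}
```

For a driver like the Syncthing-backed one described above, the storage-specific logic would presumably live almost entirely in the NodePublishVolume/NodeUnpublishVolume handlers.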

remram•4mo ago
Be aware of its security flaws -- https://github.com/longhorn/longhorn/issues/1983

Allowing anyone to delete all your data is not great. When I found this I gave up on Longhorn and installed Ceph.

yamapikarya•4mo ago
I am using NFS and I think it's pretty simple and just works.
philsnow•4mo ago
It's simple enough, and I moved from Longhorn to NFS for my homelab as well, but I bristle at needing to have the same Unix UIDs everywhere that wants to mount or serve an NFS volume. It seems like a huge layering violation.

I "just" want to expose storage over the network (I don't really care about the protocol, NFS would be fine) with a pre-shared secret or something like that.

edit: NFS really goes poorly when containers want to chown things; now I need to have a 'postgres' UID that's the same everywhere?

yamapikarya•4mo ago
Not really sure about the permissions side, but basically you just dump all your data on the server and many applications access it. I think it really depends on your application.
devn0ll•4mo ago
As an enterprise user of Rancher, we had long discussions with SUSE about Longhorn. And we are not using it.

You need a separate storage LAN, a seriously beefy one at that, to use Longhorn. But even 25Gbit was not enough to keep volumes from being corrupted.

When rebuilds take too long, Longhorn fails, crashes, hangs, etc., etc.

We will never make the mistake of using Longhorn again.

johntash•4mo ago
For homelab uses, I've been enjoying Linstor/Piraeus a lot more than Longhorn lately. Fewer issues overall so far, and simpler.