frontpage.

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
1•senekor•27s ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•3m ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
2•myk-e•5m ago•2 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•6m ago•1 comments

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
1•1vuio0pswjnm7•8m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
1•1vuio0pswjnm7•10m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•12m ago•1 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•14m ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•19m ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•21m ago•1 comments

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•24m ago•1 comments

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•36m ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•38m ago•0 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•39m ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•52m ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•55m ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
2•helloplanets•57m ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•1h ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•1h ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•1h ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•1h ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
2•basilikum•1h ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•1h ago•1 comments

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•1h ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
4•throwaw12•1h ago•2 comments

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•1h ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•1h ago•1 comments

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•1h ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•1h ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
2•andreabat•1h ago•1 comments

BCacheFS is being disabled in the openSUSE kernels 6.17+

https://lwn.net/ml/all/9032de2a-03a7-4f9e-9c8a-8bd81c5d1fc5@suse.cz/
70•6581•4mo ago

Comments

bgwalter•4mo ago
[deleted wrongthink]
graemep•4mo ago
There is an apology for that comment and a rewording further down the thread. Evidently it was made by someone who is not a native speaker and did not realise how it comes across.
teekert•4mo ago
Good addition, thanx.

I've been in a similar situation, letting everyone know I was fired. Apparently in the US this has a negative connotation, and they say "being let go" (or something confusing like "handing in/being handed your two weeks' notice", a concept completely unknown here). Here we only have one word for "your company terminating your employment", and there is no negative connotation associated with it. This can be difficult for non-natives; we can come across as very weird or less intelligent.

T3OU-736•4mo ago
In the US, the terminology tends to split into "fired" (implies "for valid reasons") vs "laid off" (implies "the position was terminated; this was not about the employee or their qualities and performance").
graemep•4mo ago
In the UK "fired" would mean the same, "laid off" off would mean the same, "made redundant" also means the same and more clearly, with emphasis on the position no longer existing. "Sacked" means about the same as fired.
dbdr•4mo ago
Funnily enough the apology ends with:

> If the above offended anyone, I sincerely apology them.

Unless this was tongue-in-cheek, this kind of proves the point that language was the cause. The apology is a good move in any case.

t51923712•4mo ago
Why would the "behave" comment mean anything different in Czech than in English?

The revised version, "Once the bcachefs maintainer conforms to the agreed process and the code is maintained upstream again" is still lecturing and piling on, as the LWN comments say:

https://lwn.net/Articles/1037496/

It is the classic case of CoC people and their inner circle denouncing someone, and subsequently the entire Internet keeps piling on the target.

badosu•4mo ago
There's a lot more context than what is being discussed here. Kent has proven to be incapable of respecting the release window process that everyone who works on the kernel agrees to.

I don't like politics in my repo as much as the next guy, but this case is pretty clear cut. There's no ambiguity or controversy.

zenoprax•4mo ago
"behave" in this context can refer to simply respecting existing norms about RC code freezing.
motorest•4mo ago
Ultimately that's the right call, and the inevitable one as well.
lupusreal•4mo ago
The way the BCacheFS situation has been playing out is a tragedy. I had very high hopes for it.
InsideOutSanta•4mo ago
Yeah, this all seems so unnecessary. I hope Kent can either figure out how to work in the context of a larger team or find somebody who can do it on his behalf.
johnisgood•4mo ago
> Once the BCacheFS maintainer behaves [...]

So, there are still behavioral issues here I take it? That is a bummer. This is not news to me, but I thought the situation had changed since then.

motorest•4mo ago
> So, there are still behavioral issues here I take it?

From the past discussion, it's mainly grave behavioral issues, but they also end up being technical: things like trying to push new untested features into RCs and breaking builds, and resorting to flame wars to push these problematic patches forward instead of actually working them out with maintainers.

But yeah, the final straw was a very abusive email sent to a maintainer in the mailing list.

koverstreet•4mo ago
...Where are you getting this stuff?

Seriously, every time I read tales about all the horrific things I've done in the kernel community, the stories grow and get wilder and wilder.

Oh wait.

It's you.

johnisgood•4mo ago
That is just people for you. My advice would be to not let them prompt you to leave such comments because it will appear to confirm what they are saying.
koverstreet•4mo ago
In my experience, the trolls never go away unless you start poking at them to point out how ridiculous and laughable they are.
InsideOutSanta•4mo ago
One thing I've learned over 40 years of working in software development is that it is often more effective to collaborate productively with others than to be right. Maybe Linus's rules for what can be merged at what point are bad, but you're not going to change his mind.

So even if you were right all along, the outcome is that people are screwed because bcachefs is no longer maintained in the kernel.

Every day, I ask myself, do I want to be right, or do I want to be effective?

koverstreet•4mo ago
That attitude is how you get ahead as an individual at the expense of our whole field.

That's how you end up with decisionmaking that's nothing more than popularity contests and no one even bothering to do the analysis on which option is technically better.

That way leads the death of leadership.

If you want to accomplish something as challenging as a filesystem, you do it by consistently making the right call and sticking to it, over and over and over.

And if you keep doing that, eventually you end up with the most active filesystem community around, and the ability to flip even the kernel community the bird when their weaponized incompetence becomes too much :)

Voultapher•4mo ago
Don't hold your breath.

I approached Kent two weeks ago and offered to help upstream changes into the kernel so he wouldn't have to interact with any of the kernel people ... he claimed, without being able to explain why, that having a go-between couldn't possibly help, and that if he couldn't dictate the kernel release schedule he wouldn't want to ship the software anyway. And then he proceeded to dunk on btrfs.

yencabulator•4mo ago
Totally expecting to see bcacheos next.

(Then maybe down the line people will realize Linus had a heart of gold behind his foul mouth.)

johnisgood•4mo ago
Same. I liked many of its features (actually, all features, see https://bcachefs.org) and I was waiting for it to become usable, but I guess that day will never come now?

So, the alternative is ZFS only, maybe HAMMER2. HAMMER2 does not look too bad either, except you need DragonflyBSD for that.

ThatPlayer•4mo ago
It's not unusable; I use it on a spare computer for fun, cuz I want tiering of SSD + HDDs. And this doesn't mean development has stopped, just that it's no longer done in the kernel.
johnisgood•4mo ago
True, I did not mean to say it's unusable, I meant to say it's not "officially supported", but I will give it a go with a custom-built kernel sooner or later.

For how long have you been using it? Any issues? Any favorite feature(s) in particular?

ThatPlayer•4mo ago
I've been using it for about a year or so. Like I said, the 'killer feature' for me is tiering. I'm not using it on my main computer, but on a spare-parts computer with whatever storage I have laying around thrown into it, so I can combine a 3TB HDD with a 1/2 TB SSD. All the storage stays usable, and I get better performance.

Not perfect though. Performance at times doesn't seem better than just SSD, though I didn't really test that and I'm not sure if it's just hitting the HDD at those times. And there was a kernel version that didn't boot, though I'm staying on the bleeding edge with Arch Linux.
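
For reference, tiering like this is configured at format time; a minimal sketch, assuming hypothetical device names and the foreground/promote/background target options documented in the bcachefs manual:

    # Foreground writes land on the SSD; data is moved to the HDD in the
    # background, and hot data is promoted back to the SSD on read.
    bcachefs format \
        --label=ssd.ssd1 /dev/nvme0n1 \
        --label=hdd.hdd1 /dev/sdb \
        --foreground_target=ssd \
        --promote_target=ssd \
        --background_target=hdd
    mount -t bcachefs /dev/nvme0n1:/dev/sdb /mnt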

ahartmetz•4mo ago
What I expect to happen is that bcachefs stabilizes outside of mainline, and after that, it can be merged back because no large patches = not much drama potential.
yjftsjthsd-h•4mo ago
My concern is that historically some of the conflict over bcachefs was because work on it touched code outside the filesystem area. If a year from now they show up and say "okay, here's the new bcachefs code that's 100% formally verified and has zero bugs and is good to go!", then it still could fail to get merged because they rewrote parts of (say) the block device system and the maintainers of that part of the kernel don't like the changes.
ahartmetz•4mo ago
I didn't forget about these things, but I was under the impression that they had been resolved one way or another.
yencabulator•4mo ago
Hey, the block i/o subsystem is at least connected! Kent also wrote a brand-new locking system: https://lwn.net/Articles/755164/
qalmakka•4mo ago
RIP BCacheFS. I was hopeful I could finally have a modern filesystem mainlined in Linux (I don't trust Btrfs anymore), but I guess I'll keep having to install ZFS for the foreseeable future.

As I predicted, out-of-tree bcachefs is basically dead on arrival: everybody interested is already on ZFS, and btrfs is still around only because ZFS can't be mainlined.

Ygg2•4mo ago
Wait. You don't trust Btrfs, but you would trust BCacheFS, which is obviously very experimental?
rurban•4mo ago
Still more stable than btrfs. btrfs is also dead slow
Iridiumkoivu•4mo ago
I agree with this sentiment.

Btrfs has destroyed itself on my testing/lab machines three times during the last two years, up to the point where recovery wasn't possible. Metadata corruption was the main issue (or that's how it looks to me, at least).

As of now I trust BCacheFS way more. I’ve given it roughly the same time to prove itself as Btrfs too. BCacheFS has issues but so far I’ve managed to resolve them without major data loss.

Please note that I currently use ext4 in all ”really important” desktop/laptop installations and OpenZFS in my server. Performance being the main concern for desktop and reliability for server.

bcrl•4mo ago
ext4 has its own issues usually more in terms of scalability. XFS is trustworthy without the glass jaws ext4 has all over the place. Ah, the joy of taking 80 seconds to write out an 8MB file to disk after a fresh mount....
phire•4mo ago
Btrfs claims to be stable. IMO, it's not.

It's generally fine if you stay on the happy path. It will work for 99% of people. But if you fall off that happy path, bad things might happen and nobody is surprised. In my personal experience, nobody associated with the project seems to trust a btrfs filesystem that fell off the happy path, and they strongly recommend you delete it and start from scratch. I was horrified to discover that they don't trust fsck to actually fix a btrfs filesystem into a canonical state.

BCacheFS had the massive advantage that it knew it was experimental and embraced it. It took measures to keep data integrity despite the chaos, generally seems to be a better design and has a more trustworthy fsck.

It's not that I'd trust BCacheFS, it's still not quite there (even ignoring project management issues). But my trust for Btrfs is just so much lower.

ahartmetz•4mo ago
btrfs seems to be a wonky, ill-considered design with ten years of hotfixes. bcachefs seems to be a solid design that is (or has been, it's mostly done) regularly improved where trouble was found. Now it's just fixing basically little coding oversights. In two years, I will trust bcachefs to be a much more reliable filesystem than btrfs.
kiney•4mo ago
btrfs has many technical advantages over zfs
debazel•4mo ago
Yes, like destroying itself and losing all data.
natebc•4mo ago
ZFS is perfectly capable of this too.

source: worked as a support engineer for a block storage company, witnessed hundreds of customers blowing one or both of their feet off with ZFS.

zenoprax•4mo ago
To what extent are these customers blaming the hammer for hitting their thumb?

(Legitimate question: I manage several PB with ZFS and would like to know where I should be more cautious.)

nubinetwork•4mo ago
Pool feature mismatch on send receive, dedup send receive, new features breaking randomly on bleeding edge releases
TheNewsIsHere•4mo ago
The intent of feature flags in ZFS is to denote changes in on-disk structures. Replication isn't supported between pools that don't support the same flags, because otherwise ZFS couldn't read the data from disk properly on the receiving side.

There are workarounds, with their respective caveats and warnings.
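
As a sketch of one such workaround (pool and device names hypothetical): OpenZFS 2.1+ lets a pool be created with a pinned feature set via the compatibility property, which sidesteps the send/receive mismatch described above:

    # Inspect which feature flags a pool has enabled:
    zpool get all tank | grep feature@
    # Create the receiving pool restricted to a known feature set:
    zpool create -o compatibility=openzfs-2.0 backup /dev/sdb
    # Then replicate as usual:
    zfs snapshot -r tank/data@repl
    zfs send -R tank/data@repl | zfs receive -F backup/data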

natebc•4mo ago
A great deal. Which is why my cringe reflex still activates when I read about people running ZFS in places that aren't super tightly configured. ZFS is just such a massively complex piece of software.

There were legitimate bugs in ZFS that we hit. Mostly around ZIL/SLOG and L2ARC and the umpteen million knobs that one can tweak.

TheNewsIsHere•4mo ago
Customers blowing off their feet with ZFS because they felt the need to tweak tunables they didn’t need to use, or didn’t properly understand, is not the fault of ZFS though.

You can do the same with just about any file system. In the Windows world you can blow your feet off with NTFS configuration too.

Of course there have been bugs, but every filesystem has had data-impacting bugs. Redundancy and backups are a critical caveat for all file systems for a reason. I once heard it said that “you can always afford to lose the data you don’t have backed up”. I do not think that broadly applies (such as with individuals), but it certainly applies in most business contexts.

natebc•4mo ago
Yeah, my reaction is usually to how quickly and how frequently it's recommended for general use.

Obviously there's footguns in everything. Filesystem ones are just especially impactful.

TheNewsIsHere•4mo ago
Yep. I use ZFS at home, but on business oriented NAS hardware with drives to match (generally). And I don’t go asking it to do odd things or configure it bizarrely. I don’t pass through drives named with Linux names (I prefer WWN to PCI address naming, at least at home). Etc.

But a lot of people out there will slap a bunch of USB 2.0 hard drives on top of an old gaming computer. I’m all for experimenting, and I sympathize that it’s expensive to run ZFS on “ZFS class” platforms and hardware. I don’t begrudge others that.

It would be really nice if there were something like ZFS that was a tad more flexible and right in the kernel, with consistent and concise userspace tooling. Not everyone is comfortable with DKMS.

motorest•4mo ago
> A great deal. Which is why my cringe reflex (...)

Can you provide some specifics? So far all I see is vague complaints with no substance, and when the complainers are lightly pressed they get defensive.

natebc•4mo ago
I don't have specifics on how many people running a fork of ZFS on Linux (or the fork for OpenSolaris, Nexenta, etc.) have copy-pasted some configuration from a wiki/forum/StackExchange and ended up with a pool that's misconfigured in some subtly fatal way. I don't have any personal anecdotes to share about my own homelab or enterprise IT experience with ZFS, because I don't use it at home and nowhere I've worked in IT has used it.

I did live through specific situations, over several years in a support engineer role, where a double-digit percentage of customers in enterprise configurations ended up somewhere between terrible performance and catastrophic data loss due to misunderstood configuration of a very complex piece of software.

If you wanna use ZFS, use ZFS. I'm not the internet's crusader against it. I have no doubt there are thousands of PB out there of perfectly happy, well-configured and healthy zpools. It has some truly next-gen features that are extremely useful. I've just seen it recommended so, so many times as a panacea when something simpler would be just as safe and long-lasting.

It's kinda like using Kubernetes to run a few containers. Right?

motorest•4mo ago
> I don't have specifics (...). I don't have any personal anecdotes (...).

I see.

> I did live through specific situations, over several years in a support engineer role, where a double-digit percentage of customers in enterprise configurations ended up somewhere between terrible performance and catastrophic data loss due to misunderstood configuration of a very complex piece of software.

I'm sorry, but this claim is outright unbelievable. If the project were even half as unstable as you claim it to be, no one would ever use it in production at all. Either you are leaving out critical details, such as non-standard patches and usages that have no relationship with real-world use, or you are fabricating tales.

Also, it's telling that no one stepped forward to offer any concrete details and specifics on these hypothetical issues. Makes you think.

natebc•4mo ago
Well, I assure you I'm not making it up. If you can't believe that people will misconfigure complicated systems that almost no single person can completely understand, or that working in the storage industry exposes you to bizarre and interesting failures of both hardware and software (and firmware!), then I'm not sure what I can say to have you take a story at face value.

I'm not being melodramatic. You can take a story or leave it. I'm not here to convince you one way or another.

And frankly I don't particularly appreciate being called a liar, btw. Nice use of quoting also. Good day.

throw0101a•4mo ago
> source: worked as a support engineer for a block storage company, witnessed hundreds of customers blowing one or both of their feet off with ZFS.

The phrasing of this leads me to believe that the customers set up ZFS in a 'strange' (?) way. Or was this a bug (or bugs) within ZFS itself?

Because when people talk about Btrfs issues, they are talking about the code itself and bugs that cause volumes to go AWOL and such.

(All file systems have foot-guns.)

natebc•4mo ago
Mostly customers thinking they fully understand the thousands of parameters in ZFS.

There was a _very_ nasty bug in the ZFS L2ARC that took out a few PB at a couple of large installations. This was back in 2012/2013 when multiple PBs was very expensive. Was a case of ZFS putting data from the ARC into the pool after the ZIL/SLOG had been flushed.
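
To give a sense of the knobs involved: on Linux they're exposed as zfs kernel module parameters; a sketch, with values that are purely illustrative rather than recommendations:

    # Count the tunables (there are hundreds):
    ls /sys/module/zfs/parameters/ | wc -l
    # Inspect one of the L2ARC knobs mentioned above:
    cat /sys/module/zfs/parameters/l2arc_write_max
    # Persistent overrides go through modprobe configuration:
    echo "options zfs l2arc_write_max=8388608" >> /etc/modprobe.d/zfs.conf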

crest•4mo ago
Can you give an example? Because to me it has always appeared to be an NIH copy-cat fs.
sureglymop•4mo ago
I've never had any issues with either ZFS or Btrfs after 2020. I wonder what you all are doing to have such issues with them.
jamesnorden•4mo ago
Ah yes, the famous "holding it wrong".
happymellon•4mo ago
I've also not had issues with BTRFS.

The question was about usage, because without knowing people's use cases and configurations there's no way to understand why it's unusable for you while working fine for others.

pizza234•4mo ago
If 1% of the users report a given issue (say, data corruption), the fact that 99% of the users report that they don't experience it, doesn't mean that the issue is not critical.
izacus•4mo ago
The fact that you see an issue reported loudly on social media doesn't mean it's critical or more common than for other FSes.

As usual with all these Linux debates, there's a loud group grinding old hatreds that can be a decade old.

happymellon•4mo ago
> If 1% of the users report a given issue (say, data corruption

If 0.1% of users say it corrupted for them, and then don't provide any further details, and no one can replicate their scenario, then it does make it hard to resolve.

koverstreet•4mo ago
the btrfs devs are also famous for being unresponsive to these sorts of issues.

there's a feedback effect: if users know that a filesystem takes these kinds of issues seriously and will drop what they're doing and jump on them, a lot of users will very happily spend the time reporting bugs and working with devs to get it resolved.

people don't like wasting their time on bug reports that go into the void. they do like contributing their time when they know it's going to get their issue fixed and make things better for everyone.

this is why I regularly tell people "FEED ME YOUR BUG REPORTS! I WANT THEM ALL!"

it's just what you have to do if you want your code to be truly bulletproof.

jcalvinowens•4mo ago
> the btrfs devs are also famous for being unresponsive to these sorts

No, Kent, they are not. Posting attacks like this without evidence is cowardly and dishonest. I'm not going to tolerate these screeds from you about people I've worked with and respect without calling you out.

Every time you spew this toxicity, a chunk of bcachefs users reformat and walk away. Very soon, you'll have none left.

koverstreet•4mo ago
What are they famous for then? High quality, trustworthy code?

It doesn't matter if you want to "tolerate" it if every time there's a filesystem thread there are stories about lost filesystems and unfixed bugs, and the rare times people try to report bugs or issues they go nowhere.

You're ranting about reality, and somehow I doubt you were ever a user of mine.

I've worked hard to establish an engineering community where no one has to be afraid to point out broken shit, including and especially in my code, and I wouldn't want your attitude anywhere near it.

jcalvinowens•4mo ago
> What are they famous for then? High quality, trustworthy code?

Absolutely. Demonstrably much moreso than you over the past decade.

> I've worked hard to establish an engineering community where no one has to be afraid to point out broken shit,

You have failed. Look at this conversation: https://lore.kernel.org/lkml/fe51512baa18e1480ce997fc535813c...

Do you think Arnd, David, and Russell feel like they were rewarded for pointing out your broken shit? I doubt it. I certainly don't. You were so completely and obviously in the wrong there I have difficulty believing it was real.

I'm not replying to you again. Good luck :)

koverstreet•4mo ago
HAH

Not comparable to losing filesystems, sorry...

const_cast•4mo ago
The problem is every filesystem can experience data corruption. That doesn't tell us anything about how it relates to BTRFS.

Also, filesystems just work. Nobody is gonna say "oh I'm using filesystem X and it works!" because that's the default. So, naturally, 99% of the stuff you'll hear about filesystems is when they don't work.

Don't believe me? Look up NTFS and read through reddit or stackexchange or whatever. Not a lot of happy campers.

koverstreet•4mo ago
Do you see the same reports about ext4 or XFS?

I don't.

happymellon•4mo ago
https://forum.endeavouros.com/t/6-1-64-1-lts-kernel-linux-lt...

It's also the reason I am completely against the Debian "backporting" methodology of pretending they aren't using the new version of something.

koverstreet•4mo ago
yeah there's no real QA process for the stable trees
kasabali•4mo ago
That bug had nothing to do with Debian. Follow the reports in the fine thread you've linked and you'll see it was an upstream issue.

Debian doesn't "backport" anything, they ship upstream LTS kernels straight, so please stop spreading this misinformation.

For some bizarre reason I can't conceive of, people not only believe but also keep complaining that Debian ships franken-kernels like RHEL does. See [0] for another instance.

[0] https://news.ycombinator.com/item?id=44849988

eptcyka•4mo ago
I had a power failure and I lost the whole filesystem. Never happened with ext4 - I've had data loss after a power failure with other filesystems, but never an issue where I wasn't able to mount it and lost 100% of my data.
metadat•4mo ago
I've experienced unrecoverable corruption with btrfs within the past 2 years.
motorest•4mo ago
> Ah yes, the famous "holding it wrong".

Is it wrong to ask how to reproduce an issue?

ziml77•4mo ago
If you complain about a knife crushing your food because you're holding it upside down, it's good for everyone else to know that context. Because anyone who is using it with the sharp side down can safely ignore that problem rather than being scared away by an issue they won't experience.
pizza234•4mo ago
Just a few days ago I had a checksum mismatch on a RAID-1 setup, on the metadata on both devices, which is very confusing.

Over the last one or two years I've twice experienced a checksum mismatch on the file storing the memory of a VMware Workstation virtual machine.

Both are very likely bugs in Btrfs, and it's very unlikely that they were caused by the user (me).

In the relatively far past (around 5 years ago), I had the system (root being on Btrfs) turn unbootable for no obvious reason, a couple of times.
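
(For context, a sketch of how such mismatches are usually surfaced and, when a good copy exists on the other mirror, repaired; mount point hypothetical:)

    btrfs scrub start -B /mnt   # verify all checksums, rewrite bad copies
    btrfs device stats /mnt     # per-device corruption/IO error counters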

pantalaimon•4mo ago
One lovely experience I had when trying to remove a failing disk from my array was that the `btrfs device remove` failed with an I/O error - because the device was failing.

I then had to manually delete the file with the I/O error (for which I had to resolve the inode number it barfed into dmesg) and try again - until the next I/O error.

(I'm still not sure if the disk was really failing. I did a full wipe afterwards and a full read to /dev/null and experienced no errors - it might have just been the metadata that was messed up.)
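
The inode-to-path step is at least scriptable; a sketch with a hypothetical inode number taken from dmesg:

    btrfs inspect-internal inode-resolve 257 /mnt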

Volundr•4mo ago
Pre-2020, but I had a BTRFS filesystem with over 40% free space start failing on all writes, including deletes, with a "no space left on device" error. It took the main storage array for our company offline for over a day while we struggled to figure out wtf was going on. Basically, BTRFS marks blocks as data or metadata, and once marked, a block won't be reassigned (without a rebalance). Supposedly this is better now, but this was after it had been stable for a few years. After that and some smaller foot guns, I'll never willingly run BTRFS on a critical system.
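
For readers hitting the same thing today, a sketch of the usual diagnosis and workaround (mount point hypothetical):

    btrfs filesystem usage /mnt           # allocated vs. used, per chunk type
    btrfs balance start -dusage=50 /mnt   # rewrite data chunks under 50% full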
patrakov•4mo ago
I still have a btrfs with a big problem: more disk space used than expected. The explanation was helpfully provided by btdu:

    Despite not being directly used, these blocks are kept (and cannot be reused) because another part of the extent they belong to is actually used by files.
    
    This can happen if a large file is written in one go, and then later one block is overwritten - btrfs may keep the old extent which still contains the old copy of the overwritten block.
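
A hedged sketch of the usual remedy: rewriting the affected files releases the partially referenced extents (path hypothetical; note that defragmenting breaks reflink/snapshot sharing on the rewritten files):

    btrfs filesystem defragment -r /data/vm-images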
Elhana•4mo ago
With a clever script doing repeated writes and deletes, you can likely bring down any system using btrfs, and it will bypass quotas.
StopDisinfo910•4mo ago
> btrfs is still around only because ZFS can't be mainlined basically

ZFS is extremely annoying with the way it handles expansion and the fact that you can't mismatch drive sizes. It's not a panacea. There clearly is space for an improved design.

nubinetwork•4mo ago
Underprovision your disks, then you don't have to worry about those edge cases...
StopDisinfo910•4mo ago
If you need to consider how to buy your drives so you can use a filesystem, that’s a flaw of said filesystem not an edge case.

It clearly is an acceptable one for a lot of people but it does leave space for alternative designs.

estimator7292•4mo ago
This is the way it's always been. RAID can't really handle mismatched drives either and you must consider that when purchasing drives. It's not a flaw, it's a consequence of geometry
StopDisinfo910•4mo ago
It’s strange you say that because my btrfs array handles mismatched sizes just fine.
Elhana•4mo ago
Let's say you can put an HDD and an SSD of different sizes in RAID-1; that doesn't mean you should.
cyphar•4mo ago
This is being worked on (they call it AnyRaid), the work is being sponsored by HexOS[1].

[1]: https://hexos.com/blog/introducing-zfs-anyraid-sponsored-by-...

accelbred•4mo ago
I switched to ZFS for a while but had to switch back because of how much was broken. Overlayfs had issues, reflinks didn't work, etc. Linux-specific stuff that just works on kernel filesystems was missing or buggy. I saw later they added support for some of the missing features but they had data corruption issues. Also I doubt it'll ever support fs-verity.

I don't plan on giving ZFS or other filesystems not designed for Linux another go.
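
(A quick way to test the reflink point, file names hypothetical; cp fails outright when the filesystem can't clone extents:)

    cp --reflink=always big.img big.clone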

EspadaV9•4mo ago
Might be harder to keep running ZFS on Linux after 6.18

https://www.phoronix.com/news/Linux-6.18-write-cache-pages

qalmakka•4mo ago
Killing ZFS on Linux would basically make Linux unsuitable for lots of use cases. What would you use instead? Btrfs, which keeps having stupid data corruption issues? Bcachefs, which is not yet stable and is now being struck out of the kernel? LVM2 + thin provisioning, which will happily eat your data if your data overlap? I hope some industrial players will force the kernel to drop this nonsense.

Heck, no native filesystem besides btrfs has compression; I'm saving HUNDREDS of GB with zstd compression on my machines, with basically zero overhead.
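
For reference, a sketch of that setup (device and compression level illustrative; compsize is a separate tool, not part of btrfs-progs):

    mount -o compress=zstd:3 /dev/sdb /mnt   # zstd level 3 is the default
    compsize /mnt                            # reports actual on-disk savings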

koverstreet•4mo ago
The community is still growing (developers too!), and people have been jumping in to help out with getting DKMS support into the distros.

bcachefs isn't going away.

The SuSE guy also reversed himself after I asked; Debian too, so we have time to get the DKMS packages out.

the_duke•4mo ago
This is a tragedy, bcachefs has so many great features...
rurban•4mo ago
> Once the BCacheFS maintainer behaves and the code is maintained upstream again, we will re-enable... (As IMO, it is a useful feature.)

How cynical. It's the kernel maintainer, not the bcachefs maintainer, who does not behave and has a huge history of unprofessional behavior for decades.

happymellon•4mo ago
The bcachefs maintainer has added new features during bugfix windows, and lied about it.
pantalaimon•4mo ago
It's still an experimental module, and the feature was about gathering more debug information.
StopDisinfo910•4mo ago
So?

Bug fix windows are for bug fixes. If it's not a bug fix, it goes in the next version. That's how the kernel release cycle works. It's not very complicated.

If it’s so unstable that it urgently needs new features shipped regularly, I think it’s entirely legitimate that it has to live out of tree until it’s actually stable enough.

yjftsjthsd-h•4mo ago
If it's an experimental module, then it can surely wait for the next release; after all, nobody should be relying on code that's explicitly marked experimental.
nicman23•4mo ago
How cynical. It's the bcachefs maintainer, not the kernel maintainer, who does not behave and has a huge history of unprofessional behavior for decades.

it is not like he was not explicitly warned.

boricj•4mo ago
The original author later sent an apology email explaining that it sounded too harsh in English and it wasn't meant to be offensive:

https://lwn.net/ml/all/bece61a0-b818-4d59-b340-860e94080f0d@...

koverstreet•4mo ago
He also reversed himself when I asked him not to pull the rug out from under users, and explained that there is a plan for continued support.

The ever escalating drama and cynicism in the reactions this stuff gets though... bloody hell, what is with people these days?

self_awareness•4mo ago
The reason is that people like Linus, because he's entertaining. And people don't like Kent, because he opposed Linus, who is liked. That's all there is to it. Like in some high school.
flykespice•4mo ago
Do people... find it entertaining when a boss verbally abuses their co-workers with personal attacks?

Human nature is wicked.

koverstreet•4mo ago
Yup.

I'm quite happy to be forging my own path now.

fj23Z741GAh•4mo ago
There was a time when Linus encouraged critics of "unprofessional behavior" to snap back at him:

https://lkml.org/lkml/2013/7/15/374

That is a reasonable compromise. Except when someone actually snaps back at him.

jtickle•4mo ago
All of the "btrfs eats your data" bugs have been fixed and the people who constantly repeat them are people who relied on an experimental filesystem for files they cared not to lose. FUD all around. I have a btrfs on my home file server that's been running just fine for almost 10 years now and has survived the initial underlying hard drives mechanical death. Since then I have used it in plenty of production environments.

Don't do RAID 5. Just don't. That's not just a btrfs shortcoming. I lost a hardware RAID 5 due to a "puncture", which would have been fascinating to learn about if it hadn't happened to a production database. It's an academically interesting concept, but it is too dangerous, especially with how large drives are now; if you're buying three, buy four instead. RAID 10 is much safer, especially for software RAID (see the sketch at the end of this comment).

Stop parroting lies about btrfs. Since it became marked stable, it has been a reliable, trustworthy, performant filesystem.

But as much as I trust it I also have backups because if you love your data, it's your own fault if you don't back it up and regularly verify the backups.
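
A minimal sketch of the RAID 10 alternative recommended above, in both btrfs-native and mdadm flavors (device names hypothetical):

    # btrfs-native RAID 10 across four drives:
    mkfs.btrfs -d raid10 -m raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    # or classic software RAID 10 with mdadm:
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]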

arccy•4mo ago
"performant", it's still slow if you actually use any of the advanced features like copy on write.
FirmwareBurner•4mo ago
Every CoW filesystem is just as slow. There's no magic pill to fix performance; it's a known tradeoff.
koverstreet•4mo ago
Not inherently.

Early bcachefs was ridiculously fast; it's gotten slower as we've grown all the features to compete with ZFS. All the database stuff that gives us amazing flexibility adds overhead (the btree iterator code has gotten fat), and backpointers and modern accounting blew up our journalling overhead. A lot of things that we needed for scalability, or hardening/self-healing, have added overhead.

COW really isn't the main thing; it's cramming in all the features we want these days while keeping the fastpaths fast that's the tricky part.

But, a lot of this stuff is fixable - performance just hasn't been the priority, since the actual users aren't complaining about performance and are instead clamoring for things like erasure coding.

(And the performance numbers that I've seen comparing us to ZFS still put us _significantly_ faster.)

betaby•4mo ago
> FUD all around

????

> Don't do RAID 5.

Ah, OK, so not FUD

> Stop parroting lies about btrfs.

I seee

plqbfbv•4mo ago
> All of the "btrfs eats your data" bugs have been fixed ... I have a btrfs on my home file server that's been running just fine for almost 10 years now and has survived the initial underlying hard drives mechanical death

In the last 10 years, btrfs:

1. Blew up three times on two unrelated systems due to internal bugs (one a desktop, one a server). Very few people were/are aware of the remount-only-once-in-degraded "FEATURE": if a filesystem crashed, you could mount with -o degraded exactly once, and after that the superblock would completely prevent mounting (error: invalid superblock). I'm not sure whether that's still the case or whether it got fixed (I hope so). By the way, these were RAID1 arrays with 2 identical disks with metadata=dup and data=dup, so the filesystem was definitely mountable and usable. It basically killed the use case of RAID1 for availability reasons. ZFS has allowed me to perform live data migrations while missing one or two disks across many reboots.

2. Developers merged patches to mainline, later released to stable, that completely broke discard=async (or something similar), which was a supported mount option per the manpages. My desktop SSD basically ate itself; I had to restore from backups. IIRC the bug/mailing-list discussions I found later were along the lines of "nobody should be using it", so no impact.

3. Had (maybe still has - I haven't checked) a bug where, if you fill the whole disk and then remove data, you can't rebalance, because the filesystem sees it has no more space available (all chunks are allocated). The trick I figured out was to shrink the filesystem to force data relocation, then re-expand it, then balance (sketched at the end of this comment). It was ~5 years ago and I even wrote a blog post about it.

4. Quota tracking when using docker subvolumes is basically unusable due to the btrfs-cleaner "background" task (imagine VSCode + DevContainers taking 3m on a modern SSD to cleanup 1 big docker container). This is on 6.16.

5. Hit a random bug just 3 days ago on 6.16, where I was doing periodic rebalancing and removing a docker subvolume. 200+ lines of logs in dmesg, filesystem "corrupted" and remounted read-only. I was already sweating, not wanting to spend hours restoring from backups, but unexpectedly the filesystem mounted correctly after reboot. (first pleasant experience in years)

ZFS in 10+ years has basically only failed me when I had bad non-ECC RAM, period. Unfortunately I want the latest features for graphics etc. on my desktop, and ZFS being out of tree is a no-go. I also like to keep the same filesystem on desktop and server, so I can troubleshoot locally if required. So now I'm still on btrfs, but I was really banking on bcachefs.

Oh well, at least I won't have to wait >4 weeks for a version that I can compile with the latest stable kernel.

The only stable implementation is Synology's, the rest, even mainline stable, failed on me at least once in the last 10 years.
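
The shrink-then-regrow trick from item 3, as a sketch (mount point and sizes hypothetical):

    btrfs filesystem resize -50g /mnt   # shrinking forces data relocation
    btrfs filesystem resize max /mnt    # grow back to the full device
    btrfs balance start /mnt            # balance now has unallocated space to use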

greyw•4mo ago
> Quota tracking when using docker subvolumes is basically unusable due to the btrfs-cleaner "background" task (imagine VSCode + DevContainers taking 3m on a modern SSD to cleanup 1 big docker container). This is on 6.16.

I had to disable quota tracking. It lags my whole desktop whenever that shit is running in the background. Makes it unusable on an interactive desktop.

yjftsjthsd-h•4mo ago
Yeah, no. I've had btrfs lose a root filesystem on a laptop with only one disk. No RAID, nothing fancy, well after it was supposed to be stable, on OpenSUSE where I assumed it would be well supported and pick good defaults.

Claiming that anyone reporting problems is lying is acting in bad faith and makes your argument weaker.

Also, "works for me" isn't terribly convincing.

M95D•4mo ago
I'm still waiting for an overlayfs that does read caching on the overlay without the need to format the backing storage.
rekoil•4mo ago
Sounds like what bcache does? https://bcache.evilpiepirate.org/

This is what bcachefs is based on.

M95D•4mo ago
From kernel Documentation/bcache.txt:

> You'll need make-bcache from the bcache-tools repository. Both the cache device and backing device must be formatted before use.

So, it's far from overlayfs. I could accept formatting the cache device, but not the backing storage.
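
For comparison, the setup the docs require, as a sketch (device names hypothetical):

    make-bcache -C /dev/nvme0n1   # format the cache device
    make-bcache -B /dev/sdb       # format the backing device - the step objected to above
    # or format both and attach in one invocation:
    make-bcache -C /dev/nvme0n1 -B /dev/sdb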

duffyjp•4mo ago
Very recently I set up a mergerfs mount for this. It's very crude for my use case but works perfectly, and I could use the existing volume as-is.

I got partway through setting up a script to copy recently accessed files from the HDD to the read-prioritized SSD.

My LLMs load up way faster, and I still have a source of truth volume in the huge HDD. It’s not something I’d use professionally though, way too janky.
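
Something along these lines, as a sketch (paths and policies hypothetical; with the "ff" (first found) policy, listing the SSD branch first makes reads prefer it when a file exists on both):

    mergerfs -o category.create=mfs,func.open=ff,allow_other \
        /mnt/ssd:/mnt/hdd /mnt/pool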

betaby•4mo ago
Very theoretical question: if there were a usable microkernel, how hard would it be to have an FS as a service? Are macOS FSes processes, or are they 'monolithic'?
yjftsjthsd-h•4mo ago
If there was a usable microkernel, then its filesystems would almost certainly be implemented in userspace; that's kinda the point of a microkernel. I can't speak to how Darwin does things, but I will point out that Linux and the BSDs have filesystems as userspace processes courtesy of FUSE; in their architecture that has a performance hit (caused by the context switching needed, I believe recently improved by use of io_uring), but it is a worked example of what you want.
betaby•4mo ago
I'm aware of FUSE and used the NTFS driver through it back then. So should we expect similar performance with microkernels?
yjftsjthsd-h•4mo ago
It depends :) Old microkernels infamously did have awful perf because they took that context-switch hit on everything, not just filesystems. I am told, though, that that was solved a long time ago and modern microkernels have fairly good performance. I'm not a kernel dev, though; you'll have to do your own research to get details.
xenadu02•4mo ago
So, FSKit is a public API and supports userspace filesystems, but on macOS most built-in filesystems are kernel modules. On iOS, everything except APFS is in userspace.

Modern Darwin is mostly a monolithic kernel, though more and more things are moving to userspace where possible (eg DriverKit).

One interesting side effect of various Spectre mitigations is silicon improving the performance of context switches. That has the side effect of decreasing the cost of kernel/userspace transitions. It isn't nearly as expensive as people still believe - though it isn't free either.

koverstreet•4mo ago
I think this would be the most usable microkernel these days, and yes, it does FS as a service: https://www.redox-os.org/

There's no inherent reason why a filesystem in userspace has to have a measurable performance impact over in-kernel, done right. It would be a lot of engineering work, though.

Context switches aren't that expensive when you're staying on the same core; it's IPIs (waking up a thread on another CPU) that are inherently expensive. And you need zero-copy to/from the page cache, which is tricky. But it totally could be done.

zoezoezoezoe•4mo ago
Dodged a bullet with this one. I migrated away from BCacheFS on my openSUSE deployments a few days ago because I'd seen the writing on the wall for a while.