EchoJEPA: Latent Predictive Foundation Model for Echocardiography

https://github.com/bowang-lab/EchoJEPA
1•euvin•5m ago•0 comments

Disabling Go Telemetry

https://go.dev/doc/telemetry
1•1vuio0pswjnm7•7m ago•0 comments

Effective Nihilism

https://www.effectivenihilism.org/
1•abetusk•10m ago•1 comments

The UK government didn't want you to see this report on ecosystem collapse

https://www.theguardian.com/commentisfree/2026/jan/27/uk-government-report-ecosystem-collapse-foi...
2•pabs3•12m ago•0 comments

No 10 blocks report on impact of rainforest collapse on food prices

https://www.thetimes.com/uk/environment/article/no-10-blocks-report-on-impact-of-rainforest-colla...
1•pabs3•13m ago•0 comments

Seedance 2.0 Is Coming

https://seedance-2.app/
1•Jenny249•14m ago•0 comments

Show HN: Fitspire – a simple 5-minute workout app for busy people (iOS)

https://apps.apple.com/us/app/fitspire-5-minute-workout/id6758784938
1•devavinoth12•14m ago•0 comments

Dexterous robotic hands: 2009 – 2014 – 2025

https://old.reddit.com/r/robotics/comments/1qp7z15/dexterous_robotic_hands_2009_2014_2025/
1•gmays•19m ago•0 comments

Interop 2025: A Year of Convergence

https://webkit.org/blog/17808/interop-2025-review/
1•ksec•28m ago•1 comments

JobArena – Human Intuition vs. Artificial Intelligence

https://www.jobarena.ai/
1•84634E1A607A•32m ago•0 comments

Concept Artists Say Generative AI References Only Make Their Jobs Harder

https://thisweekinvideogames.com/feature/concept-artists-in-games-say-generative-ai-references-on...
1•KittenInABox•36m ago•0 comments

Show HN: PaySentry – Open-source control plane for AI agent payments

https://github.com/mkmkkkkk/paysentry
1•mkyang•38m ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
1•ShinyaKoyano•47m ago•0 comments

The Crumbling Workflow Moat: Aggregation Theory's Final Chapter

https://twitter.com/nicbstme/status/2019149771706102022
1•SubiculumCode•51m ago•0 comments

Pax Historia – User and AI powered gaming platform

https://www.ycombinator.com/launches/PMu-pax-historia-user-ai-powered-gaming-platform
2•Osiris30•52m ago•0 comments

Show HN: I built a RAG engine to search Singaporean laws

https://github.com/adityaprasad-sudo/Explore-Singapore
1•ambitious_potat•58m ago•0 comments

Scams, Fraud, and Fake Apps: How to Protect Your Money in a Mobile-First Economy

https://blog.afrowallet.co/en_GB/tiers-app/scams-fraud-and-fake-apps-in-africa
1•jonatask•58m ago•0 comments

Porting Doom to My WebAssembly VM

https://irreducible.io/blog/porting-doom-to-wasm/
2•irreducible•59m ago•0 comments

Cognitive Style and Visual Attention in Multimodal Museum Exhibitions

https://www.mdpi.com/2075-5309/15/16/2968
1•rbanffy•1h ago•0 comments

Full-Blown Cross-Assembler in a Bash Script

https://hackaday.com/2026/02/06/full-blown-cross-assembler-in-a-bash-script/
1•grajmanu•1h ago•0 comments

Logic Puzzles: Why the Liar Is the Helpful One

https://blog.szczepan.org/blog/knights-and-knaves/
1•wasabi991011•1h ago•0 comments

Optical Combs Help Radio Telescopes Work Together

https://hackaday.com/2026/02/03/optical-combs-help-radio-telescopes-work-together/
2•toomuchtodo•1h ago•1 comments

Show HN: Myanon – fast, deterministic MySQL dump anonymizer

https://github.com/ppomes/myanon
1•pierrepomes•1h ago•0 comments

The Tao of Programming

http://www.canonical.org/~kragen/tao-of-programming.html
2•alexjplant•1h ago•0 comments

Forcing Rust: How Big Tech Lobbied the Government into a Language Mandate

https://medium.com/@ognian.milanov/forcing-rust-how-big-tech-lobbied-the-government-into-a-langua...
4•akagusu•1h ago•1 comments

PanelBench: We evaluated Cursor's Visual Editor on 89 test cases. 43 fail

https://www.tryinspector.com/blog/code-first-design-tools
2•quentinrl•1h ago•2 comments

Can You Draw Every Flag in PowerPoint? (Part 2) [video]

https://www.youtube.com/watch?v=BztF7MODsKI
1•fgclue•1h ago•0 comments

Show HN: MCP-baepsae – MCP server for iOS Simulator automation

https://github.com/oozoofrog/mcp-baepsae
1•oozoofrog•1h ago•0 comments

Make Trust Irrelevant: A Gamer's Take on Agentic AI Safety

https://github.com/Deso-PK/make-trust-irrelevant
9•DesoPK•1h ago•4 comments

Show HN: Sem – Semantic diffs and patches for Git

https://ataraxy-labs.github.io/sem/
1•rs545837•1h ago•1 comments

MD RAID or DRBD can be broken from userspace when using O_DIRECT

https://bugzilla.kernel.org/show_bug.cgi?id=99171
76•vbezhenar•3mo ago

Comments

saurik•3mo ago
So... one can, on a filesystem that is mirrored using MD RAID, from userspace, and with no special permissions (as it seems O_DIRECT does not require any), create a standard-looking file that has two different possible contents, depending on which RAID mirror it happens to be read from today? And this bug, which has been open for a decade now, has somehow not been considered an all-hands-on-deck security issue that undermines the integrity of every single mechanism people might ever use to validate the content of a file, because... checks notes... we should instead be "teaching [the attacker] not to use [O_DIRECT]"?

(FWIW, I appreciate the performance impact of a full fix here might be brutal, but the suggestion of requiring boot-args opt-in for O_DIRECT in these cases should not have been ignored, as there are a ton of people who might not actively need or even be using O_DIRECT, and the people who do should be required to know what they are getting into.)

summa_tech•3mo ago
Wouldn't the performance impact be that of setting the page read-only when the request is submitted, then doing a copy-on-write if the user process does write it? I mean, that's nonzero, TLB flushes being what they are. But they do happen a bunch anyway...
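
To make the mechanism being discussed concrete: the kernel DMAs directly from the user buffer for each mirror leg, so a process that keeps mutating the buffer while an O_DIRECT write is in flight can leave the legs holding different bytes for the same block. A minimal sketch (mine, not from the bug report; the mount point /mnt/md0 is hypothetical):

    /* write to an MD RAID 1 mirror with O_DIRECT while another thread
       keeps mutating the buffer; each leg may DMA a different snapshot */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    static char *buf;            /* 4 KiB, aligned as O_DIRECT requires */
    static atomic_int done;

    static void *flipper(void *arg)
    {
        (void)arg;
        while (!atomic_load(&done))
            buf[0] ^= 0xff;      /* mutate while writes are in flight */
        return NULL;
    }

    int main(void)
    {
        int fd = open("/mnt/md0/victim", O_WRONLY | O_CREAT | O_DIRECT, 0644);
        if (fd < 0)
            return 1;
        if (posix_memalign((void **)&buf, 4096, 4096))
            return 1;
        memset(buf, 'A', 4096);

        pthread_t t;
        pthread_create(&t, NULL, flipper, NULL);
        for (int i = 0; i < 100000; i++)
            pwrite(fd, buf, 4096, 0);   /* no snapshot/COW of the page */
        atomic_store(&done, 1);
        pthread_join(t, NULL);
        close(fd);
        return 0;
    }

After enough iterations the legs can disagree, and a later read returns whichever copy the kernel happens to pick.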
vbezhenar•3mo ago
Please note that some filesystems, namely bcachefs, btrfs, and zfs, seem to be immune to this issue, probably because they don't just directly delegate writes to the block layer with the O_DIRECT flag. But it is important to be aware of the issue.
saurik•3mo ago
While those are all "filesystems", they are also (internally) alternatives to MD RAID; like, you could run zfs on top of MD RAID, but it feels like a waste of zfs (and the same largely goes for btrfs and bcachefs). It thereby is not at all clear to me that it is the filesystems that are "immune to this issue" rather than their respective RAID-like behaviors, as it seems to be the latter that the discussion was focussing on (hence the initial mention of potentially adding btrfs to the issue, which did not otherwise mention any filesystem at all). Put another way: if you did do the unusual thing of running zfs on top of MD RAID, I actually bet you are still vulnerable to this scenario.

(Oh, unless you are maybe talking about something orthogonal to the fixes mentioned in the discussion thread, such as some property of the extra checksumming done by these filesystems? And so, even if the disks de-synchronize, maybe zfs will detect an error if it reads "the wrong one" off of the underlying MD RAID, rather than ending up with the other content?)

ludocode•3mo ago
These filesystems are not really alternatives because mdraid supports features those filesystems do not. For example, parity raid is still broken in btrfs (so it effectively does not support it), and last I checked zfs can't grow a parity raid array while mdraid can.

I run btrfs on top of mdraid in RAID6 so I can incrementally grow it while still having copy-on-write, checksums, snapshots, etc.

I hope that one day btrfs fixes its parity raid or bcachefs will become stable enough to fully replace mdraid. In the meantime I'll continue using mdraid with a copy-on-write filesystem on top.

bananapub•3mo ago
> zfs can't grow a parity raid array while mdraid can.

indeed out of date - that was merged a long time ago and shipped in a stable version earlier this year.

koverstreet•3mo ago
soon :)
bestham•3mo ago
Like everything else in engineering, it is a matter of trade-offs. The setup you chose to run really hampers the usefulness of having a checksumming file system, since it cannot simply get the correct data from another drive. As a peer pointed out, ZFS does support adding additional drives to expand a RAIDZ (with some trade-offs). What you cannot do is change the RAID topology on the fly.
Polizeiposaune•3mo ago
ZFS puts checksums in the block pointer, so, unless you disable checksums, it always knows the expected checksum of a block it is about to read.

When the actual checksum of what was read from storage doesn't match the expected value, it will try reading alternate locations (if there are any), and it will write back the corrected block if it succeeds in reconstructing a block with the expected checksum.
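
As a toy model of that verify-and-repair loop (not ZFS code; the checksum and the two-leg layout are invented for illustration):

    /* toy self-healing mirrored read: verify against the checksum the
       parent "block pointer" recorded, fall back to the other copy,
       and write the corrected data back over the bad one */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define LEGS 2
    #define BLK  8

    static uint8_t leg[LEGS][BLK];          /* two mirror copies */

    static uint32_t cksum(const uint8_t *p) /* FNV-1a, illustrative only */
    {
        uint32_t h = 2166136261u;
        for (int i = 0; i < BLK; i++) { h ^= p[i]; h *= 16777619u; }
        return h;
    }

    static int healing_read(uint32_t expected, uint8_t *out)
    {
        for (int i = 0; i < LEGS; i++) {
            if (cksum(leg[i]) != expected)
                continue;                   /* bad copy, try the next */
            memcpy(out, leg[i], BLK);
            for (int j = 0; j < LEGS; j++)  /* repair any bad copies */
                if (cksum(leg[j]) != expected)
                    memcpy(leg[j], out, BLK);
            return 0;
        }
        return -1;                          /* no copy matches: I/O error */
    }

    int main(void)
    {
        memset(leg[0], 'A', BLK);
        memset(leg[1], 'A', BLK);
        uint32_t expected = cksum(leg[0]);  /* recorded at write time */
        leg[0][3] = 'X';                    /* silent corruption on one leg */

        uint8_t out[BLK];
        if (healing_read(expected, out) == 0)
            printf("read ok, bad leg repaired\n");
        return 0;
    }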

tobias3•3mo ago
On the contrary. Btrfs had a long-standing issue where you could make the filesystem checksums not match with non-stable O_DIRECT writes (so even with a single disk).

This has only recently been fixed by disabling O_DIRECT for files with checksums (so the default): https://lore.kernel.org/linux-btrfs/54c7002136a047b7140c3647...

ZFS has O_DIRECT do nothing as well, as far as I know.

weinzierl•3mo ago
Linus very much opposed O_DIRECT from the start. If I remember correctly, he only introduced it under pressure from the "database people", i.e. his beloved Oracle.

No wonder O_DIRECT never saw much love.

"I hope some day we can just rip the damn disaster out."

-- Linus Torvalds, 2007

https://lkml.org/lkml/2007/1/10/235

jandrewrogers•3mo ago
This is one of several examples where Linus thinks something is bad because he doesn't understand how it is used.

Something like O_DIRECT is critical for high-performance storage software, for well-understood reasons. It enables entire categories of optimization by breaking a kernel abstraction that is intrinsically unfit for purpose; there is no way to fix it in the kernel, because the existence of the abstraction is itself the problem as a matter of theory.

As a database performance enjoyer, I've been using O_DIRECT for 15+ years. Something like it will always exist because removing it would make some high-performance, high-scale software strictly worse.

jeffbee•3mo ago
His lack of industry experience is the root cause of many issues in Linux.
vacuity•3mo ago
Although this is somewhat true, I think the bigger issue is expecting Linux to support all these use cases. Even if Linus accepted all use cases, it's a different story to maintain a kernel/OS that supports them all. The story from an engineering standpoint is just too unwieldy. A general-purpose OS can only go so far to optimize countless special-purpose uses.
tremon•3mo ago
This is not some minor niche use case though, and all other operating systems seem to have no trouble supporting OS fscache bypass.
vacuity•3mo ago
Considering how big Linux is and how many different use cases it supports, this could well be an undue maintenance burden for Linux where it wouldn't be for other operating systems. Though, I'll grant that I don't know the details here, and of course Linus is...opinionated.
jeffbee•3mo ago
I agree. I wish we had more varied operating systems.
raffraffraff•3mo ago
I asked my question in the wrong place!

"So is the original requirement for O_DIRECT addressed completely by O_SYNC and O_DSYNC"

I'm guessing you'd say "no"

jandrewrogers•3mo ago
O_DIRECT is separate from synchronization. There is no guarantee that O_DIRECT writes are durable, though a subset of hardware may work this way in fact.

The practical purpose of O_DIRECT is to have precise visibility and control over what is in memory, what is on disk, and any inflight I/O operations. This opens up an entire category of workload-aware execution scheduling optimizations that become crucial for performance as storage sizes increase.
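
A small sketch of that distinction, assuming Linux (the filename is arbitrary): O_DIRECT only bypasses the page cache, and durability has to be requested separately, e.g. with O_DSYNC or an explicit fdatasync().

    /* O_DIRECT skips the page cache; O_DSYNC is what makes the write
       durable before pwrite() returns. Plain O_DIRECT data may still
       sit in a volatile on-device cache. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("data.bin", O_WRONLY | O_CREAT | O_DIRECT | O_DSYNC, 0644);
        if (fd < 0)
            return 1;

        void *buf;
        if (posix_memalign(&buf, 4096, 4096))  /* O_DIRECT needs alignment */
            return 1;
        memset(buf, 0, 4096);

        if (pwrite(fd, buf, 4096, 0) != 4096)
            return 1;

        free(buf);
        return close(fd);
    }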

repstosb•3mo ago
Of course you should offer some method to disable caching/compression/encryption/ECC/etc. in intermediate layers whenever those are non-zero cost and might be duplicated at application level... that's the ancient end-to-end argument.

But that method doesn't necessarily have to be "something like O_DIRECT", which turns into a composition/complexity nightmare all for the sake of preserving the traditional open()/write()/close() interface. If you're really that concerned about performance, it's probably better to use an API that reflects the OS-level view of your data, as Linus pointed out in this ancient (2002!) thread:

https://yarchive.net/comp/linux/o_direct.html

Or, as noted in the 2007 thread that someone else linked above, at least posix_fadvise() lets the user specify a definite extent for the uncached region, which is invaluable information for the block and FS layers but not usually communicated at the time of open().

I think it's quite reasonable to consider the real problem to be the user code that after 20 years hasn't managed to migrate to something more sophisticated than open(O_DIRECT), rather than Linux's ability to handle every single cache invalidation corner case in every possible composition of block device wrappers. It really is a poorly-thought-out API from the OS implementor's perspective, even if at first seemingly simple and welcoming to an unsophisticated user.
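
For comparison, a sketch of the posix_fadvise() route mentioned above, assuming an ordinary buffered read loop (the filename is arbitrary): the data goes through the page cache, but the application tells the kernel each extent won't be reused.

    /* buffered reads plus POSIX_FADV_DONTNEED on each consumed extent:
       the cache is used for the I/O itself but not left polluted */
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("big.dat", O_RDONLY);
        if (fd < 0)
            return 1;

        char buf[1 << 16];
        off_t off = 0;
        ssize_t n;
        while ((n = pread(fd, buf, sizeof buf, off)) > 0) {
            /* ... consume buf ... */
            off += n;
            posix_fadvise(fd, off - n, n, POSIX_FADV_DONTNEED);
        }
        return close(fd);
    }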

jandrewrogers•3mo ago
O_DIRECT isn't about bypassing the kernel for the sake of reducing overhead. The gains would be small if that was the only reason.

O_DIRECT is used to disable cache replacement algorithms entirely in contexts where their NP-hardness becomes unavoidably pathological. You can't fix "fundamentally broken algorithm" with more knobs.

The canonical solution for workloads that break cache replacement is to dynamically rewrite the workload execution schedule in realtime at a very granular level. A prerequisite for this when storage is involved is to have perfect visibility and control over what is in memory, what is on disk, and any inflight I/O operations. The execution sequencing and I/O schedule are intertwined to the point of being essentially the same bit of code. For things like database systems this provides qualitative integer factor throughput improvements for many workloads, so very much worth the effort.

Without O_DIRECT, Linux will demonstrably destroy the performance of the carefully orchestrated schedule by obliviously running it through cache replacement algorithms in an attempt to be helpful. More practically, O_DIRECT also gives you fast, efficient visibility over the state of all storage the process is working with, which you need regardless.

Even if Linux handed strict explicit control of the page cache to the database process it doesn't solve the problem. Rewriting the execution schedule requires running algorithms across the internal page cache metadata. In modern systems this may be done 100 million times per second in userspace. You aren't gatekeeping analysis of that metadata with a syscall. The way Linux organizes and manages this metadata couldn't support that operation rate regardless.

Linux still needs to work well for processes that are well-served by normal cache replacement algorithms. O_DIRECT is perfectly adequate for disabling cache replacement algorithms in contexts where no one should be using cache replacement algorithms.
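
A stripped-down sketch of what that looks like in practice (all names invented here): the process owns a pool of aligned buffers, fills them with O_DIRECT reads, and applies its own replacement policy, so the kernel's cache replacement never touches the workload.

    /* application-managed "page cache": O_DIRECT reads into a fixed pool
       of aligned buffers, with a trivial app-chosen replacement policy
       standing in for the workload-aware scheduling described above */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define POOL  8
    #define BLKSZ 4096

    struct frame { void *buf; off_t off; int valid; };

    int main(void)
    {
        int fd = open("table.dat", O_RDONLY | O_DIRECT);
        if (fd < 0)
            return 1;

        struct frame pool[POOL];
        for (int i = 0; i < POOL; i++) {
            if (posix_memalign(&pool[i].buf, BLKSZ, BLKSZ))
                return 1;
            pool[i].valid = 0;
        }

        int next = 0;                        /* round-robin victim choice */
        off_t wanted[] = { 0, BLKSZ, 0, 4 * BLKSZ };
        for (int i = 0; i < 4; i++) {
            int hit = -1;
            for (int j = 0; j < POOL; j++)   /* app-side lookup, no syscall */
                if (pool[j].valid && pool[j].off == wanted[i])
                    hit = j;
            if (hit < 0) {                   /* miss: evict by app policy */
                hit = next;
                next = (next + 1) % POOL;
                if (pread(fd, pool[hit].buf, BLKSZ, wanted[i]) != BLKSZ)
                    return 1;
                pool[hit].off = wanted[i];
                pool[hit].valid = 1;
            }
            /* ... use pool[hit].buf ... */
        }
        return close(fd);
    }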

raffraffraff•3mo ago
So is the original requirement for O_DIRECT addressed completely by O_SYNC and O_DSYNC?

The way I was told it, if the database engine implements its own cache (like InnoDB, and presumably Oracle), you're just "doubling up" if you also use the OS cache. Perhaps Oracle is happy with its own internal caching (for reads).

I've seen a DB guy insist on O_DIRECT without implementing array-controller cache battery alerting, or checking whether the drives themselves had caches disabled. Nope, "O_DIRECT fixes everything!"... Although these days enterprise-class SSDs have little batteries and capacitors to handle power loss, so in the right circumstances that's kinda resolved too. But like the array-controller cache batteries, it's one more thing you have to monitor if you're running your own hardware.

karmakaze•3mo ago
This is nuts. I've used both MD RAID and O_DIRECT, though luckily not together on the same system. One system was with btrfs, so it may have been spared anyway. Footguns/landmines.
rwaksmunski•3mo ago
This, fsync() data corruption, BitterFS issues, lack of Audit support on io_uring, triplicated ext2/3/4 code bases. For the past 20 years, every time I consider moving mission-critical data off FreeBSD/ZFS, something like this pops up.
zokier•3mo ago
Personally, I think these problems are a sign that the POSIX fs/IO APIs are just not that good. Or rather, they have been stretched and extended way past their original intent/design. Stuff like zenfs gives an interesting glimpse of what could be.
burnt-resistor•3mo ago
FreeBSD 13+ threw away its faithful adaptation of production-proven code in favor of OpenZFS (ZoL).[0,1] I refuse to use OpenZFS (ZoL) because a RHEL file server self-corrupted, wouldn't mount rw any longer, and ZoL shrugged it off without any resolution except "buy more drives and start over".

Overall, there are grossly insufficient comprehensive testing tools, techniques, and culture in FOSS; FreeBSD, Linux, and most projects rely upon informal, under-documented, ad-hoc, meat-based scream testing rather than proper formal verification of correctness. Although no one ever said high-confidence software engineering was easy, it's essential for avoiding entire classes of CVEs and unexpected-operation bugs.

0: https://www.freebsd.org/releases/13.0R/relnotes/

1: https://lists.freebsd.org/pipermail/freebsd-fs/2018-December...

guipsp•3mo ago
The link you posted explains exactly why they threw it away. You may disagree, but the stakeholders did not.
burnt-resistor•3mo ago
Yes, I know. And I know iXsystems folks too. If you want stable, battle-tested ZFS, Solaris is the only supportable option, on Sun hardware like the good ol' (legacy) Thumper. OpenZFS isn't tested well enough, and there's too much hype and religious zealotry around it. For personal use it's probably fine for some people, but at this point semi-alternatives such as xfs and btrfs [thanks to Meta] have billions more hours of production usage.
anime_snail•3mo ago
There are no data checksums in xfs, no way to create a RAID, and no data compression. RAID 5 and RAID 6 are still unstable in btrfs. What alternative are we talking about?
anime_snail•3mo ago
Many mistakenly believe that FreeBSD took the ZFS implementation from Linux. It's not true, and it never was. OpenZFS is the result of merging code from illumos, Linux, and FreeBSD.