frontpage.

CoreWeave's $30B Bet on GPU Market Infrastructure

https://davefriedman.substack.com/p/coreweaves-30-billion-bet-on-gpu
1•gmays•7m ago•0 comments

Creating and Hosting a Static Website on Cloudflare for Free

https://benjaminsmallwood.com/blog/creating-and-hosting-a-static-website-on-cloudflare-for-free/
1•bensmallwood•12m ago•1 comments

"The Stanford scam proves America is becoming a nation of grifters"

https://www.thetimes.com/us/news-today/article/students-stanford-grifters-ivy-league-w2g5z768z
1•cwwc•17m ago•0 comments

Elon Musk on Space GPUs, AI, Optimus, and His Manufacturing Method

https://cheekypint.substack.com/p/elon-musk-on-space-gpus-ai-optimus
2•simonebrunozzi•25m ago•0 comments

X (Twitter) is back with a new X API Pay-Per-Use model

https://developer.x.com/
2•eeko_systems•32m ago•0 comments

Zlob.h: 100% POSIX- and glibc-compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
2•neogoose•35m ago•1 comments

Show HN: Deterministic signal triangulation using a fixed .72% variance constant

https://github.com/mabrucker85-prog/Project_Lance_Core
1•mav5431•36m ago•1 comments

Scientists Discover Levitating Time Crystals You Can Hold, Defy Newton’s 3rd Law

https://phys.org/news/2026-02-scientists-levitating-crystals.html
3•sizzle•36m ago•0 comments

When Michelangelo Met Titian

https://www.wsj.com/arts-culture/books/michelangelo-titian-review-the-renaissances-odd-couple-e34...
1•keiferski•37m ago•0 comments

Solving NYT Pips with DLX

https://github.com/DonoG/NYTPips4Processing
1•impossiblecode•38m ago•1 comments

Baldur's Gate to be turned into TV series – without the game's developers

https://www.bbc.com/news/articles/c24g457y534o
2•vunderba•38m ago•0 comments

Interview with 'Just use a VPS' bro (OpenClaw version) [video]

https://www.youtube.com/watch?v=40SnEd1RWUU
1•dangtony98•43m ago•0 comments

EchoJEPA: Latent Predictive Foundation Model for Echocardiography

https://github.com/bowang-lab/EchoJEPA
1•euvin•51m ago•0 comments

Disabling Go Telemetry

https://go.dev/doc/telemetry
1•1vuio0pswjnm7•53m ago•0 comments

Effective Nihilism

https://www.effectivenihilism.org/
1•abetusk•56m ago•1 comments

The UK government didn't want you to see this report on ecosystem collapse

https://www.theguardian.com/commentisfree/2026/jan/27/uk-government-report-ecosystem-collapse-foi...
4•pabs3•58m ago•0 comments

No 10 blocks report on impact of rainforest collapse on food prices

https://www.thetimes.com/uk/environment/article/no-10-blocks-report-on-impact-of-rainforest-colla...
2•pabs3•58m ago•0 comments

Seedance 2.0 Is Coming

https://seedance-2.app/
1•Jenny249•1h ago•0 comments

Show HN: Fitspire – a simple 5-minute workout app for busy people (iOS)

https://apps.apple.com/us/app/fitspire-5-minute-workout/id6758784938
1•devavinoth12•1h ago•0 comments

Dexterous robotic hands: 2009 – 2014 – 2025

https://old.reddit.com/r/robotics/comments/1qp7z15/dexterous_robotic_hands_2009_2014_2025/
1•gmays•1h ago•0 comments

Interop 2025: A Year of Convergence

https://webkit.org/blog/17808/interop-2025-review/
1•ksec•1h ago•1 comments

JobArena – Human Intuition vs. Artificial Intelligence

https://www.jobarena.ai/
1•84634E1A607A•1h ago•0 comments

Concept Artists Say Generative AI References Only Make Their Jobs Harder

https://thisweekinvideogames.com/feature/concept-artists-in-games-say-generative-ai-references-on...
1•KittenInABox•1h ago•0 comments

Show HN: PaySentry – Open-source control plane for AI agent payments

https://github.com/mkmkkkkk/paysentry
2•mkyang•1h ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
2•ShinyaKoyano•1h ago•1 comments

The Crumbling Workflow Moat: Aggregation Theory's Final Chapter

https://twitter.com/nicbstme/status/2019149771706102022
1•SubiculumCode•1h ago•0 comments

Pax Historia – User and AI powered gaming platform

https://www.ycombinator.com/launches/PMu-pax-historia-user-ai-powered-gaming-platform
2•Osiris30•1h ago•0 comments

Show HN: I built a RAG engine to search Singaporean laws

https://github.com/adityaprasad-sudo/Explore-Singapore
3•ambitious_potat•1h ago•4 comments

Scams, Fraud, and Fake Apps: How to Protect Your Money in a Mobile-First Economy

https://blog.afrowallet.co/en_GB/tiers-app/scams-fraud-and-fake-apps-in-africa
1•jonatask•1h ago•0 comments

Porting Doom to My WebAssembly VM

https://irreducible.io/blog/porting-doom-to-wasm/
2•irreducible•1h ago•0 comments

NonRAID – fork of unRAID array kernel module

https://github.com/qvr/nonraid
65•qvr•6mo ago

Comments

miffe•6mo ago
What makes this different from regular md? I'm not familiar with unRAID.
eddythompson80•6mo ago
unRAID is geared towards homelab-style deployments. Its main advantage over typical RAID is its flexibility (https://www.snapraid.it/compare):

- It lets you throw in JBOD disks (of any size) and create a "RAID" over them.

- The biggest drive(s) must be the parity drive(s).

- N parity drives = surviving N drive failures.

- You can expand your storage pool one drive at a time, though you then need to recalculate parity for the full array.

The actual data is spread across drives. If a drive fails, you rebuild it from the parity. Another implementation of the same idea (using MergerFS + SnapRAID) is https://perfectmediaserver.com/02-tech-stack/snapraid/

It's a very simple model to reason about compared to something like ZFS. You can add/remove capacity AND protection as you go.

Its performance is significantly lower than ZFS's, of course.
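For the single-parity case, here's a minimal sketch of how that rebuild works (Python; the block values and drive count are made-up illustrations, not anything from the NonRAID module): parity is the XOR of the data drives, so any one failed drive can be reconstructed by XOR-ing the parity with the survivors.

    # Minimal sketch of single-parity protection (one parity drive).
    # Block values and drive count are illustrative only.
    from functools import reduce

    def xor_blocks(blocks):
        """XOR a list of equal-length byte blocks together."""
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

    data_drives = [b"\x01\x02", b"\x0f\x00", b"\xaa\x55"]  # three data drives
    parity = xor_blocks(data_drives)                       # what the parity drive stores

    # Drive 1 dies: rebuild it from the parity drive plus the survivors.
    rebuilt = xor_blocks([data_drives[0], data_drives[2], parity])
    assert rebuilt == data_drives[1]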

somat•6mo ago
Those are the features I liked about ceph, with the benefit that it uses the network to scale.
phoronixrly•6mo ago
I have an issue with this though... Won't you get a write on the parity drive for each write on any other drive? Doesn't seem well balanced... to be frank, looks like a good way to shoot yourself in the foot. Have a parity drive fail, then have another drive fail during the rebuild (a taxing process) and congrats -- your data is now eaten, but at least you saved a few hundred dollars by not buying drives of equal size...
hammyhavoc•6mo ago
No, because you have a cache pool and calculate the parity changes on a schedule, or when specific conditions are met, e.g., when remaining available storage on the cache pool runs low.

The cache pool is recommended to be mirrored for this reason (not many people see why I find this to be amusing).

phoronixrly•6mo ago
And let me guess, the cache pool is suggested to be on an SSD?

> Increased perceived write speed: You will want a drive that is as fast as possible. For the fastest possible speed, you'll want an SSD

Great, now I have an SSD that is treated as a consumable and will die and need to be replaced. Oh, and by the way, you are going to need two of them if you don't want to accidentally lose your data.

The alternative? Have the cache on a pair of spinning-rust drives, which will again be overloaded and are expected to fail earlier and need replacing, while also having the benefit of being slow... But at least you won't have to go through a full rebuild after a cache drive failure.

Man, I am not sold on the cost savings of this approach at all... Let alone the complexity and moving parts that can fail...

dawnerd•6mo ago
But you’d have that problem on any system really.
Dylan16807•6mo ago
> Great, now I have an SSD that is treated as a consummative and will die and need to be replaced.

It's only a consumable if you hit the write limit. Hard drive arrays are usually not intended for tons of writes. SSDs at $100 or less go up to at least 2000 terabytes written (WD Red SN700). How many hundreds of gigabytes of churn do you need per day?
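A back-of-the-envelope version of that endurance math (Python; the 2000 TBW rating is the one cited above, while the 200 GB/day churn figure is an assumed example):

    # How long an SSD cache lasts at a given daily write load.
    # 2000 TBW is the rating mentioned above; daily churn is an assumption.
    rated_tbw = 2000          # terabytes written over the rated life
    daily_churn_gb = 200      # assumed write volume hitting the cache per day

    years = rated_tbw * 1000 / daily_churn_gb / 365
    print(f"~{years:.0f} years to reach the write limit")  # ~27 years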

BoredPositron•6mo ago
Nobody has to rebuild after a cache drive failure. The data you would lose is the not-yet-moved data on the cache drive. You are really overthinking this, with prior knowledge that leads you to assumptions that are just not true.
hammyhavoc•6mo ago
Hit the comment depth limit again, but yes, SSDs!

Yes, Unraid can crash-and-burn in quite a lot of different ways. Ask me how I know! It's why I'm all-in on ZFS now.

eddythompson80•6mo ago
> Have a parity drive fail, then have another drive fail during the rebuild (a taxing process) and congrats -- your data is now eaten

That's just your drive-failure tolerance. It's the same risk/capacity trade-off as RAIDZ1, but with less performance and more flexibility when expanding. Which is exactly what I said.

If tolerating only one drive failure isn't acceptable to you, you wouldn't use RAIDZ1 and you wouldn't use one parity drive.

You can use 2 parity drives for RAIDZ2-like protection.

You can use 3 drives for RAIDZ3-like protection.

You can use 4 drives, 10 drives. Add and remove as much parity/capacity as you want. You can't do that easily with RAID/RAIDZ.

You manage your own risk/reward ratio.

phoronixrly•6mo ago
My issue is that due to uneven load balancing, the parity drive is going to fail more often than in a configuration with distributed parity, thus you are going to need to recalculate parity for the array more often, which is a risky and taxing operation for all drives in the array.

As hammyhavoc below noted, you can work around this by having cache, and 'by deferring the inevitable parity calculation until a later time (3:40 am server time, by default)'.

Which seems like a hell of a bodge -- both risky and expensive. Now the unevenly loaded drive is the cache drive, and it is also not parity-protected. So you need to mirror it if you don't want to lose your data, and the cache drives are still expected to fail before a drive in an evenly load-balanced array, so you're going to have to keep buying new ones?

Oh and btw you are still at risk of bit flips and garbage data due to cache not being checksum-protected.

eddythompson80•6mo ago
You need to run frequent scrubs on the whole ZFS array as well.

On unRAID/SnapRAID you only need to spin up 2 drives (one of them is always the parity).

On ZFS, you are always spinning up multiple drives too. Sure, the "parity" isn't always on the same drives, or at least it's up to ZFS to figure that out.

Nonetheless, this is all not really likely to have a significant impact. Spinning disks' failure rates don't exactly correlate with their utilization [1][2]. Between the SSD cache, ZFS scrubs, and general usage, I don't think the parity drives are necessarily more at risk. This is anecdotal, but when I ran an unRAID box for a few years myself, I only had one failure and it was a non-parity drive.

[1] Google study from 2007 on hard drive failure rates: https://static.googleusercontent.com/media/research.google.c...

[2] "Utilization" in the paper is defined as:

       The literature generally refers to utilization metrics by employing the term duty cycle which unfortunately has no consistent and precise definition, but can be roughly characterized as the fraction of time a drive is active out of the total powered-on time. What is widely reported in the literature is that higher duty cycles affect disk drives negatively
Dylan16807•6mo ago
> due to uneven load balancing, the parity drive is going to fail more often than in a configuration with distributed parity

Good, it can be the canary.

> thus you are going to need to recalculate parity for the array more often, which is a risky and taxing operation for all drives in the array

This is not worth worrying about.

First off, if the risk is linear then your increased parity failure is offset by decreased other-drive failure and I don't think you'll have more rebuilds.

And even if you do get more rebuilds, it's significantly less than one per year, and one extra full-drive read per year is a negligible amount of load. If you're worried about it all hitting at once then A) you should be scrubbing more often and B) throttle the rebuild.

nodja•6mo ago
The wear on the parity drive is the same regardless of which RAID technology you choose; unRAID just lets you have mismatched data drives. In fact, you could argue that unRAID is healthier for the drives, since a write doesn't trigger a write on all drives, just two. The situation you described is true for any RAID system.
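
A minimal sketch of that "just two drives" point for XOR parity (Python; the block values are made up, and this is the standard read-modify-write update rather than anything unRAID-specific):

    # Read-modify-write parity update: overwriting one block touches only the
    # target data drive and the parity drive. Values are illustrative.
    other_drives = [0b0000_1111, 0b1010_1010]   # data drives NOT being written
    old_data     = 0b0110_1010                  # block being overwritten
    new_data     = 0b1111_0000                  # its new contents

    old_parity = old_data ^ other_drives[0] ^ other_drives[1]
    new_parity = old_parity ^ old_data ^ new_data   # no need to read the other drives

    # Parity stays consistent even though the untouched drives were never read.
    assert new_parity == new_data ^ other_drives[0] ^ other_drives[1]
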
dawnerd•6mo ago
Depends. If you use a cache like they recommend, you'd only get parity writes when it runs its mover command. It definitely adds a lot of wear, but so far I haven't had any parity issues with two parity drives protecting 28 drives.
wongarsu•6mo ago
You want your drives to fail at different times! Which means you want your load to be unbalanced, from a reliability standpoint. If you put the same load on every drive (like in a traditional RAID5/6) then the drives are likely to fail at around the same time. Especially if you don't go out of your way to get drives from different manufacturing batches. But if you misbalance the amount of work the drives get they accumulate wear and tear at different rates and spend different amounts of time in idle, leading them to fail at wildly different times, giving you ample time to rebuild the raid.

I'd still recommend that anyone have two parity drives (which unRAID does support).

riddley•6mo ago
I often see these discussions where "drive failure" is mentioned, and I wish the phrase were "unrecoverable read error" instead, because that's more accurate. To me, "drive failure" conjures ideas of completely failed devices. An unrecoverable read error can and does happen on our bigger and bigger drives with regularity, and it will stop most RAID rebuilds in their tracks.
wongarsu•6mo ago
"unrecoverable read error" or "defects" is probably a better framing because it highlights the need to run regular scrubs of your RAID. If you don't search for errors but just wait until the disk no longer powers on you might find out that by then you have more errors than your RAID configuration can recover from
hammyhavoc•6mo ago
> If a drive fails, you rebuild it from the parity.

But if the file system is corrupt then you're hosed and end up with a `lost+found`. It sounds great until it fails, and then you realize why ZFS with replication makes sense. Unraid doesn't do automatic repairs from replicated ZFS datasets yet either even if you use individual ZFS disks within your Unraid array.

Whatarethese•6mo ago
Hence this is for home users who store media. ZFS is for the enterprise, where you have people to babysit storage solutions.
hammyhavoc•6mo ago
Now explain ZFS being added as a feature in Unraid 7. I'll wait.
xcrunner529•6mo ago
I currently use and love SnapRAID. I assume the reason for this project was real-time parity? That seems to be the only thing unRAID improves on?
eddythompson80•6mo ago
I think that's mainly it. It does give you some peace of mind that you are never in an "unprotected until next snapshot" state. But if you don't care, then there isn't much else that I noticed.
bane•6mo ago
I have an unRAID homelab. It's kind of really awesome in the sense that it lets the home user incrementally add compute and capacity over time without having to do it all in one big shebang, and it doesn't require nearly the fussing over that my prior Linux server did.

I started mine with a spare NUC and some portable USB drives, and it's grown into a beast with over 100TB spread across a high-performance SSD-backed ZFS pool and an unRAID array, 24 cores, running about 20 containers and a few VMs without breaking a sweat and so far (knock on wood) zero data loss.

All at a couple hundred dollars every so often over the years.

One performance trick it supports is overlaying fast SSD storage over the array; writes land on the SSDs and are periodically moved onto the slower underlying disks. It's transparent, so when you write to the array you can easily get several hundred MB/sec. I have two fast SSDs RAIDed there and easily saturate the network link when writing.
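As a rough mental model of that periodic move, here's a toy sketch (Python; the paths, the "move everything" policy, and the nightly schedule are assumptions for illustration, not unRAID's actual mover):

    # Toy "mover": migrate files that landed on the fast SSD cache onto the
    # slower, parity-protected array. Paths and policy are assumptions.
    import shutil
    from pathlib import Path

    CACHE = Path("/mnt/cache/share")   # fast pool that absorbs writes
    ARRAY = Path("/mnt/disk1/share")   # slower array disk

    def run_mover():
        for src in CACHE.rglob("*"):
            if src.is_file():
                dst = ARRAY / src.relative_to(CACHE)
                dst.parent.mkdir(parents=True, exist_ok=True)
                shutil.move(str(src), str(dst))   # file now lives on warm storage

    # Typically run on a schedule (e.g. nightly) rather than on every write.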

The server basically maintains itself; I only go in every so often and bump the Docker containers at this point. But I also know that I can add another disk to it in about 10 minutes and a couple hundred bucks.

TMWNN•6mo ago
> The server basically maintains itself; I only go in every so often and bump the Docker containers at this point. But I also know that I can add another disk to it in about 10 minutes and a couple hundred bucks.

Yes. unRAID rightfully gets a lot of attention for its flexibility in upgrading with disks of any size (which feels like magic), but for me its current >100-day uptime while maintaining an unRAID array, three VMs, and a few other services is just as important. The only maintenance I do is occasionally look at notifications and, every month (if that often), upgrade plugins/Docker containers with new versions.

benjiro•6mo ago
This is all ignoring its biggest advantage vs. mdraid or ZFS RAID:

The ability to sleep all / individual HDDs:

* Only keep awake the drives that you actually read data from.

* Only keep awake the drive that you write data to, plus the N parity drive(s).

For home users, that is a TON of energy saving. And no, your "poor" HDDs are not going to suffer from spinning up a few times per day.

You can spin a HDD up/down 10x per day for 100 years before you come even close to the manufacturer's (lowest) HDD limits. Let alone if you have 4+ drives and a bit of data spreading, or combine it with unRAID's NVMe/SSD caching layer.

So unlike mdraid or ZFS, where it's an all-or-nothing situation, unRAID/SnapRAID gives you a ton of energy savings.

And I understand the US folks here do not care when they pay maybe 6 to 12 cents/kWh, but much of the rest of the world has electricity prices in the 30 to 50 cents/kWh range, and it stacks up very fast when a drive draws <1 W asleep vs. 5-7 W spinning, 24/7...
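A quick worked example of that cost argument (Python; the drive count and the 6 W spinning / 1 W sleeping figures are assumptions in line with the numbers above):

    # Yearly electricity cost of drives spinning 24/7 vs. mostly sleeping.
    # Drive count and wattages are illustrative assumptions.
    drives = 6
    spinning_w, sleeping_w = 6.0, 1.0     # per drive
    price_per_kwh = 0.40                  # within the 0.30-0.50 range cited above

    hours = 24 * 365
    cost = lambda w: drives * w * hours / 1000 * price_per_kwh
    print(f"spinning: ~{cost(spinning_w):.0f}/yr, sleeping: ~{cost(sleeping_w):.0f}/yr")
    # roughly 126/yr vs 21/yr at these numbers: ~105/yr saved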

theshrike79•6mo ago
This is the biggie for me, not for electricity but for noise.

My /archive share is two big-ass SAS drives that were cheap. They are also LOUD.

But since I don't poke around the archive much, they sleep most of the time.

wongarsu•6mo ago
md takes multiple partitions and makes a virtual device you can put a file system on, with striping and traditional RAID levels.

unRAID takes multiple partitions, dedicates one or two of them to parity, and hands the other partitions through. You can handle those normally, putting different file systems on different partitions in the same array and treating them as completely separate file systems that happen to be protected by the same parity drives.

This lets you easily mix drives of different sizes (as long as the parity drives are at least as large as the largest data partition) and add, remove, or upgrade drives with relative ease. It also means that every read operation only goes to one drive, and every write goes to that drive plus the parity drives. Depending on how you organize your files you can have drives that are basically never on, while in an md array every drive is used for basically every read or write.

The disadvantages are that you lose out on the performance advantages of a RAID, and that the RAID only really protects against losing entire disks. You can't easily repair single blocks the way a ZFS RAID could. Also, you have a number of file systems to balance (which unRAID helps you with, but I don't know how much of that is in this module).

phoronixrly•6mo ago
Not sure what you mean by 'easily repair single blocks the way a ZFS RAID could', but often the physical devices handle bad blocks, and md has one safety layer on top of this: bad-block tracking. No relocation in md though, AFAIK.
hammyhavoc•6mo ago
If you have a redundant dataset (#1 reason to use ZFS replication) then you can repair a ZFS dataset.
phoronixrly•6mo ago
I'm sorry, I still don't quite follow... If you have a RAID5, you can repair a drive failure... Weren't we talking about handling 'blocks'? Is it bad blocks or bad block devices (a.k.a. dead drives)?
hammyhavoc•6mo ago
Hit the comment depth limit (so annoying), but the comment about repairing blocks means that you can repair bitrot/corruption/malicious changes/whatever down to the block level of a ZFS dataset if you have a redundant replicated dataset.

The magic of ZFS repairs isn't in RAID itself, IMO, it's in being able to take your cold replicated dataset, e.g., from LTO, an external disk, remote server etc, and repair any issues without needing to resilver, stress the whole array, interrupt access, or hurt performance.

RAID can correct issues, yes, but ZFS as a filesystem can repair itself from redundant datasets. Likewise, you can mount the snapshots like Apple Time Machine and get back specific versions of individual files.

I wish HN didn't limit comment depth as these are great questions and this is heavily under-discussed, but it's arguably the best reason to run ZFS, IMO.

Another way of putting this: you don't need a RAID array, you can use individual ZFS disks and still replicate and repair them. There's no limit to how many replicas or mediums you use, either. It's quite amazing for self-healing problems with your datasets.

aspenmayer•6mo ago
> Hit the comment depth limit (so annoying)

I think it’s actually a flamewar detector that you may be hitting. In any case, next time try selecting the timestamp of the comment which you wish to reply to; this works when the reply button is missing and the comment isn’t [dead] or [flagged][dead] iirc.

hammyhavoc•6mo ago
Thanks!
tomhow•6mo ago
The rate limiter is only applied to accounts that post too many comments that are of low-quality or break the guidelines. We're always open to turning off the rate limiter on an account but we need to see that the user has shown a sincere intent to use HN as intended over a reasonable period of time.
hammyhavoc•6mo ago
I've been using HN since 2018, and whilst I'm a bit rough around the edges, I generally interact with the best of intentions as long as my blood glucose is within range (which, with a CGM, is more than at any other point in my life).
wongarsu•6mo ago
What I mean is that unRAID, ZFS and md all allow you to run a scrub over your RAID to check for bit rot. That might happen for all kinds of reasons, including cosmic rays just flipping bits on the drive platter. The issue is that unRAID and md can't do much if they detect a block/stripe where the parity doesn't match the data (because they don't know which of the drives suffered a bit flip). ZFS, on the other hand, can repair the data in that scenario because it keeps checksums.

Now a fairly common scenario is to use unRAID with ZFS as the file system for each partition, having Y independent ZFS file systems. In that case the information to repair blocks exists in theory: a ZFS scrub will tell you which blocks are bad, and you could repair those from parity. And an unRAID parity check will do the same for the parity drives. But there is no system to repair single blocks. You either have to dig in and do it yourself or just resilver the whole disk.
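A tiny sketch of that difference (Python; the two-drive stripe and CRC32 as a stand-in for ZFS's checksums are illustrative assumptions): parity only says the stripe is inconsistent, while a per-block checksum says which block is wrong, which is what makes automatic repair possible.

    # Parity detects THAT a stripe is inconsistent; per-block checksums tell
    # you WHICH block rotted, so it can be rebuilt from parity.
    import zlib

    data = [b"alpha", b"bravo"]                    # blocks on two data drives
    parity = bytes(a ^ b for a, b in zip(*data))
    checksums = [zlib.crc32(blk) for blk in data]  # simplified stand-in for ZFS checksums

    data[1] = b"brXvo"                             # silent bit rot on drive 2

    # Parity check: mismatch, but either drive could be the culprit.
    assert bytes(a ^ b for a, b in zip(*data)) != parity

    # Checksum check: pinpoints drive 2, so its block can be rebuilt from parity.
    bad = [i for i, blk in enumerate(data) if zlib.crc32(blk) != checksums[i]]
    repaired = bytes(x ^ y for x, y in zip(data[0], parity))
    assert bad == [1] and repaired == b"bravo"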

reginald78•6mo ago
The silly part is that unRAID has all the pieces to do this. The btrfs file system, which unRAID supports for array disks, could identify bitrot, and the unRAID array supports virtualizing missing disks by essentially reconstructing the disk from parity and all of the other disks. Combining those two would allow rebuilding a rotted file with features already present.

My impression is that the unRAID developers have kind of neglected enhancing the core feature of their product. They seem to have put a lot of effort into ZFS support, which isn't that easy to integrate as it isn't part of the kernel, when ZFS isn't really the core draw of their product in the first place.

bayindirh•6mo ago
I have test-driven BTRFS in the past.

It's too metadata-heavy and really only shines on high-IOPS SSDs; it's a no-go for spinning drives, especially if they're external.

RAID5/6 is still not production-ready [0], and having a non-production-ready feature not gated behind an "I know what I'm doing" switch is dangerous. I believe BTRFS' customers are not small fish, but enterprises which protect their data in other ways.

So, I think unRAID does the right thing by not doubling down on something half-baked. ZFS is battle-tested at this point.

I'm personally building a small NAS for myself and researching the software stack to use. I can't trust BTRFS with my data, especially in RAID5/6 form, which is what I'm planning to do.

[0]: https://btrfs.readthedocs.io/en/latest/btrfs-man5.html#raid5...

nullc•6mo ago
Are any filesystems offering file-level FEC yet?

If a file has a hundred thousand blocks, you could tack on a thousand blocks of error correction for the cost of making it just 1% larger. If the file is a seldom/never-written archive, it's essentially free beyond the space it takes up.

The kind of massive data archives that you want to minimize storage costs of tend to be read-mostly affairs.

It won't save you from a disk failure, but I see bad blocks much more often than whole-disk failures these days... and RAID 5/6 has rather high costs while still being quite vulnerable to the possibility of an aligned fault on multiple disks.

Of course you could use par or similar tools, but that lacks nice FS-transparent integration and in particular doesn't benefit from checksums already implemented in (some) filesystems (as you need only half the error-correction data to recover from known-position errors, and/or can use erasure-only codes).
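Back-of-the-envelope numbers for that trade-off (Python; treating the FEC as an ideal erasure code, e.g. Reed-Solomon, is an assumption, and the block counts follow the example above):

    # File-level FEC sizing, assuming an ideal (MDS) erasure code: k data
    # blocks + m parity blocks survive any m lost or known-bad blocks.
    data_blocks = 100_000
    parity_blocks = 1_000

    print(f"size overhead: {parity_blocks / data_blocks:.1%}")   # 1.0%
    print(f"bad blocks tolerated (known positions): {parity_blocks}")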

leptons•6mo ago
RAID and any other fault-tolerance scheme cannot be the only way you protect your data. I have two RAID 10 arrays, one for active data and one for backup, and the backup system has an LTO tape drive, where I also use PAR parity files on the tape backups. Important stuff is backed up to multiple tape sets. Both systems are in different buildings, with tapes stored in a third.

My point is, it doesn't much matter what your FS does, so long as you have 3 or more of them.

zamadatix•6mo ago
There is no such thing as a guaranteed data storage system. The only thing you can choose is how reliable is reliable enough (or how reliable you can afford). Parity or RAID can get you more granular reliability increments than straight copies can provide, or even just far greater convenience when you do have copies.
nullc•6mo ago
Without error coding only a perfect channel can give lossless performance. But with error coding even a fairly lossy channel can give performance that is arbitrarily close to lossless, depending only on how much capacity you're willing to waste.

As the number of blocks on our storage devices grows, the probability that there is at least one with an error goes up. Even with RAID 5 the probability that there are two errors in one stripe unit at the same time can become non-negligible.

Worse, for RAID 5/6 the system normally depends on the device detecting corruption. When it doesn't, the RAID will not only fail to fix the corruption but can potentially propagate it to other data. (I understand that ZFS at least can use its internal checksums to handle this case.)
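A quick sketch of how that probability scales with drive size (Python; the 1e-14 per-bit rate is a commonly quoted consumer-HDD spec figure, and the 12 TB capacity is an assumption):

    # Probability of at least one unrecoverable read error (URE) over a full
    # drive read, assuming independent errors at the spec-sheet rate.
    import math

    ure_per_bit = 1e-14          # common consumer HDD rating
    capacity_bits = 12e12 * 8    # assumed 12 TB drive

    p = -math.expm1(capacity_bits * math.log1p(-ure_per_bit))
    print(f"P(>=1 URE reading the whole drive) ~ {p:.0%}")   # ~62%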

oakwhiz•6mo ago
Rateless erasure FEC can go even further.
IgnaciusMonk•6mo ago
Newer, smaller (physically, in process geometry, not capacity) SSD cells = more times per year you have to rewrite (refresh) that cell / the whole disk so you do not lose data, anyway.

Any sane person uses an FS / system with dedup in it, so you can have 7+4+12 snapshots for 5TB of data taking only 7TB of space, etc.

You want snapshots. For example, Manjaro Linux (Arch-based) uses BTRFS, which is capable of snapshots, so before every update it will make a snapshot; if the update fails, you can just select the working state to go back to in GRUB...

Alma Linux uses BTRFS too, but I'm not sure if they have this functionality as well.

ZFS, bcache, BTRFS, checksums.

dm-integrity inside the Linux kernel can provide checksums for essentially any FS: just " lvcreate --type raidN --raidintegrity y " and you have checksums + RAID in Linux.

IgnaciusMonk•6mo ago
Without the unRAID / NonRAID nonsense. Just "pure Linux™" (trademark owned by Linus Torvalds) filesystem scrubbing.

Dylan16807•6mo ago
I think the closest you're going to get is splitting the drive into 20 partitions and running RAIDZ across them.
nullc•6mo ago
Yeesh, that would have pretty poor performance and non-trivial overhead for the level of protection it gives against bad blocks.
namibj•6mo ago
Ceph
zenoprax•6mo ago
This should be a "Show HN".
IgnaciusMonk•6mo ago
Intel RSTe / VROC is integrated directly into your CPU/chipset. You just configure it in the "BIOS" and Linux, BSD, or Windows will boot/install on top of it with no fuss.

Or every Linux distro and BSD has ZFS available,

or every Linux distro has LVM RAID available,

or BTRFS has RAID 1, 0, and 10,

or Windows has its own software RAID; just open the Storage Spaces / Disk Management console.

So the whole unRAID / NonRAID thing is just a nonsensical waste of effort for everyone.

Why would I invest time and effort in this technology from a small team, if I can have technology supervised and maintained by Linux kernel devs? Makes no sense.

And the things I mentioned here have been around longer than unRAID/NonRAID has existed, so it was nonsensical from the start.

lazylizard•6mo ago
I think unRAID is a RAID 4.
IgnaciusMonk•6mo ago
unRAID is nonsense for silly people (50-year-old managers).