frontpage.

Show HN: Artifact Keeper – Open-Source Artifactory/Nexus Alternative in Rust

https://github.com/artifact-keeper
67•bsgeraci•6h ago•22 comments

Show HN: Local task classifier and dispatcher on RTX 3080

https://github.com/resilientworkflowsentinel/resilient-workflow-sentinel
23•Shubham_Amb•11h ago•2 comments

Show HN: Calfkit – an SDK to build distributed, event-driven AI agents on Kafka

https://github.com/calf-ai/calfkit-sdk
13•ryanyu•11h ago•1 comment

Show HN: Micropolis/SimCity Clone in Emacs Lisp

https://github.com/vkazanov/elcity
166•vkazanov•1d ago•46 comments

Show HN: Hacker Backlinks – HN Stories Most Linked To By HN Comments

https://hacker-backlinks.browserbox.io/?sort=linked&p=1
2•keepamovin•4h ago•1 comment

Show HN: Total Recall – write-gated memory for Claude Code

https://github.com/davegoldblatt/total-recall
8•davegoldblatt•10h ago•4 comments

Show HN: Craftplan – I built my wife a production management tool for her bakery

https://github.com/puemos/craftplan
561•deofoo•4d ago•164 comments

Show HN: Ghidra MCP Server – 110 tools for AI-assisted reverse engineering

https://github.com/bethington/ghidra-mcp
293•xerzes•2d ago•66 comments

Show HN: Morph – Videos of AI testing your PR, embedded in GitHub

https://morphllm.com/products/glance
34•bhaktatejas922•1d ago•12 comments

Show HN: Mmdr – 1000x faster Mermaid rendering in pure Rust (no browser)

https://github.com/1jehuang/mermaid-rs-renderer/blob/master/README.md
45•jeremyh1•1d ago•8 comments

Show HN: A state-based narrative engine for tabletop RPGs

https://github.com/dkoepsell/EverdiceRealm1
2•KoeppyLoco•11h ago•0 comments

Show HN: Safe-now.live – Ultra-light emergency info site (<10KB)

https://safe-now.live
193•tinuviel•3d ago•94 comments

Show HN: GitHub Browser Plugin for AI Contribution Blame in Pull Requests

https://blog.rbby.dev/posts/github-ai-contribution-blame-for-pull-requests/
61•rbbydotdev•2d ago•34 comments

Show HN: Octosphere, a tool to decentralise scientific publishing

https://octosphere.social/
63•crimsoneer•2d ago•34 comments

Show HN: Claude.md templates based on Boris Cherny's advice

https://github.com/abhishekray07/claude-md-templates
4•aray07•20h ago•0 comments

Show HN: Sandboxing untrusted code using WebAssembly

https://github.com/mavdol/capsule
76•mavdol04•2d ago•25 comments

Show HN: C discrete event SIM w stackful coroutines runs 45x faster than SimPy

https://github.com/ambonvik/cimba
68•ambonvik•2d ago•18 comments

Show HN: Adboost – A browser extension that adds ads to every webpage

https://github.com/surprisetalk/AdBoost
128•surprisetalk•3d ago•128 comments

Show HN: Accept-md – One command to make Next.js sites LLM-scraping friendly

https://www.accept.md/
5•hval•15h ago•0 comments

Show HN: Playwright Best Practices AI Skill

https://github.com/currents-dev/playwright-best-practices-skill
2•waltergalvao•15h ago•0 comments

Show HN: CLI tool to convert Markdown to rich HTML clipboard content

https://github.com/letientai299/md2cb
10•letientai299•1d ago•7 comments

Show HN: Pipeline and datasets for data-centric AI on real-world floor plans

https://archilyse.standfest.science
11•standfest•1d ago•4 comments

Show HN: An AI-Powered President Simulator

https://presiduck.feedscription.com/
14•tzhu1997•1d ago•0 comments

Show HN: FIPSPad – a FIPS 140-3 and NIST SP 800-53 minimal Notepad app in Rust

https://github.com/BrowserBox/FIPSPad
8•keepamovin•1d ago•3 comments

Show HN: Inklings – Handwritten family notes turned into a printed book monthly

https://inklings.social
8•archaeal•1d ago•1 comment

Show HN: Umbrel Pro – 4x NVMe SSD home server (CNC aluminum and walnut)

https://umbrel.com/umbrel-pro
2•mayankchhabra•17h ago•8 comments

Show HN: Buquet – Durable queues and workflows using only S3

https://horv.co/buquet.html
7•h0rv•1d ago•0 comments

Show HN: The Last Worm – Visualizing guinea worm eradication, from 3.5M to 10

https://echomoltinsson.github.io/last-worm/
8•onyx_writes•1d ago•1 comment

Show HN: A package manager for agent skills with built-in evals

https://tessl.io/
7•guypod•18h ago•2 comments

Show HN: FizzBuzz Enterprise Edition 2026. AI-powered divisibility detection

https://github.com/joshuaisaact/fizzbuzz-enterprise-edition-2026
2•joshuaisaact•19h ago•0 comments

Show HN: Umbrel Pro – 4x NVMe SSD home server (CNC aluminum and walnut)

https://umbrel.com/umbrel-pro
2•mayankchhabra•17h ago

Comments

mayankchhabra•17h ago
Hey HN, I’m one of the founders of Umbrel.

We’ve spent the last 5 years building umbrelOS to make self-hosting accessible. Yesterday, we launched our dream hardware: Umbrel Pro.

Specs:

- 4x NVMe SSD slots for storage (tool-less operation)
- Intel N300 CPU (8 cores, 3.8GHz)
- 16GB LPDDR5 RAM
- 64GB onboard eMMC with umbrelOS

The chassis is milled from a single block of aluminum and framed with real American Walnut wood.

Here is a video of the manufacturing process if you want to nerd out on the machining details: https://youtu.be/4IAXfgBnRe8

Also, we built a "FailSafe" mode in umbrelOS, powered by ZFS raidz1. The coolest part is the flexibility: you can start with a single SSD and enable RAID later when you add the second drive (without wiping data), or enable it from day one if you start with multiple drives.

We also really obsessed over the thermal design. The magnetic lid on the bottom has a thermal pad that makes direct contact with all 4 NVMe SSDs, transferring heat into the aluminum. Air is pulled through the side vents on the lid, flows over the SSDs, then the motherboard/CPU, and exits the back. It runs whisper-quiet.

Lots more details on our website, but we’ll be hanging out in the comments to answer any questions :)

f30e3dfed1c9•15h ago
If you start with one SSD, how can you later make that into a raidz1 of two? Also, a raidz1 of two block devices does not seem like a really great idea.
f30e3dfed1c9•15h ago
Another question: the hardware looks pretty nice. Can I run FreeBSD on it?
lukechilds•7h ago
Yes, you can run anything on it.
f30e3dfed1c9•14h ago
FWIW, this is what Gemini thinks you are likely doing. Is this correct, or close?

The Trick: The "Sparse File" Loopback

Since ZFS doesn't allow you to convert a single disk vdev to RAID-Z1, Umbrel's "FailSafe" mode almost certainly uses a sparse file to lie to the system.

Phase 1 (Single Drive): When you set up Umbrel with one 2TB SSD, they don't just create a simple ZFS pool. They likely create a RAID-Z1 pool consisting of your physical SSD and two "fake" virtual disks (large files on the same SSD).

The "Degraded" State: They immediately "offline" or "remove" the fake disks. The pool stays in a DEGRADED state but remains functional. To you, the UI just shows "1 Drive."

Phase 2 (Adding the 2nd Drive): When you plug in the second drive, umbrelOS likely runs a zpool replace command, replacing one of those "fake" virtual disks with your new physical SSD.

Resilvering: ZFS then copies the parity data onto the second disk.
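
As a rough sketch, that hypothesized trick maps onto stock ZFS commands as below (pool and device names are illustrative, and this is the commenter's guess, not Umbrel's confirmed design). One caveat: raidz1 only tolerates one missing member, so at most one placeholder can be offlined, not two as described.

    # Sparse placeholder file sized like a real drive (hypothetical path).
    truncate -s "$(blockdev --getsize64 /dev/nvme0n1)" /fake.img

    # 2-wide raidz1 from one real disk and the fake (-f: mixing a file and a disk).
    zpool create -f tank raidz1 /dev/nvme0n1 /fake.img

    # Offline the fake immediately; the pool runs DEGRADED but functional.
    zpool offline tank /fake.img
    rm /fake.img

    # Later, replace the fake with a real second drive and let ZFS resilver.
    zpool replace tank /fake.img /dev/nvme1n1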

lukechilds•6h ago
Hey, other founder here.

Great question! Close, but not exactly. We do use a sparse file but only very briefly during the transition.

We start with 1 SSD as a single top-level vdev. When you add the second SSD, you choose whether to enable FailSafe. If you don't enable FailSafe, you can just keep adding disks and they will be added as top-level vdevs, giving you maximum read and write performance by striping data across them. Very simple, no tricks.
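
In stock ZFS terms, that non-FailSafe growth path is a one-liner (pool and device names are illustrative, not Umbrel's actual tooling):

    # Each added disk becomes another top-level vdev;
    # ZFS stripes reads and writes across all top-level vdevs.
    zpool add tank /dev/nvme1n1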

However, if you choose FailSafe when you add your second SSD, we do a bit of ZFS topology surgery, but only very briefly. You start with a ZFS pool with a single top-level vdev on your current SSD, and you've just added a new, unused SSD and chosen to transition to FailSafe mode.

First we create a sparse file sized exactly to match your current active SSD. Then we create an entirely new pool with a single top-level raidz1 vdev backed by two devices: the new SSD and the sparse file. The sparse file acts as a placeholder for your current active SSD in the new pool. We then immediately remove the sparse file, leaving the new pool and its dataset degraded. We then take a snapshot of the first dataset and sync the entire snapshot over to the new pool. The system is live and running off the old pool for this whole process.
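
A minimal sketch of that first phase in stock ZFS commands (the pool names "old" and "new", device paths, and placeholder path are all illustrative assumptions, not Umbrel's actual tooling):

    # Pool "old" lives on /dev/nvme0n1; the new SSD is /dev/nvme1n1.
    # Placeholder sized exactly like the active SSD (sparse, uses no real space).
    truncate -s "$(blockdev --getsize64 /dev/nvme0n1)" /placeholder.img

    # New 2-wide raidz1: new SSD + placeholder (-f allows mixing a file and a disk).
    zpool create -f new raidz1 /dev/nvme1n1 /placeholder.img

    # Drop the placeholder immediately; "new" is now DEGRADED but writable.
    zpool offline new /placeholder.img
    rm /placeholder.img

    # Background sync while the system keeps running off "old".
    zfs snapshot -r old@sync1
    zfs send -R old@sync1 | zfs recv -F new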

Once the snapshot sync has completed, we very briefly reboot to switch to the new pool. (The entire OS runs on a writable overlay on the ZFS dataset.) This is an atomic process. Early in the boot process, before the ZFS dataset is mounted, we take an additional snapshot of the old dataset and do an incremental sync over to the new dataset. This is very quick and copies over any small changes made since the first snapshot was created.
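
Continuing the same illustrative sketch, the incremental catch-up at reboot would look roughly like:

    # Early in boot, before mounting: copy any changes made since the first sync.
    zfs snapshot -r old@sync2
    zfs send -R -i @sync1 old@sync2 | zfs recv -F new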

Once this sync has completed, the two separate pools contain identical data. We then mount the new pool and boot from it. At that point we can destroy the old pool and attach the old SSD to the new pool, bringing it out of the degraded state; the old SSD is resilvered in the new pool. The user is now running on a two-wide raidz1 dataset on the new pool, with data bit-for-bit identical to what they shut down with on the single-SSD dataset on the old pool.
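
And the finalization step, in the same illustrative sketch:

    # Now booted from "new": retire the old pool and donate its SSD.
    zpool destroy old
    # Replace the missing placeholder with the freed SSD; ZFS resilvers it
    # and the pool returns to ONLINE.
    zpool replace new /placeholder.img /dev/nvme0n1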

Despite sounding a bit wacky, the transition process is actually extremely safe. Apart from the switchover to the new dataset, the entire process happens in the background with the system online and fully functional. The transition can fail at almost any point and it will gracefully roll back to the single SSD. We only nuke the old single SSD at the very last step, so either we can roll back, or they have a working raidz1 array.

It sounds bad that the raidz1 goes through a period of degradation, but there is no additional risk here over not doing the transition. They are coming from a single-disk vdev that already cannot survive a single disk failure. We briefly put them through a degraded raidz1 array that also cannot survive a single disk loss (no riskier than how they were already operating), to then end up at a healthy raidz1 array that can survive a single disk loss, significantly increasing safety in a simple and frictionless way for the user.

Using two-wide raidz1 arrays also gets a bit of a kneejerk reaction, but it turns out that for our use case the downsides are practically negligible and the upsides are huge. Mirrors basically give you 2x the read speed of a two-disk raidz1, and less read-intensive rebuilds. Everything else is pretty much the same, or the differences are negligible. It turns out those benefits don't make a meaningful difference to us: a single SSD can already far exceed the bandwidth required to fully saturate our 2.5GbE connection, so the additional speed of a mirror is nice but not really that noticeable.

However, the absolute killer feature of raidz is raidz expansion. Once we've moved to a two-disk raidz1 array, which is not the fastest possible two-disk configuration but more than fast enough for what we need, we can add extra SSDs and do online expansions to a 3-disk raidz1 array, then a 4-disk raidz1 array, and so on. As you add more disks to the raidz1 array, you also stripe reads and writes across n-1 disks, so with 4 disks you exceed the mirror performance benefits anyway.
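
The one-by-one growth described here corresponds to OpenZFS's raidz expansion feature (shipped in OpenZFS 2.3); continuing the illustrative names from the sketches above:

    # Grow the 2-wide raidz1 one disk at a time, online.
    zpool attach new raidz1-0 /dev/nvme2n1   # begins expansion to 3-wide
    zpool status new                         # shows expansion progress
    zpool attach new raidz1-0 /dev/nvme3n1   # later, expand to 4-wide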

In theory we could start with one SSD, then migrate to a mirror with the second SSD, and then migrate again to a 3-disk raidz1 array using the sparse-file trick. However, it's extra complexity for negligible improvement. And when moving from the mirror to the raidz1, you degrade the user AFTER you've told them they're running FailSafe, which changes the transition from a practically zero-additional-risk operation into an extremely high-risk one.

Ultimately, what we think this design gives us is the simplest consumer RAID implementation with the highest safety guarantees that exist today. We provide ZFS-level data assurance, with Synology SHR-style one-by-one disk expansion, in an extremely simple and easy-to-use UI.

f30e3dfed1c9•6h ago
Thanks for the thorough answer. It is a little wacky and complicated but I agree it should be safe. I'm not really in the target market for your software but the hardware does look very nice. Good luck with it.
lukechilds•2h ago
Thanks, appreciate it!