OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
419•klaussilveira•5h ago•94 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
771•xnx•11h ago•465 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
137•isitcontent•5h ago•15 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
131•dmpetrov•6h ago•54 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
37•quibono•4d ago•2 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
242•vecti•8h ago•116 comments

A century of hair samples proves leaded gas ban worked

https://arstechnica.com/science/2026/02/a-century-of-hair-samples-proves-leaded-gas-ban-worked/
63•jnord•3d ago•4 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
309•aktau•12h ago•153 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
309•ostacke•11h ago•84 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
168•eljojo•8h ago•124 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
391•todsacerdoti•13h ago•217 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
39•SerCe•1h ago•34 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
315•lstoll•12h ago•230 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
48•phreda4•5h ago•8 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
107•vmatsiiako•10h ago•34 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
183•i5heu•8h ago•128 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
9•kmm•4d ago•0 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
233•surprisetalk•3d ago•30 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
15•gfortaine•3h ago•1 comment

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
972•cdrnsf•15h ago•414 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
141•limoce•3d ago•79 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
40•rescrv•13h ago•17 comments

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
42•ray__•2h ago•11 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
34•lebovic•1d ago•11 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
76•antves•1d ago•57 comments

The Oklahoma Architect Who Turned Kitsch into Art

https://www.bloomberg.com/news/features/2026-01-31/oklahoma-architect-bruce-goff-s-wild-home-desi...
18•MarlonPro•3d ago•4 comments

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
38•nwparker•1d ago•9 comments

Claude Composer

https://www.josh.ing/blog/claude-composer
104•coloneltcb•2d ago•69 comments

How virtual textures work

https://www.shlom.dev/articles/how-virtual-textures-really-work/
25•betamark•12h ago•23 comments

Planetary Roller Screws

https://www.humanityslastmachine.com/#planetary-roller-screws
36•everlier•3d ago•8 comments

Zfsbackrest: pgBackRest-style encrypted backups for ZFS filesystems

https://github.com/gargakshit/zfsbackrest
62•sphericalkat•5mo ago

Comments

levkk•5mo ago
Finally! I’ve been looking for this for a long time. File-based backups for large Pg databases are not very scalable (even incremental ones); having this in my toolkit would be amazing.
craftkiller•5mo ago
I'm not sure I follow. Wouldn't this be file-based (zfs-dataset-based) incremental backups? I don't think this has anything to do with postgresql other than copying the style of pgBackRest.
blacklion•5mo ago
This uses `zfs send @snapshot`, which is block-level, not file-level.
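For concreteness, a minimal sketch of the two send modes (dataset and snapshot names here are illustrative):

    # full stream: every block in the snapshot
    zfs send tank/pgdata@mon > full.zfs
    # incremental stream: only blocks changed between the two snapshots
    zfs send -i tank/pgdata@mon tank/pgdata@tue > incr.zfs

Because the delta is computed at the block level, a one-byte change inside a large file adds a record or two to the stream, not the whole file.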
craftkiller•5mo ago
Oh! So the issue with large postgres backups is the number of files?
levkk•5mo ago
No. Postgres data files are 1 GB each: tables are split into 1 GB segment files. When you change just one byte in a table, a whole 1 GB file gets modified (just 1 byte of change, effectively). Your file-based backup tool now has to upload 1 GB of data to save 1 byte of actual changes.
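To make that concrete, here is roughly what a file-level tool sees (paths and OIDs are illustrative):

    # Postgres splits each table into 1 GB segment files
    $ ls -lh $PGDATA/base/16384/
    -rw------- 1 postgres postgres 1.0G ... 16385
    -rw------- 1 postgres postgres 1.0G ... 16385.1
    -rw------- 1 postgres postgres 310M ... 16385.2
    # update one row in the first segment and any per-file backup
    # must re-upload the entire 1.0G file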
Tostino•5mo ago
They fixed that in pgbackrest a while ago: https://pgbackrest.org/user-guide.html#backup/block

It was a major pain point for my backups for years.
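For reference, a minimal pgbackrest.conf sketch turning on the feature described in that guide; block incremental requires file bundling, and the option names below assume a recent pgBackRest release (2.46 or later, if I'm reading the release notes right):

    [global]
    repo1-path=/var/lib/pgbackrest
    repo1-bundle=y
    repo1-block=y

    [demo]
    pg1-path=/var/lib/postgresql/data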

levkk•5mo ago
Does that work with S3, etc.? I don't remember them allowing partial file uploads.
Tostino•5mo ago
I believe so, because it is done in conjunction with their file bundling feature and doesn't rely on support from the blob storage backend.

They create a new file with the diffs of a bundle of Postgres files, and upload that to blob storage.

mrflop•5mo ago
That's a fair point, and it's a known challenge with file-based backups on systems like Postgres. That said, some backup systems implement chunk-level deduplication and content-addressable storage, which can significantly reduce the amount of data actually transferred, even when large files change slightly.

For example, tools like Plakar (contributor here) split data into smaller immutable chunks and only store the modified ones, avoiding full re-uploads of 1 GB files when only a few bytes change.
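The core idea fits in a few lines of shell. A toy sketch with fixed-size chunks (real tools, including the one mentioned above, use content-defined chunking so an insert doesn't shift every chunk boundary):

    mkdir -p store
    split -b 4M -d bigfile chunk.
    for c in chunk.*; do
      h=$(sha256sum "$c" | awk '{print $1}')
      # content-addressable: a chunk is stored once, no matter
      # how many files or backup runs contain it
      [ -e "store/$h" ] || cp "$c" "store/$h"
    done

Re-running this after a small edit to bigfile stores only the chunks whose hashes changed.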

ylyn•5mo ago
This seems to store the zfs send stream. That's a bad idea.

> Incremental ZFS send streams do not have any of these properties and full ZFS send streams only have a few of them. Neither full nor incremental streams have any resilience against damage to the stream; a stream is either entirely intact or it's useless. Neither has selective restores or readily available indexes. Incremental streams are completely useless without everything they're based on. All of these issues will sooner or later cause you pain if you use ZFS streams as a backup format.

https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSSendNotA...
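The all-or-nothing property is easy to demonstrate: there is no random access into a stream, so the only way to sanity-check one is to walk it end to end. A sketch (`zstream dump` is the OpenZFS 2.x spelling; older releases ship zstreamdump instead):

    zfs send tank/data@snap > stream.zfs
    zstream dump stream.zfs > /dev/null   # parses every record; errors out on corruption
    # and restoring even one file means receiving the entire stream first:
    zfs receive tank/restore < stream.zfs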

TheNewsIsHere•5mo ago
If you’re expecting ZFS send streams to provide many of ZFS’s guarantees, then absolutely.

However, ZFS replication was originally designed with the use case in mind that organizations would store send streams as opaque blobs on tape. This is, in part, why storing send streams as blobs is still a thing people do.

There are some use cases where this makes sense. I’ve stored full send streams of archived ZFS file systems in S3(-compatible services) where integrity is handled at the platform level. In that use case I didn’t benefit from having every copy of the filesystem in question running on live storage media, and incremental sends/snapshots weren’t on the table. (I also SHA checksummed the resulting files, and did restore tests.)

There is also a misconception, frequently brought up in the ZFS community, that the send stream format isn’t stable between versions and cannot be relied upon in the future; it absolutely is stable, and the ZoL manpage for send explicitly states as much. As with anything in ZFS, though, you want to move versions forward or not at all, rather than backward.
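The archived-full-send workflow described a couple of paragraphs up might look roughly like this (bash; the bucket and dataset names are illustrative, and `aws s3 cp -` streams the upload from stdin):

    zfs send -c tank/archive@final \
      | tee >(sha256sum | awk '{print $1}' > final.zfs.sha256) \
      | aws s3 cp - s3://cold-bucket/archive/final.zfs

followed by a periodic restore test that downloads the object, re-checks the digest, and receives it into a scratch dataset.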

ticklyjunk•5mo ago
We are writing blobs/objects to ZFS tape volumes. It gives us an extra layer of defense against ransomware attacks and satisfies our 3-2-1 requirement. We make the blobs transparent with some metadata tags; the objects are recorded in the catalog, and we can pull individual files out of a blob. DeepSpace Storage manages the tape gateway and catalog for the objects. Short answer: yes, storing send streams to tape is a doable, robust workflow.