frontpage.

Le Chat. Custom MCP Connectors. Memories

https://mistral.ai/news/le-chat-mcp-connectors-memories
15•Anon84•22m ago•1 comment

30 minutes with a stranger

https://pudding.cool/2025/06/hello-stranger/
434•MaxLeiter•5h ago•131 comments

Use Bayes rule to mechanically solve probability riddles

https://cloud.disroot.org/s/Ec4xTMFDteTrFio
9•zaik•3d ago•0 comments

The Color of the Future: A history of blue

https://www.hopefulmons.com/p/the-color-of-the-future
36•prismatic•1h ago•5 comments

Polars Cloud and Distributed Polars now available

https://pola.rs/posts/polars-cloud-launch/
52•jonbaer•8h ago•29 comments

I Should Have Loved Electrical Engineering

https://blog.tdhttt.com/post/love-ee/
16•tdhttt•3d ago•11 comments

Show HN: A roguelike game that runs inside Notepad++

https://github.com/thelowsunoverthemoon/NeuroPriest
93•lowsun•3d ago•10 comments

Étoilé – desktop built on GNUStep

http://etoileos.com/
152•pabs3•8h ago•57 comments

Claude Code: Now in Beta in Zed

https://zed.dev/blog/claude-code-via-acp
606•meetpateltech•20h ago•383 comments

Neovim Pack

https://neovim.io/doc/user/pack.html#vim.pack
189•k2enemy•11h ago•107 comments

Liquid Glass? That's what your M4 CPU is for

https://idiallo.com/byte-size/apple-liquid-glass
45•luismedel•1h ago•48 comments

Reverse engineering Solos smart glasses

https://jfloren.net/b/2025/8/28/0
98•floren•3d ago•14 comments

Minesweeper thermodynamics

https://oscarcunningham.com/792/minesweeper-thermodynamics/
128•robinhouston•2d ago•34 comments

The Bitter Lesson Is Misunderstood

https://obviouslywrong.substack.com/p/the-bitter-lesson-is-misunderstood
283•JnBrymn•6d ago•172 comments

AR Fluid Simulation Demo

https://danybittel.ch/fluid
93•danybittel•3d ago•19 comments

Melvyn Bragg steps down from presenting In Our Time

https://www.bbc.co.uk/mediacentre/2025/melvyn-bragg-decides-to-step-down-from-presenting-in-our-t...
153•aways•5h ago•92 comments

Nuclear: Desktop music player focused on streaming from free sources

https://github.com/nukeop/nuclear
335•indigodaddy•19h ago•211 comments

A Rebel Writer's First Revolt

https://www.vulture.com/article/arundhati-roy-mother-mary-comes-to-me-review.html
7•lermontov•1d ago•1 comment

Google was down in eastern EU and Turkey

https://www.novinite.com/articles/234225/Google+Down+in+Eastern+Europe+%28UPDATED%29
65•nurettin•3h ago•14 comments

Hledger 1.50

https://github.com/simonmichael/hledger/releases/tag/1.50
20•olexsmir•1h ago•1 comment

William Wordsworth's letter: "The Law of Copyright" (1838)

https://gutenberg.org/cache/epub/76806/pg76806-images.html
28•petethomas•6h ago•15 comments

New knot theory discovery overturns long-held mathematical assumption

https://www.scientificamerican.com/article/new-knot-theory-discovery-overturns-long-held-mathemat...
110•baruchel•1d ago•19 comments

Half an year on Alpine: just musl aside

https://blog.jutty.dev/posts/half-an-year-on-alpine/
34•zdw•2d ago•11 comments

Writing a C compiler in 500 lines of Python (2023)

https://vgel.me/posts/c500/
208•ofou•18h ago•60 comments

Understanding Transformers Using a Minimal Example

https://rti.github.io/gptvis/
221•rttti•19h ago•14 comments

Eels are fish

https://eocampaign1.com/web-version?p=495827fa-8295-11f0-8687-8f5da38390bd&pt=campaign&t=17562270...
137•speckx•21h ago•136 comments

What is it like to be a bat?

https://en.wikipedia.org/wiki/What_Is_It_Like_to_Be_a_Bat%3F
160•adityaathalye•17h ago•217 comments

ReMarkable Paper Pro Move

https://remarkable.com/products/remarkable-paper/pro-move
240•ksec•11h ago•286 comments

Say Bye with JavaScript Beacon

https://hemath.dev/blog/say-bye-with-javascript-beacon/
22•moebrowne•3d ago•14 comments

Speeding up PyTorch inference on Apple devices with AI-generated Metal kernels

https://gimletlabs.ai/blog/ai-generated-metal-kernels
170•nserrino•18h ago•26 comments

Zfsbackrest: Pgbackrest style encrypted backups for ZFS filesystems

https://github.com/gargakshit/zfsbackrest
62•sphericalkat•2d ago

Comments

levkk•2d ago
Finally! Been looking for this a long time. File-based backups for large Pg databases are not very scalable (even incremental); having this in my toolkit would be amazing.
craftkiller•2d ago
I'm not sure I follow. Wouldn't this be file-based (zfs-dataset-based) incremental backups? I don't think this has anything to do with postgresql other than copying the style of pgBackRest.
blacklion•2d ago
This uses `zfs send @snapshot`, which is block-level, not file-level.
craftkiller•2d ago
Oh! So the issue with large postgres backups is the number of files?
levkk•2d ago
No. Postgres data files are 1GB each. When you change just one byte in a table, the whole 1GB file is modified (effectively a 1-byte change). Your file-based backup tool now has to upload 1GB of data to save 1 byte of actual changes.
Tostino•2d ago
They fixed that in pgbackrest a while ago: https://pgbackrest.org/user-guide.html#backup/block

It was a major pain point for my backups for years.

levkk•2d ago
Does that work with S3, etc.? I don't remember them allowing partial file uploads.
Tostino•2d ago
I believe so, because it is done in conjunction with their file bundling feature and doesn't rely on support from the blob storage backend.

They create a new file with the diffs of a bundle of Postgres files, and upload that to blob storage.
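A minimal sketch of that idea, not pgBackRest's actual on-disk format (the block size and the hash-based comparison here are illustrative): compare a file to its previous version block by block and keep only the blocks that changed.

```python
import hashlib

BLOCK_SIZE = 8192  # illustrative; pgBackRest's real block size differs


def block_diff(old: bytes, new: bytes, block_size: int = BLOCK_SIZE):
    """Return (block_index, block) pairs for blocks that changed between versions."""
    changed = []
    for i in range(0, len(new), block_size):
        new_block = new[i:i + block_size]
        old_block = old[i:i + block_size]
        # Comparing digests stands in for the manifest lookup a real tool would do.
        if hashlib.sha256(new_block).digest() != hashlib.sha256(old_block).digest():
            changed.append((i // block_size, new_block))
    return changed


# Changing one byte of a large file marks only a single block dirty,
# so only that block's worth of data needs to reach blob storage.
old = bytes(64 * 1024)
new = bytearray(old)
new[100] = 0xFF
delta = block_diff(old, bytes(new))
```

The diff file (the collected changed blocks plus their indexes) is an ordinary object, so it can be uploaded whole to S3-style storage without any partial-upload support from the backend.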

mrflop•1d ago
That's a fair point, and it's a known challenge with file-based backups on systems like Postgres. That said, some backup systems implement chunk-level deduplication and content-addressable storage, which can significantly reduce the amount of data actually transferred, even when large files change slightly.

For example, tools like Plakar (contributor here) split data into smaller immutable chunks and only store the modified ones, avoiding full re-uploads of 1GB files when only a few bytes change.
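A toy version of chunk-level dedup with a content-addressable store (fixed-size chunks for brevity; tools like Plakar use content-defined chunking, which this sketch does not implement):

```python
import hashlib

CHUNK = 4096  # illustrative chunk size

store: dict[str, bytes] = {}  # content-addressable store: sha256 hex -> chunk


def upload(data: bytes):
    """Split data into chunks; transfer only chunks the store hasn't seen.

    Returns the manifest (ordered chunk hashes) and bytes actually sent.
    """
    manifest, sent = [], 0
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:  # only transfer chunks we haven't stored yet
            store[digest] = chunk
            sent += len(chunk)
        manifest.append(digest)
    return manifest, sent


# Eight distinct 4 KiB chunks; then flip one byte and "upload" again.
v1 = b"".join(bytes([i]) * CHUNK for i in range(8))
_, sent1 = upload(v1)             # first upload transfers all 32 KiB
v2 = bytearray(v1)
v2[0] ^= 0xFF
_, sent2 = upload(bytes(v2))      # only the one changed chunk is transferred
```

Fixed-size chunking breaks down when bytes are inserted rather than overwritten (every subsequent chunk boundary shifts), which is exactly what content-defined chunking avoids.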

ylyn•2d ago
This seems to store the zfs send stream. That's a bad idea.

> Incremental ZFS send streams do not have any of these properties and full ZFS send streams only have a few of them. Neither full nor incremental streams have any resilience against damage to the stream; a stream is either entirely intact or it's useless. Neither has selective restores or readily available indexes. Incremental streams are completely useless without everything they're based on. All of these issues will sooner or later cause you pain if you use ZFS streams as a backup format.

https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSSendNotA...

TheNewsIsHere•1d ago
If you’re looking to ZFS send streams for many of ZFS’s guarantees, absolutely.

However ZFS replication was originally designed with the assumption and use case in mind that organizations would store the send streams as opaque blobs on tape. This is, in part, why storing send streams as blobs is still a thing people do.

There are some use cases where this makes sense. I’ve stored full send streams of archived ZFS file systems in S3(-compatible services) where integrity is handled at the platform level. In that use case I didn’t benefit from having every copy of the filesystem in question running on live storage media, and incremental sends/snapshots weren’t on the table. (I also SHA checksummed the resulting files, and did restore tests.)

There is also a misconception that frequently gets brought up in the ZFS community that the send stream format isn’t stable between versions and cannot be relied upon in future, but it absolutely is stable. In fact the ZoL manpage for send explicitly states that it is. As with anything in ZFS though, you want to move versions forward or not at all, rather than backward.
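The checksum-and-restore-test step described above can be done without ever holding the stream in memory; a sketch (the chunked iteration stands in for reading a piped `zfs send`):

```python
import hashlib


def stream_sha256(chunks) -> str:
    """Compute a SHA-256 digest over a stream of byte chunks incrementally."""
    h = hashlib.sha256()
    for chunk in chunks:
        h.update(chunk)
    return h.hexdigest()


# Recording the digest alongside the uploaded object lets a later
# restore test verify the stream end-to-end before attempting `zfs receive`.
digest = stream_sha256([b"zfs ", b"send ", b"stream"])
```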

ticklyjunk•1d ago
we are writing blobs/ objects to zfs tape volumes. it gives us an extra layer of defense from ransom attacks and satisfies our 321 requirement. We make the blobs transparent with some metadata tags. The objects are recorded in the catalog and we can pull individual files out of the blob. deepspace storage manages the tape gateway and catalog for the objects. short answer yes storing send streams to tape is doable robust workflow.