frontpage.

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
97•valyala•4h ago•16 comments

The F Word

http://muratbuffalo.blogspot.com/2026/02/friction.html
43•zdw•3d ago•9 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
23•gnufx•2h ago•19 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
55•surprisetalk•3h ago•54 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
97•mellosouls•6h ago•175 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
144•AlexeyBrin•9h ago•26 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
100•vinhnx•7h ago•13 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
850•klaussilveira•1d ago•258 comments

I write games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
138•valyala•4h ago•109 comments

First Proof

https://arxiv.org/abs/2602.05192
68•samasblack•6h ago•52 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
7•mbitsnbites•3d ago•0 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1093•xnx•1d ago•618 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
64•thelok•6h ago•10 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
235•jesperordrup•14h ago•80 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
519•theblazehen•3d ago•191 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
94•onurkanbkrc•9h ago•5 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
31•momciloo•4h ago•5 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
13•languid-photic•3d ago•4 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
259•alainrk•8h ago•425 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
186•1vuio0pswjnm7•10h ago•267 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
48•rbanffy•4d ago•9 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
615•nar001•8h ago•272 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
36•marklit•5d ago•6 comments

We mourn our craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
348•ColinWright•3h ago•414 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
125•videotopia•4d ago•39 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
99•speckx•4d ago•116 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
33•sandGorgon•2d ago•15 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
211•limoce•4d ago•119 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
288•isitcontent•1d ago•38 comments

History and Timeline of the Proco Rat Pedal (2021)

https://web.archive.org/web/20211030011207/https://thejhsshow.com/articles/history-and-timeline-o...
20•brudgers•5d ago•5 comments

Zfsbackrest: Pgbackrest style encrypted backups for ZFS filesystems

https://github.com/gargakshit/zfsbackrest
62•sphericalkat•5mo ago

Comments

levkk•5mo ago
Finally! Been looking for this for a long time. File-based backups for large Pg databases are not very scalable (even incremental ones), so having this in my toolkit would be amazing.
craftkiller•5mo ago
I'm not sure I follow. Wouldn't this be file-based (zfs-dataset-based) incremental backups? I don't think this has anything to do with postgresql other than copying the style of pgBackRest.
blacklion•5mo ago
This uses `zfs send @snapshot`, which is block-level, not file-level.
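For readers unfamiliar with the mechanism: an incremental send ships only the blocks that changed between two snapshots, regardless of how many files those blocks belong to. A rough sketch (the pool/dataset names here are made up for illustration):

```shell
# Full stream: every block referenced by the snapshot.
zfs snapshot tank/data@base
zfs send tank/data@base > full.zfs

# Incremental stream: only the blocks changed since @base.
zfs snapshot tank/data@next
zfs send -i tank/data@base tank/data@next > incr.zfs
```

Restoring `incr.zfs` requires that the receiving pool already holds `@base`, which is exactly the dependency the later comments discuss.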
craftkiller•5mo ago
Oh! So the issue with large postgres backups is the number of files?
levkk•5mo ago
No. Postgres data files are 1 GB each. When you change just one byte in a table, the whole 1 GB file is updated (effectively a 1-byte change). Your file-based backup tool now has to upload 1 GB of data to save 1 byte of actual changes.
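The amplification is easy to demonstrate. This toy sketch (scaled down to a 1 MiB segment so it runs quickly) shows why a whole-file incremental backup re-ships the entire segment after a one-byte change:

```python
import hashlib
import os

SEGMENT_SIZE = 1 << 20  # 1 MiB stand-in for Postgres's 1 GB segment files

# A "table segment" full of data.
segment = bytearray(os.urandom(SEGMENT_SIZE))
old_digest = hashlib.sha256(segment).hexdigest()

# Flip a single byte, as a one-row UPDATE might.
segment[12345] ^= 0xFF
new_digest = hashlib.sha256(segment).hexdigest()

# A file-level incremental backup decides per file (mtime/checksum):
# the file changed, so the entire segment is re-uploaded.
changed = new_digest != old_digest
bytes_uploaded = len(segment) if changed else 0

print("logical change: 1 byte")
print(f"uploaded by file-level incremental: {bytes_uploaded} bytes")
```

A block-level scheme (like `zfs send` or pgBackRest's block incremental) would ship only the block containing the changed byte.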
Tostino•5mo ago
They fixed that in pgbackrest a while ago: https://pgbackrest.org/user-guide.html#backup/block

It was a major pain point for my backups for years.

levkk•5mo ago
Does that work with S3, etc.? I don't remember them allowing partial file uploads.
Tostino•5mo ago
I believe so, because it is done in conjunction with their file bundling feature and doesn't rely on support from the blob storage backend.

They create a new file with the diffs of a bundle of Postgres files, and upload that to blob storage.
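The idea can be sketched in a few lines. This is a toy fixed-size-block diff, not pgBackRest's actual on-disk format; the block size and helper names are made up:

```python
import hashlib
import os

BLOCK_SIZE = 8192  # illustrative block size

def block_hashes(data: bytes) -> list[str]:
    # Hash each fixed-size block independently.
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def block_diff(old: bytes, new: bytes) -> dict[int, bytes]:
    """Return only the blocks that changed, keyed by block index."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    return {i: new[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE]
            for i, h in enumerate(new_h)
            if i >= len(old_h) or h != old_h[i]}

old = os.urandom(64 * BLOCK_SIZE)
new = bytearray(old)
new[5 * BLOCK_SIZE + 100] ^= 0xFF  # one-byte change inside block 5
delta = block_diff(old, bytes(new))

# The "diff file" shipped to blob storage holds one block, not the whole file,
# so no partial-upload support is needed from S3.
print(len(delta), "changed block(s),", sum(map(len, delta.values())), "bytes")
```

The delta is a regular new object, which is why this works on any blob store.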

mrflop•5mo ago
That's a fair point, and it's a known challenge with file-based backups on systems like Postgres. That said, some backup systems implement chunk-level deduplication and content-addressable storage, which can significantly reduce the amount of data actually transferred, even when large files change slightly.

For example, tools like Plakar (contributor here) split data into smaller immutable chunks and only store the modified ones, avoiding full re-uploads of 1GB files when only a few bytes change.
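A minimal sketch of the content-addressable idea, assuming fixed-size chunks for simplicity (real tools such as Plakar use more sophisticated content-defined chunking, which also survives insertions that shift data):

```python
import hashlib
import os

CHUNK = 4096

class ChunkStore:
    """Toy content-addressable store: chunks are keyed by their SHA-256."""
    def __init__(self):
        self.chunks: dict[str, bytes] = {}
        self.uploaded = 0

    def put(self, data: bytes) -> list[str]:
        refs = []
        for i in range(0, len(data), CHUNK):
            c = data[i:i + CHUNK]
            h = hashlib.sha256(c).hexdigest()
            if h not in self.chunks:  # dedup: each unique chunk stored once
                self.chunks[h] = c
                self.uploaded += len(c)
            refs.append(h)
        return refs  # a "snapshot" is just this list of chunk hashes

store = ChunkStore()
f = bytearray(os.urandom(256 * CHUNK))  # ~1 MiB file
store.put(bytes(f))
first = store.uploaded

f[0] ^= 0xFF          # tiny edit
store.put(bytes(f))   # only the one modified chunk is new
print("second backup uploaded", store.uploaded - first, "bytes")
```

Each snapshot is just a list of chunk references, so unchanged data is never re-uploaded.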

ylyn•5mo ago
This seems to store the zfs send stream. That's a bad idea.

> Incremental ZFS send streams do not have any of these properties and full ZFS send streams only have a few of them. Neither full nor incremental streams have any resilience against damage to the stream; a stream is either entirely intact or it's useless. Neither has selective restores or readily available indexes. Incremental streams are completely useless without everything they're based on. All of these issues will sooner or later cause you pain if you use ZFS streams as a backup format.

https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSSendNotA...

TheNewsIsHere•5mo ago
If you’re expecting ZFS send streams to provide many of ZFS’s usual guarantees, then absolutely.

However ZFS replication was originally designed with the assumption and use case in mind that organizations would store the send streams as opaque blobs on tape. This is, in part, why storing send streams as blobs is still a thing people do.

There are some use cases where this makes sense. I’ve stored full send streams of archived ZFS file systems in S3(-compatible services) where integrity is handled at the platform level. In that use case I didn’t benefit from having every copy of the filesystem in question running on live storage media, and incremental sends/snapshots weren’t on the table. (I also SHA checksummed the resulting files, and did restore tests.)
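That checksum-alongside-the-blob workflow can be done in one pass. A sketch (bash process substitution; dataset and file names are made up):

```shell
# Store the stream as an opaque blob, checksumming it in the same pass.
zfs send tank/archive@final \
  | tee >(sha256sum | awk '{print $1}' > archive.zfs.sha256) \
  > archive.zfs

# Later, before a restore test, verify the blob is intact:
echo "$(cat archive.zfs.sha256)  archive.zfs" | sha256sum -c -
```

The verification matters because, as noted above, a send stream is all-or-nothing: any corruption makes the whole stream unusable.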

There is also a misconception that frequently gets brought up in the ZFS community that the send stream format isn’t stable between versions and cannot be relied upon in future, but it absolutely is stable. In fact the ZoL manpage for send explicitly states that it is. As with anything in ZFS though, you want to move versions forward or not at all, rather than backward.

ticklyjunk•5mo ago
We are writing blobs/objects to ZFS tape volumes. It gives us an extra layer of defense against ransomware attacks and satisfies our 3-2-1 requirement. We make the blobs transparent with some metadata tags: the objects are recorded in the catalog, and we can pull individual files out of a blob. DeepSpace Storage manages the tape gateway and catalog for the objects. Short answer: yes, storing send streams to tape is a doable, robust workflow.