frontpage.

I'm 15 and built a free tool for reading Greek/Latin texts. Would love feedback

https://the-lexicon-project.netlify.app/
1•breadwithjam•2m ago•1 comment

How close is AI to taking my job?

https://epoch.ai/gradient-updates/how-close-is-ai-to-taking-my-job
1•cjbarber•3m ago•0 comments

You are the reason I am not reviewing this PR

https://github.com/NixOS/nixpkgs/pull/479442
2•midzer•4m ago•1 comment

Show HN: FamilyMemories.video – Turn static old photos into 5s AI videos

https://familymemories.video
1•tareq_•6m ago•0 comments

How Meta Made Linux a Planet-Scale Load Balancer

https://softwarefrontier.substack.com/p/how-meta-turned-the-linux-kernel
1•CortexFlow•6m ago•0 comments

A Turing Test for AI Coding

https://t-cadet.github.io/programming-wisdom/#2026-02-06-a-turing-test-for-ai-coding
2•phi-system•6m ago•0 comments

How to Identify and Eliminate Unused AWS Resources

https://medium.com/@vkelk/how-to-identify-and-eliminate-unused-aws-resources-b0e2040b4de8
2•vkelk•7m ago•0 comments

A2CDVI – HDMI output from the Apple IIc's digital video output connector

https://github.com/MrTechGadget/A2C_DVI_SMD
2•mmoogle•8m ago•0 comments

CLI for Common Playwright Actions

https://github.com/microsoft/playwright-cli
3•saikatsg•9m ago•0 comments

Would you use an e-commerce platform that shares transaction fees with users?

https://moondala.one/
2•HamoodBahzar•10m ago•1 comment

Show HN: SafeClaw – a way to manage multiple Claude Code instances in containers

https://github.com/ykdojo/safeclaw
2•ykdojo•14m ago•0 comments

The Future of the Global Open-Source AI Ecosystem: From DeepSeek to AI+

https://huggingface.co/blog/huggingface/one-year-since-the-deepseek-moment-blog-3
3•gmays•14m ago•0 comments

The Evolution of the Interface

https://www.asktog.com/columns/038MacUITrends.html
2•dhruv3006•16m ago•1 comment

Azure: Virtual network routing appliance overview

https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-routing-appliance-overview
2•mariuz•16m ago•0 comments

Seedance2 – multi-shot AI video generation

https://www.genstory.app/story-template/seedance2-ai-story-generator
2•RyanMu•19m ago•1 comment

Πfs – The Data-Free Filesystem

https://github.com/philipl/pifs
2•ravenical•23m ago•0 comments

Go-busybox: A sandboxable port of busybox for AI agents

https://github.com/rcarmo/go-busybox
3•rcarmo•23m ago•0 comments

Quantization-Aware Distillation for NVFP4 Inference Accuracy Recovery [pdf]

https://research.nvidia.com/labs/nemotron/files/NVFP4-QAD-Report.pdf
2•gmays•24m ago•0 comments

xAI Merger Poses Bigger Threat to OpenAI, Anthropic

https://www.bloomberg.com/news/newsletters/2026-02-03/musk-s-xai-merger-poses-bigger-threat-to-op...
2•andsoitis•24m ago•0 comments

Atlas Airborne (Boston Dynamics and RAI Institute) [video]

https://www.youtube.com/watch?v=UNorxwlZlFk
2•lysace•25m ago•0 comments

Zen Tools

http://postmake.io/zen-list
2•Malfunction92•28m ago•0 comments

Is the Detachment in the Room? – Agents, Cruelty, and Empathy

https://hailey.at/posts/3mear2n7v3k2r
2•carnevalem•28m ago•1 comment

The purpose of Continuous Integration is to fail

https://blog.nix-ci.com/post/2026-02-05_the-purpose-of-ci-is-to-fail
1•zdw•30m ago•0 comments

Apfelstrudel: Live coding music environment with AI agent chat

https://github.com/rcarmo/apfelstrudel
2•rcarmo•31m ago•0 comments

What Is Stoicism?

https://stoacentral.com/guides/what-is-stoicism
3•0xmattf•32m ago•0 comments

What happens when a neighborhood is built around a farm

https://grist.org/cities/what-happens-when-a-neighborhood-is-built-around-a-farm/
1•Brajeshwar•32m ago•0 comments

Every major galaxy is speeding away from the Milky Way, except one

https://www.livescience.com/space/cosmology/every-major-galaxy-is-speeding-away-from-the-milky-wa...
3•Brajeshwar•32m ago•0 comments

Extreme Inequality Presages the Revolt Against It

https://www.noemamag.com/extreme-inequality-presages-the-revolt-against-it/
2•Brajeshwar•32m ago•0 comments

There's no such thing as "tech" (Ten years later)

1•dtjb•33m ago•0 comments

What Really Killed Flash Player: A Six-Year Campaign of Deliberate Platform Work

https://medium.com/@aglaforge/what-really-killed-flash-player-a-six-year-campaign-of-deliberate-p...
1•jbegley•34m ago•0 comments

The rsync algorithm (1996) [pdf]

https://www.andrew.cmu.edu/course/15-749/READINGS/required/cas/tridgell96.pdf
218•vortex_ape•1mo ago

Comments

doodlesdev•1mo ago
Well-written, succinct.

This small document shows what computer science looked like to me when I was just getting started: a way to make computers more efficient and smarter, to solve real problems. I wish more people who claim to be "computer scientists" or "engineers" would actually work on real problems like this (efficient file sync) instead of having to spend time learning how to use the new React API or patching the f-up NextJS CVE that's affecting a multitude of services.

PunchyHamster•1mo ago
to be fair, the level of security of systems back then was pretty fucking bad
observationist•1mo ago
Passwords of 6 characters or fewer, if there were passwords at all. Phreaking still worked into the 90s, and all sorts of really stupid things were done without really thinking about security at all. They'd print receipts with the entire credit or debit card number and information on them, or carbon-copy the card with an impression, and you'd see these receipts blowing around parking lots, or find entire bags or dumpsters full of them. Knowing an IP address might be sufficient to gain access to systems that should have been secured. It's pretty amazing that things functioned as well as they did, that society was as trusting and trustworthy as it was, and that we were able to build as much as we did with the relatively tiny amount of exploitation that happened.

If the same level of vulnerability was as prevalent today as it was back then, civilization might collapse overnight.

mjevans•1mo ago
To be fair, back then it was relatively easy for anyone intelligent enough to abuse any of that to instead have a well-paying 'white collar' job with things like full health benefits, a pension, and more than sufficient income to support an entire family SOLO. They even owned houses!

When your life is set like that, why risk trying to defraud someone when, for the cost of a nice suit, that's something that can be done legally and written off as a business expense on taxes?

gritzko•1mo ago
Just read the AWS or Cloudflare outage postmortems and you will see: the same mistakes are still there, in the happy land.
SoftTalker•1mo ago
We still print the routing and account number in full on paper checks, and that's all that's needed to do an ACH transaction. Yet there's not an alarming level of ACH fraud.
axiolite•1mo ago
In 1996? OpenBSD and Apache had been around for a year. PGP had been around for several years. HTTPS was used where needed. SecurID tokens were common for organizations that cared about security.

Admittedly SSH wasn't around, but Kerberos+rlogin and SSL+telnet were available. Organizations that cared about security would have SecurID tokens issued to their employees and required for login.

Dial-in over phone lines, and requiring a password, was much less discoverable or exploitable than services exposed to the internet, today.

wmf•1mo ago
And every machine had 100 RCEs that you could discover with a few hours of effort.
axiolite•1mo ago
Even back in 1996, OpenBSD emphasized security. By 2000 they claimed "Three years without a remote hole in the default install!" at the very top of their website. Qmail was released in Dec 1995 and its security withstood scrutiny for quite a lot of years. I'd be interested in seeing just how many RCEs a modern security researcher could actually come up with from a 1996 release of BSDi, OpenBSD, Solaris, AIX, etc. I'd bet on just a handful.

I can understand how, if your whole world was Windows 3.1 and 95, you'd feel that way about security at the time.

jrpelkonen•1mo ago
SSH was around, but not nearly as pervasive as it is today. I have memories of having to shake my mouse around during the windows client installation to generate entropy. Fun times
axiolite•1mo ago
I believe your recollection is off by several years...

What you're describing is PuttyGen. According to Wikipedia, the first Putty release was in 1999. Archive.org doesn't have any snapshots of the Putty website before 2000, so that checks out.

The RSA patent didn't expire in the US until September 2000, so that's when free implementations like OpenSSH first became widely available. That's precisely when I started using it...

The original SSH was first released mid-1995. There would have been a small number of installations in 1996, but absolutely negligible. It was not well-known until later, circa 2000.

jbotz•1mo ago
> There would have been a small number of installations in 1996, but absolutely negligible.

On HN there's always a good chance you're talking to some of the people involved in those "negligible" installations. I know that I submitted some patches to Tatu Ylönen for ssh to compile on Ultrix, so that must have been in 1995 or early 1996, because after that I didn't have access to any Ultrix machines. I may have been an early adopter, but it didn't take long for ssh to take over the world, at least among Unix system administrators; at Usenix within a year everybody was using ssh because there wasn't any alternative, and in terms of security it was a life-saver.

As for the RSA patent... I don't know what license the original Ssh was released under, but it was considered "freeware" when it came out and nobody cared about the US RSA patent. Maybe technically in the USA you shouldn't have used it? Nobody cared.

And the mouse-jiggling thing... not specifically a PuttyGen thing. On linux the /dev/random device stingily gave you a few bits at a time, and only after it had gathered enough entropy, so it was common for programs that needed good randomness to ask you to jiggle the mouse: it was one of the sources of entropy, so random bits would come faster. I'm pretty sure that was still the case well into the Zips.
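
For anyone curious, the blocking behavior is easy to picture with a sketch like this (on modern kernels /dev/random rarely blocks after boot, so this is mostly of historical interest):

    # On old Linux kernels this read could stall until the entropy pool
    # filled (keystrokes, mouse movement, and disk timings fed it).
    head -c 16 /dev/random | xxd
    # /dev/urandom never blocks.
    head -c 16 /dev/urandom | xxd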

itsthecourier•1mo ago
So I was running an SVN server on a decommissioned PC somewhere in a startup, as an intern. The whole company ended up using it, and out of nowhere it would freeze; I would go check whether it had rebooted or crashed, and everything was fine.

It fixed itself, without any fixes on my part. This happened many times.

I asked a senior for help. The guy ran strace and found a read waiting on /dev/random. And of course it solved itself any time I checked, because I was moving the mouse!

Controversially but acceptably, we linked it to urandom and moved on.

How fast that guy used strace and analyzed the syscalls inspired me to get better at Linux.
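
A hypothetical reconstruction of that diagnosis, with svnserve standing in for whatever the server process was (and assuming a single pid):

    # Attach to the stuck process and watch its syscalls; the tell is a
    # read() that never returns.
    strace -f -p "$(pidof svnserve)"
    # Map the blocked file descriptor back to /dev/random.
    ls -l /proc/"$(pidof svnserve)"/fd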

axiolite•1mo ago
> it didn't take long for ssh to take over the world

That doesn't seem to be accurate. Wikipedia says that by the end of 2000 "the number of users had grown to 2 million".

> everybody was using ssh because there wasn't any alternative

I already listed TWO of the most popular alternatives.

> the mouse-jiggling thing... not specifically a PuttyGen thing. On linux

Parent specifically said "windows client installation." Putty was very common on Windows. PuttyGen specifically and prominently told the user to move their mouse... etc. etc.

cobertos•1mo ago
If only those who claim to be "managers" enabled those "engineers" to do such work, but they see no benefit in it for their product, their bottom line, or their performance review. At least in their mind.
UqWBcuFx6NV4r•1mo ago
…what? IC developers are a huge, huge contributor to the sort of over-complicated engineering and stack churn that’s at the heart of what’s being described here. Take an iota of responsibility for yourself.
craftkiller•1mo ago
I've been using this extensively recently. I was setting up remote virtual machines that boot a live ISO containing all the software for the machine. Sometimes I need to change a small config file, which would lead to generating a new 1.7GiB ISO, but 99.9% of that ISO is identical to the previous one. So I used rsync. Blew my mind when, after a day of working on these images, uploading 1.7GiB ISO after 1.7GiB ISO, WireGuard showed that I had only sent 600MiB.

Fun surprise, rsync uses file size and modified time first to see if the files are identical. I build these ISOs with nix. Nix sets the time to Jan 1st 1970 for reproducible builds, and I suspect the ISOs are padded out to the next sector. So rsync was not noticing the new ISO images when I made small changes to config files until I added the --checksum flag.
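
A sketch of the resulting invocation (host and paths hypothetical): rsync's quick check compares size and mtime, so reproducible builds with pinned timestamps defeat it; --checksum forces a full content comparison, after which the delta algorithm still keeps the actual transfer small.

    rsync --archive --partial --progress --checksum \
        result/boot.iso user@remote:/srv/images/boot.iso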

seb1204•1mo ago
In the past I downloaded daily diffs for an ISO, which were only a few MB, and then applied the diff to yesterday's ISO. I forget the name of the tool, though. I did this on my own machine; if the parent wants to update a remote machine, I'm not sure it works the same way.
alright2565•1mo ago
If it was a 100GB image on the other hand—good luck! It'd be faster to copy it from scratch every time than to use rsync.
axiolite•1mo ago
You'll find something like BorgBackup will be far more efficient than rsync.
SoftTalker•1mo ago
But rsync is widely available, usually installed by default on linux or unix-like systems. You can just use it.
axiolite•1mo ago
Borg is available for download as a standalone binary, easily dropped onto any Linux system even with very limited privs. And it's in the repos of every distro, easily installed and kept up to date.

By avoiding that one step and using rsync instead, you're resigning yourself to sending 600MiB over the network for every tiny config change. Not a good trade-off.
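
A minimal sketch of that workflow, assuming a hypothetical repo at ssh://user@backup/srv/borg: Borg splits files into content-defined chunks and deduplicates against everything already stored, so a barely-changed ISO only uploads its new chunks.

    # One-time repository setup, then one archive per upload.
    borg init --encryption=repokey ssh://user@backup/srv/borg
    borg create --stats ssh://user@backup/srv/borg::iso-{now} result/boot.iso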

yencabulator•1mo ago
> Fun surprise, rsync uses file size and modified time first to see if the files are identical. [...] time to Jan 1st 1970 for reproducible builds

I think it'd be a good idea for rsync to not trust timestamp 0.

bix6•1mo ago
Funny timing, I just used this today while setting up my NAS
teleforce•1mo ago
Fun fact: the author of rsync, Andrew Tridgell, is also the one who reverse-engineered Microsoft's SMB protocol, which laid the foundation for Samba [1].

How he managed to avoid lawsuits from Microsoft is beyond me.

[1] Server Message Block:

https://en.wikipedia.org/wiki/Server_Message_Block

kvemkon•1mo ago
A protocol is not software; it is needed for interoperability.

It's similar with header files. Issues arise when there is a "misuse": deriving not a compatible solution but a competing one.

js2•1mo ago
He also wrote a free BitKeeper client, antagonizing Larry McVoy, which is largely why we have git.

https://blog.brachiosoft.com/en/posts/git/

tekkk•1mo ago
Now that was an awesome blog post, thank you for linking!
oska•1mo ago
Australians might like to know that he worked on rsync and Samba while a PhD student at the ANU.
webdevver•1mo ago
>How he managed to avoid lawsuits from Microsoft is beyond me.

MS probably chose not to shut down that effort on the basis that it was enabling the MS stack in Linux.

I wish I could dig up an internal presentation that was prepared in the 90s for Bill Gates at the time, which evaluated the threat posed by Linux to Microsoft. I think they were probably happy that Linux now had a reason to talk to Windows machines.

amiga386•1mo ago
https://en.wikipedia.org/wiki/Halloween_documents -> https://www.gnu.org/software/fsfe/projects/ms-vs-eu/hallowee...
webdevver•1mo ago
that's the one, thank you for posting!
YesThatTom2•1mo ago
At first, MS didn't mind, as long as Samba only implemented the older, outdated protocols.

Then they realized interoperability could make them more money, and they invited him and his team to Redmond for a week of working with MS engineers to understand the latest protocol versions. Oh wait, no, it was because the EU forced them. https://www.theregister.com/2007/12/21/samba_microsoft_agree...

amiga386•1mo ago
He describes how he did it with a French Café analogy:

https://download.samba.org/pub/tridge/misc/french_cafe.txt

snvzz•1mo ago
Besides Tridgell's venerable rsync, there exists a permissively licensed implementation[0] by openbsd.

0. https://www.openrsync.org/

irusensei•1mo ago
Which is also the version currently shipped on macOS. Thing is, it doesn't seem to talk very well with the Samba version of rsync. The OpenBSD implementation seems to be capped at version 29 of the protocol.

When pulling data on macOS from a Linux computer I would experience hangs even when setting the protocol version to 29 or 28. What fixed it for me was to just switch to the Samba rsync program on macOS.
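
For reference, pinning an older protocol version is a real rsync option; a sketch of the kind of invocation tried above (host and paths hypothetical):

    rsync --protocol=29 -av user@linuxhost:/data/ ./data/

Per the comment above, even this didn't cure the hangs; swapping in the Samba-derived rsync did.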

ectospheno•1mo ago
I believe openrsync exists just because of rpki.

https://news.ycombinator.com/item?id=43605846

Most openbsd people I know install the real version from ports.

imiric•1mo ago
Rsync is one of my favorite programs. I use it daily. The CLI is a bit quirky (e.g. trailing slashes), but once you get used to it, it makes sense. And I really always use the same flags: `-avmLP`, with `-n` for dry runs.

One alternative I'd like to try is Google's abandoned CDC[1], which claims to be up to 30x faster than rsync in certain scenarios. Does anyone know if there is a maintained fork with full Linux support?

[1]: https://github.com/google/cdc-file-transfer
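
The trailing-slash quirk mentioned above, for anyone who hasn't hit it (host and paths hypothetical):

    rsync -avmLP src  user@host:/dest/    # creates /dest/src/...
    rsync -avmLP src/ user@host:/dest/    # copies the contents of src into /dest/
    rsync -avmLPn src/ user@host:/dest/   # -n (--dry-run) previews without changing anything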

adrian_b•1mo ago
The same is true for me.

I always alias rsync to:

alias rsync='/usr/bin/rsync --archive --xattrs --acls --hard-links --progress --rsh="ssh -p PORT -l USER"'

I almost never use any other program for file transfers between computers.

ssl-3•1mo ago
The first time I got paid to use rsync was nearly 25 years ago. It provided for reasonably space-efficient, remote, versioned backups of a mail server, using hard links.

That mail server used maildir, which...for those who are not familiar: With maildir, each email message is a separate file on the disk. Thus, there were a lot of folders that had many thousands of files in them. Plus hardlinks for daily/weekly/whatever versions of each of those files.

At the time there were those who were very vocal about their opinion of using maildir in this kind of capacity, likening it to abuse of the filesystem. And if that was stupid, then my use of hard links certainly multiplied that stupidity.

Perhaps I was simply not very smart at that time.

But it was actually fun to fit that together, and it was kind of amazing to watch rsync perform this job both automatically and without complaint between a pair of particularly not-fast (256kbps?) DOCSIS connections from Roadrunner.

It worked fine. Whenever I needed to go back in time for some reason, the information was reliably present at the other end with adequate granularity -- with just a couple of cron jobs, rsync, and maybe a little bit of bash script to automate it all.
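
One common way to spell that scheme today is rsync's --link-dest (the original setup may have differed): unchanged files become hard links into the previous snapshot, so each daily tree costs only the changed messages plus directory entries. Dates and paths here are hypothetical:

    # Files identical to yesterday's snapshot are hard-linked, not copied.
    rsync -a --link-dest=/backup/2026-02-05 \
        user@mail:/var/mail/ /backup/2026-02-06/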

xk3•1mo ago
> there were a lot of folders that had many thousands of files in them

If you ever need to do something like this again, it's often faster to parallelize rsync. One tool that provides this is fpsync:

https://www.fpart.org/fpsync/
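
A sketch of a parallel run (paths and host hypothetical): fpart slices the tree into chunks and fpsync drives one rsync worker per chunk.

    # -n: concurrent rsync workers, -f: max files per chunk
    fpsync -n 8 -f 2000 /srv/maildir/ user@backup:/srv/maildir/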

jraph•1mo ago
And you'd probably use the snapshot feature of a filesystem like btrfs or zfs instead of hardlinks for deduplication :-)
xk3•4w ago
Yes, and something like btrfs send or zfs send is probably faster than fpsync.
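
A sketch of the incremental form (pool, dataset, and snapshot names hypothetical): after an initial full send, only the block-level delta between two snapshots crosses the wire, with no per-file scanning at all.

    zfs snapshot tank/mail@tue
    zfs send -i tank/mail@mon tank/mail@tue | ssh backup zfs receive -F tank/mail
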
mickael-kerjean•1mo ago
In section 6: "tar files ... of the Linux kernel sources ... version ... 1.99.10 ... are approximately 24MB in size ... Out of the 2441 files in the 2.0.0 release 291 files had changed"

It never crossed my mind that Linux at some point had only 2441 files, and that you could actually read through the code that changed in a new version. That ship has sailed.

alex1138•1mo ago
1996 is not that long ago as Unix goes, but it's fun to know, as I browse on my Debian computer, that I'm using something with a long tradition (derived from Unix, though BSD isn't original Unix either).
linsomniac•1mo ago
If you want to do similar for block devices: https://github.com/rolffokkens/bdsync

I use it to back up a few virtual machines that, in the event of a site loss, would be difficult to rebuild but also critical to getting our developers back to work. I take an LVM snapshot of the VM, then use bdsync to replicate it to our backup server, and from there I replicate it off to backblaze, then destroy the snapshot.
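
A sketch of that pipeline, following the pattern in the bdsync README (volume and host names hypothetical; check the README for the exact --patch invocation):

    # Snapshot the VM's volume so the source is consistent.
    lvcreate --snapshot --name vm0-snap --size 5G /dev/vg0/vm0
    # Compute the block delta against the copy on the backup host.
    bdsync "ssh backup bdsync --server" /dev/vg0/vm0-snap /dev/vg0/vm0 > vm0.bdsync
    # On the backup host: bdsync --patch=/dev/vg0/vm0 < vm0.bdsync
    lvremove -f /dev/vg0/vm0-snap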

OrangeDelonge•1mo ago
How does this compare to drbd?
linsomniac•1mo ago
DRBD is more of a live sync, and it's great stuff, as long as you set it up BEFORE you need it, and you need it frequently. If you want to keep a second copy of your data on another system, up to the second(ish), it's a great choice.

If, however, you just want a copy of a block device on another system, like for weekly backup (our case), it's probably overkill. Especially as to keep it truly consistent you need to run in the mode where writes are acked only once the remote AND local devices have it.

My VMs are running on Ganeti, which has a mode where the backing device can be DRBD and written to another host. That works great if you have the extra disk space and can deal with the latency. It also allows you to live-migrate VMs between the two hosts.

In my case I ultimately want the copy off-site, so DRBD isn't really a great fit.

DRBD is very good stuff though, I've used it for decades for HA database servers and the like.

cyanydeez•1mo ago
has anyone seen rsync or something similar implemented in WASM for the browser?