frontpage.

Show HN: Workout.cool – Open-source fitness coaching platform

https://github.com/Snouzy/workout-cool
332•surgomat•5h ago•125 comments

The Unreasonable Effectiveness of Fuzzing for Porting Programs

https://rjp.io/blog/2025-06-17-unreasonable-effectiveness-of-fuzzing
30•Bogdanp•1h ago•2 comments

Show HN: I built a tensor library from scratch in C++/CUDA

https://github.com/nirw4nna/dsc
38•nirw4nna•2h ago•2 comments

Homomorphically Encrypting CRDTs

https://jakelazaroff.com/words/homomorphically-encrypted-crdts/
133•jakelazaroff•5h ago•39 comments

Game Hacking – Valve Anti-Cheat (VAC)

https://codeneverdies.github.io/posts/gh-2/
14•LorenDB•41m ago•5 comments

"poline" is an enigmatic color palette generator using polar coordinates

https://meodai.github.io/poline/
104•zdw•3d ago•24 comments

Framework Laptop 12 review

https://arstechnica.com/gadgets/2025/06/framework-laptop-12-review-im-excited-to-see-what-the-2nd-generation-looks-like/
79•moelf•2h ago•82 comments

Attimet (YC F24) – Quant Trading Research Lab – Is Hiring Founding Engineer

https://www.ycombinator.com/companies/attimet/jobs/b1w9pjE-founding-engineer
1•kbanothu•59m ago

Terpstra Keyboard

http://terpstrakeyboard.com/web-app/keys.htm
166•xeonmc•7h ago•53 comments

Writing documentation for AI: best practices

https://docs.kapa.ai/improving/writing-best-practices
31•mooreds•1h ago•6 comments

Yes I Will Read Ulysses Yes

https://www.theatlantic.com/magazine/archive/2025/07/zachary-leader-richard-ellmann-james-joyce-review/682907/
11•petethomas•29m ago•2 comments

Building agents using streaming SQL queries

https://www.morling.dev/blog/this-ai-agent-should-have-been-sql-query/
56•rmoff•2h ago•5 comments

Is There a Half-Life for the Success Rates of AI Agents?

https://www.tobyord.com/writing/half-life
133•EvgeniyZh•7h ago•74 comments

MiniMax-M1 open-weight, large-scale hybrid-attention reasoning model

https://github.com/MiniMax-AI/MiniMax-M1
266•danboarder•11h ago•60 comments

Introduction to the A* Algorithm

https://www.redblobgames.com/pathfinding/a-star/introduction.html
154•auraham•1d ago•64 comments

Scrappy - make little apps for you and your friends

https://pontus.granstrom.me/scrappy/
361•8organicbits•12h ago•116 comments

The Invisible Light That's Harming Our Health

https://caseorganic.medium.com/the-invisible-light-thats-harming-our-health-and-how-we-can-light-things-better-d3916de90521
3•SLHamlet•28m ago•0 comments

Revisiting Minsky's Society of Mind in 2025

https://suthakamal.substack.com/p/revisiting-minskys-society-of-mind
13•suthakamal•2h ago•2 comments

Andrej Karpathy's YC AI SUS talk on the future of the industry

https://www.donnamagi.com/articles/karpathy-yc-talk
4•pudiklubi•1h ago•1 comments

Locally hosting an internet-connected server

https://mjg59.dreamwidth.org/72095.html
97•pabs3•13h ago•100 comments

Should we design for iffy internet?

https://bytes.zone/posts/should-we-design-for-iffy-internet/
25•surprisetalk•2d ago•9 comments

Real-time action chunking with large models

https://www.pi.website/research/real_time_chunking
48•pr337h4m•22h ago•6 comments

I counted all of the yurts in Mongolia using machine learning

https://monroeclinton.com/counting-all-yurts-in-mongolia/
168•furkansahin•10h ago•67 comments

Reasoning by Superposition: A Perspective on Chain of Continuous Thought

https://arxiv.org/abs/2505.12514
34•danielmorozoff•5h ago•1 comments

After millions of years, why are carnivorous plants still so small?

https://www.smithsonianmag.com/articles/carnivorous-plants-have-been-trapping-animals-for-millions-of-years-so-why-have-they-never-grown-larger-180986708/
162•gmays•4d ago•55 comments

The Grug Brained Developer (2022)

https://grugbrain.dev/
954•smartmic•21h ago•453 comments

Show HN: Free local security checks for AI coding in VSCode, Cursor and Windsurf

9•jaimefjorge•5h ago•4 comments

Show HN: Trieve CLI – Terminal-based LLM agent loop with search tool for PDFs

https://github.com/devflowinc/trieve/tree/main/clients/cli
12•skeptrune•4h ago•4 comments

Show HN: Delve, an open source (AGPL) enterprise-grade data analytics platform

https://github.com/iLoveTux/delve
4•ilovetux•1h ago•0 comments

A different take on S-expressions

https://gist.github.com/tearflake/569db7fdc8b363b7d320ebfeef8ab503
13•tearflake•3d ago•10 comments

Homomorphically Encrypting CRDTs

https://jakelazaroff.com/words/homomorphically-encrypted-crdts/
133•jakelazaroff•5h ago

Comments

qualeed•4h ago
It doesn't actually say it anywhere, so: CRDT = Conflict-free replicated data type.

https://en.wikipedia.org/wiki/Conflict-free_replicated_data_...
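For a concrete picture, here is a minimal sketch (in TypeScript, not taken from the article) of a last-write-wins register, the CRDT the article benchmarks:

```typescript
// Last-write-wins register: a value tagged with (timestamp, peerId).
// Merging two replicas keeps whichever write sorts later, so every peer
// converges to the same value regardless of the order updates arrive in.
type LWWRegister<T> = { value: T; timestamp: number; peerId: string };

function merge<T>(a: LWWRegister<T>, b: LWWRegister<T>): LWWRegister<T> {
  if (a.timestamp !== b.timestamp) return a.timestamp > b.timestamp ? a : b;
  return a.peerId > b.peerId ? a : b; // deterministic tie-break on peer ID
}
```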

Xeoncross•3h ago
e.g. commonly used by applications that need to deal with multiple people editing a document.
Joker_vD•4h ago
> Now, however, the server can no longer understand the changes you send. If you want to see your friend’s latest changes, you’ll need to both be online at the same time.

What? No, the server sends you the changes you've not seen yet, you decrypt and merge them, and so you get the latest version of the document. Right?

The homomorphic encryption is a fascinating topic, but it's almost never an answer if you need anything resembling reasonable performance and/or reasonable bandwidth.

I've seen a paper that ingeniously uses homomorphic encryption to implement arbitrary algorithmic computations, totally secret, by encoding a (custom-crafted) CPU together with RAM and then running a "tick the clock" algorithm on them. And it works, so you can borrow some huge AWS instance and run your super-important calculations there — at 1 Hz. I am not kidding, it's literally 1 virtual CPU instruction per second. Well, if you are okay with such speed and costs, you either have very small data — at which point just run your computation locally, or you're really, really rich — at which point just buy your own goddamn hardware and, again, run it locally.

zawaideh•3h ago
If the server can't operate on the content, it can't merge it into the CRDT document, which means it would need to send and receive the entire state of the CRDT with each change.

If the friend is online then sending operations is possible, because they can be decrypted and merged.

Joker_vD•3h ago
I... still can't make heads or tails out of this description. Let me restate how I understand the scheme in TFA: there are two people editing the same document using CRDTs. When one person makes an edit, they push an encrypted CRDT to the sync server. Periodically, each of them pulls edits made by the other from the sync server, applies them to their own copy, and pushes the (encrypted) result back. Because of CRDT's properties, they both end up with the same document.

This scheme doesn't require the two people to be online simultaneously — all updates are mediated via the sync server, after all. So, where am I wrong?

eightys3v3n•2h ago
I think the difference in understanding is that the article implies, as I understand it, that the server is applying the changes to the document when it receives a change message, not the clients. If the clients were applying the changes then we wouldn't need homomorphic encryption in the first place. The server would just store a log of all changes, cleaning it up once it was sure everyone had replayed them, if that is possible. Without homomorphic encryption, the server must store all changes since some full snapshot, plus a full snapshot of the document. Whereas with it, the server only ever stores the most recent copy of the document.

This could be done to reduce the time required for a client to catch up once it comes online (because it would need to replay all changes that have happened since it last connected to achieve the conflict-free modification). But the article also mentions something about keeping the latest version quickly accessible.

Joker_vD•2h ago
The article literally starts with:

    One way to solve this is end-to-end encryption. You and your friend agree
    on a secret key, known only to each other. You each use that key to encrypt
    your changes before sending them, decrypt them upon receipt, and no one in
    the middle is able to listen in. Because the document is a CRDT, you can
    each still get the latest document without the sync server merging the
    updates.

    That is indeed a solution,
but then for some reason claims that this scheme requires both parties to be online simultaneously. No, it doesn't, unless this scheme is (tacitly) supposed to be directly peer-to-peer, which I find unlikely: if it were P2P, there would be no need for "the sync server" in the first place, and the description clearly states that in this scheme it doesn't do anything with document updates except for relaying them.
jakelazaroff•1h ago
Hi, author here! The scenario was meant to be high level — I guess I should have gotten more into the various architectures and tradeoffs, but the article is already pretty long.

The way I see it there are a couple of ways this can shake out:

1. If you have a sync server that only relays the updates between peers, then you can of course have it work asynchronously — just store the encrypted updates and send them when a peer comes back online. The problem is that there's no way for the server to compress any of the updates; if a peer is offline for an extended period of time, they might need to download a ton of data.

2. If your sync server can merge updates, it can send compressed updates to each peer when it comes online. The downside, of course, is that the server can see everything.

Ink & Switch's Keyhive (which I link to at the end) proposes a method for each peer to independently agree on how updates should be compressed [1] which attempts to solve the problems with #1.

[1] https://github.com/inkandswitch/keyhive/blob/main/design/sed...
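A rough sketch of option 1 (hypothetical names, not the article's code): the server never decrypts anything, it just stores opaque update blobs and hands a reconnecting peer everything it hasn't seen yet.

```typescript
// Option 1 sketch: a relay that stores encrypted updates as opaque blobs.
// It cannot merge or compress them, so an offline peer must fetch the
// whole backlog when it comes back.
type EncryptedUpdate = { seq: number; blob: Uint8Array };

class RelayServer {
  private log: EncryptedUpdate[] = [];

  push(blob: Uint8Array): number {
    const seq = this.log.length;
    this.log.push({ seq, blob });
    return seq;
  }

  // Everything a peer hasn't acknowledged yet; this grows without bound
  // while the peer stays offline, because the server can't consolidate it.
  updatesSince(lastSeenSeq: number): EncryptedUpdate[] {
    return this.log.filter((u) => u.seq > lastSeenSeq);
  }
}
```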

killerstorm•1h ago
There's another option: let the clients do the compression. I.e. a client would sign & encrypt a message "I applied messages 0..1001 and got document X". Then this can be a starting point, perhaps after it's signed by multiple clients.

That introduces a communication overhead, but is still likely to be orders of magnitude cheaper than homomorphic encryption.
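A sketch of that idea (field and function names are made up): a client publishes a signed checkpoint saying "applying updates 0..n yields encrypted state X", and peers that trust enough signatures adopt it as a new starting point.

```typescript
// Client-side compression sketch: a signed, encrypted checkpoint that the
// relay can substitute for the log of updates it summarizes.
interface Checkpoint {
  upToSeq: number;                     // last relayed update folded in
  encryptedState: Uint8Array;          // merged document, re-encrypted client-side
  signatures: Map<string, Uint8Array>; // peerId -> signature over (upToSeq, state)
}

// The relay may discard updates <= upToSeq once enough peers have signed
// (agreeing on "enough" is the consensus problem raised below).
function canCompact(cp: Checkpoint, quorum: number): boolean {
  return cp.signatures.size >= quorum;
}
```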

vhcr•38m ago
Now you need a consensus algorithm.
jason_oster•1h ago
The protocol half of most real-world CRDTs does not want to send the raw stream of changes. They prefer to compress changes into a minimal patch set. Each patch set is specific to individual peers, based on the state of their local CRDT at merge time.

The naive raw stream of changes is far too inefficient due to the immense amount of overhead required to indicate relationships between changes. Changing a single character in a document needs to include the peer ID (e.g., a 128-bit UUID, or a public key), a change ID (like a commit hash, also about 128 bits), and the character's position in the document (usually a reference to the parent's ID and a relative marker indicating the insert is either before or after the parent).
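As a rough illustration of that bookkeeping (field names are hypothetical, not from any particular library), a single-character insert might carry something like:

```typescript
// Per-change metadata for inserting one character in a sequence CRDT.
// The payload is a single byte; the surrounding identifiers are ~40+ bytes.
interface InsertOp {
  peerId: string;            // e.g. a 128-bit UUID or a public key
  opId: string;              // unique change ID, also roughly 128 bits
  parentId: string;          // ID of the existing character this insert anchors to
  side: "before" | "after";  // which side of the parent the character goes on
  char: string;              // the one inserted character
}
```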

The other obvious compression is deletions. They will be compressed to tombstones so that the original change messages for deleted content do not need to be relayed.

And I know it is only implied, but peer-to-peer independent edits are the point of CRDTs. The "relay server" is there only for the worst-case scenario described: when peers are not simultaneously available to perform the merge operation.

ath92•3h ago
Generally, this is not really true. The point of CRDTs is that as long as all parties receive all messages (in any order), they should be able to recreate the same state.

So instead of merging changes on the server, all you need is some way of knowing which messages you haven't received yet. Importantly, this does not require the server to be able to actually read those messages. All it needs is some metadata (basically just an ID per message), and when reconnecting, it needs to send all the not-yet-received messages to the client, so it's probably useful to keep track of which client has received which messages, to prevent having to figure that out every time a client connects.
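That convergence property is easiest to see with a state-based CRDT like a grow-only set, where merge is plain set union (a sketch, not the article's register):

```typescript
// Grow-only set: merge is set union, which is commutative, associative and
// idempotent, so replicas converge no matter what order updates arrive in.
function mergeGSet<T>(a: Set<T>, b: Set<T>): Set<T> {
  return new Set([...a, ...b]);
}

// mergeGSet(mergeGSet(x, y), z) equals mergeGSet(z, mergeGSet(y, x)),
// so delivery order (and duplication) doesn't matter.
```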

charcircuit•1h ago
If it takes 1 second per merge, as per the article, it sounds like a poor user experience: when new people join, they have to wait hundreds or thousands of seconds to get to the doc.
ctde•1h ago
It was 0.5 ns; 1 s was the FHE case.
clawlor•2h ago
There are variants of CRDTs where each change is only a state delta, or each change is described in terms of operations performed, which don't require sending the entire state for each change.
blamestross•49m ago
> Which means it would need to send and receive the entire state of the CRDT with each change.

> If the friend is online then sending operations is possible, because they can be decrypted and merged.

Or the user's client can flatten un-acked changes and tell the server to store that instead.

It can just always flatten until it hears back from a peer.

The entire scenario is over-contrived. I wish they had just shown it off instead of making the lie of a justification.

killerstorm•1h ago
It's very common for CS and cryptography-adjacent papers to describe something impractical. Even more impractical than what you described - e.g. complexity of an attack is reduced from 2^250 to 2^230.

The purpose of these papers is to map out what's possible, etc, which might at some point help with actual R&D.

teleforce•4h ago
> Runtime performance is also — to put it lightly — lacking. I benchmarked the unencrypted and encrypted versions of the last write wins register on an M4 MacBook Pro. The unencrypted one averaged a merge time of 0.52 nanoseconds. The encrypted one? 1.06 seconds. That's not a typo: the homomorphically encrypted merge is two billion times slower.
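(For scale, the ratio checks out: 1.06 s ÷ 0.52 ns = 1.06 / (0.52 × 10⁻⁹) ≈ 2.0 × 10⁹, i.e. roughly two billion.)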

Ouch!

thrance•4h ago
Another cool use of FHE, allowing you to browse Wikipedia privately while still online: https://news.ycombinator.com/item?id=31668814
plopilop•4h ago
As the article mentions, fully homomorphic encryption is insanely slow and inefficient. But I have to say that it is a relatively new field (the first FHE scheme was discovered in 2009), and that the field has immensely progressed over the last decade and a half.

The first FHE scheme required keys of several TB/PB, and bootstrapping (a pivotal operation in FHE schemes, performed when too many multiplications have been computed) would take thousands of hours. We are now down to keys of "only" 30 MB, and bootstrapping in less than 0.1 seconds.

Hopefully progress will continue and FHE will become more practical.

6r17•3h ago
CRDTs are also crazy slow due to their architecture; even the best algorithms out there are costly by design, so adding homomorphic encryption is even more of a challenge. Though it really is impressive; I'm curious whether this can be usable at all.

Edit: to bring some "proof" of my claim, from this very page: `To calculate the new map, the server must go through and merge every single key. After that, it needs to transfer the full map to each peer — because remember, as far as it knows, the entire map is different.`

__MatrixMan__•2h ago
Is it the CRDT that's slow there, or is the problem that they've made it one party's job to update everybody?

By having a server in the mix it feels like we're forcing a hub/spoke model on something that wants to be a partial mesh. Not surprising that the hub is stressed out.

Asraelite•2h ago
> CRDTs are also crazy slow due to their architecture

What kinds of CRDTs specifically are you referring to? On its own this statement sounds far too broad to be meaningful. It's like saying "nested for loops are crazy slow".

jason_oster•2h ago
CRDTs are not inherently “crazy slow”. Researchers just don’t succumb to the appeal of premature optimization.

See: https://josephg.com/blog/crdts-go-brrr/

(And even these optimizations are nascent. It can still get so much better.)

The section you quoted describes an effect of homomorphic encryption alone.

There is the problem that both CRDTs and encryption add some overhead, and the overhead is additive when used together. But I can't tell if that is the point you are trying to make.

hansvm•53m ago
> additive

The overhead is usually multiplicative per-item. Let's say you're doing N things. CRDTs make that O(Nk) for some scaling factor k, and adding encryption makes it O(Nkj) for some scaling factor j.

Give or take some multiplicative log (or worse) factors depending on the implementation.

motorest•1h ago
> CRDTs are also crazy slow due to their architecture;

You must back up your extraordinary claim with some extraordinary evidence. There is nothing inherently slow in CRDTs.

Also, applying changes is hardly on anyone's hot path.

The only instance where I saw anyone complaining about CRDT performance, it turned out to come from a very naive, overly chatty implementation that spammed changes. If you come up with any code that requires a full HTTPS connection to send a single character down the wire, the problem is not the algorithm.

MangoToupe•9m ago
> CRDTs are also crazy slow

compared to what? c'mon

westurner•3h ago
Should students trust and run FHE-encrypted WASM or JS grading code that contains the answers on their own Chromebooks, for example with JupyterLite and ottergrader?

On code signing and the SETI@home screensaver

867-5309•3h ago
Conflict-Free Replicated Data Type
scyclow•2h ago
Encrypt the diffs for the server and write the hash to a blockchain to manage the ordering. Boom, problem solved without HME.
NetRunnerSu•2h ago
FHE is indeed slow, but the progress since 2009 is truly remarkable. Bootstrapping speed alone improved by tens of millions of times, and tfhe-rs already demonstrates homomorphic AES-128. Real-time FHE for AI inference/training feels increasingly plausible.

> https://github.com/sharkbot1/tfhe-aes-128

ProofHouse•12m ago
Can I double-click on ‘plausible’?
mihau•1h ago
Sorry for going off-topic, but kudos for UI/UX on your blog!

To name a few: nice styling (colors, fonts), footnotes visible in the margin, an always-on table of contents, and interactivity with link previews on hover.

Nice. What's your tech stack?

somezero•1h ago
FHE is simply the wrong tool here. FHE is for a central server operating on data held/known by another party. They want MPC (multiple parties jointly computing on distributed data), and that's considerably more efficient.
meindnoch•41m ago
I like it! The slowness and inefficiency of homomorphic encryption is nicely complemented by the storage bloat of CRDTs.
yusina•28m ago
I'm not sure I get the premise of the article. I know what a CRDT is and how homomorphic encryption works. But why do both parties have to be online at the same time to sync? They could send the updates asynchronously, store-and-forward style, and everything in flight is encrypted. Why does this need a server that keeps state (which is kept encrypted and modified in place, as per the proposal)?
noam_k•20m ago
I think the issue here is that the server would have to store a copy of the register per peer, as it can't calculate which one is the most recent. Using FHE allows the server to hold a single copy.

In other words the server could forward and not store if all parties are always online (at the same time).

yusina•10m ago
So it's "just" a storage optimization?
neon_me•7m ago
The server will store an encrypted blob and its hash/ETag.

Before uploading data, the client checks the hash/ETag of the blob it originally fetched. If the blob on the server has a different one, the client downloads it, decrypts it, patches the new data onto the existing document, encrypts it, and re-uploads.

What's the catch?

AES is hardware-accelerated on most devices, so even with all those operations it will be significantly faster than any homomorphic encryption available today.
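A sketch of that loop (endpoint and helper names are hypothetical), which amounts to optimistic concurrency over a single encrypted blob:

```typescript
// Sketch of the proposed scheme: the server stores one encrypted blob plus an
// ETag; the client does a read-modify-write loop and retries if the ETag moved.
// decrypt/encrypt/mergeLocalChanges stand in for app-specific logic.
async function syncBlob(
  fetchBlob: () => Promise<{ etag: string; blob: Uint8Array }>,
  putBlob: (blob: Uint8Array, ifMatch: string) => Promise<boolean>, // false on ETag mismatch
  decrypt: (blob: Uint8Array) => Promise<Uint8Array>,
  encrypt: (doc: Uint8Array) => Promise<Uint8Array>,
  mergeLocalChanges: (doc: Uint8Array) => Uint8Array
): Promise<void> {
  for (;;) {
    const { etag, blob } = await fetchBlob();               // current server copy
    const merged = mergeLocalChanges(await decrypt(blob));  // fold in local edits
    if (await putBlob(await encrypt(merged), etag)) return; // conditional PUT succeeded
    // Someone else uploaded in the meantime: fetch again and re-merge.
  }
}
```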