The Beginner's Textbook for Fully Homomorphic Encryption

https://arxiv.org/abs/2503.05136
251•Qision•4mo ago

Comments

noman-land•4mo ago
Direct link to the book:

https://fhetextbook.github.io/

Hizonner•4mo ago
I was under the impression that, for any FHE scheme with "good" security, (a) there was a finite and not very large limit to the number of operations you could do on encrypted data before the result became undecryptable, and (b) each operation on the encrypted side was a lot more expensive than the corresponding operation on plaintext numbers or whatever.

Am I wrong? I freely admit I don't know how it's supposed to work inside, because I've never taken the time to learn, because I believed those limitations made it unusable for most purposes.

Yet the abstract suggests that FHE is useful for running machine learning models, and I assume that means models of significant size.

pclmulqdq•4mo ago
Both of these are correct-ish. You can do a renormalization that resets the operation counter without decrypting on FHE schemes, so in that sense there is no strict limit on operation count. However, FHE operations are still about 6 orders of magnitude more expensive than normal, so you are not going to be running an LLM, for instance, any time soon. A small classifier, maybe.
k__•4mo ago
Does this mean, according to Moore's Law, FHE can operate at speeds from 6 years ago?
jammaloo•4mo ago
Moore's Law roughly states that we get a doubling of speed every 2 years.

If we're 6 orders of magnitude off, then we need to double our speed 20 times (2^20 = 1,048,576), which would give us speeds approximately in line with 40 years ago. Unless my understanding is completely off.

treyd•4mo ago
The rule of thumb is "about a 100000x slowdown". With Moore's law doubling every 2 years, that means it would operate at the speed of computers from about 40 years ago. Although really, that still makes it seem faster than it is. Making direct comparisons is hard.
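
As a quick back-of-envelope check of those figures (a sketch in Python, assuming the thread's rough numbers of a ~10^6x slowdown and one doubling every 2 years):

    import math

    slowdown = 1e6           # rough FHE overhead vs. plaintext, per the thread
    doubling_period = 2      # years per doubling (Moore's law rule of thumb)

    doublings_needed = math.log2(slowdown)           # ~19.9 doublings
    years_back = doublings_needed * doubling_period  # ~40 years

    print(f"{doublings_needed:.1f} doublings ~ {years_back:.0f} years of Moore's law")
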
j2kun•4mo ago
LLMs are at the current forefront of FHE research. There are a few papers doing tweaked versions of BERT in <1 minute per token, which is only ~4 orders of magnitude slower than cleartext.

https://arxiv.org/html/2410.02486v1#S5

pclmulqdq•4mo ago
This paper uses a very heavily modified version of an encoder-only BERT model. Forward pass on a single 4090 is cited there at 13 seconds after switching softmax out for a different kernel (21 seconds with softmax). They are missing a non-FHE baseline, but that model has only about 35 million parameters when you look at its size. At FP16, you would expect this to be about 100x faster than a normal BERT because it's so damn small. On a 4090, that model's forward pass probably runs at something like 100k-1M tokens per second given some batching. It sounds like 6 orders of magnitude is still about right.
Nevermark•4mo ago
Given that individual LLM parameters are not easily interpreted (they are naturally obfuscated by the diffuse nature of their impact), I would think leaning into that would be a more efficient route.

Obfuscating input and output formats could be very effective.

Obfuscation layers can be incorporated into training, with an input (output) layer that passes information forward but whose output (input) is optimized to have statistically flat characteristics, resistant to attempts at interpretation.

Nothing like apparent pure noise for obfuscation!

The core of the model would then be trained, and infer, on the obfuscated data.

When used, the core model would publicly operate on obfuscated data, while the obfuscation/de-obfuscation layers would be used privately.

In addition to obfuscating, the pre- and post-layers could also reduce data dimensionality, naturally increasing obfuscation and reducing data transfer costs. It is a really good fit.

Even the most elaborate obfuscation layers will be orders and orders of magnitude faster than today's homomorphic approaches.

(Given the natural level of parameter obfuscation, and the highly limited set of operations for most deep models, I wouldn't be surprised if efficient homomorphic approaches were found in the future.)
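
A rough sketch of the split architecture described above, assuming PyTorch; the layer names and sizes are illustrative only, not taken from any existing system:

    import torch
    import torch.nn as nn

    # Private input-side layer: maps raw features into an "obfuscated",
    # lower-dimensional latent space. Its weights would stay on the client.
    private_encoder = nn.Sequential(nn.Linear(128, 64), nn.Tanh())

    # Public core model: trained and run only on obfuscated vectors,
    # so it could execute on untrusted hardware.
    public_core = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))

    # Private output-side layer: maps core outputs back to meaningful predictions.
    private_decoder = nn.Linear(64, 10)

    x = torch.randn(8, 128)              # a batch of raw (private) inputs
    z = private_encoder(x)               # obfuscate + reduce dimensionality locally
    y = private_decoder(public_core(z))  # core runs remotely; decode locally
    print(y.shape)                       # torch.Size([8, 10])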

benlivengood•4mo ago
The difference between homomorphic schemes and fully homomorphic schemes is that FHE can be bootstrapped; there's a circuit that can be homomorphically evaluated that removes the noise from an encrypted value, allowing any homomorphic calculation's result to have its noise removed for further computation.
Nzen•4mo ago
My understanding is largely ten years old and high level and only for one kind of fully homomorphic encryption. Things have changed and there is more than one kind.

I heard it described as a system that encrypts each bit and then evaluates the "encrypted bit" in a virtual gate-based circuit that implements the desired operations that one wants applied to the plaintext. The key to (de|en)crypt plaintext will be at least one gigabyte. Processing this exponentially larger data is why FHE based on the system I've described is so slow.

So, if you wanted to, say, add numbers, that would involve implementing a full adder [0] circuit in the FHE system (a sketch of such a circuit appears below).

[0] https://en.wikipedia.org/wiki/Adder_(electronics)#/media/Fil...

For a better overview that is shorter than the linked 250 page paper, I encourage you to consider Jeremy Kun's 2024 overview [1]

[1] https://www.jeremykun.com/2024/05/04/fhe-overview/
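
A minimal sketch of that full adder in plain Python, with the bits left as plaintext 0/1 values just to show the gate-level structure a gate-based FHE scheme would evaluate on encrypted bits:

    def full_adder(a, b, carry_in):
        # One-bit full adder built only from XOR/AND/OR gates.
        s = a ^ b ^ carry_in
        carry_out = (a & b) | (carry_in & (a ^ b))
        return s, carry_out

    def add_bits(x_bits, y_bits):
        # Ripple-carry addition over little-endian bit lists of equal length.
        carry, out = 0, []
        for a, b in zip(x_bits, y_bits):
            s, carry = full_adder(a, b, carry)
            out.append(s)
        out.append(carry)
        return out

    # 5 (101) + 3 (011), little-endian -> [0, 0, 0, 1], i.e. 8
    print(add_bits([1, 0, 1], [1, 1, 0]))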

1oooqooq•4mo ago
The goalposts moved and it's not private anymore, just private enough.
sandworm101•4mo ago
What is the computational burden of FHE over doing the same operation in plaintext? I realize that many cloud proponents think that FHE may allow them to work with data without security worries (if it is all encrypted, and we don't have the keys, it ain't our problem), but if FHE requires a 100x or 1000x increase in processor capacity then I am not sure it will be practical at scale.
layer8•4mo ago
It’s at least a million times slower than non-encrypted computation. 1000x or 100x would be huge progress.
sandworm101•4mo ago
Oh. It really is that bad still. So if the question is between wrapping the plaintext in layers of security, or building out a million new server instances to do it via FHE, I know which one everyone will choose.
bgnn•4mo ago
It's so bad that the only way FHE can get more efficient is to use a non-conventional compute technology. Some want to do it in the optical domain.
j2kun•4mo ago
It is not that bad these days, closer to 10,000x.

Accelerators are being developed that claim to get down to 10x, though I think they will be more like 100-1000x, which would still be a huge improvement considering how people use LLMs today for basic tasks like string matching.

aitchnyu•4mo ago
Are those accelerators software-only? 10x could let a $4 VPS run server-side checks for backup software (evil clients can't clean backups) and git forges (e.g., don't allow X to push to main).
vishakh82•4mo ago
It's really not that bad. We're close to using FHE in a production consumer app.

https://vishakh.blog/2025/08/06/lessons-from-using-fhe-to-bu...

adgjlsfhk1•4mo ago
if you're talking about doing database queries on a 5mb database, why not just ship the database client side and have them do the computation?
vishakh82•4mo ago
You may wish to build a protocol where third parties can asynchronously operate on user data. You may also want to have separation between the end app and the compute layer for legal or practical purposes. Finally, you may not want to store large payloads on client devices.
adgjlsfhk1•4mo ago
5mb is hardly a "large payload"
vishakh82•4mo ago
I'm giving you general reasons why this is the case. For our own app, we hope to build a protocol where third parties can operate async on user data (with consent).
EGreg•4mo ago
Funny thing is

Since neural networks are differentiable, they can be homomorphically encrypted!

That’s right, your LLM can be made to secretly produce stuff hehe

LeGrosDadai•4mo ago
That's pretty cool, but can't any computable function be computed via FHE? So I'm not sure the differentiable part is necessary.
ogogmad•4mo ago
Any program which you apply FHE to needs to be expressed as a circuit, which implies that the time taken to run a computation needs to be fixed in advance. It's therefore impossible to express a branch instruction (or "if" statement, if you prefer).

The circuits are built out of "+" and "×" gates, which are enough to express any polynomial. In turn, these are enough to approximate any continuous function (Weierstrass's approximation theorem). In turn, every computable function on the real numbers is a continuous function - so FHE is very powerful.
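
As a small illustration of that approximation step (a sketch assuming NumPy): fit a low-degree polynomial to ReLU on a bounded interval, the kind of substitution FHE-friendly models make for functions a +/× circuit cannot express exactly:

    import numpy as np

    # ReLU is continuous but not a polynomial, so a circuit of + and x gates
    # cannot compute it exactly; approximate it on a bounded interval instead.
    xs = np.linspace(-4, 4, 401)
    relu = np.maximum(xs, 0.0)

    coeffs = np.polyfit(xs, relu, deg=6)   # least-squares degree-6 fit
    approx = np.polyval(coeffs, xs)

    print("max abs error on [-4, 4]:", np.abs(approx - relu).max())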

jlokier•4mo ago
> In turn, every computable function on the real numbers is a continuous function

That doesn't seem right. Consider the function f(x: ℝ) = 1 if x ≥ 0, 0 otherwise. That's computable but not continuous.

ogogmad•4mo ago
That's uncomputable because equality of real numbers is undecidable. Think infinite strings of digits.
gametorch•4mo ago
ReLU, commonly used in neural networks, is not differentiable at zero but it can still be approximated by expressions that are efficiently FHE-evaluable. You don't truly care about differentiability here, if you're being pedantic.

Very insightful comment, though. LLMs run under FHE (or just fully local LLMs) are a great step forward for mankind. Everyone should have the right to interact with LLMs privately. That is an ideal to strive for.

seanhunter•4mo ago
Differentiability isn’t a requirement for homomorphism I don’t think.

Homomorphism just means: say I have a bijective function [1] f: A -> B and binary operators * on A and *' on B; then f is homomorphic if f(a1*a2) = f(a1)*'f(a2). Loosely speaking, it "preserves structure".

So if f is my encryption, then I can do *' outside the encryption and I know, because f is homomorphic, that the result is identical to doing * inside the encryption. So you need your encryption to be an isomorphism [2] and you need to have "outside the encryption" variants of any operation you want to do inside the encryption. That is a different requirement from differentiability. (A toy example of the homomorphic property follows below.)

1: bijective means it's a one-to-one correspondence

2: a bijection that has the homomorphism property is called an isomorphism because it makes set A equivalent to set B in our example.
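
A toy instance of that property: textbook (unpadded) RSA is multiplicatively homomorphic, since Enc(m1)·Enc(m2) mod n decrypts to m1·m2 mod n. A minimal sketch with tiny, insecure parameters chosen purely for illustration:

    # Toy textbook-RSA parameters (insecure, purely illustrative).
    p, q, e = 61, 53, 17
    n = p * q                  # 3233
    phi = (p - 1) * (q - 1)    # 3120
    d = pow(e, -1, phi)        # private exponent (Python 3.8+)

    def enc(m): return pow(m, e, n)
    def dec(c): return pow(c, d, n)

    m1, m2 = 7, 11
    c = (enc(m1) * enc(m2)) % n    # multiply the *ciphertexts*...
    print(dec(c), (m1 * m2) % n)   # ...decrypting gives the product of the plaintexts: 77 77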

arjvik•4mo ago
Is the title broken?

I see “Unified Line and Paragraph Detection by Graph Convolutional Networks (2022)”

fhe•4mo ago
I see the same, and there is a posting with that title (and linking to the correct paper) also on the HN frontpage. Wondering what's going on.
EarlKing•4mo ago
You're not alone. I saw that FHE paper earlier, so... what's going on?
Qision•4mo ago
Sorry for not responding earlier. This is probably a bug but it's super weird... I just emailed the mods about this.
tomhow•4mo ago
Sorry about this. That was my screwup.

There were (at least) two posts from arxiv.org on the front page at the time, and when I was updating the title on the other one I must have applied it to this one instead. I've fixed it now and re-upped it onto the front page so it can have full exposure with its correct title.

karolcodes•4mo ago
Man, imagine having time to read such papers. I genuinely would read it, but I know this alone is like 30 hours of study.
inasio•4mo ago
I was surprised that for almost 300 pages there were only 26 references listed in the back. Not the end of the world by any means, and clearly a ton of work went into this, but I find it useful to see from the references how it overlaps with other subjects I may know more about.
logannyeMD•4mo ago
FWIW: I created a github repo for compact zero-knowledge proofs that could be useful for privacy-preserving ML models of reasonable size (https://github.com/logannye/space-efficient-zero-knowledge-p...). Unfortunately, FHE's computational overhead is still prohibitive for running ML workloads except on very small models. Hoping to help make ZKML a little more practical.
infimum•4mo ago
This sounds super interesting. Can you elaborate on how you apply ZK to ML? (or can you point me to any resources?)
oulipo2•4mo ago
Did you check Zama.ai's work on FHE?
hamburgererror•4mo ago
Let's assume for a second that the problem of computational cost is solved and that using FHE is similar to using plaintext data.

My question might be very naive, but I'd like to better understand the impact of FHE. Discussions here seem to revolve very much around the use of FHE in ML, but are there other uses for FHE?

For example, could it be used for everyday work in an OS or a messaging app?

Also, is it the path to true obfuscation?

dcminter•4mo ago
That's a big stretch for the premise, but...

There's no value to it in circumstances where you control all the hardware processing the data. So "everyday work in an OS": only if that OS is hosted on someone else's hardware; "a messaging app": only if you expect some of the messages or metadata to undergo processing on someone else's hardware.

It seems wildly unlikely that the performance characteristics will improve dramatically, so in practice the uses are going to remain somewhat niche.

hamburgererror•4mo ago
> There's no value to it in circumstances where you control all the hardware processing data

But what about the case where you don't have so much control over what runs next to your program? Could an attacker run a program in order to extract some data while your program is running?

Also, could FHE offer some protection against vulnerabilities like Meltdown and Spectre?

> It seems wildly unlikely that the performance characteristics will improve dramatically

Why? Are there specific signs of this already? I had the impression that every time people believe that about a technology, they get proven wrong later.

GTP•4mo ago
The typical, and also most useful, example use case for FHE is running computational tasks on some cloud service without having to trust it. And yes, it would provide protection against Meltdown and Spectre (if performed on the hardware running the computation), as the attacker would only be able to extract encrypted data.
dcminter•4mo ago
The data has to be decrypted at some point in order to display it... unless we're envisioning FHE hardware in the monitor as well - honestly I think we're well across the threshold into fantasy already, though.
GTP•4mo ago
Of course the data has to be decrypted, but in this case you would decrypt it on your client machine, so that you don't need to trust the cloud provider or other third parties using VMs on the same server (side channel attacks can sometimes be exploited from another VM running on the same hardware, although this is rarely considered as part of one's threat model).