OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
484•klaussilveira•7h ago•125 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
821•xnx•13h ago•494 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
44•matheusalmeida•1d ago•5 comments

A century of hair samples proves leaded gas ban worked

https://arstechnica.com/science/2026/02/a-century-of-hair-samples-proves-leaded-gas-ban-worked/
103•jnord•3d ago•14 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
159•dmpetrov•8h ago•70 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
162•isitcontent•7h ago•18 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
55•quibono•4d ago•7 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
214•eljojo•10h ago•136 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
266•vecti•9h ago•125 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
333•aktau•14h ago•159 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
329•ostacke•13h ago•86 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
416•todsacerdoti•15h ago•220 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
30•kmm•4d ago•1 comment

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
7•romes•4d ago•1 comment

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
346•lstoll•14h ago•245 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
54•phreda4•7h ago•9 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
203•i5heu•10h ago•149 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
116•vmatsiiako•12h ago•39 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
154•limoce•3d ago•79 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
250•surprisetalk•3d ago•32 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1006•cdrnsf•17h ago•421 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
28•gfortaine•5h ago•4 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
50•rescrv•15h ago•17 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
11•gmays•2h ago•2 comments

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
79•ray__•4h ago•38 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
39•lebovic•1d ago•11 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
78•antves•1d ago•59 comments

How virtual textures work

https://www.shlom.dev/articles/how-virtual-textures-really-work/
32•betamark•14h ago•28 comments

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
41•nwparker•1d ago•11 comments

Claude Opus 4.6

https://www.anthropic.com/news/claude-opus-4-6
2278•HellsMaddy•1d ago•982 comments

Running a Certificate Transparency log

https://words.filippo.io/run-sunlight/
160•Metalnem•7mo ago

Comments

agwa•7mo ago
Sunlight and static-ct-api are a breath of fresh air in the CT log space. Traditional CT log implementations were built on databases (because that's the easiest way to implement the old API) and were over-complicated by a misplaced desire for high write availability. This made operating a CT log difficult and expensive (some operators were spending upwards of $100k/year). Consequently, there has been a rash of CT log failures and few organizations willing to run logs. I'm extremely excited by how Sunlight and static-ct-api are changing this.
eddythompson80•7mo ago
I wonder if this is the solution something like SponsorBlock is looking for [1][2]. They have a similar-ish problem: crowdsourced data that trickles in slowly, but that you ideally want replicated quickly.

WAL replication, rsync, BitTorrent, etc. are all things that don't quite work as needed.

[1] https://github.com/mchangrh/sb-mirror/blob/main/docs/breakdo...

[2] https://github.com/ajayyy/SponsorBlock/issues/1570

tonymet•7mo ago
Is any amateur or professional auditing done on the CA system? Something akin to amateur radio auditing?

Consumers and publishers take certificates for granted. I see many broken certs, and brands using the wrong certs and domains for their services.

SSL/TLS has done well at preventing eavesdropping, but it hasn't done as well at establishing trust and identity.

sleevi•7mo ago
All the time. Many CA distrust events involved some degree of “amateurs” reporting issues. While I hesitate to call commenters like agwa an amateur, it certainly was not professionally sponsored work by root programs or CAs. This is a key thing that Certificate Transparency enables: amateurs, academics, and the public at large to report CA issues.

At the same time, it sounds like the issues you describe aren't CA/issuance issues but rather simple misconfigurations. Those aren't incidents for the ecosystem, although they can definitely be disruptive to the site, and I wouldn't expect them to call trust or identity into disrepute. That'd be like arguing my driver's license is invalid if I handed you my passport; giving you the wrong doc doesn't invalidate the claims of either, it just doesn't address your need.

tonymet•7mo ago
It seems more ad hoc and bounty-driven rather than systematic. Is that a fair perspective?
agwa•7mo ago
I wish there were bounties :-)

There is systematic checking - e.g. crt.sh continuously runs linters on certificates found in CT logs, I continuously monitor domains which are likely to be used in test certificates (e.g. https://bugzilla.mozilla.org/show_bug.cgi?id=1496088), and it appears the Chrome root program has started doing some continuous compliance monitoring based on CT as well.

But there is certainly a lot of ad-hoc checking by community members and academics, which as Sleevi said is one of the great things that CT enables.
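
A minimal sketch of what one such lint can look like, assuming Python with the cryptography package (real linters like zlint implement hundreds of rules; this checks just one):

```python
# Toy certificate lint in the spirit of the continuous checks described
# above; zlint and friends implement hundreds of such rules.
# Assumes cryptography >= 42 for the *_utc properties.
from datetime import timedelta
from cryptography import x509

MAX_LIFETIME = timedelta(days=398)  # browser-enforced cap on leaf cert lifetime

def lint_validity(pem: bytes) -> list[str]:
    """Return a list of findings for one PEM-encoded certificate."""
    cert = x509.load_pem_x509_certificate(pem)
    lifetime = cert.not_valid_after_utc - cert.not_valid_before_utc
    findings = []
    if lifetime > MAX_LIFETIME:
        findings.append(f"validity of {lifetime.days} days exceeds the 398-day limit")
    return findings
```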

tonymet•7mo ago
Thanks for highlighting that, and for the efforts to assemble this project. Honestly, before this post about the CT logs I hadn't been aware of systematic auditing being done.
Spivak•7mo ago
I think over the years trust and identity have gone out of scope for TLS, and I think for the better. Your identity is your domain, and it's not TLS's problem to connect that identity to any real-life person or legal entity. I'm sure you can still buy EV certs, but no one really cares about them anymore. Certainly browsers no longer care about them. And TLS makes no claim about the trustworthiness of the site you're connecting to, just that the owner of the cert proved control of the domain and that your connection is encrypted.

I can't even imagine how much of a pain it would be to try to moderate certs based on some consistent international notion of trustworthiness. I think the best you could hope to do is have third parties like the BBB sign your cert as a way of "vouching" for you.

NovemberWhiskey•7mo ago
Meet the QWAC.

https://en.m.wikipedia.org/wiki/Qualified_website_authentica...

oasisbob•7mo ago
Yup, it happens. There was a case I remember where a CA was issuing certs using the .int TLD for their own internal use, which it should not have been doing.

I happened to see it in the CT logs, and when that CA next came up for discussion on the Mozilla dev security policy list, their failure to address and disclose the misissuance in a timely manner was enough to stop the process of approving their request for EV recognition. It ended in a denial from Mozilla.

dlgeek•7mo ago
Yes. All CAs trusted by browsers have to go through WebTrust or ETSI audits by accredited auditors.

See https://www.mozilla.org/en-US/about/governance/policies/secu... and https://www.ccadb.org/auditors and https://www.ccadb.org/policy#51-audit-statement-content

tptacek•7mo ago
As I understand them, these are accounting audits, similar (if perhaps more detailed) to a SOC 2. The real thing keeping CAs from being gravely insecure is the CA death penalty Google will inflict if a CA suffers a security breach that results in any kind of misissuance.
creatonez•7mo ago
It's not just Google, but also Mozilla, Apple, and Microsoft. They all work together on shutting down bad behavior.

Apple and Microsoft mainly have power because they control Safari and Edge. Firefox is of course dying, but they still wield significant power because their trusted CA list is copied by all the major Linux distributions that run on servers.

tptacek•7mo ago
Sure. I think Google and Mozilla have been the prime movers to date, but everyone has upped their game since Verisign/Symantec.
tonymet•7mo ago
That's good news about the CAs, but what about the publisher certificates that are in use?
torbid•7mo ago
These sound like good improvements, but I still don't really get why the CT log server is responsible for storage at all (as a third-party entity).

Couldn't it just be responsible for its own key, signing incremental advances to a log that all publishers are responsible for storing up to their latest submission?

If it needed to restart and some publisher couldn't give it its latest entries, well, they would deserve that rollback to the last publish from a good publisher.

michaelt•7mo ago
The point of CT logging is to ensure a person can ask "What certificates were issued for example.com?" or "What certificates were issued by Example CA?" and get an answer that's correct - even if the website or CA fucked up or got hacked and certificates are in the hands of people who've tried to cover their tracks.

This requires the logs be held by independent parties, and retained forever.
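
One concrete way to ask those questions today is crt.sh, which indexes the public CT logs. A minimal sketch, assuming its JSON interface (the output=json parameter and the field names reflect crt.sh as I understand it):

```python
# Ask "what certificates were issued for example.com?" against crt.sh,
# which aggregates CT log entries. %25 is a URL-encoded % wildcard.
import json
from urllib.request import urlopen

with urlopen("https://crt.sh/?q=%25.example.com&output=json") as resp:
    certs = json.load(resp)

for c in certs[:5]:
    # Fields as returned by crt.sh: issuance time, issuer, covered names.
    print(c["not_before"], c["issuer_name"], c["name_value"])
```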

torbid•7mo ago
I understand that. But:

If 12 CAs send to the same log, and all have to save up to their latest entry or be declared incompetent to be CAs, how would all 12 possibly do a worse job of providing that log on demand than a random third party with no particular investment at risk?

(Every other CA in a log is a third party with respect to any other, but one that can actually be told to keep something indefinitely, because it would also need to return it to legitimize its own issuance.)

michaelt•7mo ago
As far as I know, CAs don't have to "save up to their latest entry".

The info they get back from the CT log may be a Merkle hash that partly depends on the other entries in the log, but they don't have to store the entire log, just a short checksum.
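
For the curious, that "short checksum" is the Merkle tree head. A minimal sketch of how RFC 6962 computes it (the 0x00/0x01 domain-separation prefixes are from the RFC):

```python
# RFC 6962 Merkle Tree Hash: leaves are prefixed 0x00, interior nodes 0x01,
# and the tree splits at the largest power of two smaller than the leaf count.
import hashlib

def mth(leaves: list[bytes]) -> bytes:
    n = len(leaves)
    if n == 0:
        return hashlib.sha256(b"").digest()
    if n == 1:
        return hashlib.sha256(b"\x00" + leaves[0]).digest()
    k = 1
    while k * 2 < n:
        k *= 2
    return hashlib.sha256(b"\x01" + mth(leaves[:k]) + mth(leaves[k:])).digest()

# Appending entries never disturbs the old tree: the previous head is still
# recomputable from the prefix, which is what consistency proofs exploit.
log = [b"cert-1", b"cert-2", b"cert-3"]
head = mth(log)
log.append(b"cert-4")
assert mth(log[:3]) == head
```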

torbid•7mo ago
Right, and this is what I am saying is backwards about the protocol. It is not in anyone's best interest that some random third party takes responsibility for preserving data indefinitely to prove things for CAs. The CA should identify where it keeps its copy in the certificate extension, and looking at one CA's copy you would find every other CA's copy of the same CT log.
singron•7mo ago
The publishers can't entirely do the storage themselves, since the whole point of CT is that they can't retract anything. If they did their own storage, they could roll back any change. Even if the log forms a verification chain, they could do a rollback shortly after issuing a certificate without arousing too much suspicion.

Maybe there is an acceptable way to shift long-term storage to CAs while using CT verifiers only for short-term storage? E.g. they keep track of their last 30 days of signatures for a CA, which can then get cross-verified by other verifiers in that timeframe.

The storage requirements don't seem that bad, though, and it might not be worth the reduced redundancy and increased complexity of a different storage scheme. E.g. what keeps me from doing this is the >1 Gbps and >1 pager requirements.

torbid•7mo ago
If CAs have to share CT logs, and have to save everything the log would save up to their last submission, then no CA can destroy the log without colluding with the other CAs.

(I.e. your log ends abruptly, but polling any other CA that published to the same log shows there is more, including reasons to shut you down.)

I don't see how a scheme where the CT signer has this responsibility makes any sense. If they stop operating because they are sick of it, all the CAs involved have a somewhat suspicious-looking CT history on things already issued that has to be explained, instead of having always had the responsibility to provide the history up to anything they have signed, whether or not some CT log goes away.

NoahZuniga•7mo ago
> Even if the log forms a verification chain, they could do a rollback shortly after issuing a certificate without arousing too much suspicion.

This is not true. A rollback is instantly noticeable (because the consistency of Signed Tree Heads can no longer be demonstrated) and is a very large failure of the log. What could happen is that a log issues a Signed Certificate Timestamp that can be used to show browsers that the cert is in the log, but never incorporates said cert into the log. This is less obvious, but doing it maliciously isn't really going to achieve much, because all certs have to be logged in at least 2 logs to be accepted by browsers.

> Maybe there is an acceptable way to shift long-term storage to CAs while using CT verifiers only for short term storage? E.g. they keep track of their last 30 days of signatures for a CA, which can then get cross-verified by other verifiers in that timeframe.

An important source of stress in the PKI community is that there are many CAs, and a significant portion of them don't really want the system to be secure. (Their processes are of course perfect, so all this certificate logging is just them being pestered). Browser operators (and other cert users) do want the system to be secure.

An important design goal for CT was that it would require very little extra effort from CAs (and this drove many compromises). Google and other members of the CA/Browser Forum would rather spend their goodwill on things that make the system more secure (i.e. shorter certificate lifetimes) than on getting CAs to pay for the operating costs of CT logs. The cost for Google to host a CT log is very little.

dboreham•7mo ago
Add an incentive mechanism to motivate running a server, and hey, it's a blockchain. But those have no practical application, so it must not be a blockchain...
schoen•7mo ago
There is some historical connection between CT and blockchains.

http://www.aaronsw.com/weblog/squarezooko

Ben Laurie read this post by Aaron Swartz while thinking about how a certificate transparency mechanism could work. (I think Peter Eckersley may have told him about it!) The existence proof confirmed that we sort of knew how to make useful append-only data structures with anonymous writers.

CT dropped the incentive mechanism and the distributed log updates in favor of more centralized log operation, federated logging, and out-of-band audits of identified log operators' behavior. This mostly means that CT lacks the censorship resistance of a blockchain. It also means that someone has to directly pay to operate it, without recouping the expenses of maintaining the log via block rewards. And browser developers have to manually confirm logs' availability properties in order to decide which logs to trust (with -- returning to the censorship resistance property -- no theoretical guarantee that there will always be suitable logs available in the future).

This has worked really well so far, but everyone is clear on the trade-offs, I think.

Dylan16807•7mo ago
Yes, that is correct. (Other than the word "must"? I'm not entirely sure of your intent there.) This is close to a blockchain in some ways, but a blockchain-style incentive mechanism would be a net negative, so it doesn't have that.

If you figure out a good way to involve an incentive structure like that, let us know!

some_random•7mo ago
I'm happy to offer an incentive of 100 Cert Points (issued by me, redeemable with me at my discretion) to anyone running CT /s

In all seriousness, the incentive is primarily in having the data imo

gslin•7mo ago
The original article seems to have been deleted, so here is an archived copy: https://archive.ph/TTXnK
FiloSottile•7mo ago
My bad! This is what I get for doing a deploy to fix the layout while the post is on HN. Back up now.
ysnp•7mo ago
https://words.filippo.io/passkey-encryption/ also seems to be gone?
FiloSottile•7mo ago
That instead was a draft that should not have gone out yet, but the API filter didn't work :)

I’ll mail that one towards the end of the week.

gucci-on-fleek•7mo ago
https://web.archive.org/web/20250707205158/https://words.fil...
gslin•7mo ago
> You Should Run a Certificate Transparency Log

And:

> Bandwidth: 2 – 3 Gbps outbound.

I am not sure if this is correct. Is 2-3 Gbps really required for CT?

remus•7mo ago
It seems like Filippo has been working quite closely with people running existing CT logs to try to reduce the requirements for running a log, so I'd assume he has a fairly realistic handle on the requirements.

Do you have a reason to think his number is off?

ApeWithCompiler•7mo ago
> or an engineer looking to justify an overprovisioned homelab

In Germany, 2-3 Gbps outbound is a milestone, even for enterprises. As an individual, I am privileged to have 250 Mbps down / 50 Mbps up.

So it's at least far beyond what any individual in this country could imagine.

jeroenhd•7mo ago
You can rent 10 Gbps service from various VPS providers if you can't get the bandwidth at home. Your home ISP will probably have something to say about a continuous 2 Gbps upstream anyway, whether it's through data caps or fair-use policies.

Still, even in Germany, with its particularly lacking internet infrastructure for the wealth the country possesses, M-net is slowly rolling out 5 Gbps internet.

nucleardog•7mo ago
Yeah, the requirements aren't too steep here. I could easily host this in my "homelab" if I gave a friend a key to access my utility room for when I'm away or unavailable.

But 2-3 Gbps of bandwidth makes this pretty inaccessible unless you're just offloading the bulk of it onto CloudFront/Cloudflare, at which point... it seems to me we don't really have more people running logs in any meaningful sense, just somebody paying Amazon a _lot_ of money. If I'm doing my math right, this is something like 960 TB/mo, which is something like a $7.2M/yr CloudFront bill. Even with some lesser-known CDN providers, we're still talking like $60k/yr.

Seems to me the bandwidth requirement means this is only going to work if you already have some unmetered connections lying around.

If anyone wants to pay the build-out costs to put an unmetered 10 Gbps line out to my house, I'll happily donate some massively overprovisioned hardware, redundant power, etc.!

gslin•7mo ago
Let's Encrypt issues 9M certs per day (https://letsencrypt.org/stats/), and its market share is 50%+ (https://w3techs.com/technologies/overview/ssl_certificate), so I assume there are <20M certs issued per day.

If all certs are sent to just one CT log server, and each cert generates ~10 KB of outbound traffic, that's ~200 GB/day, or ~20 Mbps of full, even traffic: not in the same ballpark as 2-3 Gbps.

So I guess there is something I don't understand?
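
That arithmetic as a runnable sketch, with every input an assumption taken from the comment above:

```python
# Back-of-envelope write-side estimate; inputs are the comment's guesses.
certs_per_day = 20_000_000       # assumed upper bound on daily issuance
bytes_per_cert = 10 * 1024       # ~10 KB outbound per logged certificate

gb_per_day = certs_per_day * bytes_per_cert / 1e9
mbps = certs_per_day * bytes_per_cert * 8 / 86_400 / 1e6
print(f"~{gb_per_day:.0f} GB/day, ~{mbps:.0f} Mbps sustained")
# ~205 GB/day, ~19 Mbps: write traffic alone is nowhere near 2-3 Gbps,
# which the replies below resolve: the figure is read traffic.
```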

bo0tzz•7mo ago
I've been trying to get an understanding of this number myself as well. I'm not quite there yet, but I believe it's talking about read traffic, i.e. serving clients that are looking at the log, not handling new certificates coming in.
FiloSottile•7mo ago
I added a footnote about it. It’s indeed read traffic, so it’s (certificate volume x number of monitors x compression ratio) on average. But then you have to let new monitors catch up, so you need burst.

It’s unfortunately an estimate, because right now we see 300 Mbps peaks, but as Tuscolo moves to Usable and more monitors implement Static CT, 5-10x is plausible.

It might turn out that 1 Gbps is enough and the P95 is 500 Mbps. Hard to tell right now, so I didn’t want to get people in trouble down the line.

Happy to discuss this further with anyone interested in running a log via email or Slack!
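
A sketch of that formula with made-up inputs (every number below is an illustrative assumption, not a measurement):

```python
# Average read traffic per the formula above: each monitor downloads the
# full (compressed) certificate stream. All inputs are illustrative guesses.
write_mbps = 19         # sustained inbound certificate volume (see sketch above)
monitors = 50           # assumed number of monitors tailing the log
compression = 0.5       # assumed compressed/uncompressed ratio of tile data

avg_read_mbps = write_mbps * monitors * compression
print(f"~{avg_read_mbps:.0f} Mbps average")   # ~475 Mbps before bursts
# Provisioning 2-3 Gbps leaves headroom for a new monitor downloading the
# whole log from scratch, a burst far above the average.
```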

bo0tzz•7mo ago
Thanks, that clarifies a lot!
xiconfjs•7mo ago
So we are talking about 650 TB+ of traffic per month, or $700 per month just for bandwidth… so surely not a one-man project.
dilyevsky•7mo ago
If you're paying metered, you're off by an order of magnitude; it's much more expensive. Even bandwidth-based transit will be more expensive than that at most colos.
dpifke•7mo ago
I pay roughly $800/mo each for two 10 Gbps transit connections (including cross-connect fees), plus $150/mo for another 10 Gbps peering connection to my local IX. 2-3 Gbps works out to less than $200/mo. (This is at a colo in Denver for my one-man LLC.)
nomaxx117•7mo ago
I wonder how much putting a CDN in front of this would reduce it.

According to the readme, the bulk of the traffic is highly cacheable, so presumably you could park a CDN in front and substantially reduce the bandwidth requirements.

mcpherrinm•7mo ago
Yes, the static-ct API is designed to be highly cacheable by a CDN.

That is one of the primary motivations of its design over the previous CT API, which had some relatively flexible requests that made caching less effective.
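
To make the cacheability concrete: as I read the c2sp.org/static-ct-api spec, monitors fetch a small mutable checkpoint plus immutable, position-addressed tiles, so a CDN can cache the tiles indefinitely. A hedged sketch with a placeholder log URL:

```python
# Fetch the two kinds of static-ct-api resources; the paths follow my
# reading of the spec, and the log base URL is a placeholder.
from urllib.request import urlopen

base = "https://ct-log.example.com"  # hypothetical static-ct-api log

# The checkpoint is tiny and changes as the tree grows: cache briefly.
checkpoint = urlopen(f"{base}/checkpoint").read().decode()
print(checkpoint)  # signed note: log origin, tree size, root hash

# Tiles are addressed purely by position, so their content never changes:
# a CDN can serve them with an effectively infinite TTL.
first_data_tile = urlopen(f"{base}/tile/data/0").read()
```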

ncrmro•7mo ago
Seems like something that might be useful to store on Arweave, a blockchain for storage. Fees go to an endowment that has been calculated to far exceed the cost of growing storage.
cypherpunks01•7mo ago
How does the CT system generally accomplish the goal of append-only entries, with public transparency of when entries were made?

Is this actually a good use case for (gasp) blockchains? Or would it be too much data?

udev4096•7mo ago
Instead of mainstreaming DANE, you want me to help a bunch of centralized CAs? No thanks. DANE is the future; it will happen.
jcgl•7mo ago
I like the idea and I like DNSSEC too (well, well enough at least—lots of tooling could be better), but DANE can’t catch on faster than DNSSEC does. And DNSSEC isn’t exactly taking the world by storm.
udev4096•7mo ago
Even if everyone catches up on DANE, we still have ICANN controlling the registrars and root nameservers. It's absolutely crazy how everyone (incl. me, unfortunately) gave in to the centralized DNS model
jcgl•7mo ago
Two things:

1. Relying on just ICANN instead of ICANN+CA Forum would be an improvement. I assume, at least? Thinking about it though, the CA Forum setup with transparency logs and such does provide some safeguards against CA operator abuse. Those are safeguards that wouldn't be available in a DANE-only world where nameserver operators could surreptitiously inject malicious TLSA records at their whim. That could be safeguarded by DNSSEC where the domain owner does their own signing and then the nameserver operator simply serves those pre-signed records. However, that's a lot of complication. Gonna have to think about this...

2. Tbh I am not convinced of the virtues of decentralized DNS. If people use different roots in practice, then we lose the utility of a single view of names. At its most extreme, you then would not be able to reliably do things like publish a URL. However, maybe you're suggesting that DNS shouldn't be centralized with the root, but rather have a constellation of TLDs as roots? Obviously that would require shipping resolvers with hardcoded roots and wouldn't be robust when new TLDs are brought online. But maybe there'd be value in that...I'm not convinced yet though.

tptacek•7mo ago
Intrinsic to the DNSSEC PKI model: the parent domain controls your domain. Under DANE, .COM silently controls the TLS keys for MAIL.GOOGLE.COM.
jcgl•6mo ago
And [the union of CAs] not-so-silently controls TLS for the whole world. And if the transparency logs are the linchpin of trust for Web PKI, then I don't think it's too hard to imagine a system where you have a similar transparency system for zone-signing keys too.
tptacek•6mo ago
It's in fact very difficult to imagine mandatory transparency logs in the DNS PKI. The story of how mandatory logs came to be for TLS involved Google and Mozilla putting a gun to the heads of the CA industry, after murdering several of them. Nobody can do that to the DNS, and just as importantly, governments don't want them to.
jcgl•6mo ago
In a world where DANE catches on on the web, I don't see why Google and Mozilla couldn't do that again. I mean, presumably there'd need to be some evidence of malfeasance, like there was with Web PKI. I don't see why Mozilla alone couldn't start by putting the screws to a smaller ccTLD and some medium-sized DNS hosts, for instance.

That said, I don't particularly see DANE growing on the web.

tptacek•6mo ago
Google and Mozilla can't "dis-trust" .COM. They're stuck with it.
udev4096•6mo ago
DANE does not even work without DNSSEC; it's a prerequisite. The root nameservers should be distributed, so anyone can run one. Kinda like Bitcoin nodes.
baobun•7mo ago
On the downstream side, Mozilla has made some great progress with CRLite, which I think hasn't gotten enough attention.

https://youtube.com/watch?v=gnB76DQI1GE&t=19517s

https://research.mozilla.org/files/2025/04/clubcards_for_the...

1vuio0pswjnm7•7mo ago
Original HN title: "You Should Run a Certificate Transparency Log"