Consumers and publishers take certificates for granted. I see many broken certs, or brands using the wrong certs and domains for their services.
SSL/TLS has done well at preventing eavesdropping, but it hasn't done as well at establishing trust and identity.
At the same time, it sounds like the issues you describe aren't CA/issuance issues but rather simple misconfigurations. Those aren't incidents for the ecosystem, although they can certainly be disruptive to the site, and I wouldn't expect them to call trust or identity into disrepute. That'd be like arguing my driver's license is invalid because I handed you my passport; giving you the wrong document doesn't invalidate the claims of either, it just doesn't address your need.
There is systematic checking: e.g. crt.sh continuously runs linters on certificates found in CT logs, I continuously monitor domains that are likely to be used in test certificates (e.g. https://bugzilla.mozilla.org/show_bug.cgi?id=1496088), and the Chrome root program appears to have started doing some continuous compliance monitoring based on CT as well.
But there is certainly a lot of ad-hoc checking by community members and academics, which, as Sleevi said, is one of the great things that CT enables.
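As a rough illustration of that kind of ad-hoc checking, here is a minimal sketch against crt.sh's informal JSON interface (the query parameters and field names below are the ones commonly used, not a documented API, so treat them as assumptions):

```python
# Minimal sketch: pull recently logged certs for a domain from crt.sh.
# The q=/output=json parameters and the issuer_name/name_value fields are
# crt.sh's informal interface, so treat them as assumptions.
import requests

def certs_for_domain(domain: str) -> list[dict]:
    resp = requests.get(
        "https://crt.sh/",
        params={"q": f"%.{domain}", "output": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for entry in certs_for_domain("example.com")[:10]:
        # A monitor would flag unexpected issuers or names here.
        print(entry.get("issuer_name"), "->", entry.get("name_value"))
```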
I can't even imagine how much of a pain it would be to try to moderate certs based on some consistent international notion of trustworthiness. I think the best you could hope for is to have third parties like the BBB sign your cert as a way of "vouching" for you.
I happened to see it in the CT logs, and when that CA next came up for discussion on the Mozilla dev-security-policy list, their failure to address and disclose the misissuance in a timely manner was enough to stop the approval process for their EV recognition request; it ended in a denial from Mozilla.
See https://www.mozilla.org/en-US/about/governance/policies/secu... and https://www.ccadb.org/auditors and https://www.ccadb.org/policy#51-audit-statement-content
Apple and Microsoft mainly have power because they control Safari and Edge. Firefox is of course dying, but they still wield significant power because their trusted CA list is copied by all the major Linux distributions that run on servers.
Couldn't it just be responsible for its own key and for signing incremental advances to a log, with all publishers responsible for storing the log up to their latest submission to it?
If it needed to restart and the last publisher couldn't give it its latest entries, well, they would deserve that rollback to the last publish from a good publisher.
This requires the logs be held by independent parties, and retained forever.
If 12 CAs send to the same log and all have to save up to their latest entry not to be declared incompetent to be CAs, how would all 12 possibly do a worse job of providing that log on demand than a random 3rd party who has no particular investment at risk?
(Every other CA in a log is a 3rd party with respect to any other, but they are one who can actually be told to keep something indefinitely because they would also need to return it for legitimizing their own issuance.)
The info they get back from the CT log may be a Merkle hash that partly depends on the other entries in the log, but they don't have to store the entire log, just a short checksum.
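Concretely, that short checksum is a Merkle tree head. Here is a sketch of the RFC 6962 hashing, just to show how a 32-byte root commits to every entry in the log (the entry contents below are placeholders):

```python
# Sketch of the RFC 6962 Merkle Tree Hash: a 32-byte root that commits to
# every entry in the log, so a party only needs to retain the root (and
# the tree size), not the entries themselves.
import hashlib

def _largest_power_of_two_less_than(n: int) -> int:
    k = 1
    while k * 2 < n:
        k *= 2
    return k

def merkle_root(entries: list[bytes]) -> bytes:
    if not entries:
        return hashlib.sha256(b"").digest()
    if len(entries) == 1:
        # Leaf hash: SHA-256(0x00 || entry)
        return hashlib.sha256(b"\x00" + entries[0]).digest()
    k = _largest_power_of_two_less_than(len(entries))
    # Interior node hash: SHA-256(0x01 || left_root || right_root)
    return hashlib.sha256(
        b"\x01" + merkle_root(entries[:k]) + merkle_root(entries[k:])
    ).digest()

if __name__ == "__main__":
    leaves = [f"cert-{i}".encode() for i in range(5)]
    print(merkle_root(leaves).hex())  # changes if any entry changes
```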
Maybe there is an acceptable way to shift long-term storage to CAs while using CT verifiers only for short term storage? E.g. they keep track of their last 30 days of signatures for a CA, which can then get cross-verified by other verifiers in that timeframe.
The storage requirements don't seem that bad though, and it might not be worth the reduced redundancy and increased complexity of a different storage scheme. E.g. what keeps me from doing this is the >1 Gbps and >1 pager requirements.
(I.e. your log ends abruptly, but polling any other CA that published to the same CT log shows there is more, including reasons to shut you down.)
I don't see how a scheme where the CT signer has this responsibility makes any sense. If they stop operating because they are sick of it, all the CAs involved are left with a somewhat suspicious-looking CT history for things they have already issued that has to be explained, instead of having always had the responsibility to provide the history up to anything they have signed, whether or not some CT log goes away.
This is not true. A rollback is instantly noticeable (because the consistency of Signed Tree Heads can no longer be demonstrated) and is a very large failure of the log. What could happen is that a log issues a Signed Certificate Timestamp, which can be used to show browsers that the cert is in the log, but never actually incorporates the cert into the log. This is less obvious, but doing it maliciously isn't really going to achieve much, because all certs have to be logged in at least two logs to be accepted by browsers.
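A rough sketch of the check a monitor performs on each new Signed Tree Head; verify_consistency here is a stand-in for RFC 6962 consistency-proof verification, not a real API:

```python
# Simplified sketch of why a log rollback is immediately visible: once an
# STH (tree size + root hash) has been observed, every later STH must
# cover at least as many entries and prove consistency with the old root.
# verify_consistency() is a placeholder for the RFC 6962 check.
from dataclasses import dataclass

@dataclass
class STH:
    tree_size: int
    root_hash: bytes

def verify_consistency(old: STH, new: STH, proof: list[bytes]) -> bool:
    """Placeholder for RFC 6962 consistency-proof verification."""
    raise NotImplementedError

def check_new_sth(last_seen: STH, new: STH, proof: list[bytes]) -> STH:
    if new.tree_size < last_seen.tree_size:
        raise RuntimeError("log rollback: tree shrank")  # instant failure
    if not verify_consistency(last_seen, new, proof):
        raise RuntimeError("split view / rewritten history")
    return new  # the new STH extends the old one and becomes last_seen
```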
> Maybe there is an acceptable way to shift long-term storage to CAs while using CT verifiers only for short term storage? E.g. they keep track of their last 30 days of signatures for a CA, which can then get cross-verified by other verifiers in that timeframe.
An important source of stress in the PKI community is that there are many CAs, and a significant portion of them don't really want the system to be secure. (Their processes are of course perfect, so all this certificate logging is just them being pestered). Browser operators (and other cert users) do want the system to be secure.
An important design goal for CT was that it would require very little extra effort from CAs (and this drove many compromises). Google and other members of the CA/Browser Forum would rather spend their goodwill on things that make the system more secure (i.e. shorter certificate lifetimes) than on getting CAs to pay the operating costs of CT logs. The cost for Google to host a CT log is very small.
http://www.aaronsw.com/weblog/squarezooko
Ben Laurie read this post by Aaron Swartz while thinking about how a certificate transparency mechanism could work. (I think Peter Eckersley may have told him about it!) The existence proof confirmed that we sort of knew how to make useful append-only data structures with anonymous writers.
CT dropped the incentive mechanism and the distributed log updates in favor of more centralized log operation, federated logging, and out-of-band audits of identified log operators' behavior. This mostly means that CT lacks the censorship resistance of a blockchain. It also means that someone has to directly pay to operate it, without recouping the expenses of maintaining the log via block rewards. And browser developers have to manually confirm logs' availability properties in order to decide which logs to trust (with -- returning to the censorship resistance property -- no theoretical guarantee that there will always be suitable logs available in the future).
This has worked really well so far, but everyone is clear on the trade-offs, I think.
If you figure out a good way to involve an incentive structure like that, let us know!
In all seriousness, the incentive is primarily in having the data, IMO.
I’ll mail that one towards the end of the week.
And:
> Bandwidth: 2 – 3 Gbps outbound.
I am not sure if this is correct. Is 2-3 Gbps really required for CT?
Do you have a reason to think his number is off?
In Germany, 2-3 Gbps outbound is a milestone, even for enterprises. As an individual, I am privileged to have 250 Mbps down / 50 Mbps up.
So it's at least beyond what any individual in this country could imagine having.
Still, even in Germany, with its internet infrastructure particularly lacking for the wealth the country possesses, M-net is slowly rolling out 5 Gbps internet.
But 2-3 Gbps of bandwidth makes this pretty inaccessible unless you're just offloading the bulk of it onto CloudFront/Cloudflare, at which point... it seems to me we don't really have more people running logs in any meaningful sense, just somebody paying Amazon a _lot_ of money. If I'm doing my math right, this is something like 960 TB/mo, which is something like a $7.2m/yr CloudFront bill. Even with some lesser-known CDN providers we're still talking about $60k/yr.
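Spelling out that volume estimate (the only input is a sustained 3 Gbps; the per-GB cost then depends entirely on the CDN's egress rate):

```python
# Back-of-the-envelope volume behind a sustained 3 Gbps outbound feed.
GBPS = 3
SECONDS_PER_MONTH = 30 * 24 * 3600                   # ~2.59 million
bytes_per_month = GBPS * 1e9 / 8 * SECONDS_PER_MONTH
print(f"~{bytes_per_month / 1e12:,.0f} TB/month")    # ~970 TB/month
# CDN cost then scales linearly with whatever per-GB egress rate you pay.
```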
Seems to me the bandwidth requirement means this is only going to work if you already have some unmetered connections lying around.
If anyone wants to pay the build out costs to put an unmetered 10Gbps line out to my house I'll happily donate some massively overprovisioned hardware, redundant power, etc!
If all certs are sent to just one CT log server, and each cert generates ~10 KB of outbound traffic, that's ~200 GB/day, or ~20 Mbps (at full, even traffic), which is not in the same ballpark as 2-3 Gbps.
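Spelling out my arithmetic (the certs-per-day and per-cert byte figures are rough assumptions on my part):

```python
# Rough arithmetic: steady-state outbound traffic if each cert's bytes
# leave the log once. Inputs are assumptions, not measurements.
CERTS_PER_DAY = 20_000_000        # ~200 GB/day at 10 KB per cert
BYTES_PER_CERT = 10_000

gb_per_day = CERTS_PER_DAY * BYTES_PER_CERT / 1e9
mbps_average = gb_per_day * 1e9 * 8 / 86_400 / 1e6
print(f"~{gb_per_day:.0f} GB/day, ~{mbps_average:.0f} Mbps average")
```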
So I guess there is something I don't understand?
It’s unfortunately an estimate, because right now we see 300 Mbps peaks, but as Tuscolo moves to Usable and more monitors implement Static CT, 5-10x is plausible.
It might turn out that 1 Gbps is enough and the P95 is 500 Mbps. Hard to tell right now, so I didn’t want to get people in trouble down the line.
Happy to discuss this further with anyone interested in running a log via email or Slack!
According to the readme, it seems like the bulk of the traffic is highly cacheable, so presumably you could park a CDN in front and substantially reduce the bandwidth requirements.
That is one of the primary motivations of its design over the previous CT API, which had some relatively flexible requests that didn't cache as well.
Is this actually a good use case for (gasp) blockchains? Or would it be too much data?
1. Relying on just ICANN instead of ICANN + the CA/Browser Forum would be an improvement. I assume, at least? Thinking about it though, the CA/Browser Forum setup with transparency logs and such does provide some safeguards against CA operator abuse. Those are safeguards that wouldn't be available in a DANE-only world where nameserver operators could surreptitiously inject malicious TLSA records at their whim (a sketch of that TLSA check follows this list). That could be safeguarded by DNSSEC, where the domain owner does their own signing and the nameserver operator simply serves those pre-signed records. However, that's a lot of complication. Gonna have to think about this...
2. Tbh I am not convinced of the virtues of decentralized DNS. If people use different roots in practice, then we lose the utility of a single view of names. At its most extreme, you then would not be able to reliably do things like publish a URL. However, maybe you're suggesting that DNS shouldn't be centralized with the root, but rather have a constellation of TLDs as roots? Obviously that would require shipping resolvers with hardcoded roots and wouldn't be robust when new TLDs are brought online. But maybe there'd be value in that...I'm not convinced yet though.
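For reference, the TLSA check mentioned in point 1 boils down to something like this sketch (using dnspython; the rdata field names follow its TLSA type, and DNSSEC validation is assumed to happen upstream in a validating resolver):

```python
# Hedged sketch of a DANE check: fetch the TLSA record for a service and
# compare it against the certificate the server presented. DNSSEC
# validation is assumed to happen in the resolver and is not shown.
import hashlib
import dns.resolver  # pip install dnspython

def tlsa_matches(host: str, port: int, presented_cert_der: bytes) -> bool:
    answers = dns.resolver.resolve(f"_{port}._tcp.{host}", "TLSA")
    for rr in answers:
        # selector 0 = full cert, 1 = public key; mtype 1 = SHA-256
        if rr.selector == 0 and rr.mtype == 1:
            if hashlib.sha256(presented_cert_der).digest() == rr.cert:
                return True
    return False
```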
That said, I don't particularly see DANE growing on the web.
https://youtube.com/watch?v=gnB76DQI1GE&t=19517s
https://research.mozilla.org/files/2025/04/clubcards_for_the...
WAL replication, rsync, BitTorrent, etc. are all things that don't quite work as needed.