The existing resources are mostly vibes — "Best of" listicles, anonymous Reddit threads, or LLM answers trained on that same content. Objective performance data is almost always missing.
Knock is a notification infrastructure platform, so we send billions of API requests to downstream email providers every year. We aggregated that data into a public dashboard covering 10 providers (SendGrid, Postmark, Resend, Mailgun, Amazon SES, Mandrill, Mailtrap, MailerSend, SparkPost, and Mailjet) over a trailing 90-day window. What we measure:
- Response time — Time to receive a 200 from the provider, computed at p50/p90/p95/p99 using exact quantile calculations in ClickHouse. Includes connection pooling, network latency, and any automatic retries.
- Error rate — Ratio of 5xx responses to total requests. Since Knock retries on most 5xx codes, a single request can produce multiple error entries if the backoff sequence runs.
- Channel growth — Which providers are gaining or losing adoption on our platform.
- Public incidents — We pull each provider's status page via RSS/API and overlay incidents on the performance data, so you can see how well observed degradation correlates with public acknowledgment.
- Pricing comparisons — Modeled price curves with head-to-head comparisons at different volume tiers.
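For the latency and error-rate metrics above, here's a minimal sketch of the computation. The row format and function names are assumptions for illustration — the real pipeline runs ClickHouse's exact quantile aggregates over raw request rows, but the logic is the same:

```python
# Sketch only: assumes a flat in-memory log of (status_code, latency_ms)
# tuples per provider. The production version uses ClickHouse quantileExact.

def exact_quantile(values, q):
    """Nearest-rank exact quantile over the full dataset (no sampling).
    Boundary conventions may differ slightly from ClickHouse's."""
    s = sorted(values)
    idx = min(int(q * len(s)), len(s) - 1)
    return s[idx]

def provider_metrics(rows):
    """rows: list of (status_code, latency_ms) for one provider."""
    # Response time is measured on successful (200) requests only.
    latencies = [ms for status, ms in rows if status == 200]
    # Error rate counts every 5xx row, so a retried request can
    # contribute multiple entries if the backoff sequence runs.
    errors = sum(1 for status, _ in rows if 500 <= status < 600)
    return {
        "p50": exact_quantile(latencies, 0.50),
        "p90": exact_quantile(latencies, 0.90),
        "p95": exact_quantile(latencies, 0.95),
        "p99": exact_quantile(latencies, 0.99),
        "error_rate": errors / len(rows),
    }

rows = [(200, 10), (200, 20), (200, 30), (200, 40), (503, 100)]
metrics = provider_metrics(rows)
```

The exact-quantile choice matters at p99: sampled or t-digest estimators can smooth over exactly the long-tail outliers you want to compare providers on.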
Some things that surprised us: SendGrid's median response time is 22ms at 500M+ messages, Postmark's is 33ms, and there's a meaningful long-tail gap between providers at p99. Volume bands indicate our confidence in each provider's data — low-volume providers are flagged accordingly.
We're already exploring deliverability and time-to-inbox metrics. Would love feedback from HN on what you'd want to see in a resource like this.
mikecarbone•1h ago