Over the last year, we worked with teams running high-throughput pipelines into self-hosted ClickHouse, mostly for observability and real-time analytics.
A question that came up repeatedly was: what happens when throughput grows?
Things usually work fine at 10k events/sec, but backpressure and errors start to appear above 100k.
Once throughput per pipeline stops scaling, adding more CPU/memory doesn’t help, because parts of the pipeline are often not parallelized or are bottlenecked by state handling.
At this point, engineers usually scale by adding more pipeline instances.
That works but comes with some trade-offs:

- You have to split the workload (e.g., multiple pipelines reading from the same source)
- Transformation logic gets duplicated across pipelines
- Stateful logic becomes harder to manage and keep consistent
- Debugging and changes get more difficult because the data flow is fragmented
Another challenge arises with high-cardinality keys like user IDs, session IDs, or request IDs, especially over longer time windows (24h or more). State grows quickly, and many systems keep it purely in memory, which makes it expensive and harder to recover after failures.
We wanted to solve this problem and rebuild our approach at GlassFlow.
Instead of scaling by adding more pipelines, we scale within a single pipeline by using replicas. Each replica consumes, processes, and writes independently, and the workload is distributed across them.
In the benchmarks we’re sharing, this scales to 500k+ events/sec while still running stateful transformations and writing into ClickHouse.
A few things we think are interesting:

- Scaling is close to linear as you add replicas
- Works with stateful transformations (not just stateless ingestion)
- State is backed by a file-based KV store instead of relying purely on memory
- The ClickHouse sink is optimized for batching to avoid small inserts
- The product is built with Go
Full write-up + benchmarks: https://www.glassflow.dev/blog/glassflow-now-scales-to-500k-...
Repo: https://github.com/glassflow/clickhouse-etl
Happy to answer questions about the design or trade-offs.
super_ar•55m ago
Where we saw friction with Flink was mainly:

1. Operational overhead (jobs, state backends, checkpointing)
2. Generic sinks not being optimized for ClickHouse (batching, small inserts, etc.)
We focused on making scaling a property of the pipeline itself (just add replicas) and optimizing specifically for ClickHouse ingestion patterns.
So Flink is more general, while this is more opinionated and focused on this specific use case.