How do you actually interpret what you're seeing here? Does it look more like optimizer fragility (plans that assume ideal memory conditions) or more like runtime memory-management limits (good plans, but no adaptive behavior under pressure)?
Any database should be able to handle 100 concurrent queries robustly, even if that means slowing down query execution.
TPC-H benchmarks are what convinced us to purchase Exasol 10 years ago. Still happy with that decision! Congrats to the Exasol team on these results vs ClickHouse.
"200k values in a WHERE clause IN statement"? What is that column about?
Average concurrent queries is ~7 over what time period?
Regarding the 200k values in a WHERE clause: we have some users who will do research across published data sources in Tableau. They will copy account IDs from one report and paste them into a filter in another. Our connections from Tableau to Exasol are live, and Tableau doesn't have great guardrails on the SQL that gets issued to the database.
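For illustration, a minimal sketch of the kind of query this produces; the table and column names (orders, account_id) are hypothetical, not our actual schema:

    -- Tableau expands the pasted filter into a literal IN list;
    -- with ~200k pasted IDs, the statement text alone can run to megabytes.
    SELECT order_date, SUM(revenue)
    FROM orders
    WHERE account_id IN (1001, 1002, 1003 /* ...roughly 200,000 more literals... */)
    GROUP BY order_date;

A join against a temp table of IDs would usually be kinder to the optimizer, but a live Tableau connection doesn't give you that kind of control over the generated SQL.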
The concurrency number comes from a daily statistics table in Exasol. There is an average and a max concurrency measure aggregated per day; I averaged the last 30 days. Exasol doesn't really explain its sampling methodology in the documentation: https://docs.exasol.com/db/latest/sql_references/system_tabl...
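Roughly what I ran against the statistics schema. The exact table and column names here are assumptions from memory (the linked docs have the real ones):

    -- Average and peak of the daily concurrency measures over the last 30 days.
    -- EXA_USAGE_DAILY and the QUERIES_* columns are assumed names; check the docs.
    SELECT AVG(QUERIES_AVG) AS avg_concurrency,
           MAX(QUERIES_MAX) AS peak_concurrency
    FROM EXA_STATISTICS.EXA_USAGE_DAILY
    WHERE INTERVAL_START >= ADD_DAYS(CURRENT_DATE, -30);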
In Google, the AI-summarized results are ClickHouse, StarRocks, Snowflake, and Google BigQuery.
ClickHouse is there in both of them, and Exasol is not mentioned. If these claims were relevant, why is it not in the limelight?
ClickHouse is known to ingest and analyze massive volumes of time-series data in real time. How good is Exasol for this use case?
Exasol has been a performance leader in the market for more than 15 years, as you can see in the official TPC-H publications, but it hasn't gotten broader market attention yet. We are trying to change that now and have recently been more active in developer communities. We also just launched a completely free Exasol Personal edition that can be used for production use cases.
Apache Pinot, Druid, and ClickHouse are designed for low-latency analytical queries at high concurrency with continuous ingestion. Pinot is popular because of its native integration with streaming systems like Kafka, its varied indexing, and its ability to scale efficiently. They're widely used in observability and user-facing analytics, which is how "real-time analytics databases" are commonly perceived today.
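As a concrete example of that streaming-ingest pattern, here is a sketch using ClickHouse's Kafka table engine; the topic name and schema are made up for illustration:

    -- Consumer table reading from Kafka; ClickHouse pulls messages continuously.
    CREATE TABLE events_queue (
        ts      DateTime,
        user_id UInt64,
        action  String
    ) ENGINE = Kafka
    SETTINGS kafka_broker_list = 'broker:9092',
             kafka_topic_list  = 'events',
             kafka_group_name  = 'ch_events_consumer',
             kafka_format      = 'JSONEachRow';

    -- Durable storage plus a materialized view that moves rows from the
    -- queue into MergeTree as they arrive, so data is queryable within seconds.
    CREATE TABLE events (
        ts      DateTime,
        user_id UInt64,
        action  String
    ) ENGINE = MergeTree ORDER BY (ts, user_id);

    CREATE MATERIALIZED VIEW events_mv TO events AS
    SELECT ts, user_id, action FROM events_queue;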
Exasol (like SingleStore, Snowflake, BigQuery, etc.) is more focused on enterprise BI and complex SQL analytics than on application serving or ultra-high-ingest workloads. It performs well for structured analytical queries and joins, but it's less commonly deployed for user-facing analytics or high-volume usage.
A good rundown from Tim Berglund in this video: https://startree.ai/resources/what-is-real-time-analytics/