The optimizations are basically Kubernetes for data warehousing: we take over autoscaling and cluster selection to increase utilization without impacting latency. The scheduler is backed by ML models that predict query runtime and required capacity, so we can run machines hotter than the providers do and cut costs.
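To make that concrete, here's a toy sketch of the routing idea (hypothetical names, not our actual code): predict a query's runtime on each warehouse size, then send it to the cheapest warehouse that still meets the latency target instead of over-provisioning.

    # Rough sketch only; the real models use plan features, history, and live load.
    from dataclasses import dataclass

    @dataclass
    class Warehouse:
        name: str
        cost_per_second: float  # $ per second while running
        size_factor: float      # relative compute, e.g. 1.0 = Small, 2.0 = Medium
        headroom: float         # fraction of capacity currently free

    def predict_runtime_s(query_sql: str, w: Warehouse) -> float:
        # Stand-in for the ML runtime model.
        return len(query_sql) * 0.01 / w.size_factor

    def pick_warehouse(query_sql: str, warehouses: list[Warehouse],
                       latency_slo_s: float) -> Warehouse:
        # Cheapest warehouse with free capacity whose predicted runtime
        # fits the SLO; fall back to the fastest one otherwise.
        feasible = [
            w for w in warehouses
            if w.headroom > 0
            and predict_runtime_s(query_sql, w) <= latency_slo_s
        ]
        if feasible:
            return min(feasible, key=lambda w: w.cost_per_second)
        return min(warehouses, key=lambda w: predict_runtime_s(query_sql, w))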
More info here: https://espresso.ai/post/launching-our-databricks-sql-optimi...