Hey HN! I'm Sascha, a technical founder who started coding at 9 and has spent the last 2 years obsessing over Small Language Models: specifically, how to squeeze every drop of performance from fast, cheap, domain-specific models before touching slow, expensive flagships.
What it does: cascadeflow is an optimization layer that sits between your app/agent and LLM providers, intelligently cascading queries between cheap and expensive models—so you stop paying Opus 4.5 prices for "What's 2+2?"
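In code, the intent looks roughly like this (a minimal sketch; `CascadeClient` and its arguments are illustrative, not necessarily the actual API):

```python
# Illustrative sketch only -- `CascadeClient` and its arguments are
# hypothetical, not necessarily cascadeflow's real API.
from cascadeflow import CascadeClient

client = CascadeClient(
    drafter="gpt-4o-mini",       # cheap model, tried first
    verifier="claude-opus-4.5",  # flagship, used only when validation fails
)

# Trivial queries resolve on the drafter; hard ones escalate automatically.
answer = client.complete("What's 2+2?")
```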
Why this matters: Most companies I've talked to are running all their AI traffic through flagship models. They're burning 40-70% of their budget on queries that a $0.15/M token model handles just fine, including reasoning tasks and tool calls. But building intelligent routing is genuinely hard. You need quality validation, confidence scoring, format checking, graceful escalation, and ideally domain understanding. Most teams don't have bandwidth to build this infrastructure.
Backstory: After working with JetBrains and IBM on developer tools, I kept seeing the same pattern: teams scaling AI features or agents hit a cost wall. I initially prototyped cascading just for my own projects. When I saw consistent 60-80% cost reductions without quality loss, I realized this needed to be a proper cost optimization framework.
How it works: Speculative execution with quality validation. We try the cheap or domain-specific model first (auto-detects 15 domains), validate response quality across multiple dimensions (length, confidence via logprobs, format, semantic alignment), and only escalate to expensive models when validation fails. Framework overhead: <2ms.
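As a minimal sketch of that control flow (not the actual implementation: thresholds are made up, and the clients are duck-typed objects whose `.complete()` returns text plus token logprobs):

```python
# Sketch of the cascade pattern, not cascadeflow's actual code.
# Assumes client objects whose .complete() returns an object with
# .text and .logprobs; thresholds below are illustrative.

def avg_logprob(draft) -> float:
    # Mean token logprob as a crude confidence signal.
    return sum(draft.logprobs) / max(len(draft.logprobs), 1)

def cascade(query: str, cheap_llm, expensive_llm) -> str:
    draft = cheap_llm.complete(query, logprobs=True)
    confident = avg_logprob(draft) > -1.5   # confidence check via logprobs
    substantial = len(draft.text) >= 20     # crude length check
    if confident and substantial:
        return draft.text                   # draft accepted: no flagship call
    return expensive_llm.complete(query).text  # validation failed: escalate
```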
First integrations: n8n and LangChain. Both let you pair any two chat models (in n8n, any two AI Chat Model nodes) as a cheap drafter plus a powerful verifier, with domain-aware routing across code, medical, legal, finance, and 11 more domains. You can mix a local Ollama model with GPT-5 for verification, and in n8n you can watch cascade decisions live in the Logs tab.
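On the LangChain side, the drafter/verifier pairing looks roughly like this (the two chat model classes are real LangChain imports; the length check is a stand-in for the real validation suite, and the cascadeflow wiring itself is elided):

```python
from langchain_ollama import ChatOllama
from langchain_openai import ChatOpenAI

drafter = ChatOllama(model="llama3.1:8b")  # runs locally via Ollama
verifier = ChatOpenAI(model="gpt-5")       # hosted flagship, escalation only

prompt = "Extract the parties from this clause: ..."
draft = drafter.invoke(prompt)
if len(draft.content) < 20:   # stand-in for the real validation suite
    final = verifier.invoke(prompt)
else:
    final = draft
```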
Benchmarks: 69% savings on MT-Bench, 93% on GSM8K, 52% on MMLU—retaining 96% of GPT-5 quality. All reproducible in `/tests/benchmarks`.
What makes it different:
- Understands 15 domains out of the box (auto-detection, domain-specific quality validation, domain-aware routing)
- User-tier and budget-based cascading with configurable model pipelines (see the config sketch after this list)
- Learns from your usage patterns and optimizes routing over time
- Auto-benchmarks against your available models
- Works with YOUR models across 7+ providers (no infrastructure lock-in)
- Python + TypeScript with identical APIs
- Optional ML-based semantic validation (~80MB model, CPU-only)
- Production-ready: streaming, batch processing, tool calling, multi-step reasoning, cost tracking with optional OpenTelemetry export
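To make the tier/budget item concrete, a configuration sketch (field names are illustrative, not the actual schema):

```python
# Hypothetical config sketch -- field names are illustrative.
# Idea: each user tier gets its own cascade pipeline and budget cap.
pipelines = {
    "free": {
        "models": ["gpt-4o-mini"],          # cheap model only, no escalation
        "daily_budget_usd": 0.05,
    },
    "pro": {
        "models": ["gpt-4o-mini", "gpt-5"], # cascade: draft, then verify
        "daily_budget_usd": 2.00,
    },
}
```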
Would love technical feedback, especially from anyone running AI at scale who's solved routing differently, or n8n power users who can stress-test the integration. What's broken? What's missing?
honeydew•1h ago
The benchmark numbers look strong but MT-Bench/GSM8K are pretty narrow. Have you tested on more open-ended tasks?
saschabuehrle•57m ago
For open-ended tasks we use embedding similarity + confidence scoring, not just format matching. If the draft response is semantically thin, it escalates.
The system also learns from your actual traffic patterns: after a few hundred queries, it knows which query shapes work on which models for your specific use case.
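Roughly, the semantic check works like this (a simplified sketch, not our exact code; all-MiniLM-L6-v2 matches the ~80MB CPU-only model mentioned in the post, and the threshold is illustrative):

```python
# Simplified sketch of embedding-based semantic validation.
# all-MiniLM-L6-v2 is ~80 MB and runs on CPU; threshold is illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def semantically_aligned(query: str, draft: str, threshold: float = 0.45) -> bool:
    q, d = model.encode([query, draft], convert_to_tensor=True)
    return util.cos_sim(q, d).item() >= threshold
```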
saschabuehrle•1h ago
n8n package: `npm install @cascadeflow/n8n-nodes-cascadeflow`