
Show HN: Medinilla – an OCPP compliant .NET back end (partially done)

https://github.com/eliodecolli/Medinilla
1•rhcm•1m ago•0 comments

How Does AI Distribute the Pie? Large Language Models and the Ultimatum Game

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6157066
1•dkga•2m ago•1 comments

Resistance Infrastructure

https://www.profgalloway.com/resistance-infrastructure/
2•samizdis•6m ago•0 comments

Fire-juggling unicyclist caught performing on crossing

https://news.sky.com/story/fire-juggling-unicyclist-caught-performing-on-crossing-13504459
1•austinallegro•6m ago•0 comments

Restoring a lost 1981 Unix roguelike (protoHack) and preserving Hack 1.0.3

https://github.com/Critlist/protoHack
2•Critlist•8m ago•0 comments

GPS and Time Dilation – Special and General Relativity

https://philosophersview.com/gps-and-time-dilation/
1•mistyvales•11m ago•0 comments

Show HN: Witnessd – Prove human authorship via hardware-bound jitter seals

https://github.com/writerslogic/witnessd
1•davidcondrey•11m ago•1 comments

Show HN: I built a clawdbot that texts like your crush

https://14.israelfirew.co
2•IsruAlpha•13m ago•1 comments

Scientists reverse Alzheimer's in mice and restore memory (2025)

https://www.sciencedaily.com/releases/2025/12/251224032354.htm
1•walterbell•16m ago•0 comments

Compiling Prolog to Forth [pdf]

https://vfxforth.com/flag/jfar/vol4/no4/article4.pdf
1•todsacerdoti•18m ago•0 comments

Show HN: Cymatica – an experimental, meditative audiovisual app

https://apps.apple.com/us/app/cymatica-sounds-visualizer/id6748863721
1•_august•19m ago•0 comments

GitBlack: Tracing America's Foundation

https://gitblack.vercel.app/
2•martialg•19m ago•0 comments

Horizon-LM: A RAM-Centric Architecture for LLM Training

https://arxiv.org/abs/2602.04816
1•chrsw•20m ago•0 comments

We just ordered shawarma and fries from Cursor [video]

https://www.youtube.com/shorts/WALQOiugbWc
1•jeffreyjin•21m ago•1 comments

Correctio

https://rhetoric.byu.edu/Figures/C/correctio.htm
1•grantpitt•21m ago•0 comments

Trying to make an Automated Ecologist: A first pass through the Biotime dataset

https://chillphysicsenjoyer.substack.com/p/trying-to-make-an-automated-ecologist
1•crescit_eundo•25m ago•0 comments

Watch Ukraine's Minigun-Firing, Drone-Hunting Turboprop in Action

https://www.twz.com/air/watch-ukraines-minigun-firing-drone-hunting-turboprop-in-action
1•breve•26m ago•0 comments

Free Trial: AI Interviewer

https://ai-interviewer.nuvoice.ai/
1•sijain2•26m ago•0 comments

FDA intends to take action against non-FDA-approved GLP-1 drugs

https://www.fda.gov/news-events/press-announcements/fda-intends-take-action-against-non-fda-appro...
21•randycupertino•27m ago•10 comments

Supernote e-ink devices for writing like paper

https://supernote.eu/choose-your-product/
3•janandonly•29m ago•0 comments

We are QA Engineers now

https://serce.me/posts/2026-02-05-we-are-qa-engineers-now
1•SerCe•30m ago•0 comments

Show HN: Measuring how AI agent teams improve issue resolution on SWE-Verified

https://arxiv.org/abs/2602.01465
2•NBenkovich•30m ago•0 comments

Adversarial Reasoning: Multiagent World Models for Closing the Simulation Gap

https://www.latent.space/p/adversarial-reasoning
1•swyx•30m ago•0 comments

Show HN: Poddley.com – Follow people, not podcasts

https://poddley.com/guests/ana-kasparian/episodes
1•onesandofgrain•38m ago•0 comments

Layoffs Surge 118% in January – The Highest Since 2009

https://www.cnbc.com/2026/02/05/layoff-and-hiring-announcements-hit-their-worst-january-levels-si...
13•karakoram•38m ago•0 comments

Papyrus 114: Homer's Iliad

https://p114.homemade.systems/
1•mwenge•39m ago•1 comments

DicePit – Real-time multiplayer Knucklebones in the browser

https://dicepit.pages.dev/
1•r1z4•39m ago•1 comments

Turn-Based Structural Triggers: Prompt-Free Backdoors in Multi-Turn LLMs

https://arxiv.org/abs/2601.14340
2•PaulHoule•40m ago•0 comments

Show HN: AI Agent Tool That Keeps You in the Loop

https://github.com/dshearer/misatay
2•dshearer•42m ago•0 comments

Why Every R Package Wrapping External Tools Needs a Sitrep() Function

https://drmowinckels.io/blog/2026/sitrep-functions/
1•todsacerdoti•42m ago•0 comments

Open-source LLM cascading, up to 92% cost savings on benchmarks

https://github.com/lemony-ai/cascadeflow
12•saschabuehrle•1mo ago

Comments

saschabuehrle•1mo ago
Hey HN! I'm Sascha, a technical founder who started coding at 9 and has spent the last two years obsessing over Small Language Models: specifically, how to squeeze every drop of performance out of fast, cheap, domain-specific models before touching slow, expensive flagships.

What it does: cascadeflow is an optimization layer that sits between your app/agent and LLM providers, intelligently cascading queries between cheap and expensive models—so you stop paying Opus 4.5 prices for "What's 2+2?"

Why this matters: Most companies I've talked to are running all their AI traffic through flagship models. They're burning 40-70% of their budget on queries that a $0.15/M token model handles just fine, including reasoning tasks and tool calls. But building intelligent routing is genuinely hard. You need quality validation, confidence scoring, format checking, graceful escalation, and ideally domain understanding. Most teams don't have bandwidth to build this infrastructure.

Backstory: After working on projects with JetBrains and IBM on developer tools, I kept seeing the same pattern: teams scaling AI features or agents hit a cost wall. I started prototyping cascading initially just for my own projects. When I saw consistent 60-80% cost reductions without quality loss, I realized this needed to be a proper cost optimization framework.

How it works: Speculative execution with quality validation. We try the cheap or domain-specific model first (auto-detects 15 domains), validate response quality across multiple dimensions (length, confidence via logprobs, format, semantic alignment), and only escalate to expensive models when validation fails. Framework overhead: <2ms.
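A minimal sketch of that cheap-first flow. The model calls, signals, and thresholds below are stand-ins for illustration, not cascadeflow's actual API:

```python
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    avg_logprob: float  # mean token log-probability reported by the provider

def validate(draft: Draft, min_chars: int = 8, min_logprob: float = -1.0) -> bool:
    """Accept the cheap draft only if every signal clears its threshold."""
    long_enough = len(draft.text.strip()) >= min_chars  # crude length check
    confident = draft.avg_logprob >= min_logprob        # confidence via logprobs
    return long_enough and confident

def cascade(query: str, cheap_model, expensive_model) -> tuple[str, str]:
    """Try the cheap model first; call the flagship only when validation fails."""
    draft = cheap_model(query)
    if validate(draft):
        return draft.text, "cheap"  # validated draft, flagship never invoked
    return expensive_model(query).text, "expensive"  # any failed check escalates
```

Real validators would add the format and semantic-alignment checks mentioned above; the key property is that the expensive model is only ever invoked after the cheap draft fails.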

First integrations: n8n and LangChain. Both connect any two AI Chat Model nodes (cheap drafter + powerful verifier) with domain-aware routing across code, medical, legal, finance, and 11 more domains. Mix Ollama locally with GPT-5 for verification. In n8n, you can watch cascade decisions live in the Logs tab.

Benchmarks: 69% savings on MT-Bench, 93% on GSM8K, 52% on MMLU—retaining 96% of GPT-5 quality. All reproducible in `/tests/benchmarks`.
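For context on how a headline savings percentage like these is computed (the dollar figures in the comment are invented purely for illustration):

```python
def cost_savings(flagship_cost: float, cascade_cost: float) -> float:
    """Fraction of spend saved vs. sending every query to the flagship."""
    return 1 - cascade_cost / flagship_cost

# e.g. a benchmark run costing $10.00 all-flagship but $0.70 via the cascade
# corresponds to 93% savings.
```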

What makes it different:

- Understands 15 domains out of the box (auto-detection, domain-specific quality validation, domain-aware routing)
- User-tier and budget-based cascading with configurable model pipelines
- Learns and optimizes from your usage patterns
- Auto-benchmarks against your available models
- Works with YOUR models across 7+ providers (no infrastructure lock-in)
- Python + TypeScript with identical APIs
- Optional ML-based semantic validation (~80MB model, CPU-only)
- Production-ready: streaming, batch processing, tool calling, multi-step reasoning, cost tracking with optional OpenTelemetry export
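As one possible reading of the tier- and budget-based cascading bullet, a pipeline selector could look like this. Tier names, model names, and the budget cutoff are all hypothetical, not cascadeflow's configuration schema:

```python
# Hypothetical tier-to-pipeline mapping; model names are examples only.
PIPELINES = {
    "free": ["llama-3.1-8b"],                                # cheap model only
    "pro": ["llama-3.1-8b", "gpt-4o-mini"],                  # one escalation step
    "enterprise": ["llama-3.1-8b", "gpt-4o-mini", "gpt-5"],  # full cascade
}

def pipeline_for(tier: str, remaining_budget_usd: float) -> list[str]:
    """Pick the tier's pipeline, then drop escalation steps when budget runs low."""
    models = PIPELINES[tier]
    if remaining_budget_usd < 0.50:  # invented cutoff: stop escalating near the cap
        return models[:1]
    return models
```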

n8n package: `npm install @cascadeflow/n8n-nodes-cascadeflow`

Would love technical feedback, especially from anyone running AI at scale who's solved routing differently, or n8n power users who can stress-test the integration. What's broken? What's missing?

SamAlarco•1mo ago
this is very cool.
honeydew•1mo ago
The benchmark numbers look strong but MT-Bench/GSM8K are pretty narrow. Have you tested on more open-ended tasks?
saschabuehrle•1mo ago
For open-ended tasks we use embedding similarity + confidence scoring, not just format matching. If the draft response is semantically thin, it escalates. The system also learns from your actual traffic patterns: after a few hundred queries, it knows which query shapes work on which models for your specific use case.
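To illustrate the "semantically thin" idea, here is a toy alignment check using a bag-of-words cosine. A real system would use learned embeddings; nothing here is cascadeflow's implementation:

```python
import math
from collections import Counter

def cosine(a: str, b: str) -> float:
    """Cosine similarity over word counts (a crude stand-in for embeddings)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantically_aligned(query: str, draft: str, threshold: float = 0.2) -> bool:
    """Escalate (return False) when the draft barely relates to the query."""
    return cosine(query, draft) >= threshold
```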
aregnzsdejan•1mo ago
Very real problem, and the focus on validation (not just routing) is the right direction.

How do you handle cases where validation is uncertain or the domain detector is wrong: do you default conservatively, and what false-negative rates are you seeing?

saschabuehrle•1mo ago
Yes, we default conservatively: when in doubt, escalate. A few specifics:

Uncertain validation: We combine multiple signals (confidence scores, semantic similarity, format checks...). If any signal is borderline, we escalate. Better to overpay occasionally than return a bad response.

Wrong domain detection: The domain classifier isn't a gate, it selects which validator to apply. If the validator then fails, it escalates regardless. So a misclassified query still gets caught at the validation layer.

False-negative rates (good responses wrongly escalated): ~7-10% at the beginning, depending on domain. We're okay with this, it means slightly higher cost but never compromised quality. The self-learning engine tightens this over time as it sees your actual traffic patterns.

samnji•1mo ago
Nice. Routing is the hard part. Do you have numbers on false accepts vs false escalations? (i.e., how often you keep a bad cheap answer vs unnecessarily jump to the expensive model). Benchmarks are good, but those two rates are what will make or break it in prod.
saschabuehrle•1mo ago
Good question: these are the two metrics we obsess over. False accepts (bad response passed as good): <1% on benchmarks, ~2-3% in production pilots. This is the one that matters, and we tune aggressively to keep it low. Every validator errs on the side of escalation.

False escalations (good response unnecessarily escalated): ~7-10% depending on domain. Costs you tokens, but doesn't hurt quality. The self-learning engine reduces this over time as it learns your traffic patterns.

The tradeoff is intentional: we'd rather waste some spend than serve bad answers. In practice, even with the conservative tuning, customers still see 30-60% cost reduction, because the baseline is sending everything to flagship models.
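For readers who want to track these two rates themselves, both fall out of labeled routing logs; the record shape below is illustrative, not the project's telemetry format:

```python
def routing_rates(logs: list[tuple[bool, bool]]) -> tuple[float, float]:
    """Each record is (kept_cheap, cheap_answer_was_good).
    Returns (false_accept_rate, false_escalation_rate)."""
    kept = [good for kept_cheap, good in logs if kept_cheap]
    escalated = [good for kept_cheap, good in logs if not kept_cheap]
    # false accept: a bad cheap answer was kept
    false_accept = kept.count(False) / len(kept) if kept else 0.0
    # false escalation: a good cheap answer was escalated anyway
    false_escalation = escalated.count(True) / len(escalated) if escalated else 0.0
    return false_accept, false_escalation
```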

Satyam2000•1mo ago
This is amazing and absolutely in the right direction. How do you decide which queries are routed to less expensive models?

92% is super impressive, and as with any impressive number, you have to try to understand what's behind it. Do the cost savings come mostly from routing easy queries, or from heavier workloads?

Also, you mention 7-10% false-negative cases. Is this where your validator disagrees with the expensive flagship model? Are there cases where the flagship model gives worse answers?