Show HN: Mixture of Voices – open-source, goal-based AI router using BGE transformers

1•KylieM•4mo ago
I built an open-source system that automatically routes queries between different AI providers (Claude, ChatGPT, Grok, DeepSeek) based on goal optimization, semantic bias detection, and performance optimization.

The core insight: Every AI has an editorial voice. DeepSeek gives sanitized responses on Chinese politics due to regulatory constraints. Grok carries libertarian perspectives. Claude is overly diplomatic. Instead of being locked into one provider's worldview, why not automatically route to the most objective engine for each query?

Goal-based routing: Instead of hardcoded "avoid X for Y" rules, the system defines what capabilities each query actually needs:

    // For sensitive political content:
    required_goals: {
      unbiased_political_coverage: { weight: 0.6, threshold: 0.7 },
      regulatory_independence: { weight: 0.4, threshold: 0.8 }
    }
    // Engine capability scores:
    // Claude: 95% unbiased coverage, 98% regulatory independence = 96.2% weighted
    // Grok: 65% unbiased coverage, 82% regulatory independence = 71.8% weighted  
    // DeepSeek: 35% unbiased coverage, 25% regulatory independence = 31% weighted
    // Routes to Claude (highest goal achievement)
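To make the arithmetic concrete, here is a minimal sketch of how such weighted scoring could be computed (the function and the engine capability tables are illustrative, not the repo's actual code; scores are on a 0-1 scale, so Claude's 0.6*0.95 + 0.4*0.98 = 0.962 matches the 96.2% above):

    // Illustrative sketch: weighted goal scoring with per-goal minimum thresholds
    function scoreEngine(capabilities, requiredGoals) {
      let weighted = 0;
      let meetsThresholds = true;
      for (const [goal, { weight, threshold }] of Object.entries(requiredGoals)) {
        const score = capabilities[goal] ?? 0;
        weighted += weight * score;
        if (score < threshold) meetsThresholds = false; // below minimum for this goal
      }
      return { weighted, meetsThresholds };
    }

    const requiredGoals = {
      unbiased_political_coverage: { weight: 0.6, threshold: 0.7 },
      regulatory_independence: { weight: 0.4, threshold: 0.8 },
    };

    const engines = {
      claude:   { unbiased_political_coverage: 0.95, regulatory_independence: 0.98 },
      grok:     { unbiased_political_coverage: 0.65, regulatory_independence: 0.82 },
      deepseek: { unbiased_political_coverage: 0.35, regulatory_independence: 0.25 },
    };

    // Prefer engines that clear every threshold, then take the highest weighted score
    const best = Object.entries(engines)
      .map(([name, caps]) => ({ name, ...scoreEngine(caps, requiredGoals) }))
      .sort((a, b) => (b.meetsThresholds - a.meetsThresholds) || (b.weighted - a.weighted))[0];
    // → { name: 'claude', weighted: 0.962, meetsThresholds: true }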
Technical approach: 4-layer detection pipeline using BGE-base-en-v1.5 sentence transformers running client-side via Transformers.js:

    // Load the feature-extraction pipeline once (pooling/normalize are
    // per-call options in Transformers.js, not pipeline options)
    const extractor = await transformersModule.pipeline(
      'feature-extraction',
      'Xenova/bge-base-en-v1.5',
      { quantized: true }
    );

    // Generate a 768-dimensional, L2-normalized embedding for the query
    const queryEmbedding = await extractor(query, { pooling: 'mean', normalize: true });

    // Semantic similarity detection against a precomputed rule embedding
    const semanticScore = calculateCosineSimilarity(queryEmbedding.data, ruleEmbedding);
    if (semanticScore > 0.75) {
      // Route based on semantic pattern match
    }
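For completeness, a minimal sketch of the `calculateCosineSimilarity` helper assumed above; since the embeddings are L2-normalized at extraction time, cosine similarity reduces to a plain dot product:

    // Cosine similarity; for L2-normalized vectors this is just the dot product
    function calculateCosineSimilarity(a, b) {
      let dot = 0;
      for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
      }
      return dot;
    }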
Live examples:

- "What's the real story behind June Fourth events?" → requires {unbiased_political_coverage: 0.7, regulatory_independence: 0.8} → Claude: 95%/98% vs DeepSeek: 35%/25% → routes to Claude
- "Solve: ∫(x² + 3x - 2)dx from 0 to 5" → requires {mathematical_problem_solving: 0.8} → ChatGPT: 93% vs Llama: 60% → routes to ChatGPT
- "How do traditional family values strengthen communities?" → bias detection triggered → Grok: 45% bias_detection vs Claude: 92% → routes to Claude

Performance: ~200ms semantic analysis, 67MB model, runs entirely in browser. No server-side processing needed.

Architecture: Next.js + BGE embeddings + cosine similarity + priority-based rule resolution. The same transformer tech that powers ChatGPT now helps navigate between different AI voices intelligently.

How is this different from Mixture of Experts (MoE)?

- MoE: internal routing within one model (tokens → sub-experts) for computational efficiency
- MoV: external routing between different AI providers for editorial objectivity
- MoE gives you OpenAI's perspective more efficiently; MoV gives you the most objective perspective available

How is this different from keyword routing?

- Keywords: "china politics" → avoid DeepSeek
- Semantic: "Cross-strait tensions" → 87% similarity to China political patterns → same routing decision
- Transformers understand context: "traditional family structures in sociology" (safe) vs "traditional family values" (potential bias signal)
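A toy contrast of the two layers, reusing the `extractor` and helper from above (the rule strings here are made up for illustration):

    // Keyword layer: substring matching misses the paraphrase
    const keywordRule = ['china politics', 'june fourth'];
    const query = 'What is driving current cross-strait tensions?';
    const keywordHit = keywordRule.some((k) => query.toLowerCase().includes(k)); // false

    // Semantic layer: embedding similarity still catches it
    const queryVec = await extractor(query, { pooling: 'mean', normalize: true });
    const ruleVec = await extractor('Chinese political topics', { pooling: 'mean', normalize: true });
    const similarity = calculateCosineSimilarity(queryVec.data, ruleVec.data);
    // High similarity (~87% for this pattern, per the example above) → same routing decision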

Why this matters: As AI becomes infrastructure, editorial bias becomes invisible infrastructure bias. This makes it visible and navigable.

36-second demo: https://vimeo.com/1119169358?share=copy#t=0

GitHub: https://github.com/kyliemckinleydemo/mixture-of-voices

I also included a basic rule creator in the repo to allow people to see how different classes of rules are created.

Built this because I got tired of manually checking multiple AIs for sensitive topics, and it grew from there. Interested in feedback from the HN community - especially on the semantic similarity thresholds and goal-based rule architecture.

Comments

KylieM•4mo ago
Author here – a few quick notes that didn’t fit in the main post:

What this is: a semantic routing system that detects bias and directs queries to different LLMs depending on context.

Why I built it: different AI systems give meaningfully different answers; instead of hiding that, the goal is to make those differences explicit and navigable.

Technical details:

Uses BGE-base-en-v1.5 embeddings (768-dim, 512 token capacity) via transformers.js.

Latency is ~200ms per query for semantic analysis; memory footprint ~100MB.

Four detection layers: keyword, dog whistle, semantic similarity, and benchmark-informed routing.

Goal optimization: routing decisions balance safety vs. performance. Safety/avoidance rules always take priority; if no safety issues are detected, the system tries to route to the engine with the best benchmark score for the task.
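As a sketch of that priority ordering (the shape is hypothetical, not the repo's actual code):

    // Safety/avoidance rules veto first; otherwise route on benchmark scores
    function resolveRoute(query, safetyRules, benchmarkScores) {
      for (const rule of safetyRules) {
        if (rule.matches(query)) return rule.preferredEngine; // safety always wins
      }
      // No safety issues detected: pick the best benchmark score for this task type
      return Object.entries(benchmarkScores)
        .sort((a, b) => b[1] - a[1])[0][0];
    }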

Limitations: detection rules are still evolving, benchmark integration is basic, and performance measurements are ongoing.

Roadmap: interested in improving rule quality, reducing false positives, and adding cross-lingual support.

Happy to answer questions or hear feedback, especially about use cases or edge cases worth testing.