
Sparse Mixture of Experts for Game AI: An Accidental Architecture

https://github.com/streamlineddesigns/Sparse-Mixture-of-Experts
2•ColorSwitchDev•1w ago

Comments

ColorSwitchDev•1w ago
I built a sparse MoE to train ML bots for Color Switch before I knew what one was. LSTM networks trained via PPO would overfit to obstacle subsets and fail to generalize. Routing inputs through clustered ensembles fixed it.

The Problem

Color Switch is a mobile game where players navigate obstacles by matching colors. I trained bots using Unity ML-Agents with LSTM networks.

Individual networks would learn to pass ~30% of obstacles, then fail on the rest. Each newly trained network learned a different subset; no single network generalized across all of them.

The Architecture

1. Cluster obstacles by feature vectors

Each obstacle had metadata: colors, collider counts, rotation speeds, size. These were encoded as min-max scaled feature vectors.

K-means clustering naturally grouped together obstacles that were visually and mechanically similar.
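A minimal sketch of this step, assuming a hypothetical metadata schema (the post doesn't show the actual fields), using scikit-learn:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import MinMaxScaler

    # Hypothetical obstacle metadata rows: [num_colors, collider_count, rotation_speed, size]
    obstacles = np.array([
        [4, 1, 90.0, 1.0],   # slow rotating circle
        [4, 1, 120.0, 1.2],  # faster rotating circle
        [2, 3, 0.0, 2.0],    # static multi-collider gate
        [3, 2, 60.0, 1.5],
    ])

    # Min-max scale each feature to [0, 1] so no single feature dominates the distance metric
    features = MinMaxScaler().fit_transform(obstacles)

    # K-means groups mechanically similar obstacles; k is a tuning choice
    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
    cluster_ids = kmeans.labels_          # per-obstacle cluster assignment
    centroids = kmeans.cluster_centers_   # reused at routing time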

2. Train one ensemble per cluster

I trained a separate ensemble (multiple LSTMs each) per cluster, each independently of the others.
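A rough sketch of the per-cluster training, with stubs standing in for the real Unity ML-Agents PPO runs (which the post doesn't show):

    # DummyPolicy and train_lstm_policy are hypothetical stand-ins for an
    # ML-Agents PPO run; the point is the structure: each ensemble member
    # only ever sees one cluster's obstacles.
    class DummyPolicy:
        def __init__(self, seed):
            self.seed = seed
        def predict(self, observation):
            return [0.0]  # placeholder action

    def train_lstm_policy(obstacle_ids, seed):
        return DummyPolicy(seed)  # would be a full PPO training run in practice

    def train_ensemble(obstacle_ids, n_members=3):
        # Members differ only by seed; each specializes on this cluster's obstacles
        return [train_lstm_policy(obstacle_ids, seed) for seed in range(n_members)]

    # cluster_ids comes from the K-means sketch above
    ensembles = {
        cid: train_ensemble([i for i, c in enumerate(cluster_ids) if c == cid])
        for cid in sorted(set(int(c) for c in cluster_ids))
    }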

3. Route inputs to correct ensemble

At inference:

- Identify the approaching obstacle via spatial hash (O(1) lookup)
- Look up the obstacle's cluster ID
- Route observations to the corresponding ensemble
- Weighted average of outputs → action

The router was a cached lookup table. No learned routing, just precomputed K-means assignments.
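A minimal sketch of that inference path, reusing cluster_ids and ensembles from the sketches above (the spatial hash and observation format are simplified assumptions):

    # Precomputed router: obstacle ID -> cluster ID, straight from the K-means labels.
    # No learned gating; this lookup table is the entire router.
    router = {oid: int(cid) for oid, cid in enumerate(cluster_ids)}

    def act(obstacle_id, observation):
        cid = router[obstacle_id]          # O(1) lookup, mirrors the spatial-hash step
        members = ensembles[cid]           # only this cluster's experts run
        outs = np.array([m.predict(observation) for m in members])
        return outs.mean(axis=0)           # uniform-weight average -> action

    print(act(0, [0.5, 0.5]))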

What Worked

Generalization: A bot trained on Classic mode played 5 different modes without retraining. No previous architecture I tried achieved this.

Modular retraining: New obstacle in a cluster? Retrain one ensemble. Underperforming network? Retrain just that network. Ensembles could be trained in parallel.

Emergent disentanglement: I now think of this as disentangling the manifold at a coarse level before networks learned finer representations. Obstacles with similar dynamics got processed together. The network didn't have to learn "this is a circle thing" and "how to pass circle things" simultaneously.

What Didn't Work

Random speed changes: Obstacles that changed speed mid-interaction broke the bots. The architecture helped but didn't solve this.

Superhuman performance: Never achieved. Ceiling was "good human player."

Connection to Transformer MoEs

I didn't know this was even called a sparse MoE until the GPT-4 leak.

Same pattern: input arrives → router selects expert(s) → outputs combined.

DeepSeek's MoE paper describes "centroids" as expert identifiers with cosine similarity routing. Mine used Euclidean distance to K-means centroids. Same idea, less sophisticated.
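A toy sketch of the two routing rules side by side; the centroids and input vector here are made up:

    import numpy as np

    centroids = np.array([[0.9, 0.1], [0.1, 0.9]])  # one identifier per expert
    x = np.array([0.8, 0.3])                        # incoming feature vector

    # Euclidean routing (nearest K-means centroid, as described above)
    euclidean_pick = np.argmin(np.linalg.norm(centroids - x, axis=1))

    # Cosine-similarity routing (DeepSeek-style): highest similarity wins
    sims = centroids @ x / (np.linalg.norm(centroids, axis=1) * np.linalg.norm(x))
    cosine_pick = np.argmax(sims)

    print(euclidean_pick, cosine_pick)  # both pick expert 0 here; they can disagree in general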

Takeaways

- Routing to specialized sub-networks based on input similarity works without transformers
- K-means on feature vectors produces surprisingly good routing clusters
- Modular architectures enable incremental retraining
- Generalization improved when I stopped training one network to handle everything

Happy to answer implementation questions.