

Building an Evolutionary Search for Attention Mechanisms

https://github.com/drhemanm/evo-attention
3•hemanm•3mo ago

Comments

hemanm•3mo ago
Building an Evolutionary Search for Attention Mechanisms (on Free Colab) I spent the last few weeks building a framework that allows evolution to design attention mechanisms instead of hand-crafting them. The results were interesting enough to share.

The Question: Transformers use scaled dot-product attention because it was shown to be effective in the "Attention is All You Need" paper. But was it actually optimal, or just the first thing that worked well enough? Most research tweaks hyperparameters. I wanted to explore the mechanism design space itself.

The Constraint: I have no compute budget. No lab. No institutional backing. Just free Colab and curiosity.

This meant:
- Small models only (~500K parameters)
- Fast training (5K steps per model)
- Limited search (120 evaluations total)
- WikiText-2 (small enough to iterate quickly)

The Approach: I encoded attention mechanisms as genes with 4 components:

```python
gene = AttentionGene(
    similarity='dot',           # How Q and K compute scores
    normalization='sparsemax',  # How scores become weights
    gating='output_gate',       # Optional gating mechanism
    temperature='learned',      # How to scale attention
)
```

This creates a discrete search space of 384+ possible mechanisms.
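To make that concrete, here is a minimal sketch of how such a discrete space can be enumerated. The option lists below are illustrative guesses, not the repo's actual components, so the count it prints differs from the 384+ reported:

```python
from itertools import product

# Illustrative option lists (my guesses, not the repo's actual components).
SPACE = {
    "similarity": ["dot", "additive", "bilinear", "cosine"],
    "normalization": ["softmax", "sparsemax", "sigmoid", "relu_norm"],
    "gating": ["none", "output_gate", "highway", "input_gate"],
    "temperature": ["none", "fixed", "learned"],
}

# Every gene is one point in the Cartesian product of the component choices.
all_genes = [dict(zip(SPACE, combo)) for combo in product(*SPACE.values())]
print(len(all_genes))  # 192 for these lists; the repo reports 384+
```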

Then I ran a simple genetic algorithm:
- Initialize 12 random attention mechanisms
- Train each for 5K steps on WikiText-2
- Keep the top 3 (elitism)
- Generate 9 offspring via crossover + mutation
- Repeat for 10 generations

Each generation takes ~2 hours on free Colab. Total: ~20 GPU hours.
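A minimal sketch of that loop, reusing the SPACE dictionary from the snippet above; here `fitness` stands in for "train 5K steps on WikiText-2 and return validation perplexity", and the mutation rate is my own choice, not the repo's:

```python
import random

def random_gene():
    return {k: random.choice(v) for k, v in SPACE.items()}

def mutate(gene, rate=0.25):
    # Resample each component with probability `rate`.
    return {k: random.choice(SPACE[k]) if random.random() < rate else v
            for k, v in gene.items()}

def crossover(a, b):
    # Uniform crossover: each component is inherited from either parent.
    return {k: random.choice((a[k], b[k])) for k in a}

def evolve(fitness, generations=10, pop_size=12, n_elite=3):
    # In practice fitness results should be cached, since each call trains a model.
    population = [random_gene() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness)  # lower perplexity = fitter
        elite = ranked[:n_elite]
        offspring = [mutate(crossover(*random.sample(elite, 2)))
                     for _ in range(pop_size - n_elite)]
        population = elite + offspring
    return min(population, key=fitness)
```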

What Evolution Found

Best mechanism: dot + sparsemax + output_gate + learned_temperature

Results:
- Evolved: 98.45 perplexity
- Baseline (dot + softmax): 102.90 perplexity
- Improvement: 4.3%

The interesting part isn't the 4% improvement. It's what evolution consistently chose:

Finding #1: Sparsemax > Softmax. Every top performer used sparsemax normalization instead of softmax. Sparsemax (from a 2016 paper) creates sparse attention - many weights become exactly zero. The ML community largely ignored it. Evolution rediscovered that it works.
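For reference, sparsemax (Martins & Astudillo, 2016) is a Euclidean projection of the scores onto the probability simplex. A minimal PyTorch version over the last dimension looks roughly like this (my implementation, not the repo's):

```python
import torch

def sparsemax(scores: torch.Tensor) -> torch.Tensor:
    # Project scores onto the simplex; entries below a learned-per-row
    # threshold tau come out exactly zero, unlike softmax.
    z, _ = torch.sort(scores, dim=-1, descending=True)
    cumsum = z.cumsum(dim=-1) - 1.0
    k = torch.arange(1, scores.size(-1) + 1,
                     device=scores.device, dtype=scores.dtype)
    support = k * z > cumsum                       # sorted entries in the support
    k_support = support.sum(dim=-1, keepdim=True)  # support set size per row
    tau = cumsum.gather(-1, k_support - 1) / k_support.to(scores.dtype)
    return torch.clamp(scores - tau, min=0.0)      # threshold; rest is zero
```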

Finding #2: Output Gating is Universal. Every top mechanism used output gating:

```python
output = attention_result
gate = sigmoid(linear(input))
output = output * gate
```

This wasn't in the original Transformer. Evolution found it's critical.
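As a torch module, the same pattern might look like this; the layer and variable names are mine, not the repo's:

```python
import torch
import torch.nn as nn

class OutputGate(nn.Module):
    # Sketch of the output-gating pattern above.
    def __init__(self, d_model: int):
        super().__init__()
        self.proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor, attn_out: torch.Tensor) -> torch.Tensor:
        gate = torch.sigmoid(self.proj(x))  # per-feature gate from the layer input
        return attn_out * gate              # elementwise gating of attention output
```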

Finding #3: Highway Gating Always Fails. Highway connections (borrowed from Highway Networks) were the worst performers across all generations. Average perplexity: 115.8. This surprised me - highway connections work elsewhere. But for attention, they consistently failed.
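For contrast, highway gating (Srivastava et al., 2015) is usually written as an interpolation between the transformed output and the raw input; a sketch of the attention variant that kept losing, under that assumption:

```python
import torch
import torch.nn as nn

class HighwayGate(nn.Module):
    # Highway-style gating: g blends the attention output with the raw input,
    # so the carry path can bypass attention entirely.
    def __init__(self, d_model: int):
        super().__init__()
        self.proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor, attn_out: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.proj(x))
        return g * attn_out + (1.0 - g) * x
```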

Finding #4: Dot-Product is Actually Good. The winner uses standard dot-product similarity, not some exotic function. The improvement comes from normalization + gating, not from replacing the core similarity function. This makes the result more practical - dot-product is fast.
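Putting the pieces together, the winning combination might look roughly like this, reusing sparsemax() from above; how the gate and temperature are parameterized is my guess, not the repo's code:

```python
import math
import torch
import torch.nn as nn

class EvolvedAttention(nn.Module):
    # Sketch of the evolved winner: dot similarity + sparsemax normalization
    # + output gate + learned temperature.
    def __init__(self, d_model: int):
        super().__init__()
        self.gate_proj = nn.Linear(d_model, d_model)
        # Learned temperature, initialized to the usual 1/sqrt(d) scaling.
        self.log_temp = nn.Parameter(torch.tensor(-0.5 * math.log(d_model)))

    def forward(self, q, k, v, x):
        # q, k, v are the usual projections; x is the layer input for the gate.
        scores = q @ k.transpose(-2, -1) * torch.exp(self.log_temp)
        weights = sparsemax(scores)              # sparse attention weights
        out = weights @ v
        return out * torch.sigmoid(self.gate_proj(x))  # output gating
```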

The Honest Part: This is proof-of-concept, not production-ready.

Not tested:
- Large models (100M+ params)
- Other datasets
- Other domains (vision, audio)
- Production deployment

Known issues:
- Training variance is ±1 perplexity
- Only 93 mechanisms evaluated (~24% of the search space)
- Single run per mechanism (no statistical tests)
- Baseline wasn't hyperparameter-tuned

With enough evolutionary steps, you can probably find "good" hyperparameters for any mechanism. I don't know if I discovered better mechanisms or just better hyperparameters.

What I Learned

1. Evolutionary Search is Viable at Small Scale. You don't need massive compute to explore architecture spaces. 20 GPU hours found something interesting.

2. Training Variance is Real. Retraining the same mechanism can shift perplexity by roughly ±0.8. That's 0.8 points of noise. My "4% improvement" has ~1 point of uncertainty baked in. Proper validation requires multiple runs. I didn't do this (compute constraints).

3. Search Space Design is Everything. I spent more time designing the search space than writing the evolution code. What components to include? What ranges? What's too complex? Bad search space = wasted compute.