frontpage.

Exploring 1,400 reusable skills for AI coding tools

https://ai-devkit.com/skills/
1•hoangnnguyen•45s ago•0 comments

Show HN: A unique twist on Tetris and block puzzle

https://playdropstack.com/
1•lastodyssey•3m ago•0 comments

The logs I never read

https://pydantic.dev/articles/the-logs-i-never-read
1•nojito•5m ago•0 comments

How to use AI with expressive writing without generating AI slop

https://idratherbewriting.com/blog/bakhtin-collapse-ai-expressive-writing
1•cnunciato•6m ago•0 comments

Show HN: LinkScope – Real-Time UART Analyzer Using ESP32-S3 and PC GUI

https://github.com/choihimchan/linkscope-bpu-uart-analyzer
1•octablock•6m ago•0 comments

Cppsp v1.4.5–custom pattern-driven, nested, namespace-scoped templates

https://github.com/user19870/cppsp
1•user19870•7m ago•1 comments

The next frontier in weight-loss drugs: one-time gene therapy

https://www.washingtonpost.com/health/2026/01/24/fractyl-glp1-gene-therapy/
1•bookofjoe•10m ago•1 comments

At Age 25, Wikipedia Refuses to Evolve

https://spectrum.ieee.org/wikipedia-at-25
1•asdefghyk•13m ago•3 comments

Show HN: ReviewReact – AI review responses inside Google Maps ($19/mo)

https://reviewreact.com
2•sara_builds•13m ago•1 comments

Why AlphaTensor Failed at 3x3 Matrix Multiplication: The Anchor Barrier

https://zenodo.org/records/18514533
1•DarenWatson•15m ago•0 comments

Ask HN: How much of your token use is fixing the bugs Claude Code causes?

1•laurex•18m ago•0 comments

Show HN: Agents – Sync MCP Configs Across Claude, Cursor, Codex Automatically

https://github.com/amtiYo/agents
1•amtiyo•19m ago•0 comments

Hello

1•otrebladih•20m ago•0 comments

FSD helped save my father's life during a heart attack

https://twitter.com/JJackBrandt/status/2019852423980875794
2•blacktulip•23m ago•0 comments

Show HN: Writtte – Draft and publish articles without reformatting, anywhere

https://writtte.xyz
1•lasgawe•25m ago•0 comments

Portuguese icon (FROM A CAN) makes a simple meal (Canned Fish Files) [video]

https://www.youtube.com/watch?v=e9FUdOfp8ME
1•zeristor•27m ago•0 comments

Brookhaven Lab's RHIC Concludes 25-Year Run with Final Collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
2•gnufx•29m ago•0 comments

Transcribe your aunt's postcards with Gemini 3 Pro

https://leserli.ch/ocr/
1•nielstron•33m ago•0 comments

.72% Variance Lance

1•mav5431•34m ago•0 comments

ReKindle – web-based operating system designed specifically for E-ink devices

https://rekindle.ink
1•JSLegendDev•35m ago•0 comments

Encrypt It

https://encryptitalready.org/
1•u1hcw9nx•35m ago•1 comments

NextMatch – 5-minute video speed dating to reduce ghosting

https://nextmatchdating.netlify.app/
1•Halinani8•36m ago•1 comments

Personalizing esketamine treatment in TRD and TRBD

https://www.frontiersin.org/articles/10.3389/fpsyt.2025.1736114
1•PaulHoule•38m ago•0 comments

SpaceKit.xyz – a browser‑native VM for decentralized compute

https://spacekit.xyz
1•astorrivera•38m ago•0 comments

NotebookLM: The AI that only learns from you

https://byandrev.dev/en/blog/what-is-notebooklm
2•byandrev•39m ago•2 comments

Show HN: An open-source starter kit for developing with Postgres and ClickHouse

https://github.com/ClickHouse/postgres-clickhouse-stack
1•saisrirampur•39m ago•0 comments

Game Boy Advance d-pad capacitor measurements

https://gekkio.fi/blog/2026/game-boy-advance-d-pad-capacitor-measurements/
1•todsacerdoti•39m ago•0 comments

South Korean crypto firm accidentally sends $44B in bitcoins to users

https://www.reuters.com/world/asia-pacific/crypto-firm-accidentally-sends-44-billion-bitcoins-use...
2•layer8•40m ago•0 comments

Apache Poison Fountain

https://gist.github.com/jwakely/a511a5cab5eb36d088ecd1659fcee1d5
1•atomic128•42m ago•2 comments

Web.whatsapp.com appears to be having issues syncing and sending messages

http://web.whatsapp.com
1•sabujp•43m ago•2 comments

Building an Evolutionary Search for Attention Mechanisms

https://github.com/drhemanm/evo-attention
3•hemanm•3mo ago

Comments

hemanm•3mo ago
Building an Evolutionary Search for Attention Mechanisms (on Free Colab)

I spent the last few weeks building a framework that allows evolution to design attention mechanisms instead of hand-crafting them. The results were interesting enough to share.

The Question: Transformers use scaled dot-product attention because it was shown to be effective in the "Attention Is All You Need" paper. But was it actually optimal, or just the first thing that worked well enough? Most research tweaks hyperparameters. I wanted to explore the mechanism design space itself.

The Constraint: I have no computing budget. No lab. No institutional backing. Just free Colab and curiosity.

This meant:
- Small models only (~500K parameters)
- Fast training (5K steps per model)
- Limited search (120 evaluations total)
- WikiText-2 (small enough to iterate quickly)

The Approach: I encoded attention mechanisms as genes with 4 components:

    gene = AttentionGene(
        similarity='dot',           # How Q and K compute scores
        normalization='sparsemax',  # How scores become weights
        gating='output_gate',       # Optional gating mechanism
        temperature='learned',      # How to scale attention
    )

This creates a discrete search space of 384+ possible mechanisms.
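To make that concrete, the discrete space is just the Cartesian product of the per-component option lists. A toy enumeration, where the option lists are my guesses rather than the repo's actual choices (the real framework's lists multiply out to 384+):

    # Illustrative option lists for each gene component (not the repo's actual values).
    from itertools import product

    SIMILARITY    = ['dot', 'additive', 'bilinear', 'cosine']
    NORMALIZATION = ['softmax', 'sparsemax', 'relu_norm', 'sigmoid']
    GATING        = ['none', 'output_gate', 'highway', 'input_gate']
    TEMPERATURE   = ['fixed_sqrt_d', 'learned', 'per_head']

    genes = [dict(similarity=s, normalization=n, gating=g, temperature=t)
             for s, n, g, t in product(SIMILARITY, NORMALIZATION, GATING, TEMPERATURE)]
    print(len(genes))  # 4 * 4 * 4 * 3 = 192 with these toy lists; the real space is larger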

Then I ran a simple genetic algorithm:
- Initialize 12 random attention mechanisms
- Train each for 5K steps on WikiText-2
- Keep the top 3 (elitism)
- Generate 9 offspring via crossover + mutation
- Repeat for 10 generations

Each generation takes ~2 hours on free Colab. Total: ~20 GPU hours.
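A minimal sketch of that loop, where `fitness(gene)` stands in for "train a ~500K-param model for 5K steps on WikiText-2 and return validation perplexity"; the option lists and helper names are mine, not the repo's:

    import random

    # Per-component choices (illustrative; see the gene sketch above).
    OPTIONS = {
        'similarity':    ['dot', 'additive', 'bilinear', 'cosine'],
        'normalization': ['softmax', 'sparsemax', 'relu_norm', 'sigmoid'],
        'gating':        ['none', 'output_gate', 'highway', 'input_gate'],
        'temperature':   ['fixed_sqrt_d', 'learned', 'per_head'],
    }

    def random_gene():
        return {k: random.choice(v) for k, v in OPTIONS.items()}

    def crossover(a, b):
        # Each component is inherited from one parent at random.
        return {k: random.choice([a[k], b[k]]) for k in OPTIONS}

    def mutate(gene, rate=0.2):
        # Occasionally swap one component for a random alternative.
        g = dict(gene)
        if random.random() < rate:
            k = random.choice(list(OPTIONS))
            g[k] = random.choice(OPTIONS[k])
        return g

    def evolve(fitness, pop_size=12, elite_k=3, generations=10):
        # fitness(gene) -> validation perplexity (lower is better).
        population = [random_gene() for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(population, key=fitness)
            elites = ranked[:elite_k]                              # elitism: keep the top 3
            children = [mutate(crossover(*random.sample(elites, 2)))
                        for _ in range(pop_size - elite_k)]        # 9 offspring per generation
            population = elites + children
        return min(population, key=fitness)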

What Evolution Found

Best mechanism: dot + sparsemax + output_gate + learned_temperature

Results:
- Evolved: 98.45 perplexity
- Baseline (dot + softmax): 102.90 perplexity
- Improvement: 4.3%

The interesting part isn't the 4% improvement. It's what evolution consistently chose:

Finding #1: Sparsemax > Softmax. Every top performer used sparsemax normalization instead of softmax. Sparsemax (from a 2016 paper) produces sparse attention: many weights become exactly zero. The ML community largely ignored it. Evolution rediscovered that it works.
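For reference, sparsemax is a Euclidean projection of the scores onto the probability simplex. A minimal NumPy version (the repo presumably uses a batched tensor implementation):

    import numpy as np

    def sparsemax(z):
        # Project scores z onto the probability simplex (Martins & Astudillo, 2016).
        # Everything below a data-dependent threshold tau becomes exactly zero.
        z = np.asarray(z, dtype=float)
        z_sorted = np.sort(z)[::-1]
        cssv = np.cumsum(z_sorted)
        k = np.arange(1, z.size + 1)
        support = 1 + k * z_sorted > cssv        # entries that stay nonzero
        k_z = k[support][-1]
        tau = (cssv[support][-1] - 1.0) / k_z    # shared threshold
        return np.maximum(z - tau, 0.0)

    print(sparsemax([2.0, 1.0, 0.1, -1.0]))      # [1. 0. 0. 0.] -- sparse, still sums to 1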

Finding #2: Output Gating Is Universal. Every top mechanism used output gating:

    output = attention_result
    gate = sigmoid(linear(input))
    output = output * gate

This wasn't in the original Transformer. Evolution found it's critical.

Finding #3: Highway Gating Always Fails. Highway connections (borrowed from Highway Networks) were the worst performers across all generations. Average perplexity: 115.8. This surprised me - highway connections work elsewhere. But for attention, they consistently failed.
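For contrast, the highway-style variant blends the attention output with the layer input through a learned gate. This is the standard Highway Networks formulation, sketched in PyTorch; the repo's exact variant may differ:

    import torch
    import torch.nn as nn

    class HighwayAttentionOutput(nn.Module):
        # Standard highway blend: y = g * attn_out + (1 - g) * x.
        def __init__(self, d_model):
            super().__init__()
            self.gate = nn.Linear(d_model, d_model)

        def forward(self, x, attn_out):
            g = torch.sigmoid(self.gate(x))          # transform gate computed from the input
            return g * attn_out + (1.0 - g) * x      # the rest of x is carried straight through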

Finding #4: Dot-Product is Actually Good. The winner uses standard dot-product similarity, not some exotic function. The improvement comes from normalization + gating, not from replacing the core similarity function. This makes the result more practical - dot-product is fast.
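Putting the findings together, here is a single-head reconstruction of the winning combination (dot-product scores, learned temperature, sparsemax weights, sigmoid output gate). It's a sketch in PyTorch based on the description above, not code from the repo:

    import torch
    import torch.nn as nn

    def sparsemax(scores, dim=-1):
        # Torch port of the simplex projection shown earlier.
        z, _ = torch.sort(scores, dim=dim, descending=True)
        cssv = z.cumsum(dim=dim) - 1.0
        rng = torch.arange(1, scores.size(dim) + 1,
                           device=scores.device, dtype=scores.dtype)
        shape = [1] * scores.dim()
        shape[dim] = -1
        support = rng.view(shape) * z > cssv
        k = support.sum(dim=dim, keepdim=True)
        tau = cssv.gather(dim, k - 1) / k.to(scores.dtype)
        return torch.clamp(scores - tau, min=0.0)

    class EvolvedAttention(nn.Module):
        # dot similarity + learned temperature + sparsemax + output gate (single head).
        def __init__(self, d_model):
            super().__init__()
            self.q = nn.Linear(d_model, d_model)
            self.k = nn.Linear(d_model, d_model)
            self.v = nn.Linear(d_model, d_model)
            self.gate = nn.Linear(d_model, d_model)
            # Learned scalar temperature, initialised at the usual 1/sqrt(d).
            self.temperature = nn.Parameter(torch.tensor(d_model ** -0.5))

        def forward(self, x):                                       # x: (batch, seq, d_model)
            q, k, v = self.q(x), self.k(x), self.v(x)
            scores = q @ k.transpose(-2, -1) * self.temperature     # dot-product similarity
            weights = sparsemax(scores, dim=-1)                     # sparse attention weights
            out = weights @ v
            return out * torch.sigmoid(self.gate(x))                # Finding #2: output gate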

The Honest Part: This is a proof-of-concept, not production-ready.

Not tested:

- Large models (100M+ params)
- Other datasets
- Other domains (vision, audio)
- Production deployment

Known issues:

- Training variance is ±1 perplexity
- Only 93 mechanisms evaluated (~24% of search space)
- Single run per mechanism (no statistical tests)
- Baseline wasn't hyperparameter-tuned

With enough evolutionary steps, you can probably find "good" hyperparameters for any mechanism. I don't know if I discovered better mechanisms or just better hyperparameters.

What I Learned

1. Evolutionary Search is Viable at Small Scale. You don't need massive compute to explore architecture spaces. 20 GPU hours found something interesting.

That's 0.8 points of noise. My "4% improvement" has ~1 point of uncertainty baked in. Proper validation requires multiple runs. I didn't do this (compute constraints).

Search Space Design is Everything. I spent more time designing the search space than writing the evolution code. What components to include? What ranges? What's too complex? Bad search space = wasted compute.