frontpage.

Show HN: Compile-Time Vibe Coding

https://github.com/Michael-JB/vibecode
1•michaelchicory•2m ago•0 comments

Show HN: Ensemble – macOS App to Manage Claude Code Skills, MCPs, and Claude.md

https://github.com/O0000-code/Ensemble
1•IO0oI•5m ago•1 comment

PR to support XMPP channels in OpenClaw

https://github.com/openclaw/openclaw/pull/9741
1•mickael•6m ago•0 comments

Twenty: A Modern Alternative to Salesforce

https://github.com/twentyhq/twenty
1•tosh•7m ago•0 comments

Raspberry Pi: More memory-driven price rises

https://www.raspberrypi.com/news/more-memory-driven-price-rises/
1•calcifer•13m ago•0 comments

Level Up Your Gaming

https://d4.h5go.life/
1•LinkLens•17m ago•1 comment

Di.day is a movement to encourage people to ditch Big Tech

https://itsfoss.com/news/di-day-celebration/
2•MilnerRoute•18m ago•0 comments

Show HN: AI generated personal affirmations playing when your phone is locked

https://MyAffirmations.Guru
4•alaserm•19m ago•3 comments

Show HN: GTM MCP Server – Let AI Manage Your Google Tag Manager Containers

https://github.com/paolobietolini/gtm-mcp-server
1•paolobietolini•20m ago•0 comments

Launch of X (Twitter) API Pay-per-Use Pricing

https://devcommunity.x.com/t/announcing-the-launch-of-x-api-pay-per-use-pricing/256476
1•thinkingemote•20m ago•0 comments

Facebook seemingly randomly bans tons of users

https://old.reddit.com/r/facebookdisabledme/
1•dirteater_•22m ago•1 comment

Global Bird Count

https://www.birdcount.org/
1•downboots•22m ago•0 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
2•soheilpro•24m ago•0 comments

Jon Stewart – One of My Favorite People – What Now? with Trevor Noah Podcast [video]

https://www.youtube.com/watch?v=44uC12g9ZVk
2•consumer451•26m ago•0 comments

P2P crypto exchange development company

1•sonniya•40m ago•0 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
2•jesperordrup•45m ago•0 comments

Write for Your Readers Even If They Are Agents

https://commonsware.com/blog/2026/02/06/write-for-your-readers-even-if-they-are-agents.html
1•ingve•45m ago•0 comments

Knowledge-Creating LLMs

https://tecunningham.github.io/posts/2026-01-29-knowledge-creating-llms.html
1•salkahfi•46m ago•0 comments

Maple Mono: Smooth your coding flow

https://font.subf.dev/en/
1•signa11•53m ago•0 comments

Sid Meier's System for Real-Time Music Composition and Synthesis

https://patents.google.com/patent/US5496962A/en
1•GaryBluto•1h ago•1 comment

Show HN: Slop News – HN front page now, but it's all slop

https://dosaygo-studio.github.io/hn-front-page-2035/slop-news
6•keepamovin•1h ago•1 comment

Show HN: Empusa – Visual debugger to catch and resume AI agent retry loops

https://github.com/justin55afdfdsf5ds45f4ds5f45ds4/EmpusaAI
1•justinlord•1h ago•0 comments

Show HN: Bitcoin wallet on NXP SE050 secure element, Tor-only open source

https://github.com/0xdeadbeefnetwork/sigil-web
2•sickthecat•1h ago•1 comment

White House Explores Opening Antitrust Probe on Homebuilders

https://www.bloomberg.com/news/articles/2026-02-06/white-house-explores-opening-antitrust-probe-i...
1•petethomas•1h ago•0 comments

Show HN: MindDraft – AI task app with smart actions and auto expense tracking

https://minddraft.ai
2•imthepk•1h ago•0 comments

How do you estimate AI app development costs accurately?

1•insights123•1h ago•0 comments

Going Through Snowden Documents, Part 5

https://libroot.org/posts/going-through-snowden-documents-part-5/
1•goto1•1h ago•0 comments

Show HN: MCP Server for TradeStation

https://github.com/theelderwand/tradestation-mcp
1•theelderwand•1h ago•0 comments

Canada unveils auto industry plan in latest pivot away from US

https://www.bbc.com/news/articles/cvgd2j80klmo
3•breve•1h ago•1 comment

The essential Reinhold Niebuhr: selected essays and addresses

https://archive.org/details/essentialreinhol0000nieb
1•baxtr•1h ago•0 comments

Attention Lottery: DeepSeek, Sparse Attention, and the Future of AI Cognition

https://geeksinthewoods.substack.com/p/attention-lottery-deepseek-sparse
1•artur_makly•2mo ago

Comments

artur_makly•2mo ago
“The degradation is subtle. The missing insights are rare, deferred, and distributed. Everyone notices a tenfold speed improvement; few notice the disappearance of an idea that might have changed the world.”

Funny correlation: this is the story of humanity's biological, psychological, and philosophical evolution as well.

This is no different: history doing its thing again. Same Darwinian optimization, just with the substrate swapped out. Silicon moves faster than carbon, which means we're speed-running toward some endpoint we can't quite see yet. Maybe we still get to choose architectural diversity before everything locks in. Or maybe we're already too late and just don't know it yet. To what final end?

Some uncanny correlations:

Biological Evolution: Just as DeepSeek's sparse attention sacrifices rare token connections for computational efficiency, biological evolution has consistently pruned "expensive" cognitive capabilities that didn't offer immediate survival advantage. The human brain operates on roughly 20 watts, an engineering marvel achieved through ruthless optimization. We lost the ability to synthesize vitamin C, to regenerate limbs, to perceive ultraviolet light, not because these capacities were useless, but because maintaining the metabolic infrastructure for rarely-used functions was too costly in ancestral environments where caloric scarcity was the norm. The neurological pathways that might have enabled eidetic memory or synesthetic cross-modal perception were likely discarded in favor of "good enough" pattern recognition optimized for predator avoidance and social navigation. Every human today is the descendant of ancestors whose brains kept the top-k survival-relevant features and let the outliers die in the attention lottery of natural selection.
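
For anyone who wants the mechanism behind the metaphor made concrete, here is a toy numpy sketch of top-k sparse attention (illustrative only: the function name, shapes, and k are my inventions, not DeepSeek's implementation). Each query keeps its k strongest keys; everything else is masked out of the softmax, i.e. loses the lottery:

    import numpy as np

    def topk_sparse_attention(q, k_mat, v, top_k=4):
        # q, k_mat, v: (seq_len, dim) arrays of queries, keys, values.
        scores = q @ k_mat.T / np.sqrt(q.shape[-1])        # dense attention logits
        # Threshold each row at its top_k-th largest logit (ties may keep extras).
        kth = np.partition(scores, -top_k, axis=-1)[:, -top_k][:, None]
        masked = np.where(scores >= kth, scores, -np.inf)  # prune the tail
        w = np.exp(masked - masked.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)                 # softmax over survivors only
        return w @ v

    rng = np.random.default_rng(0)
    q, k_mat, v = (rng.normal(size=(16, 32)) for _ in range(3))
    out = topk_sparse_attention(q, k_mat, v)  # each token now sees only 4 of 16 keys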

Psychological Evolution: Our cognitive architecture exhibits the same sparse attention dynamics the article describes. Confirmation bias, the availability heuristic, and inattentional blindness are not bugs but features: Bayesian priors that let us operate in real time by ignoring the vast majority of sensory and conceptual space. We don't process all possible interpretations of a social interaction; we route attention to the handful that match our existing mental models, discarding the weak signals that might reveal we've misunderstood someone entirely. The psychological research on "inattentional blindness" (the invisible gorilla experiments) reveals that humans already run on learned sparsity; we literally cannot see what falls outside our predictive frame. The rare insights that change lives often come from those improbable, low-priority connections our brains almost filtered out: the shower thought, the hypnagogic flash, the accidental conversation with a stranger. Optimizing for cognitive efficiency means most humans spend their lives in a "tenfold speed improvement" of habitual thinking, never noticing the transformative ideas their sparse attention mechanisms prevented from ever reaching consciousness.

Philosophical Evolution: The history of thought reveals how philosophical paradigms function as civilizational sparse attention mechanisms, collective cognitive shortcuts that determine which questions a culture deems worth asking. The mechanistic worldview of the Enlightenment achieved extraordinary predictive power by treating nature as clockwork, but it systematically ignored (rendered computationally irrelevant) questions about consciousness, teleology, and qualitative experience. Logical positivism declared vast domains of human concern literally meaningless because they couldn't be empirically verified, a top-k selection rule for acceptable philosophical inquiry. Each dominant paradigm is a trained router deciding which intellectual pathways get attention and which get pruned. We celebrate the speed improvements: from Aristotelian physics to Newtonian mechanics in centuries, from Newtonian to relativistic in decades, from relativistic to quantum field theory in years. But the article's warning applies: we may never notice the metaphysical frameworks, the "ideas that might have changed the world," that were filtered out because they didn't fit the salience patterns of the prevailing epistemic architecture. The philosophical sparsity we inhabit isn't consciously chosen; it's the inherited result of centuries of optimizing for ideological efficiency, leaving vast regions of conceptual space unexplored because our collective attention mechanisms never computed those connections in the first place.

geeksinthewoods•2mo ago
Ya. It seems like evolution itself has been running a sparsity experiment for millions of years. Sparse attention may be the universal price of survival: efficiency over imagination, precision over possibility.

The line about missing insights being "rare, deferred, and distributed" points at the hardest part to notice in practice: optimization wins are loud (speed, cost, scores). The things we prune, meanwhile, are counterfactual by nature: ideas that never form, weird bridges that never get built, questions that never feel worth asking because our router did not surface them (toy numbers at the end of this comment).

One thing I'm still unsure about (and would love to think about more) is how direct the analogy should be. In models, sparsity is engineered / learned under explicit objectives. In biology and culture it's much more emergent and multi-objective.
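
One way to poke at the "rare, deferred, and distributed" point numerically (toy setup of my own; the logit scale, k, and sizes are arbitrary, nothing from the article or the paper): build random softmax attention rows, prune everything below each row's top k, and look at how the discarded probability mass spreads across rows:

    import numpy as np

    rng = np.random.default_rng(1)
    logits = 2.0 * rng.standard_normal((100_000, 64))     # fake attention logits
    w = np.exp(logits - logits.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)                         # dense attention weights
    top_k = 8
    kth = np.partition(w, -top_k, axis=-1)[:, -top_k]     # top_k-th weight per row
    dropped = np.where(w < kth[:, None], w, 0.0).sum(-1)  # mass pruned per row
    print(np.percentile(dropped, [50, 99]))               # typical vs. tail pruned mass

Roughly: the median is what benchmarks average over, and the tail is where the lottery losses would hide.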

geeksinthewoods•2mo ago
The attention lottery framing feels especially timely now that DeepSeek's V3.2 tech report is out in the open. Seeing the actual top-k sparse routing and the post-training RL numbers spelled out makes the trade-offs concrete. Huge wins on speed and context, but every pruned token really is a quiet bet against the weird tail stuff that sometimes sparks real leaps...
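
Back-of-envelope on the "huge wins" side, with round numbers of my own (treat both constants as illustrative, not the report's):

    # Dense attention: each query scores all n keys; top-k: only k of them.
    n = 128 * 1024   # context window, in tokens (round number)
    k = 2048         # keys kept per query (illustrative top-k budget)
    print(n / k)     # -> 64.0: ~64x fewer score/value ops per query at full context
    # Whatever mechanism selects the k keys adds back some cost, so the
    # end-to-end speedup is smaller than this ratio.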

What struck me most is how much DeepSeek's transparency accidentally lights up the closed models too. Long-context traces and million-token windows almost certainly lean on some variant of this under the hood. This article makes those black boxes feel a lot less mysterious. It leaves me both impressed by the engineering and quietly worried about the curiosity cost.

Also, the song / music video at the end is absurd in the best way!