frontpage.

Eight More Months of Agents

https://crawshaw.io/blog/eight-more-months-of-agents
1•archb•1m ago•0 comments

From Human Thought to Machine Coordination

https://www.psychologytoday.com/us/blog/the-digital-self/202602/from-human-thought-to-machine-coo...
1•walterbell•1m ago•0 comments

The new X API pricing must be a joke

https://developer.x.com/
1•danver0•2m ago•0 comments

Show HN: RMA Dashboard fast SAST results for monorepos (SARIF and triage)

https://rma-dashboard.bukhari-kibuka7.workers.dev/
1•bumahkib7•2m ago•0 comments

Show HN: Source code graphRAG for Java/Kotlin development based on jQAssistant

https://github.com/2015xli/jqassistant-graph-rag
1•artigent•8m ago•0 comments

Python Only Has One Real Competitor

https://mccue.dev/pages/2-6-26-python-competitor
2•dragandj•9m ago•0 comments

Tmux to Zellij (and Back)

https://www.mauriciopoppe.com/notes/tmux-to-zellij/
1•maurizzzio•10m ago•1 comments

Ask HN: How are you using specialized agents to accelerate your work?

1•otterley•11m ago•0 comments

Passing user_id through 6 services? OTel Baggage fixes this

https://signoz.io/blog/otel-baggage/
1•pranay01•12m ago•0 comments

DavMail Pop/IMAP/SMTP/Caldav/Carddav/LDAP Exchange Gateway

https://davmail.sourceforge.net/
1•todsacerdoti•13m ago•0 comments

Visual data modelling in the browser (open source)

https://github.com/sqlmodel/sqlmodel
1•Sean766•15m ago•0 comments

Show HN: Tharos – CLI to find and autofix security bugs using local LLMs

https://github.com/chinonsochikelue/tharos
1•fluantix•15m ago•0 comments

Oddly Simple GUI Programs

https://simonsafar.com/2024/win32_lights/
1•MaximilianEmel•15m ago•0 comments

The New Playbook for Leaders [pdf]

https://www.ibli.com/IBLI%20OnePagers%20The%20Plays%20Summarized.pdf
1•mooreds•16m ago•0 comments

Interactive Unboxing of J Dilla's Donuts

https://donuts20.vercel.app
1•sngahane•17m ago•0 comments

OneCourt helps blind and low-vision fans to track Super Bowl live

https://www.dezeen.com/2026/02/06/onecourt-tactile-device-super-bowl-blind-low-vision-fans/
1•gaws•19m ago•0 comments

Rudolf Vrba

https://en.wikipedia.org/wiki/Rudolf_Vrba
1•mooreds•19m ago•0 comments

Autism Incidence in Girls and Boys May Be Nearly Equal, Study Suggests

https://www.medpagetoday.com/neurology/autism/119747
1•paulpauper•20m ago•0 comments

Wellness Hotels Discovery Application

https://aurio.place/
1•cherrylinedev•21m ago•1 comments

NASA delays moon rocket launch by a month after fuel leaks during test

https://www.theguardian.com/science/2026/feb/03/nasa-delays-moon-rocket-launch-month-fuel-leaks-a...
1•mooreds•22m ago•0 comments

Sebastian Galiani on the Marginal Revolution

https://marginalrevolution.com/marginalrevolution/2026/02/sebastian-galiani-on-the-marginal-revol...
2•paulpauper•25m ago•0 comments

Ask HN: Are we at the point where software can improve itself?

1•ManuelKiessling•25m ago•1 comments

Binance Gives Trump Family's Crypto Firm a Leg Up

https://www.nytimes.com/2026/02/07/business/binance-trump-crypto.html
1•paulpauper•25m ago•1 comments

Reverse engineering Chinese 'shit-program' for absolute glory: R/ClaudeCode

https://old.reddit.com/r/ClaudeCode/comments/1qy5l0n/reverse_engineering_chinese_shitprogram_for/
1•edward•25m ago•0 comments

Indian Culture

https://indianculture.gov.in/
1•saikatsg•28m ago•0 comments

Show HN: Maravel-Framework 10.61 prevents circular dependency

https://marius-ciclistu.medium.com/maravel-framework-10-61-0-prevents-circular-dependency-cdb5d25...
1•marius-ciclistu•29m ago•0 comments

The age of a treacherous, falling dollar

https://www.economist.com/leaders/2026/02/05/the-age-of-a-treacherous-falling-dollar
2•stopbulying•29m ago•0 comments

Ask HN: AI Generated Diagrams

1•voidhorse•31m ago•0 comments

Microsoft Account bugs locked me out of Notepad – are Thin Clients ruining PCs?

https://www.windowscentral.com/microsoft/windows-11/windows-locked-me-out-of-notepad-is-the-thin-...
8•josephcsible•32m ago•3 comments

Show HN: A delightful Mac app to vibe code beautiful iOS apps

https://milq.ai/hacker-news
6•jdjuwadi•35m ago•1 comments

Attention Lottery: DeepSeek, Sparse Attention, and the Future of AI Cognition

https://geeksinthewoods.substack.com/p/attention-lottery-deepseek-sparse
1•artur_makly•2mo ago

Comments

artur_makly•2mo ago
“The degradation is subtle. The missing insights are rare, deferred, and distributed. Everyone notices a tenfold speed improvement; few notice the disappearance of an idea that might have changed the world.”

Funny correlation: this is also the story of humanity's biological, psychological, and philosophical evolution.

There's no difference here; it's history doing its thing again. Same Darwinian optimization, just with the substrate swapped out. Silicon moves faster than carbon, which means we're speed-running toward some endpoint we can't quite see yet. Maybe we still get to choose architectural diversity before everything locks in. Or maybe we're already too late and just don't know it yet. To what final end?

Some uncanny correlations:

Biological Evolution: Just as DeepSeek's sparse attention sacrifices rare token connections for computational efficiency, biological evolution has consistently pruned "expensive" cognitive capabilities that didn't offer immediate survival advantage. The human brain operates on roughly 20 watts, an engineering marvel achieved through ruthless optimization. We lost the ability to synthesize vitamin C, to regenerate limbs, to perceive ultraviolet light, not because these capacities were useless, but because maintaining the metabolic infrastructure for rarely-used functions was too costly in ancestral environments where caloric scarcity was the norm. The neurological pathways that might have enabled eidetic memory or synesthetic cross-modal perception were likely discarded in favor of "good enough" pattern recognition optimized for predator avoidance and social navigation. Every human today is the descendant of ancestors whose brains kept the top-k survival-relevant features and let the outliers die in the attention lottery of natural selection.

Psychological Evolution: Our cognitive architecture exhibits the same sparse attention dynamics the article describes. Confirmation bias, the availability heuristic, and attentional blindness are not bugs but features, Bayesian priors that let us operate in real-time by ignoring the vast majority of sensory and conceptual space. We don't process all possible interpretations of a social interaction; we route attention to the handful that match our existing mental models, discarding the weak signals that might reveal we've misunderstood someone entirely. The psychological research on "inattentional blindness" (the invisible gorilla experiments) reveals that humans already run on learned sparsity, we literally cannot see what falls outside our predictive frame. The rare insights that change lives often come from those improbable, low-priority connections our brains almost filtered out: the shower thought, the hypnagogic flash, the accidental conversation with a stranger. Optimizing for cognitive efficiency means most humans spend their lives in a "tenfold speed improvement" of habitual thinking, never noticing the transformative ideas their sparse attention mechanisms prevented from ever reaching consciousness.

Philosophical Evolution: The history of thought reveals how philosophical paradigms function as civilizational sparse attention mechanisms, collective cognitive shortcuts that determine which questions a culture deems worth asking. The mechanistic worldview of the Enlightenment achieved extraordinary predictive power by treating nature as clockwork, but it systematically ignored (rendered computationally irrelevant) questions about consciousness, teleology, and qualitative experience. Logical positivism declared vast domains of human concern literally meaningless because they couldn't be empirically verified, a top-k selection rule for acceptable philosophical inquiry. Each dominant paradigm is a trained router deciding which intellectual pathways get attention and which get pruned. We celebrate the speed improvements: from Aristotelian physics to Newtonian mechanics in centuries, from Newtonian to relativistic in decades, from relativistic to quantum field theory in years. But the article's warning applies: we may never notice the metaphysical frameworks, the "ideas that might have changed the world," that were filtered out because they didn't fit the salience patterns of the prevailing epistemic architecture. The philosophical sparsity we inhabit isn't consciously chosen; it's the inherited result of centuries of optimizing for ideological efficiency, leaving vast regions of conceptual space unexplored because our collective attention mechanisms never computed those connections in the first place.

geeksinthewoods•2mo ago
Ya. It seems like evolution itself has been running a sparsity experiment for millions of years. Sparse attention may be the universal price of survival: efficiency over imagination, precision over possibility.

The line about missing insights being "rare, deferred, and distributed" is maybe the hardest thing to internalize in practice: optimization wins are loud (speed, cost, scores), while the things we prune are often counterfactual ideas that never form, weird bridges that never get built, and questions that never feel worth asking because our router did not surface them.

One thing I'm still unsure about (and would love to think about more) is how direct the analogy should be. In models, sparsity is engineered / learned under explicit objectives. In biology and culture it's much more emergent and multi-objective.
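To make that contrast concrete with a throwaway toy (nothing below comes from the article or from DeepSeek; the names and numbers are made up): in a model you can literally write the sparsity pressure into the loss as an explicit term, which is about as far from evolution's emergent, multi-objective pruning as it gets.

    import torch

    # Toy only: a vector of learned gates whose sparsity comes from an
    # explicit penalty in the objective, not from emergent pressure.
    gate_logits = torch.nn.Parameter(torch.zeros(1024))  # one gate per candidate pathway
    optimizer = torch.optim.Adam([gate_logits], lr=1e-2)

    def train_step(task_loss_fn, sparsity_weight=1e-3):
        gates = torch.sigmoid(gate_logits)   # soft on/off decision per pathway
        task_loss = task_loss_fn(gates)      # whatever the model is actually optimized for
        sparsity_loss = gates.mean()         # explicit pressure to switch pathways off
        loss = task_loss + sparsity_weight * sparsity_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    # Dummy objective just so this runs end to end.
    print(train_step(lambda g: ((g - 0.3) ** 2).mean()))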

geeksinthewoods•2mo ago
The attention lottery framing feels especially timely now that DeepSeek's V3.2 tech report is out in the open. Seeing the actual top-k sparse routing and the post-training RL numbers spelled out makes the trade-offs concrete. Huge wins on speed and context, but every pruned token really is a quiet bet against the weird tail stuff that sometimes sparks real leaps...
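For anyone who hasn't opened the report, the basic shape of per-query top-k attention is easy to sketch. This is my own toy PyTorch simplification, not DeepSeek's actual mechanism (the function name, shapes, and top_k value are made up, and the real design in the report is considerably more involved):

    import torch
    import torch.nn.functional as F

    def topk_sparse_attention(q, k, v, top_k=16):
        """Each query attends only to its top_k highest-scoring keys;
        every other attention weight is forced to zero."""
        d = q.shape[-1]
        scores = (q @ k.transpose(-1, -2)) / d ** 0.5                  # (seq, seq) relevance scores
        top_k = min(top_k, scores.shape[-1])
        kth_best = scores.topk(top_k, dim=-1).values[..., -1:]         # per-query cutoff
        scores = scores.masked_fill(scores < kth_best, float("-inf"))  # prune the tail
        weights = F.softmax(scores, dim=-1)                            # sparse attention weights
        return weights @ v

    # Toy usage: 128 tokens, 32-dim head, each token "sees" only 16 others.
    q, k, v = (torch.randn(128, 32) for _ in range(3))
    print(topk_sparse_attention(q, k, v, top_k=16).shape)  # torch.Size([128, 32])

The whole "quiet bet" lives in that masked_fill line: anything below a query's cutoff simply stops existing for the rest of the computation.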

What struck me most is how much DeepSeek's transparency accidentally lights up the closed models too. Long-context traces and million-token windows almost certainly lean on some variant of this under the hood. This article makes those black boxes feel a lot less mysterious. It leaves me both impressed by the engineering and quietly worried about the curiosity cost.

Also, the song / music video at the end is absurd in the best way!