frontpage.

Show HN: AI-Powered Merchant Intelligence

https://nodee.co
1•jjkirsch•1m ago•0 comments

Bash parallel tasks and error handling

https://github.com/themattrix/bash-concurrent
1•pastage•1m ago•0 comments

Let's compile Quake like it's 1997

https://fabiensanglard.net/compile_like_1997/index.html
1•billiob•2m ago•0 comments

Reverse Engineering Medium.com's Editor: How Copy, Paste, and Images Work

https://app.writtte.com/read/gP0H6W5
1•birdculture•7m ago•0 comments

Go 1.22, SQLite, and Next.js: The "Boring" Back End

https://mohammedeabdelaziz.github.io/articles/go-next-pt-2
1•mohammede•13m ago•0 comments

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•14m ago•1 comment

I replaced the front page with AI slop and honestly it's an improvement

https://slop-news.pages.dev/slop-news
1•keepamovin•19m ago•1 comment

Economists vs. Technologists on AI

https://ideasindevelopment.substack.com/p/economists-vs-technologists-on-ai
1•econlmics•21m ago•0 comments

Life at the Edge

https://asadk.com/p/edge
2•tosh•27m ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
3•oxxoxoxooo•31m ago•1 comments

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•31m ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
2•goranmoomin•35m ago•0 comments

Ask HN: Has the Downfall of SaaS Started?

3•throwaw12•36m ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
2•senekor•38m ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•40m ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
3•myk-e•43m ago•5 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•44m ago•1 comment

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
4•1vuio0pswjnm7•46m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
2•1vuio0pswjnm7•47m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•49m ago•2 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•52m ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•57m ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•59m ago•1 comment

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•1h ago•1 comment

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•1h ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•1h ago•1 comment

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•1h ago•1 comment

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•1h ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•1h ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
2•helloplanets•1h ago•0 comments

Attention Lottery: DeepSeek, Sparse Attention, and the Future of AI Cognition

https://geeksinthewoods.substack.com/p/attention-lottery-deepseek-sparse
1•artur_makly•2mo ago

Comments

artur_makly•2mo ago
“The degradation is subtle. The missing insights are rare, deferred, and distributed. Everyone notices a tenfold speed improvement; few notice the disappearance of an idea that might have changed the world.”

— funny correlation — this is the story of humanity’s biological, psychological, and philosophical evolution as well.

This is no different... history doing its thing again. Same Darwinian optimization, just swapped out the substrate. Silicon moves faster than carbon, which means we're speed-running toward some endpoint we can't quite see yet. Maybe we still get to choose architectural diversity before everything locks in. Or maybe we're already too late and just don't know it yet. To what final end?

Some uncanny correlations:

Biological Evolution: Just as DeepSeek's sparse attention sacrifices rare token connections for computational efficiency, biological evolution has consistently pruned "expensive" cognitive capabilities that didn't offer immediate survival advantage. The human brain operates on roughly 20 watts, an engineering marvel achieved through ruthless optimization. We lost the ability to synthesize vitamin C, to regenerate limbs, to perceive ultraviolet light, not because these capacities were useless, but because maintaining the metabolic infrastructure for rarely-used functions was too costly in ancestral environments where caloric scarcity was the norm. The neurological pathways that might have enabled eidetic memory or synesthetic cross-modal perception were likely discarded in favor of "good enough" pattern recognition optimized for predator avoidance and social navigation. Every human today is the descendant of ancestors whose brains kept the top-k survival-relevant features and let the outliers die in the attention lottery of natural selection.

Psychological Evolution: Our cognitive architecture exhibits the same sparse attention dynamics the article describes. Confirmation bias, the availability heuristic, and attentional blindness are not bugs but features, Bayesian priors that let us operate in real-time by ignoring the vast majority of sensory and conceptual space. We don't process all possible interpretations of a social interaction; we route attention to the handful that match our existing mental models, discarding the weak signals that might reveal we've misunderstood someone entirely. The psychological research on "inattentional blindness" (the invisible gorilla experiments) reveals that humans already run on learned sparsity, we literally cannot see what falls outside our predictive frame. The rare insights that change lives often come from those improbable, low-priority connections our brains almost filtered out: the shower thought, the hypnagogic flash, the accidental conversation with a stranger. Optimizing for cognitive efficiency means most humans spend their lives in a "tenfold speed improvement" of habitual thinking, never noticing the transformative ideas their sparse attention mechanisms prevented from ever reaching consciousness.

Philosophical Evolution: The history of thought reveals how philosophical paradigms function as civilizational sparse attention mechanisms, collective cognitive shortcuts that determine which questions a culture deems worth asking. The mechanistic worldview of the Enlightenment achieved extraordinary predictive power by treating nature as clockwork, but it systematically ignored (rendered computationally irrelevant) questions about consciousness, teleology, and qualitative experience. Logical positivism declared vast domains of human concern literally meaningless because they couldn't be empirically verified, a top-k selection rule for acceptable philosophical inquiry. Each dominant paradigm is a trained router deciding which intellectual pathways get attention and which get pruned. We celebrate the speed improvements: from Aristotelian physics to Newtonian mechanics in centuries, from Newtonian to relativistic in decades, from relativistic to quantum field theory in years. But the article's warning applies: we may never notice the metaphysical frameworks, the "ideas that might have changed the world," that were filtered out because they didn't fit the salience patterns of the prevailing epistemic architecture. The philosophical sparsity we inhabit isn't consciously chosen; it's the inherited result of centuries of optimizing for ideological efficiency, leaving vast regions of conceptual space unexplored because our collective attention mechanisms never computed those connections in the first place.

geeksinthewoods•2mo ago
Ya. It seems like evolution itself has been running a sparsity experiment for millions of years. Sparse attention may be the universal price of survival: efficiency over imagination, precision over possibility.

The line about "missing insights being rare, deferred, and distributed" is maybe the hardest thing to notice in practice: optimization wins are loud (speed, cost, scores). Meanwhile the things we prune are often counterfactual ideas that never form, weird bridges that never get built, questions that never feel worth asking because our router did not surface them.

One thing I'm still unsure about (and would love to think about more) is how direct the analogy should be. In models, sparsity is engineered / learned under explicit objectives. In biology and culture it's much more emergent and multi-objective.

geeksinthewoods•2mo ago
The attention lottery framing feels especially timely now that DeepSeek's V3.2 tech report is out in the open. Seeing the actual top-k sparse routing and the post-training RL numbers spelled out makes the trade-offs concrete. Huge wins on speed and context, but every pruned token really is a quiet bet against the weird tail stuff that sometimes sparks real leaps...
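For anyone who hasn't read the report, the top-k pruning trade-off is easy to see in toy form. A minimal sketch (my own illustration, not DeepSeek's actual kernel; `topk_sparse_attention` and all the shapes are made up for the example): each query scores every key, keeps only its top_k highest scores, and renormalizes, so the pruned keys contribute exactly zero to the output.

```python
import numpy as np

def topk_sparse_attention(q, k, v, top_k):
    """Toy single-head top-k sparse attention: each query keeps only its
    top_k highest-scoring keys and zeroes out the rest. A sketch of the
    general idea, not any production implementation."""
    scores = q @ k.T / np.sqrt(q.shape[-1])             # (n_q, n_k) similarities
    kth = np.sort(scores, axis=-1)[:, -top_k][:, None]  # per-query top_k threshold
    masked = np.where(scores >= kth, scores, -np.inf)   # prune the tail
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over survivors
    return weights @ v, weights

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))    # 4 queries
k = rng.normal(size=(16, 8))   # 16 keys
v = rng.normal(size=(16, 8))
out, w = topk_sparse_attention(q, k, v, top_k=4)
# Each query row ends up with exactly 4 nonzero attention weights;
# the other 12 keys are "pruned tokens" that contribute nothing.
```

The compute win is obvious (you only need the surviving weights), but so is the lottery: a key that lands just below the threshold gets weight 0, not "a little less attention," which is exactly the quiet bet against the tail described above.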

What struck me most is how much DeepSeek's transparency accidentally lights up the closed models too. Long-context traces and million-token windows almost certainly lean on some variant of this under the hood. This article makes those black boxes feel a lot less mysterious. It leaves me both impressed by the engineering and quietly worried about the curiosity cost.

Also, the song / music video at the end is absurd in the best way!