
The Compensation Principle

1•SharkTheory•19h ago
We've been looking at emergence wrong.

When capabilities suddenly appear in AI systems, or when Earth maintains stability despite massive perturbations, or when humanity narrowly avoids catastrophe after catastrophe, we see the same phenomenon: systems building their own safety nets. Complex systems don't develop capabilities randomly. Each capability that works becomes a template for the next. A system that discovers error correction builds better error correction. One that benefits from modularity deepens that modularity.

Not through planning, but through basic logic: what works gets reinforced, what fails disappears. This creates something remarkable at scale. Systems develop proxy coordination mechanisms, ways for parts to work together without central control.

Pain tells cells about damage. Prices tell markets about scarcity. Gradients tell molecules where to flow. These proxies get more sophisticated as systems grow. A bacterium following a chemical gradient is basic. A brain integrating millions of signals into consciousness is the same principle, refined through billions of iterations.

Above a certain complexity threshold, these proxy mechanisms encode automatic compensation. When one part moves toward instability, the same deep structures that enable coordination ensure other parts compensate.

Not as a reaction after the fact: the compensation is built into the architecture through countless cycles of selection for stability.
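
To make "built into the architecture" concrete, here is a minimal toy sketch (my illustration, not the author's model): a handful of coupled parts whose update rule, the kind selection would retain, automatically absorbs a shock to any one part.

```python
import random

# Toy model (not from the post): parts coupled by stabilizing feedback.
# Each part drifts randomly; the update rule pulls parts toward each
# other and pulls the aggregate back toward its baseline.

def step(parts, coupling=0.5, restoring=0.1, noise=0.1):
    mean = sum(parts) / len(parts)
    return [p + random.gauss(0, noise)       # random perturbation
              - coupling * (p - mean)        # parts compensate for each other
              - restoring * mean             # the aggregate self-corrects
            for p in parts]

parts = [0.0] * 10
parts[0] = 5.0                               # shove one component hard
for _ in range(200):
    parts = step(parts)

print(round(max(abs(p) for p in parts), 2))  # the shock has been absorbed
```

No part "responds" to the shock; the same rule that runs every step happens to erase it.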

In large language models, capabilities that seem to emerge suddenly actually build on latent structures detectable at smaller scales. Adding "let's think step by step" to a prompt can boost accuracy from 17% to 78%, proving the capability existed in dormant form. The model didn't suddenly learn reasoning; it accumulated enough precursor circuits that reasoning became accessible.
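
Those figures refer to the zero-shot chain-of-thought result (Kojima et al., 2022), where the entire intervention is an appended phrase. A minimal sketch of the setup; `generate` is a hypothetical stand-in for whatever completion API serves the model:

```python
# Zero-shot chain-of-thought prompting: the only difference between the
# two conditions is the appended trigger phrase. `generate` is a
# hypothetical placeholder; substitute any text-completion model call.

def generate(prompt: str) -> str:
    raise NotImplementedError("plug in a model call here")

question = ("A juggler can juggle 16 balls. Half of the balls are golf "
            "balls, and half of the golf balls are blue. "
            "How many blue golf balls are there?")

direct_prompt = f"Q: {question}\nA: The answer is"
cot_prompt    = f"Q: {question}\nA: Let's think step by step."

# The trigger phrase elicits intermediate reasoning before the final
# answer, which is where the reported accuracy jump on arithmetic
# benchmarks (roughly 17% -> 78%) comes from.
print(cot_prompt)
```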

In Earth's systems, when volcanic CO2 rises, rock weathering accelerates to pull it back down. When predators multiply, prey populations crash, starving predators back to balance. These mechanisms look designed but emerged through selection: planetary states without such compensation experienced runaway collapse and aren't around to be observed.
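
The predator-prey compensation is the textbook Lotka-Volterra dynamic. A minimal simulation, with illustrative parameters rather than anything from the post, shows the automatic overshoot-and-correction:

```python
# Classic Lotka-Volterra predator-prey dynamics: predator overshoot
# crashes the prey population, which starves predators back toward
# balance. Parameters are illustrative only, not fitted to anything.

def simulate(prey=10.0, pred=5.0, steps=5000, dt=0.01,
             growth=1.1, predation=0.4, death=0.4, efficiency=0.1):
    predator_history = []
    for _ in range(steps):
        dprey = growth * prey - predation * prey * pred
        dpred = efficiency * prey * pred - death * pred
        prey, pred = prey + dprey * dt, pred + dpred * dt
        predator_history.append(pred)
    return predator_history

history = simulate()
# Predator numbers repeatedly overshoot and fall back; neither species
# coordinates anything, yet the system keeps returning toward balance.
print(round(min(history), 2), round(max(history), 2))
```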

The evidence becomes most suggestive with near-catastrophes. Nuclear close calls should have ended us multiple times: Stanislav Petrov's decision to treat a launch warning as a false alarm, multiple independent failures during the Cuban Missile Crisis, technical malfunctions preventing accidental launches. Each is individually plausible; together they are improbable, unless they are manifestations of deep compensation mechanisms.
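
The "collectively improbable" step is just multiplication of independent probabilities; the numbers below are invented purely to illustrate the shape of the argument:

```python
# "Individually plausible, collectively improbable" is a claim about the
# product of independent probabilities. The per-event probability here
# is an assumption for illustration, not an estimate from the post.

p_lucky_save = 0.30              # assumed chance any single close call
                                 # resolves safely by luck alone
for n in (3, 5, 10):
    print(n, p_lucky_save ** n)  # 0.027, ~0.0024, ~0.0000059
```

Under independence the joint probability collapses quickly, which is why a long streak of lucky saves invites a structural explanation.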

This reframes AI safety entirely. Current discourse assumes AI will develop separately and potentially eliminate humanity for resources or self-preservation.

But AI systems develop within Earth's deeply structured system. Every dataset, reward function, and infrastructure dependency inherits billions of years of accumulated stability mechanisms.

The fear is a superintelligent AI pursuing goals misaligned with human survival. But if systems above a critical complexity automatically compensate for catastrophic deviations, then an AI deviation extreme enough to threaten everything would trigger proportional systemic resistance. Not through conscious intervention, but through the same dynamics that have prevented every other potential catastrophe.

This doesn't mean AI can't cause harm. It means extinction becomes increasingly improbable as parent system complexity increases. The same deep structures that prevented nuclear annihilation would operate on AI threats.

The question shifts from preventing extinction to managing integration.

We can't specify exact thresholds where compensation becomes reliable. But the pattern is clear and deserves attention.

https://postimg.cc/G476XxP7 (full paper coming soon)

Database failure lessons at Amazon in 1997 (2004) [pdf]

https://web.archive.org/web/20090327153456/http://www.bluegecko.net/Default.aspx?app=LeadgenDownl...
1•harshreality•4m ago•0 comments

The Web Behind Glass

https://medienbaecker.com/articles/the-web-behind-glass
1•OuterVale•5m ago•0 comments

Writing Code Is Easy. Reading It Isn't

https://idiallo.com/blog/writing-code-is-easy-reading-is-hard
2•jnord•7m ago•0 comments

Bonini's paradox – The more complete a model is, the harder it is to understand

https://en.wikipedia.org/wiki/Bonini%27s_paradox
1•gidellav•7m ago•0 comments

Show HN: I Rewrote printf – Now 10x More Powerful (v1.3)

2•Forgret•9m ago•0 comments

Musk's SpaceX Agrees to Buy Echostar Spectrum for $17B

https://www.bloomberg.com/news/articles/2025-09-08/starlink-is-said-in-advanced-talks-to-acquire-...
2•supertrope•10m ago•0 comments

Go for Bash Programmers – Part II: CLI Tools

https://github.com/go-monk/from-bash-to-go-part-ii
1•reisinge•10m ago•0 comments

Spectroscopy Like it's 1985 [video]

https://www.youtube.com/watch?v=1J0GFmZ1BX0
1•gnoll_of_gozag•10m ago•0 comments

Teams Outlast Projects

https://frederickvanbrabant.com/blog/2025-09-05-teams-outlast-projects/
1•TheEdonian•12m ago•0 comments

Ask HN: How do I announce that I'm looking for a new job while being employed?

1•throwawayAhoy•13m ago•0 comments

Orsted Sues Trump Administration in Fight to Restart Its Blocked Wind Farm

https://www.nytimes.com/2025/09/04/climate/orsted-trump-wind-farm-lawsuit.html
4•mitchbob•14m ago•1 comment

A complete map of the Rust type system

https://rustcurious.com/elements/
2•ashvardanian•15m ago•0 comments

Package Managers Are Evil

https://www.gingerbill.org/article/2025/09/08/package-managers-are-evil/
1•gingerBill•17m ago•0 comments

Are Humans Watching Animals Too Closely?

https://www.theatlantic.com/science/2025/09/animal-privacy-surveillance-dogs/684132/
1•FinnLobsien•19m ago•0 comments

Reverse-engineering Roadsearch Plus, or, roadgeeking with an 8-bit CPU

http://oldvcr.blogspot.com/2025/08/make-your-apple-ii-or-commodore-64.html
1•atjamielittle•22m ago•0 comments

Adjacency Matrix and std:mdspan, C++23

https://www.cppstories.com/2025/cpp23_mdspan_adj/
1•ashvardanian•22m ago•0 comments

Pickleball Took over Tennis Courts, as Seen from the Sky

https://www.nytimes.com/interactive/2025/09/01/upshot/pickleball.html
1•bewal416•23m ago•0 comments

After Afghan Quake, Many Male Rescuers Helped Men but Not Women

https://www.nytimes.com/2025/09/04/world/asia/afghanistan-earthquake-rescue-efforts-women.html
1•isolli•25m ago•0 comments

C++20 Modules: Practical Insights, Status and TODOs

https://chuanqixu9.github.io/c++/2025/08/14/C++20-Modules.en.html
1•ashvardanian•28m ago•0 comments

A desktop environment without graphics (tmux-like)

https://github.com/Julien-cpsn/desktop-tui
1•mustaphah•29m ago•0 comments

Show HN: Dir2md – Convert Any Repo into AI-Ready Markdown Blueprints

https://github.com/Flamehaven/dir2md
1•Flamehaven01•29m ago•0 comments

Hot Chips 2025: Session 1 – CPUs – By George Cozma

https://chipsandcheese.com/p/hot-chips-2025-session-1-cpus
2•rbanffy•33m ago•0 comments

Getting AI Agent Architecture Right with MCP

https://decodingml.substack.com/p/getting-agent-architecture-right
1•rbanffy•34m ago•0 comments

Tyromancy (Telling the future using cheese)

https://en.wikipedia.org/wiki/Tyromancy
1•reaperducer•35m ago•0 comments

Indiana Jones and the Last Crusade Adventure Prototype Recovered for the C64

https://www.gamesthatwerent.com/2025/09/indiana-jones-and-the-last-crusade-adventure-prototype-re...
2•ibobev•35m ago•0 comments

VMware's in court again. Customer relationships rarely go this wrong

https://www.theregister.com/2025/09/08/vmware_in_court_opinion/
16•rntn•36m ago•0 comments

Plot IMDB Series Ratings

https://imdb.derfor.dk/
1•0x000042•37m ago•1 comment

10xDevAi

https://10xdevai.com
1•chaimvaid•39m ago•0 comments

Your Zodiac Sign Is 2k Years Out of Date

https://www.nytimes.com/interactive/2025/upshot/zodiac-signs.html
2•gk1•40m ago•0 comments

Nicholas (Nick) J. Fuentes

https://x.com/NickJFuentes
1•barrister•44m ago•0 comments