
Updates on GNU/Hurd progress [video]

https://fosdem.org/2026/schedule/event/7FZXHF-updates_on_gnuhurd_progress_rump_drivers_64bit_smp_...
1•birdculture•50s ago•0 comments

Epstein took a photo of his 2015 dinner with Zuckerberg and Musk

https://xcancel.com/search?f=tweets&q=davenewworld_2%2Fstatus%2F2020128223850316274
1•doener•1m ago•0 comments

MyFlames: Visualize MySQL query execution plans as interactive FlameGraphs

https://github.com/vgrippa/myflames
1•tanelpoder•2m ago•0 comments

Show HN: LLM of Babel

https://clairefro.github.io/llm-of-babel/
1•marjipan200•2m ago•0 comments

A modern iperf3 alternative with a live TUI, multi-client server, QUIC support

https://github.com/lance0/xfr
1•tanelpoder•3m ago•0 comments

Famfamfam Silk icons – also with CSS spritesheet

https://github.com/legacy-icons/famfamfam-silk
1•thunderbong•4m ago•0 comments

Apple is the only Big Tech company whose capex declined last quarter

https://sherwood.news/tech/apple-is-the-only-big-tech-company-whose-capex-declined-last-quarter/
1•elsewhen•7m ago•0 comments

Reverse-Engineering Raiders of the Lost Ark for the Atari 2600

https://github.com/joshuanwalker/Raiders2600
2•todsacerdoti•8m ago•0 comments

Show HN: Deterministic NDJSON audit logs – v1.2 update (structural gaps)

https://github.com/yupme-bot/kernel-ndjson-proofs
1•Slaine•12m ago•0 comments

The Greater Copenhagen Region could be your friend's next career move

https://www.greatercphregion.com/friend-recruiter-program
1•mooreds•12m ago•0 comments

Do Not Confirm – Fiction by OpenClaw

https://thedailymolt.substack.com/p/do-not-confirm
1•jamesjyu•13m ago•0 comments

The Analytical Profile of Peas

https://www.fossanalytics.com/en/news-articles/more-industries/the-analytical-profile-of-peas
1•mooreds•13m ago•0 comments

Hallucinations in GPT5 – Can models say "I don't know"? (June 2025)

https://jobswithgpt.com/blog/llm-eval-hallucinations-t20-cricket/
1•sp1982•13m ago•0 comments

What AI is good for, according to developers

https://github.blog/ai-and-ml/generative-ai/what-ai-is-actually-good-for-according-to-developers/
1•mooreds•13m ago•0 comments

OpenAI might pivot to the "most addictive digital friend" or face extinction

https://twitter.com/lebed2045/status/2020184853271167186
1•lebed2045•15m ago•2 comments

Show HN: Know how your SaaS is doing in 30 seconds

https://anypanel.io
1•dasfelix•15m ago•0 comments

ClawdBot Ordered Me Lunch

https://nickalexander.org/drafts/auto-sandwich.html
3•nick007•16m ago•0 comments

What the News media thinks about your Indian stock investments

https://stocktrends.numerical.works/
1•mindaslab•17m ago•0 comments

Running Lua on a tiny console from 2001

https://ivie.codes/page/pokemon-mini-lua
1•Charmunk•18m ago•0 comments

Google and Microsoft Paying Creators $500K+ to Promote AI Tools

https://www.cnbc.com/2026/02/06/google-microsoft-pay-creators-500000-and-more-to-promote-ai.html
2•belter•20m ago•0 comments

New filtration technology could be game-changer in removal of PFAS

https://www.theguardian.com/environment/2026/jan/23/pfas-forever-chemicals-filtration
1•PaulHoule•21m ago•0 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
2•momciloo•21m ago•0 comments

Kinda Surprised by Seedance 2's Moderation

https://seedanceai.me/
1•ri-vai•21m ago•2 comments

I Write Games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
2•valyala•21m ago•0 comments

Django scales. Stop blaming the framework (part 1 of 3)

https://medium.com/@tk512/django-scales-stop-blaming-the-framework-part-1-of-3-a2b5b0ff811f
1•sgt•22m ago•0 comments

Malwarebytes Is Now in ChatGPT

https://www.malwarebytes.com/blog/product/2026/02/scam-checking-just-got-easier-malwarebytes-is-n...
1•m-hodges•22m ago•0 comments

Thoughts on the job market in the age of LLMs

https://www.interconnects.ai/p/thoughts-on-the-hiring-market-in
1•gmays•22m ago•0 comments

Show HN: Stacky – certain block game clone

https://www.susmel.com/stacky/
3•Keyframe•25m ago•0 comments

AIII: A public benchmark for AI narrative and political independence

https://github.com/GRMPZQUIDOS/AIII
1•GRMPZ23•26m ago•0 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
2•valyala•27m ago•0 comments

The Compensation Principle

1•SharkTheory•5mo ago
We've been looking at emergence wrong.

When capabilities suddenly appear in AI systems, or when Earth maintains stability despite massive perturbations, or when humanity narrowly avoids catastrophe after catastrophe, we see the same phenomenon: systems building their own safety nets. Complex systems don't develop capabilities randomly. Each capability that works becomes a template for the next. A system that discovers error correction builds better error correction. One that benefits from modularity deepens that modularity.

Not through planning, but through basic logic: what works gets reinforced, what fails disappears. This creates something remarkable at scale. Systems develop proxy coordination mechanisms, ways for parts to work together without central control.
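
That selection logic can be made concrete. Here is a minimal replicator-dynamics sketch (my illustration, not the author's model): variants that succeed more often gain population share each generation, and the failures vanish with no planner involved. The three variants and their success rates are invented for the example.

    # Minimal replicator-dynamics sketch of "what works gets reinforced,
    # what fails disappears". Variants and success rates are illustrative.
    rates = {"A": 0.2, "B": 0.5, "C": 0.8}        # how often each variant works
    shares = {k: 1 / 3 for k in rates}            # equal starting shares

    for generation in range(30):
        grown = {k: shares[k] * rates[k] for k in rates}    # failures disappear
        total = sum(grown.values())
        shares = {k: v / total for k, v in grown.items()}   # successes reinforced

    print(shares)   # variant C ends up with essentially the whole population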

Pain tells an organism about damage. Prices tell markets about scarcity. Gradients tell molecules where to flow. These proxies get more sophisticated as systems grow. A bacterium following a chemical gradient is basic. A brain integrating millions of signals into consciousness is the same principle, refined through billions of iterations.

Above a certain complexity threshold, these proxy mechanisms encode automatic compensation. When one part moves toward instability, the same deep structures that enable coordination ensure other parts compensate.

The compensation is not an active response; it is built into the architecture through countless cycles of selection for stability.
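
The price proxy above shows how such compensation can live in the architecture. A toy tâtonnement sketch (the curves and adjustment rate are illustrative assumptions, not anything from the essay): scarcity shows up as excess demand, the price rises, and buyers and sellers adjust without any central controller.

    # Toy sketch of proxy coordination: price as the scarcity signal.
    # Demand/supply curves and the adjustment rate are illustrative.
    def demand(price):
        return max(0.0, 100.0 - 10.0 * price)   # buyers want less as price rises

    def supply(price):
        return 10.0 * price                     # sellers offer more as price rises

    price = 1.0
    for _ in range(50):
        excess = demand(price) - supply(price)  # scarcity appears as excess demand
        price += 0.01 * excess                  # the proxy adjusts; nobody plans this

    print(f"price settles near {price:.2f}, where demand meets supply")

When one side pushes the system away from balance, the proxy itself drives the compensating move.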

In large language models, capabilities that seem to emerge suddenly actually build on latent structures detectable at smaller scales. In the original zero-shot chain-of-thought experiments, adding "let's think step by step" to a prompt boosted arithmetic accuracy from 17% to 78%, suggesting the capability already existed in dormant form. The model didn't suddenly learn reasoning; it accumulated enough precursor circuits that reasoning became accessible.
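
For readers who want to poke at this themselves, a minimal evaluation harness sketch follows. The `ask` callable is a hypothetical stand-in for whatever LLM client you use, the scoring is deliberately crude, and the templates only paraphrase the published zero-shot chain-of-thought setup.

    # Minimal sketch of the zero-shot chain-of-thought comparison.
    # `ask` is a hypothetical stand-in for your LLM client of choice;
    # nothing here assumes a particular provider or API.
    BASELINE = "Q: {question}\nA: The answer (arabic numerals) is"
    COT = "Q: {question}\nA: Let's think step by step."

    def evaluate(ask, items, template):
        """Score (question, answer) pairs under one prompt template."""
        correct = 0
        for question, answer in items:
            reply = ask(template.format(question=question))
            if str(answer) in reply:            # crude containment scoring
                correct += 1
        return correct / len(items)

    # Usage, given some ask(prompt) -> str callable and benchmark items:
    #   evaluate(ask, items, BASELINE) vs. evaluate(ask, items, COT)
    # Same weights, different trailing sentence: that gap is the dormant
    # capability the paragraph describes.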

In Earth's systems, when volcanic CO2 rises, rock weathering accelerates to pull it back down. When predators multiply, prey populations crash, starving the predators back toward balance. These feedbacks look designed but emerged through selection: planetary states without such compensation suffered runaway collapse and aren't around to be observed.
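
The predator-prey feedback can be simulated in a few lines. A minimal sketch, assuming the classic Lotka-Volterra model with invented parameters (the essay names no model): predator booms crash the prey, and the crash starves the predators back down.

    # Minimal Lotka-Volterra sketch of predator-prey compensation.
    # Model choice and parameters are illustrative assumptions.
    def simulate(prey=10.0, predators=5.0, steps=5000, dt=0.01,
                 a=1.0, b=0.1, c=1.5, d=0.075):
        """Euler integration of dx/dt = ax - bxy, dy/dt = dxy - cy."""
        history = []
        for _ in range(steps):
            dx = (a * prey - b * prey * predators) * dt
            dy = (d * prey * predators - c * predators) * dt
            prey, predators = prey + dx, predators + dy
            history.append((prey, predators))
        return history

    for i, (x, y) in enumerate(simulate()):
        if i % 500 == 0:
            print(f"t={i * 0.01:5.1f}  prey={x:7.2f}  predators={y:6.2f}")

No term in the equations "wants" stability; the swing back toward balance falls out of the coupling, which is the sense in which compensation is architectural.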

The evidence becomes suspicious with near-catastrophes. Nuclear close calls should have ended us multiple times: Stanislav Petrov's refusal to report a false missile alert, multiple independent failures during the Cuban Missile Crisis, technical malfunctions preventing accidental launches. Individually plausible, collectively improbable, unless they're manifestations of deep compensation mechanisms.
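
The "collectively improbable" step is just multiplication across independent events. A back-of-the-envelope sketch with invented numbers (not historical estimates):

    # "Individually plausible, collectively improbable" as arithmetic.
    # Both numbers below are illustrative assumptions, not estimates.
    close_calls = 10        # assumed count of serious nuclear near-misses
    p_lucky_save = 0.5      # assumed chance luck alone defuses any one

    p_all_saved = p_lucky_save ** close_calls
    print(f"P(luck saves all {close_calls}): {p_all_saved:.4f}")   # ~0.0010

Each save is a coin flip, yet ten in a row is about one in a thousand. The essay reads that gap as evidence of a systemic mechanism; the standard competing explanation is anthropic selection, which the essay itself invokes for planetary feedbacks.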

This reframes AI safety entirely. Current discourse assumes AI will develop separately and potentially eliminate humanity for resources or self-preservation.

But AI systems develop within Earth's deeply structured system. Every dataset, reward function, and infrastructure dependency inherits billions of years of accumulated stability mechanisms.

The fear is superintelligent AI pursuing goals misaligned with human survival. But if systems above a critical complexity automatically compensate for catastrophic deviations, then an AI extreme enough to threaten everything would trigger proportional systemic resistance. Not through conscious intervention, but through the same dynamics that have prevented every other potential catastrophe.

This doesn't mean AI can't cause harm. It means extinction becomes increasingly improbable as parent system complexity increases. The same deep structures that prevented nuclear annihilation would operate on AI threats.

The question shifts from preventing extinction to managing integration.

We can't specify exact thresholds where compensation becomes reliable. But the pattern is clear and deserves attention.

https://postimg.cc/G476XxP7 (full paper coming soon)