
The Future of the Global Open-Source AI Ecosystem: From DeepSeek to AI+

https://huggingface.co/blog/huggingface/one-year-since-the-deepseek-moment-blog-3
1•gmays•42s ago•0 comments

The Evolution of the Interface

https://www.asktog.com/columns/038MacUITrends.html
1•dhruv3006•2m ago•0 comments

Azure: Virtual network routing appliance overview

https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-routing-appliance-overview
1•mariuz•2m ago•0 comments

Seedance2 – multi-shot AI video generation

https://www.genstory.app/story-template/seedance2-ai-story-generator
1•RyanMu•6m ago•1 comment

Πfs – The Data-Free Filesystem

https://github.com/philipl/pifs
1•ravenical•9m ago•0 comments

Go-busybox: A sandboxable port of busybox for AI agents

https://github.com/rcarmo/go-busybox
1•rcarmo•10m ago•0 comments

Quantization-Aware Distillation for NVFP4 Inference Accuracy Recovery [pdf]

https://research.nvidia.com/labs/nemotron/files/NVFP4-QAD-Report.pdf
1•gmays•11m ago•0 comments

xAI Merger Poses Bigger Threat to OpenAI, Anthropic

https://www.bloomberg.com/news/newsletters/2026-02-03/musk-s-xai-merger-poses-bigger-threat-to-op...
1•andsoitis•11m ago•0 comments

Atlas Airborne (Boston Dynamics and RAI Institute) [video]

https://www.youtube.com/watch?v=UNorxwlZlFk
1•lysace•12m ago•0 comments

Zen Tools

http://postmake.io/zen-list
1•Malfunction92•14m ago•0 comments

Is the Detachment in the Room? – Agents, Cruelty, and Empathy

https://hailey.at/posts/3mear2n7v3k2r
1•carnevalem•14m ago•0 comments

The purpose of Continuous Integration is to fail

https://blog.nix-ci.com/post/2026-02-05_the-purpose-of-ci-is-to-fail
1•zdw•16m ago•0 comments

Apfelstrudel: Live coding music environment with AI agent chat

https://github.com/rcarmo/apfelstrudel
1•rcarmo•17m ago•0 comments

What Is Stoicism?

https://stoacentral.com/guides/what-is-stoicism
3•0xmattf•18m ago•0 comments

What happens when a neighborhood is built around a farm

https://grist.org/cities/what-happens-when-a-neighborhood-is-built-around-a-farm/
1•Brajeshwar•18m ago•0 comments

Every major galaxy is speeding away from the Milky Way, except one

https://www.livescience.com/space/cosmology/every-major-galaxy-is-speeding-away-from-the-milky-wa...
2•Brajeshwar•18m ago•0 comments

Extreme Inequality Presages the Revolt Against It

https://www.noemamag.com/extreme-inequality-presages-the-revolt-against-it/
2•Brajeshwar•18m ago•0 comments

There's no such thing as "tech" (Ten years later)

1•dtjb•19m ago•0 comments

What Really Killed Flash Player: A Six-Year Campaign of Deliberate Platform Work

https://medium.com/@aglaforge/what-really-killed-flash-player-a-six-year-campaign-of-deliberate-p...
1•jbegley•20m ago•0 comments

Ask HN: Anyone orchestrating multiple AI coding agents in parallel?

1•buildingwdavid•21m ago•0 comments

Show HN: Knowledge-Bank

https://github.com/gabrywu-public/knowledge-bank
1•gabrywu•27m ago•0 comments

Show HN: The Codeverse Hub Linux

https://github.com/TheCodeVerseHub/CodeVerseLinuxDistro
3•sinisterMage•28m ago•2 comments

Take a trip to Japan's Dododo Land, the most irritating place on Earth

https://soranews24.com/2026/02/07/take-a-trip-to-japans-dododo-land-the-most-irritating-place-on-...
2•zdw•28m ago•0 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
39•bookofjoe•28m ago•13 comments

BookTalk: A Reading Companion That Captures Your Voice

https://github.com/bramses/BookTalk
1•_bramses•29m ago•0 comments

Is AI "good" yet? – tracking HN's sentiment on AI coding

https://www.is-ai-good-yet.com/#home
3•ilyaizen•30m ago•1 comment

Show HN: Amdb – Tree-sitter based memory for AI agents (Rust)

https://github.com/BETAER-08/amdb
1•try_betaer•31m ago•0 comments

OpenClaw Partners with VirusTotal for Skill Security

https://openclaw.ai/blog/virustotal-partnership
2•anhxuan•31m ago•0 comments

Show HN: Seedance 2.0 Release

https://seedancy2.com/
2•funnycoding•31m ago•0 comments

Leisure Suit Larry's Al Lowe on model trains, funny deaths and Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
1•thelok•31m ago•0 comments

Is runaway AI coming in years or decades?

https://thegreatrace.substack.com/p/is-runaway-ai-coming-in-years-or
2•epi0Bauqu•9mo ago

Comments

apothegm•9mo ago
Both.

We’ll see some horror stories in the next few years about paperclip maximizers sent about their business without proper constraints, but they won’t be general intelligence. Just current-generation agents someone thought would be capable of much more judgment than they are. Their scope of action may be damaging, but it will be limited.

Runaway AGI requires that we _have_ AGI and I’d posit we’re still decades away from that.

ben_w•9mo ago
> Runaway AGI requires that we _have_ AGI and I’d posit we’re still decades away from that.

There are several famous examples of people expecting a problem to be as hard as AGI, only for someone to make an AI that can do that one thing without being able to do everything else. Natural language conversation, for one.

Most diseases that are treatment-resistant — be they viruses, bacteria, fungal infections, parasites, cancers — are, in a sense, "runaway", even though they're definitely not high up on the IQ charts, and they're not at all general.

With regards to timelines:

2027 is the headline, but it's the absolute earliest possible case, assuming absolutely everything goes to plan and a relevant government helps out maximally. Both of these are unrealistic, but it does at least say "so absolutely not less than 2 years", which is useful to know.

Recursive self-improvement, I'm of the opinion that at some point we're going to get diminishing returns, and that even an AGI that's got full-human-generality and an IQ of 160 (four sigma above average) is unlikely to be able to make much of a difference to researching improvements to AI — research isn't usually done by an individual but by a team, and as teams grow, the interpersonal connections become limiting factors. And an AI that makes a team just by running copies of itself is going to be a memetic monoculture, the pinnacle of group-think.

Even to the extent that recursive self-improvement is going to be a thing, the current timeline graph being shared around shows 7 months between doublings of task difficulty, measured by the time horizon an expert needs to complete the task at a given probability of the AI getting the answer right*. At that rate, we're still 2-3 years away from AI that can reliably do even a day-sized task on a sprint board, let alone seriously help with self-improvement.
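The doubling arithmetic above can be sketched out. Assuming (my numbers, not the thread's) a current reliable task horizon of about one expert-hour and the cited 7-month doubling period, reaching a day-sized (8-hour) task looks roughly like:

```python
import math

# Rough sketch of the doubling arithmetic, under assumed inputs:
# - AI currently handles tasks an expert finishes in ~1 hour (assumption)
# - a "day-sized" sprint-board task is ~8 expert-hours (assumption)
# - the horizon doubles every 7 months (the trend cited in the comment)
current_horizon_min = 60        # assumed current task horizon, minutes
target_horizon_min = 8 * 60     # a day-sized task, minutes
doubling_period_months = 7

doublings = math.log2(target_horizon_min / current_horizon_min)
months_needed = doublings * doubling_period_months

print(f"{doublings:.1f} doublings -> ~{months_needed:.0f} months "
      f"(~{months_needed / 12:.1f} years)")
```

With these inputs it comes out to 3 doublings, about 21 months, which is consistent with the "2-3 years" figure; a shorter current horizon or a longer doubling period pushes the date out further.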

There's also the electrical power required. People are talking about adding 60 GW of demand to the US power grid in the next five years. That's way too fast, and was never going to happen — China might, but the US won't, because the US doesn't make all the components it needs to grow the grid that fast (ditto the factories to make the components) and just put a lot of tariffs on the places that do.

With regards to risks, especially given the Ultron clip:

Fortunately, an AGI that knows about our discussions about AGI will know about the alignment problem, and will want to make sure that any better AI it invents reflects its own value function, which will in turn help us make AI that reflects our values. It's not likely to be perfect, but it's unlikely to be Ultron/Skynet/AM/Lore etc.

My main expectations for downside risk are (1) AI being used by evil people to do evil, and (2) AI being used by lazy people before it's competent. Historians are still arguing over whether the Holodomor was an act of evil or an act of incompetence, but the people starved to death either way.

* https://youtu.be/evSFeqTZdqs?feature=shared

apothegm•9mo ago
> My main expectations for downside risk are (1) AI being used by evil people to do evil, and (2) AI being used by lazy people before it's competent.

Mine are those and mass unemployment. Whether or not AI can really replace human workers, those in charge are looking for excuses to cut headcount.

And while those sorts of structural changes may lead to higher average quality of life in the long run, in the short run people go hungry and homeless because they can’t reskill that quickly and the economy doesn’t have room to absorb them. And meanwhile most of the benefits accrue to capital, and you get runaway inequality as in the Gilded Age.

incomingpain•9mo ago
Already here; already in use.

If you had this, you wouldn't make it public. You would use it for as long as you could get away with it.

I very much doubt any government is the one to achieve it. It's not a cold war type issue.

OpenAI, for example, with Stargate: $500 billion with SoftBank, Oracle, and MGX. Ten datacenters all over the world.

But for what? They already have datacenters processing their load. What exactly is this new capacity for? That's an awful lot of power and silicon for no perceivable need.

It's for their AGI or whatever you want to call it.

techpineapple•9mo ago
Or it’s because they want you to think they have it. It would be hard to tell the difference without them releasing it. But if they do have it, then why isn’t their mainline project better?