
The Adolescence of Technology

https://www.darioamodei.com/essay/the-adolescence-of-technology
56•jasondavies•2h ago

Comments

Balgair•1h ago
Initial thought about 1/5th of the way through: Wow, that's a lot of em-dashes! I wonder how much of this he actually wrote?

Edit:

Okay, section 3 has some interesting bits in it. It reminds me of all those gun start-ups in Texas that use gyros and image recognition to turn a C- shooter into an A- shooter. They typically get bought up quite fast by the government and the tech shushed away. But the ideas are just too easy to implement these days. Especially with robots and garage-level manufacturing, people can pretty much do what they want. I think that means we have to make people better people, then? Is that even a thing?

Edit 2:

Wow, section 4 on the abuse by organizations with AI is the scariest. Yikes, I feel that these days with Minneapolis. They're already using Palantir to try some of it out, but are being hampered by, well, themselves. Not a good fallback strat for anyone that is not the government. The point about companies just doing it before releasing it is, I think, underrated. What's to stop sama from just, you know, taking one of these models and taking over the world? Like, is this paper saying that nothing is stopping him?

The big one that should send huge chills down the spines of any country is this bit:

"My worry is that I’m not totally sure we can be confident in the nuclear deterrent against a country of geniuses in a datacenter: it is possible that powerful AI could devise ways to detect and strike nuclear submarines, conduct influence operations against the operators of nuclear weapons infrastructure, or use AI’s cyber capabilities to launch a cyberattack against satellites used to detect nuclear launches"

What. The. Fuck. Is he saying that the nuclear triad is under threat here from AI? Am I reading this right? That alone is reason to abolish the whole thing in the eyes of nuclear nations. This, I think, is the most important part of the whole essay. Holy shit.

Edit 3:

Okay, section 4 on the economy is likely the most relevant for all of us readers. And um, yeah, no, this is some shit. Okay, even if you take the premise as truth, then I want no part of AI (and I don't take his premise as truth). He's saying that the wealth concentration will be so extreme that the entire idea of democracy will break down (oligarchies and tyrants, of course, will be fine, ignoring that they will probably just massacre their peoples when the time is right). So, combined with the end of nuclear deterrence, we'll have Elon (let's be real here, he means sama and Elon and those people whose names we already know) taking all of the money. And everyone will then be out of a job as the robots do all the work that is left. So if you're not already well invested in a 401k, then you're just useless. Yeah, again, I don't buy this, but I can't see how the intermediate steps aren't just going to tank the whole thought exercise. Like, I get that this is a warning, but my man, no, this is unreasonable.

Edit 4:

Section 5 is likely the most interesting here. It's the wild cards, the cross products, that you don't see coming. I think he undersells this. The previous portions are all about 'faster horses' in a world where the car is coming. It's the stuff we know. This part is the best, I feel. His point about robot romances is really troubling because, like, yeah, I can't compete with an algorithmically perfect robo-john/jane. It's just not possible, especially if I live in a world where I never actually dated anyone either. Then add in an artificial womb, and there goes the whole thing; we're just pets for the AI.

One thing that I think is an undercurrent in this whole piece is the use of AI for propaganda. Like, we all feel that's already happening, right? Like, I know that the crap my family sees online about black women assaulting ICE officers is just AI garbage like the shrimp jesus stuff they choke down. But I kinda look at reddit the same way. I've no idea if any of that is AI generated now or manipulated. I already index the reddit comments as total Russian/CCP/IRG/Mossad/Visa/Coca-Cola/Pfizer garbage. But the images and the posts themselves, it just feels increasingly clear that it's all just nonsense and bots. So, like Rao said, it's time for the cozy web of Discord servers, and Signal groups, and Whatsapp, and people I can actually share private keys with (not that we do). It's already just so untrustworthy.

The other undercurrent here, which he can't name for obvious reasons, is Donny and his rapid mental and physical deterioration. Dude is clearly unfit at this point, regardless of the politics. So the 'free world' is splintering at the exact wrong time to make any rational decisions. It's all going to be panic mode after panic mode. Meaning that the people in charge are going to fall to their training and not rise to the occasion. And that training is from like 1970/80 for the US now. So, in a way, it's not going to be AI based, as they won't trust it or really use it at all. Go gen-z, I think?

Edit 5:

Okay, last bit and wrap-up. I think this is a good wrap-up, but overall not tonally consistent. He wants to end on a high note, and so he does. The essay says that he should end on the note of 'Fuck me, no idea here guys', but he doesn't. He wants 3 things here, and I'll speak to them in turn:

Honesty from those closest to the technology: Clearly not happening already, even in this essay. He's obviously worried about Donny and propaganda. He's clearly trying, but still trying to be 'neutral' and 'above it all.' Bud, if you're saying that the nuclear fucking triad is at stake, then you can't be hedging bets here. You have to come out and call balls and strikes. If you're worried about things like MAGA coming after you, you already have 'fuck you' money. Go to New Zealand or get a security detail or something. You're saying that now is the time, that we have so little of it left, and then you pull punches. Fuck that.

Urgent prioritization by policymakers, leaders, and the public: Clearly also not going to happen. Most of my life, the presidents have been born before 1950. They are too fucking old to have any clue what you're talking about. Again, this is about Donny and the Senate. He's actually talking about like 10 people here, max. Sure, Europe and Canada and yadda yadda yadda. We all know what the roadblocks are, and they clearly are not going anywhere. Maybe Vance gets in, but he's already on board with all this. And if the author is not already clear on this: you have 'fuck you' money, go get a damn hour of their time. You have the cash already, you say we need to do this, so go do it.

Courage to act on principle despite economic and political pressure: Buddy, show us the way. This is a matter of doing what you said you would do. This essay is a damn good start towards it. I'm expecting you on Dwarkesh any day now. But you have to go on Good Morning America too, and Joe Rogan, and whatever they do in Germany and Canada too. It's a problem for all of us.

Overall: good essay, too long, should be good fodder for the AstralCodexTen folks. Unless you get out on mainstream channels, I'll assume this is some hype for your product to say 'invest in me!' as things start to hit walls/sigmoids internally.

tadfisher•52m ago
Dario and Anthropic's strategy has been to exaggerate the harmful capabilities of LLMs and systems driven by LLMs, positioning Anthropic themselves as the "safest" option. Take from this what you will.

As an ordinary human with no investment in the game, I would not expect LLMs to magically work around the well-known physical phenomena that make submarines hard to track. I think there could be some ability to augment cybersecurity skill just through improved pattern-matching and search, hence real teams using it at Google and the like, but I don't think this translates well to attacks on real-world targets such as satellites or launch facilities. Maybe if someone hooked up Claude to a Ralph Wiggum loop and dumped cash into a prompt to try and "fire ze missiles", and it actually worked or got farther than the existing state-sponsored black-hat groups at doing the same thing to existing infrastructure, then I could be convinced otherwise.

foobar10000•43m ago
For your Edit 2: yes. This is being actively discussed in both open and (presumably) closed communities. Open communities being, for example, https://ssp.mit.edu/cnsp/about. They just published a series of lectures with open attendance if you wanted to listen in via Zoom. But yep, that's the gist of it. It spawned a huge discussion :)

firasd•1h ago
When I see Dario say all this I wonder what he's seeing that seems so differentiated from my POV

I wonder how much of Anthropic's success is based on a horde of prompt-to-code examples that they post-train on. We shouldn't assume Opus' web dev skills are the same as general intelligence.

Here's something Sonnet 4 did (Jul 2025) that I don't think a human intern would do when I made a wikisource tool (transcript follows)

#-#-#

F:

let’s see how the wikisource tool works.. let’s try to match “Mary hath chosen that good part” in one of the bible sources they should have

Claude: Great idea! Let me first search for some biblical content on WikiSource, then use the content match tool.

wiki_source_search

Perfect! Let me try the Douay-Rheims version of Luke, which should have the Mary and Martha story:

wiki_source_page_content_match

No matches! Let me try a slightly different phrasing – maybe it’s “hath chosen the good part”:

wiki_source_page_content_match

Still nothing. Let me try searching for just “Martha” to see if the story is there:

F:

dummy .. why would you use kjv wording in a different edition lol

Claude: Haha you’re absolutely right! I was being completely silly – of course the KJV wording “Mary hath chosen that good part” wouldn’t be in the Douay-Rheims translation. Different translations use different language entirely.

What I Did Wrong: • Used King James Version phrasing in a Catholic translation • The Douay-Rheims says “Mary hath chosen the best part” • I was looking for “good part” (KJV wording)

Should Have Started With: Just searching for “Martha” or “Mary” to see what the actual text says in that translation, then used the exact wording I found.

#-#-#

It's almost comedic.. "Perfect! Let me try the Douay-Rheims version of Luke" is where if you were watching a horror movie you'd say "Don't go down to the basement fridge!"

And I think a human when they first see the text match fail would go "hmm--ohh" but Claude starts doing permutations. This I guess is the side effect of Reinforcement Learning and system prompts that amount to saying: "Just do it. Don't ask questions. Just do it."
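The failure is mechanical if you write it down. A minimal sketch (the two quoted phrases are verbatim from the transcript above; the surrounding verse wording is abbreviated from the KJV and Douay-Rheims and is my assumption):

```python
# Exact-phrase matching across translations: the failure mode above.
kjv   = "But one thing is needful: and Mary hath chosen that good part"
douay = "But one thing is necessary. Mary hath chosen the best part"

# Searching for the KJV wording inside the Douay-Rheims text fails:
print("Mary hath chosen that good part" in douay)  # False

# A human's first move -- search a short token that survives
# translation, like a proper noun -- succeeds immediately:
print("Mary" in douay)  # True
```

That's the "hmm--ohh" step: drop to the shortest shared anchor, read what the text actually says, then match on its exact wording.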

l1n•54m ago
> Here's something Sonnet 4 did last year

Hate to be that gal but a lot has changed in the past year

root_axis•52m ago
Not with respect to this particular type of failure.
tines•8m ago
Last year was a month ago.
jonas21•31m ago
Is this how you always prompt LLMs? I have no idea what you are even asking Claude to do here.
firasd•10m ago
I was asking it to see if the wikisource tools are working by looking up a Bible quote. There was no ambiguity about the task itself; what I'm saying is that Claude 'knows' a bunch of things (the Bible has different translations) that it doesn't operationalize when doing a task -- issues that would be glaringly obvious to a human who knows the same things
drewchew•1h ago
I wish he would have used AI to make the essay shorter…
augusteo•1h ago
The framing of AI risk as a "rite of passage" resonates with me.

The "autonomy risks" section is what I think about most. We've seen our agents do unexpected things when given too much latitude. Not dangerous, just wrong in ways we didn't anticipate. The gap between "works in testing" and "works in production" is bigger than most people realize.

I'm less worried about the "power seizure" scenario than the economic disruption one. AI will take over more jobs as it gets better. There's no way around it. The question isn't whether, it's how we handle the transition and what people will do.

One thing I'd add: most engineers are still slow to adopt these tools. The constant "AI coding is bad" posts prove this while cutting-edge teams use it successfully every day. The adoption curve matters for how fast these risks actually materialize.

BinaryIgor•1h ago
What makes you think that they will just keep improving? It's not obvious at all; we might soon hit a ceiling, if we haven't already. Time will tell.

There are lots of technologies that have been 99% done for decades; it might be the same here.

Philpax•1h ago
From the essay - not presented in agreement (I'm still undecided), but Dario's opinion is probably the most relevant here:

> My co-founders at Anthropic and I were among the first to document and track the “scaling laws” of AI systems—the observation that as we add more compute and training tasks, AI systems get predictably better at essentially every cognitive skill we are able to measure. Every few months, public sentiment either becomes convinced that AI is “hitting a wall” or becomes excited about some new breakthrough that will “fundamentally change the game,” but the truth is that behind the volatility and public speculation, there has been a smooth, unyielding increase in AI’s cognitive capabilities.

> We are now at the point where AI models are beginning to make progress in solving unsolved mathematical problems, and are good enough at coding that some of the strongest engineers I’ve ever met are now handing over almost all their coding to AI. Three years ago, AI struggled with elementary school arithmetic problems and was barely capable of writing a single line of code. Similar rates of improvement are occurring across biological science, finance, physics, and a variety of agentic tasks. If the exponential continues—which is not certain, but now has a decade-long track record supporting it—then it cannot possibly be more than a few years before AI is better than humans at essentially everything.

> In fact, that picture probably underestimates the likely rate of progress. Because AI is now writing much of the code at Anthropic, it is already substantially accelerating the rate of our progress in building the next generation of AI systems. This feedback loop is gathering steam month by month, and may be only 1–2 years away from a point where the current generation of AI autonomously builds the next. This loop has already started, and will accelerate rapidly in the coming months and years. Watching the last 5 years of progress from within Anthropic, and looking at how even the next few months of models are shaping up, I can feel the pace of progress, and the clock ticking down.

ctoth•58m ago
Which technologies have been 99% "done" for "decades?"

Bicycles? carbon fiber frames, electronic shifting, tubeless tires, disc brakes, aerodynamic research

Screwdrivers? impact drivers, torque-limiting mechanisms, ergonomic handles

Glass? gorilla glass, smart glass, low-e coatings

Tires? run-flats, self-sealing, noise reduction

Hell even social technologies improve!

How is a technology "done?"

tadfisher•44m ago
It's not! But each one of your examples is in a phase of chasing diminishing returns from ever-expanding levels of capital investment.
nancyminusone•23m ago
It's done when there is no need to improve it anymore. But you can still want to improve it.

A can opener from 100 years ago will open today's cans just fine. Yes, enthusiasts still make improvements; you can design ones that open cans more easily, or ones that are cheaper to make (especially if you're in the business of making can openers).

But the main function (opening cans) has not changed.

philipkglass•1h ago
Some people say that human jobs will move to the physical world, which avoids the whole category of “cognitive labor” where AI is progressing so rapidly. I am not sure how safe this is, either. A lot of physical labor is already being done by machines (e.g., manufacturing) or will soon be done by machines (e.g., driving). Also, sufficiently powerful AI will be able to accelerate the development of robots, and then control those robots in the physical world.

I would like to believe that we're about to see a rapid proliferation of useful robots, but progress has been much slower with the physical world than with information-based tasks.

After the DARPA Urban Challenge of 2007, I thought that massive job losses from robotic car and truck drivers were only 5-8 years away. But in 2026 in the US only Waymo has highly autonomous driving systems, in only a few markets. Most embodied tasks don't even have that modest level of demonstrated capability.

I actually worry that legislators -- people with white collar jobs -- will overestimate the near-term capabilities of AI to handle jobs in general, and prematurely build solutions for a "world without work" that will be slow to arrive. (Like starting UBI too early instead of boosting job retraining, leaving health care systems understaffed for hands-on work.)

NiloCK•1h ago
Technological adolescence indeed!

In the most straightforward way possible, the commoditized intelligence-as-a-service of a technologically mature civilization must be a public utility, rather than a handful of walled gardens competing over territory, or worse, a single one that has won all.

Lerc•1h ago
One of my formative impressions of AI came from the depiction of the Colligatarch from Alan Dean Foster's The I Inside.

The AI in the book almost feels like the main message masquerading as a subplot.

Asimov knew the risks, and I had assumed until fairly recently that the lessons and explorations he had imparted into the Robot books had provided a level of cultural knowledge of what we were about to face. Perhaps the movie of I, Robot was a warning of how much the signal had decayed.

I worry that we are sociologically unprepared, and sometimes it seems wilfully so.

People discussed this potential in great detail decades ago. Indeed, the Sagan reference at the start of this post points to one of the significant contributors to the conversation, but it seems that by the time it started happening, everyone had forgotten.

People are talking in terms of who to blame, what will be taken from me, and inevitability.

Any talk of a future we might want is dismissed as idealistic or hype. Any depiction of a utopian future is met with derision far too often. Even worse, the depiction can be warped into an evil caricature of "what they really meant".

How do we know what course to take if we can't talk about where we want to end up?

root_axis•58m ago
I don't think we have much to worry about in terms of economic disruption. At this point it seems pretty clear that LLMs are having a major impact on how software is built, but for almost every other industry the practical effects are mostly incremental.

Even in the software world, the effect of being able to build software a lot faster isn't really leading to a fundamentally different software landscape. Yes, you can now pump out a month's worth of CRUD in a couple days, but ultimately it's just the same CRUD, and there's no reason to expect that this will change because of LLMs.

Of course, creative people with innovative ideas will be able to achieve more, and a talented engineer will be able to embark on a project that they didn't have the time to build before. That will likely lead to some kind of software surplus that the economy feels on the margins, but in practical terms the economy will continue to chug along at a sustained pace that's mostly in line with e.g. economic projections from 10 years ago.

jonas21•27m ago
> At this point it seems pretty clear that LLMs are having a major impact on how software is built, but for almost every other industry the practical effects are mostly incremental.

Even just a year ago, most people thought the practical effects in software engineering were incremental too. It took another generation of models and tooling to get to the point where it could start having a large impact.

What makes you think the same will not happen in other knowledge-based fields after another iteration or two?

root_axis•13m ago
Software is more amenable to LLMs because there is a rich source of highly relevant training data that corresponds directly to the building blocks of software, and the "correctness" of software is quasi-self-verifiable. This isn't true for pretty much anything else.
xcodevn•51m ago
> we may have AI that is more capable than everyone in only 1-2 years

There's no evidence this will be the case...

krunck•13m ago
I fear that when this technology grows up, it will first be in the hands of the propagandists and warmongers. The rest of us won't stand a chance against the real-time propaganda streams convincing us why "we" need to attack the bad-guy country of the month so we can take their stuff. Or maybe we'll be so sedated by genAI, 24/7, always-new, personally customized entertainment that we won't care.
direwolf20•7m ago
It's already there. Propaganda was one of the first uses of LLMs, and before that, they used humans.
