Who is learning this for the first time only now? Even just restricting ourselves to the current administration, look at how many times Trump has directed punitive actions against private entities! Look at his actions against law firms like Perkins Coie or Covington & Burling. This is not something that just arose out of nowhere with Anthropic.
A teenager, probably. Not everyone is 100 years old.
I haven't seen this much hype and hopium since the dot com boom. The whole OpenAI -> Anthropic saga just reeks of the same evolution of Viant/Scient.
Look, we have an amazing tool, but it has some fundamental shortcomings that the industry seems to want to bury its head in the sand about. The moment the hype dies and we get to engineering and practical implementations, a lot is going to change. Does it have the potential to displace a lot of our current industry? Why yes it does. Agents can force the web open (have you ever tried to get all your Amazon purchase history?), can kill dark patterns (go cancel this service for me), and crush wedge services (how many things are shimmed into Salesforce that should really be standalone apps?). And the valuable engagement is going to be by PEOPLE; good UI and good user experiences are gonna be what sells (this will hit internet advertising hard for the middlemen like Google and Facebook).
The notion that 99% of the workforce and military will be AIs isn't "copium", it's grounds for absolute terror. One of two things will be true:
1. The AIs will be controlled by the Epstein class, who will then have no use for most of humanity, either as workers or soldiers.
2. Or the AIs will be controlled by the AIs themselves, which also seems worrisome.
Really, any situation where 99% of the workforce and military are AIs should be deeply concerning, for reasons that should be obvious to any student of history or evolution.
And, sure, maybe we won't get there in our lifetimes. But if we did, I wouldn't expect an automatic utopia.
The GP is saying that it’s a major over-extrapolation of the current progress.
You seem to be assuming we will get there instead of expecting the cracks will become more and more obvious.
AI is just computers doing things which we typically associate with human intelligence, and having a conversation with a computer that effectively passes the Turing test is definitely AI. If LLMs aren't AI, then AI isn't a useful term. (though agreed that LLMs aren't AGI, which I assume is what you're thinking of)
Wikipedia's list of AI applications: https://en.wikipedia.org/wiki/Artificial_intelligence#Applic...
There’s a similar thing with transhumanist “enhancement” or “life extension” stuff. When it actually works we call it medicine. Statistically one of the most powerful life extension techs ever developed was the cardiac bypass, which would have been sci-fi in 1900.
I’ve been using stuff like Claude Code and personally feel comfortable calling this stuff AI. Is it AGI? I don’t think so, but then again I’m not totally sure what that is. Am I AGI? I’m not universally able to handle all forms of cognition well and I can’t self modify much, so I’m not sure either. I’m not even sure if AGI is a well formed concept.
Intelligence is a pretty broad concept too. My pet rabbit is intelligent. Plants are intelligent. Bacteria are intelligent. Anything that can run an OODA loop, learn, adapt, and move toward a goal function is intelligent. By that definition some computer systems have been AI for decades. They’re just getting better.
I think there’s intelligence all around us. We just don’t get the wow factor from it unless it talks.
I personally would prefer "AI" to be "AGI" but there's no point fighting the way people use language (see: every damned pedantic comment about English usage ever!! :-)
Also, I remember reading this guy has close ties to Anthropic. Also, I find it suspicious how he came to prominence out of nowhere. Like Big Tech and the establishment are propping podcasts of controlled narrative/opposition. I don't buy any of it.
Also, somewhat spitefully, find it funny that he has multiple roommates.
Really Anthropic doesn't seem to be fighting for anyone but a narrow subset of people.
So who cares, none of the big AI providers are particularly ethical. Pick your poison as your conscience and needs allow.
Sometimes people succeed without earning it, and what matters is what they do with the success afterwards. I'd say Dwarkesh earned it, but got lucky and caught the right waves, and has surfed the hell out of his success. He's had consistently well informed, level headed takes, and has engaged the field with insight and honest curiosity.
When I see people surf like that, I applaud it. There's nothing grifty or shady, he's just had a great series of excellent opportunities and has played them for everything they're worth. Once he had a few billionaires on, that was all the social cachet he needed to continue attracting guests and high level researchers and other figures in AI.
I speculate we'll discover there's very few unambiguously ethical uses of AI, much less for military applications. Them's the breaks.
The lawfare part of it is that to coerce an individual or a company, governments are willing to abuse their power. The Biden administration did it when pressuring social media companies to censor content. The Trump administration is doing it to a much greater extent with things like ordering every government agency to stop using Anthropic and by labeling them a supply chain risk.
The ideological part of it is when Defense Sec Hegseth and Trump and AI Czar / PayPal Mafia member David Sacks repeatedly attack Anthropic as “woke”, and it is clear they’re undermining this company from their government positions based on Anthropic’s speech (first amendment violation). This obviously is part of why they attacked Anthropic in such a public way.
And the corruption part of it is OpenAI’s leaders being big supporters of the MAGA movement and the Trump administration. Greg Brockman, president of OpenAI, is the biggest donor ever to the MAGA PAC. Why did Hegseth grant a contract to OpenAI after banning Anthropic, even though OpenAI has the same red lines in their agreement (what Sam Altman claimed)? It’s because of the corruption - give Trump and his family/friends money, and you’ll get something back.
The fight against these types of government abuse has ALWAYS been happening. But the abuse is much more in the open today, and much larger in scale than ever before. Scandals like Watergate would not even make the news today. And that is what the public should be waking up to and focusing on. We need to rethink our political system significantly and add a lot more protections against the kind of things the Trump 2.0 administration has done.
As for whether code written with Claude Code should be so considered - if it’s just code that is subject to human review, I would argue that this use shouldn’t be a supply chain risk. But with Claude Code PR Review and similar products, the chance that an AI product (not limiting to Anthropic here) could own a load-bearing part of the lifecycle of a critical piece of code becomes much larger, and deserves scrutiny.
Because you can't designate a company a SCR because you don't like the contract you signed with them.
The part of the Pentagon that did this is, to put it politely, not the part that's good at planning.
> The whole background of this AI conversation is that we’re in a race with China, and we have to win. But what is the reason we want America to win the AI race? It’s because we want to make sure free open societies can defend themselves. We don’t want the winner of the AI race to be a government which operates on the principle that there is no such thing as a truly private company or a private citizen.
In the US currently, there are private citizens, and there are 'not-the-1%' citizens, where a Kavanaugh stop is legal, your voter information may be (or may have already been) seized by the DoJ or FBI, you may be tracked by out of state or federal agents on ALPRs with no warrant, for any reason, and where attending a legal protest may have your biometrics added to a database of potential domestic terrorists.
Or maybe your tax money will just be used to blow up unidentified boaters or bomb girls' schools and homes, and you'll get no say in whether that's the case because the elected body that is there to issue a declaration of war (or not) as representatives of you, has abdicated that power to a cabinet of unelected white nationalists.
But go off about how we're such a better country that believes in freedom and goodness.
Better than China as a global model? Still, yes, probably. Potentially. Depends on how the next few years go.
Even if America fails, I’d argue a global republic is a brighter potential future than a global dictatorship.
An observation one can make when comparing a republic with the rule of law to one that ain’t, whether across time or geography. There is a real benefit to having the American experiment prominent and continuing.
These aren’t mutually exclusive. The world is better off for Athens and the Roman and Harappan and Haudenosaunee republics. (Book request: history of the republic. I’ve struggled to find one.)
The CCP with internal elections was interesting and a genuine riposte to broadly-enfranchised republics. Xi as a dictator is not.
The American 'experiment' is one long history of the US doing really horrible things, but giving ourselves a pass because we dress it up in the name of freedom and self-determination.
If you ignore our slavery and the genocide of Native Americans, it's easy to paint China's slavery and genocide as evils that are unique somehow.
The real experiment of America is in seeing how self-deluded we can become if we continuously reinforce the false premise that our institutions are intrinsically good (or at least, nebulously "better").
Just like being a billionaire (or, super-wealthy, if you will), you don't get to be a superpower by doing good things.
China and the US can both be bad, and they're both going to use AI for mass internal and external surveillance and weapon targeting.
It’s a lie in the way “cats are round” is a lie—actually a lie, but one nobody brought up.
I don’t think Dwarkesh is arguing for global American hegemony. Just that if AI becomes dominant, having AIs embedded with American cultural values, broadly, is probably better than having ones seeded with Xi Jinping thought.
> China and the US can both be bad, and they're both going to use AI for mass internal and external surveillance and weapon targeting
Agree. But I don’t think any Chinese AI companies get to sue the CCP over it.
I'd really rather have a choice of both rather than be forced to accept "AI that downplays a 2 year old genocide" over "AI that covers up a 40 year massacre".
You do. So do I. If American AI goes by the wayside, we cease to have that choice anymore.
The idea that anyone would be better off with China supplanting the US is asinine. This is the same government that committed the Tiananmen square massacre and still doesn't acknowledge that anything happened.
China invaded and annexed Tibet in 1959. To the degree we had a classical definition of intent-based genocide, Beijing continues to commit it in Tibet and Xinjiang.
America’s conscience is stained. But it’s downright nonsense to go off about surveillance when the comparison is China.
I want the US to win because I live in the US and it will probably benefit me, but we’ve largely stopped pretending to value the republic so I don’t think we can claim a moral standing on these topics anymore.
To reference your other comment, the common American man has as much de facto ability to sue our government and/or leaders as the common Chinese man
Yes, the Uyghur genocide and paramilitary suppression and settler-colonialism of Tibet and Xinjiang is horrific, and will (hopefully) be recognized in the future as a genocide on par with others that 'enjoy' historical notoriety, but let's not pretend we're not well on our way to doing that here.
The rhetoric of ethnic superiority and nationalism and birthright that exists in our government is the exact same rhetoric that exists in Xi Jinping's "Imperial Han" nationalism.
My read is they’re saying we need an alternative to Chinese AI. Because with its industrial might, the default future is Chinese technological dominance.
> The whole background of this AI conversation is that we’re in a race with China, and we have to win. But what is the reason we want America to win the AI race?
Right now there are two contenders for first in the AI race. The US, and China.
You spent the rest of your comment making the case that it is not good for the US to win. Implying, though not directly saying, we would be better off with China.
You can say "oh wouldn't it be nice if Europe won instead" but they don't have anything in the race right now. We're stuck with the US or China.
No. There is no court in Beijing that can tell Xi to knock it off.
> China hasn't bombed girls' schools
Read up on the treatment of Uyghur girls in the Chinese schools. It’s Indian Removal Act stuff, except right now.
Again, nobody is arguing America is a beacon of anything right now. But between America and China, one is an explicit and proud autocracy.
[1] https://www.washingtonpost.com/national-security/2026/02/13/...
[2] https://www.nytimes.com/2026/02/13/technology/dhs-anti-ice-s...
People have also been detained with intention to be deported for their views about Palestine, with online comments being part of how they're chosen for targeting:
[3] https://www.columbiaspectator.com/news/2026/01/28/federal-go...
There was also someone jailed for a month for quoting Trump's own words about a school shooting, "we have to get over it", in the context of the Charlie Kirk's death, along with many other noted instances of retaliation against online comments around that incident:
[4] https://www.cnn.com/2025/12/17/politics/retired-cop-jailed-o...
> But within 20 years, 99% of the workforce in the military, the government, and the private sector will be AIs. This includes the soldiers (by which I mean the robot armies), the superhumanly intelligent advisors and engineers, the police, you name it.
It’s easy to point to China as a place where freedom of speech isn’t present, but try asking members of the current administration or even Supreme Court judges who won the 2020 election and see what kind of responses you get. That alone says a lot about the current state of things.
You _could_ argue that this is a flaw in the constitution, and that none of the above should be legal, and that people who support those things should be restricted in their speech or ability to hold office. This was the status quo in politics for a while! These things have all existed for a long time but this seems particularly targeted at Trump, who was famously banned from most social media platforms for years.
There are a lot of democracies (most of the EU for example) that take this stance on freedoms and will even overturn elections to keep out those who support such policies. The question is really 'does doing that protect freedom and democracy or infringe it?'
As for the second paragraph, this is just a lie, Congress has not abdicated any type of war powers to the Cabinet. There has not been any type of declaration of war, and if Congress wanted to stop the DoD, they very much could and in fact came very close to doing so. If your Congress representative did not represent your interests (in this case voted nay), you can call or email them and their office, or vote them out.
> better country that believes in freedom and goodness
I think you're letting your strong feelings here cloud your judgement, you can hold all of these opinions above without needing to fellate China, which is objectively worse on freedoms than the US. It's also important not to conflate "believes in freedom" with "perfectly meets my line of freedom."
> “Preface to the highest stakes negotiations in history.”
Like come on. The cuban missile crisis, for starters? Bro needs to calm tf down.
> Our future civilization will run on AI labor. And as much as the government’s actions here piss me off, in a way I’m glad this episode happened - because it gives us the opportunity to think through some extremely important questions about who this future workforce will be accountable and aligned to, and who gets to determine that.
I stopped reading there because this is a pointless exercise.[1]
This isn’t a roundtable. You are not even at the table. There isn’t some “thankfully time to discuss this...”—you are just out.
The Machine doesn’t need your labor? You are out. No norms. No discussions.
You either try to forcefully take control of the situation or you see yourself get discarded.
(I am here just assuming all the AI Maximalist (doom maximalist in this context, Trump and all) premises for the sake of the argument.)
[1] I did read the last paragraphs and the tenor is the same. “We must make laws and norms through our political system”… just like with nuclear bombs, of all things.
I mean... isn't that pretty much the way the current administration behaves in general? If the answer to this question is "yes", and the US executive does not in fact share the values of the author about free and open society, then the rest of the article is kinda moot (except the point that we should be talking about these things now, and encouraging congress to act).
The problem with democracy is that it can easily become a revolving door wherein capital holders can choose which candidates are allowed to approach the door.
I think democracy works well when the monetary system is constrained, for example pegged to gold or another scarce asset, because that creates a better separation between money and state: there is then less of an incentive for big companies to corrupt the revolving door to gain a financial advantage.
In a monetary system where the government can create an unlimited amount of money, the incentive to corrupt the government and political process keeps increasing.
ekjhgkejhgk•1h ago
But on the substance they're equally vapid. Dwarkesh's interview with Richard Sutton was especially cringe.
throwa356262•1h ago
Not sure if this is true, maybe someone who went to MIT around the same time can shed some light on this?
ademeure•1h ago
I'm personally very glad that Dwarkesh isn't like that. He's not perfect, but I think he's doing a way better job than other podcasters in the field right now.
ekjhgkejhgk•1h ago
First phrase: "you're saving on energy by putting data centers in space". What?
2:08 "It's harder to scale on the ground than it is in space" what?
ekjhgkejhgk•22m ago
Didnt startship exploded like 10 times by now? But in 30 months they'll be launchign 1 per hour? What?
JumpCrisscross•9m ago
I actually do. The math is more strained than anything present. But a lot of people are rejecting it out of hand without doing anything back of the envelope. Truth is, barring a seismic shift in how we permit data centers on the ground, it takes a within-the-envelope decrease in launch costs to make space-based data centers profitable. Which is then just a cheat code for building a Dyson sphere.
> Didnt startship exploded like 10 times by now?
They all explode all the time. Starship has also been consistently improving its suborbital flight characteristics. I don’t see a good argument for a fundamental design fuckup in the data we have.
> But in 30 months they'll be launchign 1 per hour?
This is nonsense. But within ten years? I think so. At least, we don’t have a good reason to reject that with current data. And that would make the cost equation flip to favoring space-based infrastructure. Which, honestly, is not the answer I expected. (I’ve done aerospace stuff for a while. Most of the back-of-the-envelope math fails. It failed for space-based solar power. It failed for asteroid mining. And it currently fails for space-based data centers. But let launch costs dip a bit, or permitting delays and risks rise a bit, and the equation balances sooner than one would think.)
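For what it's worth, the cost flip described above can be sketched with toy numbers. Everything here (launch $/kg, kg of solar/radiator/server mass per delivered kW, ground build cost, power price) is an illustrative assumption of mine, not a figure from this thread or from any launch provider:

```python
# Hedged back-of-the-envelope sketch. All numbers are hypothetical
# assumptions for illustration; cooling, maintenance, and replacement
# costs are deliberately ignored.

def orbital_cost_per_kw(launch_cost_per_kg, kg_per_kw_delivered):
    """Upfront cost to place 1 kW of powered compute in orbit."""
    return launch_cost_per_kg * kg_per_kw_delivered

def ground_cost_per_kw(capex_per_kw, power_price_per_kwh, years, load_factor=1.0):
    """Upfront build cost plus electricity over the amortization window."""
    hours = years * 365 * 24 * load_factor
    return capex_per_kw + power_price_per_kwh * hours

# Assumed: ~$1,500/kg (roughly today's medium-lift pricing) and ~50 kg
# of solar array, radiator, and server mass per delivered kW.
today = orbital_cost_per_kw(1500, 50)          # $75,000 per kW
# Assumed: an optimistic reusable-heavy-lift future at $150/kg.
cheap = orbital_cost_per_kw(150, 50)           # $7,500 per kW

# Assumed ground figures: $10,000/kW all-in build cost, $0.08/kWh, 10 years.
ground = ground_cost_per_kw(10_000, 0.08, 10)  # $17,008 per kW

print(today, cheap, ground)
```

On these made-up numbers, orbit loses badly at today's launch prices but wins once launch costs drop roughly an order of magnitude, which is the shape of the "within-the-envelope decrease" argument: the conclusion is entirely driven by the assumed $/kg, so reasonable people can plug in their own figures and land on either side.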