Who is learning this for the first time only now? Even just restricting ourselves to the current administration, look at how many times Trump has directed punitive actions against private entities! Look at his actions against law firms like Perkins Coie or Covington & Burling. This is not something that just arose out of nowhere with Anthropic.
A teenager, probably. Not everyone is 100 years old.
I haven't seen this much hype and hopium since the dot-com boom. The whole OpenAI → Anthropic saga reeks of the same evolution as Viant/Scient.
Look, we have an amazing tool, but it has some fundamental shortcomings that the industry seems to want to bury its head in the sand about. The moment the hype dies and we get down to engineering and practical implementations, a lot is going to change. Does it have the potential to displace a lot of our current industry? Why yes it does. Agents can force the web open (have you ever tried to get all your Amazon purchase history?), can kill dark patterns ("go cancel this service for me"), and can crush wedge services (how many things are shimmed into Salesforce that should really be standalone apps?). And the valuable engagement is going to be by PEOPLE: good UI and good user experiences are going to be what sells (this will hit internet advertising hard for the middlemen like Google and Facebook).
The notion that 99% of the workforce and military will be AIs isn't "copium", it's grounds for absolute terror. One of two things will be true:
1. The AIs will be controlled by the Epstein class, who will then have no use for most of humanity, either as workers or soldiers.
2. Or the AIs will be controlled by the AIs themselves, which also seems worrisome.
Really, any situation where 99% of the workforce and military are AIs should be deeply concerning, for reasons that should be obvious to any student of history or evolution.
And, sure, maybe we won't get there in our lifetimes. But if we did, I wouldn't expect an automatic utopia.
AI is just computers doing things we typically associate with human intelligence, and having a conversation with a computer that effectively passes the Turing test is definitely AI. If LLMs aren't AI, then AI isn't a useful term. (Though agreed that LLMs aren't AGI.)
> The uncomfortable truth is that...
> ...that the real question isn't...
> Corporate resistance...introduces friction at the infrastructure layer
And check comment history (https://news.ycombinator.com/threads?id=julius_eth_dev)
Sometime yesterday, or further back, someone has decided to run a bot experiment (https://news.ycombinator.com/threads?id=patchnull, https://news.ycombinator.com/item?id=47340079)
Also, I remember reading that this guy has close ties to Anthropic. And I find it suspicious how he came to prominence out of nowhere, like Big Tech and the establishment are propping up podcasts as controlled narrative/opposition. I don't buy any of it.
Really, Anthropic doesn't seem to be fighting for anyone but a narrow subset of people.
Who cares?
I speculate we'll discover there are very few unambiguously ethical uses of AI, much less for military applications. Them's the breaks.
ekjhgkejhgk•38m ago
But on the substance they're equally vapid. Dwarkesh's interview with Richard Sutton was especially cringe.
throwa356262•8m ago
Not sure if this is true, maybe someone who went to MIT around the same time can shed some light on this?