> Blitzhires are another form of acquisition, and not everybody may be thrilled with the outcome: employees left behind may feel betrayed and unappreciated, and investors may feel the founders have broken a social contract. But for a Blitzhire to work, usually everybody needs to work together and align. The driver behind these deals is speed. Concerns over regulatory scrutiny may be part of it, but more importantly speed. Not going through the [Hart-Scott-Rodino Antitrust Act] HSR process at all may be worth the enormous complexity and inefficiency of forgoing a traditional acquisition path.
From a comment on the OP:
> In 2023–2024, our industry witnessed massive waves of layoffs, often justified as “It’s just business, nothing personal.” These layoffs were carried out by the same companies now aggressively competing for AI talent. I would argue that the transactional nature of employer-employee relationships wasn’t primarily driven by a talent shortage or human greed. Rather, those factors only reinforced the damage caused by the companies’ own culture-destroying actions a few years earlier.
2014, https://arstechnica.com/tech-policy/2014/06/should-tech-work...
> A group of big tech companies, including Apple, Google, Adobe, and Intel, recently settled a lawsuit over their "no poach" agreement for $324 million. The CEOs of those companies had agreed not to do "cold call" recruiting of each others' engineers until they were busted by the Department of Justice, which saw the deal as an antitrust violation. The government action was followed up by a class-action lawsuit from the affected workers, who claimed the deal suppressed their wages.
Is it? This whole piece just reads like mega funds and giga corps throwing ridiculous cash at pay-to-win. Nothing new there.
We can’t train more people? I didn’t know Universities were suddenly producing waaaay less talent and that intelligence fell off a cliff.
Things have gone parabolic! It’s giga mega VC time!! Adios early stage, we’re doing $200M Series Seed pre revenue! Mission aligned! Giga power law!
This is just M2 expansion and wealth concentration. Plus a total disregard for 99% of employees. The 1000000x engineer can just do everyone else’s job and the gigachad VCs will back them from seed to exit (who even exits anymore, just hyper scale your way to Google a la Windsurf)!
Everything else is just executives doing a bit of a dance for their investors, à la "We won't need employees anymore!"
We need more wealth concentration, simply because the opposite of this is the prevailing normie zeitgeist. You can just write it off based on how popular it is to hate wealth.
If you talked to any of these folks worth billions, they aren't particularly smart, and their ideas aren't really interesting. It took us a few years to go from GPT-3 to DeepSeek V3, and then another few years to go from Sonnet 4 to Kimi K2, both open-source models built on way lower funding. This hints at a deeper problem than what "hypercapitalism" suggests. In fact, it suggests that capital distribution in its current state is highly inefficient and we are simply funding the wrong people.
Smart AI talent aren't going to be out there constantly chasing funding or the best deals. They would rather just work. Capital has gotten too used to not doing the ground work of seeking them out. Capital needs to be more tech savvy.
VCs and corporate development teams don't actually understand the technology deeply enough to identify who's doing the important work.
At that point, are you the 18-year-old phenom who got the big payday and sort of went downhill from there?
I imagine the biggest winners will be the ones who were doubted, not believed in, and had to fight to build real, profitable companies that become the next trillion-dollar companies.
Not that it would be bad to be Mark Cuban, but Mark Cuban is not Jeff Bezos.
And for posterity, I respect Mark Cuban. It's just that his exit came at a fortunate time; he got his money without having to go all the way through to the end.
If I had billions to throw around, instead of siphoning large amounts of it to a relatively small number of people, I would instead attempt to incubate new ideas across a very large base of generally smart people from interdisciplinary backgrounds. Give anyone who shows genuine interest some amount of compute resources to test their ideas, in exchange for X% of the payoff should their approach lead to some step-function improvement in capability. The current "AI talent war" is very different from sports, because unlike a star tennis player, it's not clear at all whose novel approach to machine learning is ultimately going to pay off the most.
The argument goes: if AI is going to destroy humanity, even if that is a 0.001% chance, we should all totally re-wire society to prevent that from happening, because the _potential_ risk is so huge.
Same goes with these AI companies. What they are shooting for, is to replace white collar workers completely. Every single white collar employee, with their expensive MacBooks, great healthcare and PTO, and lax 9-5 schedule, is to be eliminated completely. IF this is to happen, even if it's a 0.001% chance, we should totally re-wire capital markets, because the _potential reward_ is so huge.
And indeed, this idea is so strongly held (especially in silicon valley) that we see these insanely frothy valuations and billion dollar deals in what should be a down market (tremendous macro uncertainty, high interest rates, etc).
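The doom and boom arguments above share the same expected-value shape. A minimal sketch of that logic (the numbers are the commenters' hypotheticals, not real estimates):

```python
# Both arguments multiply a tiny probability by an astronomically large
# payoff. All figures below are the thread's hypotheticals.

def expected_value(probability: float, payoff: float) -> float:
    """Chance of an outcome times its magnitude."""
    return probability * payoff

# Doomer framing: "0.001% chance AI destroys humanity" (huge negative payoff).
doom = expected_value(1e-5, -1e15)

# Investor framing: same tiny probability, but the payoff is every
# white-collar salary converted into software margins (huge positive payoff).
boom = expected_value(1e-5, 1e15)

# Either term dwarfs any ordinary-sized cost or benefit, which is why each
# side concludes that society (or capital markets) must be "totally re-wired".
print(doom, boom)
```

The symmetry is the point: the same multiplication that justifies doomerism also justifies frothy valuations.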
AI doomerism seemed to lack finish, though. Anyone remember Eliezer Yudkowsky? Haven't heard from him in a while.
So, no more bitter lesson?
zer00eyz•6h ago
The author brings this up yet fails to realize that the behavior of current staff shows we have hit, or already passed, peak AI.
Moore's Law is dead, and it isn't going to come through and make AI any more affordable. Look at the latest GPUs: IPC is flat. And no one is charging enough to pay for the running costs (bandwidth, power) of the compute being used, never mind turning NVIDIA into a 4 trillion dollar company.
> Meta’s multi-hundred million dollar comp offers and Google’s multi-billion dollar Character AI and Windsurf deals signal that we are in a crazy AI talent bubble.
All this signals is that those in the know have chosen to take their payday. They don't see themselves building another Google-scale product, and they don't see themselves delivering on sama's vision. They KNOW they are never going to be the 1% company, the unicorn. It's a stark admission that there is NO breakout.
The math isn't there in the products we are building today; to borrow a Bay Area quote, there is no there there. And you can't spend your way to market capture / a moat, like every VC gold rush of the past.
Do I think AI/ML is dead? No, but I don't think that innovation is going to come out of the big players, or the dominant markets. It's going to take a bust, cheap and accessible compute (a fire sale on used processing), and a new generation of kids coming in hungry and willing to throw away a few years on a big idea. Then you might see interesting tools and scaling down (to run locally).
The first team to deliver a model that can run on a GPU alongside a game, so that there is never an "I took an arrow to the knee" meme again is going to make a LOT of money.
Avicebron•6h ago
"Local Model Augmentation", a sort of standardized local MCP that serves as a layer between a model and a traditional app like a game client. Neat :3
ythiscoyness•6h ago
There are plenty of people out there who want authors like this to believe it enough to write it.
kilpikaarna•1h ago
Fair enough, they're probably worth the money it takes to poach them. But trying to stretch the (arguably already tenuous) "10x engineer" model to explain why is just ridiculous.
Kapura•6h ago
This feels like a fundamental misunderstanding of how video game dialogue writing works. It's actually important that a player understands when the mission-critical dialogue is complete. While the specifics of a line becoming a meme may seem undesirable, it's far better that a player hears a line they know means "I have nothing to say" 100 times than generating AI slop every time the player passes a guard.
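The affordance argument above can be made concrete: mission-critical lines must be fixed and instantly recognizable, so only non-critical flavor barks would ever be candidates for generation. A minimal sketch (all names are illustrative, not from any real engine):

```python
# Mission-critical states map to fixed, authored lines the player learns
# to recognize; anything else may optionally be generated.

CRITICAL_LINES = {
    # The player must recognize this line as "quest complete, move on".
    "quest_done": "I have nothing more to say to you.",
}

def npc_line(state: str, generator=None) -> str:
    if state in CRITICAL_LINES:
        # Never generate here: recognizability IS the feature.
        return CRITICAL_LINES[state]
    if generator is not None:
        return generator(state)   # optional flavor generation for idle barks
    return "..."                  # default authored idle bark

print(npc_line("quest_done"))
print(npc_line("idle"))
```

The split makes the parent's point explicit: generation is confined to lines whose exact wording carries no gameplay signal.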
zer00eyz•5h ago
Factorio, Dwarf Fortress, Minecraft.
There are plenty of games where the whole story is driven by cut scenes.
There are plenty of games that shove your quests into their journal/pip boy to let you know how to drive game play.
Don't get me wrong, I loved Zork back in the day (and still do), but we have evolved past that, and the tools to move us further could be there.
Kapura•5h ago
Dwarf Fortress, in fact, shows just how much is possible by committing to deep systemic synthesis. Without souped-up chatbots, Dwarf Fortress creates emergent stories about cats that tread in beer and cause the downfall of a fortress, or allows players to define their own objectives and solutions, like flooding a valley full of murderous elephants with lava.
My original point is that papering over important affordances with AI slop may actually work against the goals of the game. If these techniques were good and fun, there is no reason a company like Microsoft couldn't have added them to Starfield.
ehnto•4h ago
These are systems sandboxes, places for players to explore a world and build their own stories. There are not many examples of popular games where the story is heavily procedural, and I think the reason is obvious. Players notice the pattern very quickly and get bored.
Stories are entertainment and are meant to entertain you, but systemic games are different, they are designed for you to entertain yourself with your own creativity. The proc gen just helps paint the background.
I think it's important to look at procedural generation in games through that lens, otherwise you're likely criticising proc gen for something it's not really used for that much. Proc gen content is rarely the cornerstone of a game.
ehnto•2m ago
Shadows of Doubt would benefit from being able to more dynamically interview people about the information they hold. In something like Cyberpunk, it would be really fun to talk to random NPCs for worldbuilding. And in a game like Skyrim or other open worlds, it would be fun if you had to ask for directions instead of using minimaps and markers to get places.
I think relying on AI for the artistry of a real storyline is a bad idea, but it can fill in the gaps quite readily without players getting confused about the main quests. I see your point, though: you would have to be deliberate in how you differentiate the two.