Hypercapitalism and the AI talent wars

https://blog.johnluttig.com/p/hypercapitalism-and-the-ai-talent
82•walterbell•11h ago

Comments

zer00eyz•6h ago
> If the top 1% of companies drive the majority of VC returns

The author brings this up yet fails to realize that the behavior of current staff shows we have hit, or have already passed, peak AI.

Moore's Law is dead and it isn't going to come through and make AI any more affordable. Look at the latest GPUs: IPC is flat. And no one is charging enough to pay for the running costs (bandwidth, power) of the compute being used, never mind turning NVIDIA into a 4 trillion dollar company.

> Meta’s multi-hundred million dollar comp offers and Google’s multi-billion dollar Character AI and Windsurf deals signal that we are in a crazy AI talent bubble.

All this signals is that those in the know have chosen to take their payday. They don't see themselves building another Google-scale product, they don't see themselves delivering on sama's vision. They KNOW that they are never going to be the 1% company, the unicorn. It's a stark admission that there is NO breakout.

The math isn't there in the products we are building today: to borrow a Bay Area quote, there is no there there. And you can't spend your way to market capture / a moat like in every VC gold rush of the past.

Do I think AI/ML is dead? No, but I don't think that innovation is going to come out of the big players or the dominant markets. It's going to take a bust, cheap and accessible compute (a fire sale on used hardware), and a new generation of kids coming in hungry and willing to throw away a few years on a big idea. Then you might see interesting tools and scaling down (to run locally).

The first team to deliver a model that can run on a GPU alongside a game, so that there is never an "I took an arrow to the knee" meme again is going to make a LOT of money.

Avicebron•6h ago
> The first team to deliver a model that can run on a GPU alongside a game, so that there is never an "I took an arrow to the knee" meme again is going to make a LOT of money.

"Local Model Augmentation", a sort of standardized local MCP that serves as a layer between a model and a traditional app like a game client. Neat :3

ythiscoyness•6h ago
> the 10x engineer meme doesn’t go far enough – there are clearly people that are 1,000x the baseline impact.

There are plenty of people out there who want authors like this to believe it enough to write it.

harimau777•6h ago
Obviously the specifics are going to depend on exactly how a team pegs story points, but if an average engineer delivers 10 story points during a two-week sprint, then a 1000x engineer would deliver 10,000 story points, correct? I don't see how someone can actually believe that.
asdf6969•5h ago
1000x revenue, not 1000x developer productivity, is possible sometimes. There are lots of jobs where developers also decide on the roadmap and requirements along with the execution instead of just being a ticket monkey, and a good idea executed well could easily be worth 1000x more than changing button colours and adding pagination to an API.
nopinsight•5h ago
impact != story points
seanp2k2•3h ago
Ladies and gentlemen, the problem with The Valley in 2025.
woah•3h ago
These companies spend hundreds of millions of dollars to train these models and (hope to) make billions from them. The researchers are the people who know how to do it. These aren't guys cranking out React buttons.
kilpikaarna•1h ago
They know how to train the models because they were part of a team that did it once at a competitor already. They bring with them very domain-specific knowledge and experience. It's not something you can learn at college or by hacking away in your spare time.

Fair enough, they're probably worth the money it takes to poach them. But trying to stretch the (arguably already tenuous) "10x engineer" model to explain why is just ridiculous.

Kapura•6h ago
> The first team to deliver a model that can run on a GPU alongside a game, so that there is never an "I took an arrow to the knee" meme again is going to make a LOT of money.

this feels like a fundamental misunderstanding of how video game dialogue writing works. it's actually important that a player understand when the mission-critical dialogue is complete. While the specifics of a line becoming a meme may seem undesirable, it's far better that a player hears a line they know means "i have nothing to say" 100 times than it is to generate AI slop every time the player passes a guard.

zer00eyz•5h ago
> this feels like a fundamental misunderstanding of how video game dialogue writing works.

Factorio, Dwarf Fortress, Minecraft.

There are plenty of games where the whole story is driven by cut scenes.

There are plenty of games that shove your quests into a journal/Pip-Boy to let you know how to drive gameplay.

Don't get me wrong, I loved Zork back in the day (and still do), but we have evolved past that and the tools to move us further could be there.

Kapura•5h ago
I am not sure what point you're trying to make here; none of the games you mentioned contain the famous "arrow to the knee" line.

Dwarf Fortress, in fact, shows just how much is possible by committing to deep systemic synthesis. Without souped-up chatbots, Dwarf Fortress creates emergent stories about cats who tread in beer and cause the downfall of a fortress, or allows players to define their own objectives and solutions, like flooding a valley full of murderous elephants with lava.

My original point is that papering over important affordances with AI slop may actually work against the goals of the game. If it were good and fun, there's no reason a company like Microsoft couldn't have added the technology to Starfield.

goopypoop•4h ago
procedurally generated content is the most onanistic form of art
ehnto•4h ago
It is no coincidence that the most popular procedurally generated games feature highly systemic gameplay.

These are systems sandboxes, places for players to explore a world and build their own stories. There are not many examples of popular games where the story is heavily procedural, and I think the reason is obvious. Players notice the pattern very quickly and get bored.

Stories are entertainment and are meant to entertain you, but systemic games are different: they are designed for you to entertain yourself with your own creativity. The proc gen just helps paint the background.

I think it's important to look at procedural generation in games through that lens, otherwise you're likely criticising proc gen for something it's not really used for that much. Proc gen content is rarely the cornerstone of a game.

ehnto•2m ago
There are a lot of games, and gamers I guess, that would benefit from very dynamic dialogue. It would mostly be long-term games where the feeling of immersion in the world is valuable long after completing the main quests, or systemic games where the main focus is concrete systems gameplay but dialogue could help with idle chit-chat and immersion.

Shadows of Doubt would benefit from being able to more dynamically interview people about the information they hold. In something like Cyberpunk it would be really fun to talk to random NPCs for worldbuilding. It would also be fun in a game like Skyrim or other open-world games if you had to ask for directions instead of using minimaps and markers to get places.

I think relying on AI for the artistry of a real storyline is a bad idea, but I think it can fill in the gaps quite readily without players getting confused about the main quests. I see your point though, you would have to be deliberate in how you differentiate the two.

walterbell•6h ago
https://medium.com/@villispeaks/the-blitzhire-acquisition-e3...

> Blitzhires are another form of an acquisition.. not everybody may be thrilled of the outcome.. employees left behind may feel betrayed and unappreciated.. investors may feel founders may have broken a social contract. But, for a Blitzhire to work, usually everybody needs to work together and align. The driver behind these deals is speed. Maybe concerns over regulatory scrutiny are part of it, but more importantly speed. Not going through the [Hart-Scott-Rodino Antitrust Act] HSR process at all may be worth the enormous complexity and inefficiency of foregoing a traditional acquisition path.

From comment on OP:

> In 2023–2024, our industry witnessed massive waves of layoffs, often justified as “It’s just business, nothing personal.” These layoffs were carried out by the same companies now aggressively competing for AI talent. I would argue that the transactional nature of employer-employee relationships wasn’t primarily driven by a talent shortage or human greed. Rather, those factors only reinforced the damage caused by the companies’ own culture-destroying actions a few years earlier.

2014, https://arstechnica.com/tech-policy/2014/06/should-tech-work...

> A group of big tech companies, including Apple, Google, Adobe, and Intel, recently settled a lawsuit over their "no poach" agreement for $324 million. The CEOs of those companies had agreed not to do "cold call" recruiting of each others' engineers until they were busted by the Department of Justice, which saw the deal as an antitrust violation. The government action was followed up by a class-action lawsuit from the affected workers, who claimed the deal suppressed their wages.

aspenmayer•21m ago
Workers of the world unite?
bix6•5h ago
“The AI capital influx means that mega-projects no longer seem outlandishly expensive. This is good for the world!”

Is it? This whole piece just reads as mega funds and giga corps throwing ridiculous cash at pay-to-win. Nothing new there.

We can’t train more people? I didn’t know Universities were suddenly producing waaaay less talent and that intelligence fell off a cliff.

Things have gone parabolic! It’s giga mega VC time!! Adios early stage, we’re doing $200M Series Seed pre revenue! Mission aligned! Giga power law!

This is just M2 expansion and wealth concentration. Plus a total disregard for 99% of employees. The 1000000x engineer can just do everyone else’s job and the gigachad VCs will back them from seed to exit (who even exits anymore, just hyper scale your way to Google a la Windsurf)!

anovikov•3h ago
Ironically, there's been no M2 expansion since the Covid days; M2-to-GDP is back to what it was pre-Covid, and it hasn't increased much at all even since the GFC. It's only 1.5x what it was at the bottom in 1997, when the cost of capital was much higher than today. I think this concern is misplaced.
sfblah•27m ago
M2 is the wrong statistic for sure, but the thrust of GP's comment is accurate, IMO. Fed intervention has not remotely been removed from the economy. The "big beautiful bill" probably just amounts to another round of it (fiscal excess will lead to a crisis which will force a monetary bailout).
pj_mukh•2h ago
I think the author should've clarified that this is purely a conversation about the platform plays. There will be hundreds of thousands of companies on the application layer, and mini-platform plays, staffed by your run-of-the-mill new grad or 1x engineer.

Everything else is just executives doing a bit of a dance for their investors, à la "We won't need employees anymore!"

systemvoltage•24m ago
What’s wrong with extreme wealth concentration? It’s not like hoarding cash. The wealth is the stake in companies they built or own.

We need more wealth concentration, simply because the opposite of this is the prevailing normie zeitgeist. You can just write it off based on how popular it is to hate wealth.

pringk02•5m ago
Can you write off everything popular with the normie zeitgeist?
alganet•3h ago
> If the top 1% of companies drive the majority of VC returns, why shouldn’t the same apply to talent? Our natural egalitarian bias makes this unpalatable to accept, but the 10x engineer meme doesn’t go far enough – there are clearly people that are 1,000x the baseline impact.

https://www.youtube.com/watch?v=0obMRztklqU

danieltanfh95•3h ago
These "talent wars" are overblown and a result of money having nowhere else to go. People are banking on AI and robotics for human progress to take off and that's just a result of all other ventures fizzling out with this left for capital to migrate to.

If you talked to any of these folks worth billions, they aren't particularly smart and their ideas aren't really interesting. It took us a few years to go from GPT-3 to DeepSeek V3 and then another few years to go from Sonnet 4 to Kimi K2, both open-source models on way lower funding. This hints at a deeper problem than what "hypercapitalism" suggests. In fact, it suggests that capital distribution in its current state is highly inefficient and we are simply funding the wrong people.

Smart AI talent aren't going to be out there constantly trying to get funding or the best deals. They just want to work. Capital is getting too used to not doing the groundwork to seek them out. Capital needs to be more tech savvy.

VCs and corporate development teams don't actually understand the technology deeply enough to identify who's doing the important work.

the_precipitate•2h ago
I think one of the main issues is that the 10x or 100x talents in AI have not really shown their value yet. None of these AI companies are making any money, and they are valued above highly successful and profitable companies because of their "potential". ARR is nothing if you sell goods valued at 1 dollar for 90 cents.
ijidak•26m ago
I wonder at what point this becomes like guaranteed salaries in sports, like prominent NBA players, where you work hard to get the salary. And then once you've made it, you are basically done, and it's hard to get up and motivate yourself. You've got acolytes and groupies distracting you, you're flush with cash without ever having really shipped anything or made any money. You're busy giving TED talks...

At that point, are you the 18-year-old phenom who got the big payday and sort of went downhill from there?

I imagine the biggest winners will be the ones who were doubted, not believed in, and had to fight to build real, profitable companies that become the next trillion-dollar companies.

Not that it would be bad to be Mark Cuban, but Mark Cuban is not Jeff Bezos.

And for posterity, I respect Mark Cuban. It's just that his exit came at a time when he was fortunate, as he got his money without having to go all the way through to the end.

Xcelerate•2h ago
I find the current VC/billionaire strategy a bit odd and suboptimal. If we consider the current search for AGI as something like a multi-armed bandit seeking to identify “valuable researchers”, the industry is way over-indexing on the exploitation side of the exploitation/exploration trade-off.

If I had billions to throw around, instead of siphoning large amounts of it to a relatively small number of people, I would instead attempt to incubate new ideas across a very large base of generally smart people across interdisciplinary backgrounds. Give anyone who shows genuine interest some amount of compute resources to test their ideas in exchange for X% of the payoff should their approach lead to some step function improvement in capability. The current “AI talent war” is very different than sports, because unlike a star tennis player, it’s not clear at all whose novel approach to machine learning is ultimately going to pay off the most.
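
To make the framing concrete, here is a toy epsilon-greedy bandit sketch (payoffs and numbers are invented, purely illustrative): the arms stand in for research bets, and a low epsilon concentrates nearly the whole budget on whichever bet currently looks best, which is roughly the over-indexing described above.

  # Toy epsilon-greedy multi-armed bandit; numbers are made up for illustration.
  import random

  def run_bandit(true_means, budget=10_000, epsilon=0.05):
      n_arms = len(true_means)
      pulls = [0] * n_arms
      est = [0.0] * n_arms

      def pull(arm):
          reward = random.gauss(true_means[arm], 1.0)    # noisy payoff
          pulls[arm] += 1
          est[arm] += (reward - est[arm]) / pulls[arm]   # running mean estimate

      for arm in range(n_arms):                          # try every arm once
          pull(arm)
      for _ in range(budget - n_arms):
          if random.random() < epsilon:
              pull(random.randrange(n_arms))                       # explore a random bet
          else:
              pull(max(range(n_arms), key=lambda a: est[a]))       # exploit the current best
      return pulls

  means = [0.1] * 49 + [1.0]   # one genuinely better approach hidden among 50
  for eps in (0.02, 0.5):
      pulls = run_bandit(means, epsilon=eps)
      print(f"epsilon={eps}: best arm got {pulls[-1]} of {sum(pulls)} pulls")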

entropi•1h ago
Agreed, and I suspect the explanation is that these plays are done not to search for a true AGI, but to drive up hype (and 'the line').
aspenmayer•17m ago
The higher the line goes, the higher the expected value of return on investment. It’s a virtuous cycle based on a bet on all horses, but since the EV is so high for first mover advantage for AGI, it might be worth it to overleverage compared to the past for your top picks? These are still small sums for Zuckerberg to pay personally, let alone for Meta to pay. This is already priced in.
mlsu•2h ago
The full-bodied palate of this AI market mirrors the sharp nose of 2023 AI doomerism.

The argument goes: if AI is going to destroy humanity, even if that is a 0.001% chance, we should all totally re-wire society to prevent that from happening, because the _potential_ risk is so huge.

Same goes with these AI companies. What they are shooting for is to replace white-collar workers completely. Every single white-collar employee, with their expensive MacBooks, great healthcare and PTO, and lax 9-5 schedule, is to be eliminated completely. IF this is to happen, even if it's a 0.001% chance, we should totally re-wire capital markets, because the _potential reward_ is so huge.

And indeed, this idea is so strongly held (especially in silicon valley) that we see these insanely frothy valuations and billion dollar deals in what should be a down market (tremendous macro uncertainty, high interest rates, etc).

AI doomerism seemed to lack finish, though. Anyone remember Eliezer Yudkowsky? Haven't heard from him in a while.

HPsquared•10m ago
Maybe some of that frothy capital will finally leave the housing market.
ngruhn•1h ago
> AI catch-up investment has gone parabolic, initially towards GPUs and mega training runs. As some labs learned that GPUs alone don't guarantee good models, the capital cannon is shifting towards talent.

So, no more bitter lesson?

Show HN: Refine – A Local Alternative to Grammarly

https://refine.sh
69•runjuu•2h ago•16 comments

Let's Learn x86-64 Assembly (2020)

https://gpfault.net/posts/asm-tut-0.txt.html
246•90s_dev•9h ago•51 comments

Show HN: Ten years of running every day, visualized

https://nodaysoff.run
402•friggeri•3d ago•165 comments

Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs

https://arxiv.org/abs/2502.17424
100•martythemaniak•8h ago•29 comments

OpenCut: The open-source CapCut alternative

https://github.com/OpenCut-app/OpenCut
317•nateb2022•10h ago•94 comments

A Century of Quantum Mechanics

https://home.cern/news/news/physics/century-quantum-mechanics
28•bookofjoe•3d ago•11 comments

The underground cathedral protecting Tokyo from floods (2018)

https://www.bbc.com/future/article/20181129-the-underground-cathedral-protecting-tokyo-from-floods
105•barry-cotter•3d ago•33 comments

APKLab: Android Reverse-Engineering Workbench for VS Code

https://github.com/APKLab/APKLab
104•nateb2022•10h ago•6 comments

How does a screen work?

https://www.makingsoftware.com/chapters/how-a-screen-works
411•chkhd•17h ago•82 comments

A technical look at Iran's internet shutdowns

https://zola.ink/blog/posts/a-technical-look-at-irans-internet-shutdown
170•znano•15h ago•72 comments

Hypercapitalism and the AI talent wars

https://blog.johnluttig.com/p/hypercapitalism-and-the-ai-talent
82•walterbell•11h ago•33 comments

Binding Application in Idris

https://andrevidela.com/blog/2025/binding-application/
8•matt_d•3d ago•0 comments

Burning a Magnesium NeXT Cube (1993)

https://simson.net/ref/1993/cubefire.html
34•leoapagano•3d ago•5 comments

Myanmar’s proliferating scam centers

https://asia.nikkei.com/static/vdata/infographics/myanmar-scam-centers/
64•WaitWaitWha•3h ago•10 comments

Show HN: FFmpeg in plain English – LLM-assisted FFmpeg in the browser

https://vidmix.app/ffmpeg-in-plain-english/
85•bjano•3d ago•17 comments

The Scourge of Arial (2001)

https://www.marksimonson.com/notebook/view/the-scourge-of-arial/
32•andsoitis•7h ago•15 comments

James Webb, Hubble space telescopes face reduction in operations

https://www.astronomy.com/science/james-webb-hubble-space-telescopes-face-reduction-in-operations-over-funding-shortfalls/
80•geox•5h ago•48 comments

GLP-1s Are Breaking Life Insurance

https://www.glp1digest.com/p/how-glp-1s-are-breaking-life-insurance
311•alexslobodnik•13h ago•353 comments

The upcoming GPT-3 moment for RL

https://www.mechanize.work/blog/the-upcoming-gpt-3-moment-for-rl/
199•jxmorris12•4d ago•82 comments

Show HN: A Raycast-compatible launcher for Linux

https://github.com/ByteAtATime/raycast-linux
162•ByteAtATime•15h ago•43 comments

Five companies now control over 90% of the restaurant food delivery market

https://marketsaintefficient.substack.com/p/five-companies-now-control-over-90
218•goinggetthem•11h ago•211 comments

C3 solved memory lifetimes with scopes

https://c3-lang.org/blog/forget-borrow-checkers-c3-solved-memory-lifetimes-with-scopes/
109•lerno•2d ago•80 comments

Show HN: Learn LLMs LeetCode Style

https://github.com/Exorust/TorchLeet
148•Exorust•19h ago•18 comments

How to scale RL to 10^26 FLOPs

https://blog.jxmo.io/p/how-to-scale-rl-to-1026-flops
70•jxmorris12•3d ago•4 comments

Fine dining restaurants researching guests to make their dinner unforgettable

https://www.sfgate.com/food/article/data-deep-dives-bay-area-fine-dining-restaurants-20404434.php
84•borski•16h ago•160 comments

Black hole merger challenges our understanding of black hole formation

https://gizmodo.com/astronomers-detect-a-black-hole-merger-thats-so-massive-it-shouldnt-exist-2000628197
46•Bluestein•7h ago•40 comments

Infisical (YC W23) Is Hiring DevRel Engineers

https://www.ycombinator.com/companies/infisical/jobs/qCrLiJb-developer-relations
1•vmatsiiako•15h ago

Apple's Browser Engine Ban Persists, Even Under the DMA

https://open-web-advocacy.org/blog/apples-browser-engine-ban-persists-even-under-the-dma/
6•yashghelani•38m ago•0 comments

Monitoring My Homelab, Simply

https://b.tuxes.uk/simple-homelab-monitoring.html
136•Bogdanp•3d ago•47 comments

Show HN: ArchGW – An intelligent edge and service proxy for agents

https://github.com/katanemo/archgw/
40•honorable_coder•1d ago•7 comments