Rich kids get a bunch of darts to throw at the carnival game. Middle-class kids get one dart. Poor kids don't get any darts.
That's still mostly true in 2026. AI hasn't fixed it. But it has narrowed the gap at one specific stage, and that matters.
A Harvard grad and a self-taught founder from Tbilisi or Lagos now use the same models. Same Claude, same GPT, same Gemini. Same army of agents that can chew through terabytes of data overnight.
If you have a technical background, energy, and you build with more care than the next person, you can match or beat the competition at the product stage. That part of the moat is gone.
The asymmetry shows up later. Look at any recent YC batch. Still mostly Americans. Look at large rounds. Roughly 90% go to founders from top US schools, and you don't get into those schools without family money behind you. Kids from poor families don't have time to study in the States. They're busy covering rent and groceries.
So when it's time to push a product to market, your US-based competitor walks in with 10x the round size, grants from local programs, and a network you cannot replicate from outside the country. At that point it isn't a product competition anymore. It's a budget competition and a lead-gen competition. The fight for talent is always about burn rate and where the company is headquartered.
Still, the kid who used to have zero darts can now pay for Claude Code and take a shot. That isn't equality. But a few years ago it was zero, and now it's something.
jdw64•1h ago
Umberto Eco once said that the internet amplifies the wealth gap. AI is the absolute pinnacle of that phenomenon.
I'm from South Korea, and recent studies here are already showing a severe 'AI divide' emerging among middle and high school students. Lower-income households struggle to maintain educational engagement, premium AI subscriptions are prohibitively expensive, and crucially, the inputs (prompts) these students provide simply aren't good enough to get valuable outputs.
Let’s be honest: the free tiers of GPT and Gemini are terrible. For context, I spend around $300 a month on premium models and APIs, and the gap between free and paid—from output limits to tool availability—is massive.
More importantly, the current architecture of AI dictates that output is strictly bounded by input. Because LLMs navigate a latent semantic space based on token distribution, the deeper and more specific your terminology, the higher the quality of the response.
Asking an LLM to simply 'add a login feature' versus asking it to 'build a login feature while designing the core logic for the Auth server; keep the access token to a 15-minute lifespan in client memory; issue the refresh token as a Secure cookie, and apply Refresh Token Rotation (RTR) to prevent token hijacking' yields entirely different dimensions of code.
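To make the difference concrete, here is a minimal sketch of the Refresh Token Rotation pattern the detailed prompt names. This is illustrative only: an in-memory store and random IDs stand in for a real database and signed JWTs, and there is no HTTP or cookie layer. All names here are hypothetical, not from any particular framework.

```typescript
import { randomUUID } from "crypto";

const ACCESS_TTL_MS = 15 * 60 * 1000; // 15-minute access token, per the prompt

interface TokenPair {
  accessToken: string;
  accessExpiry: number;
  refreshToken: string;
}

// Active refresh tokens, and retired ones kept around so that
// reuse of an old token can be detected as a possible hijack.
const activeRefresh = new Map<string, string>();  // token -> userId
const retiredRefresh = new Map<string, string>(); // token -> userId

function issuePair(userId: string): TokenPair {
  const refreshToken = randomUUID();
  activeRefresh.set(refreshToken, userId);
  return {
    accessToken: randomUUID(),
    accessExpiry: Date.now() + ACCESS_TTL_MS,
    refreshToken,
  };
}

// RTR: every refresh retires the old token and issues a fresh pair.
// Presenting an already-retired token means it was stolen (or the
// client is buggy), so all of that user's sessions are revoked.
function rotate(refreshToken: string): TokenPair | null {
  const userId = activeRefresh.get(refreshToken);
  if (userId === undefined) {
    const hijackedUser = retiredRefresh.get(refreshToken);
    if (hijackedUser !== undefined) {
      for (const [tok, uid] of activeRefresh) {
        if (uid === hijackedUser) activeRefresh.delete(tok);
      }
    }
    return null;
  }
  activeRefresh.delete(refreshToken);
  retiredRefresh.set(refreshToken, userId);
  return issuePair(userId);
}
```

The point of the comment stands either way: you only know to ask for this kind of rotation-and-reuse-detection logic if you already know the pattern exists.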
This creates a paradox: to use AI effectively, you ironically need deep, pre-existing domain expertise. Yet, the more you rely on AI, the more you outsource your critical thinking to it, making it harder to cultivate that exact expertise over time.
I do not believe AI is an egalitarian tool.
CodingJeebus•1h ago