Microchips were never “products” in themselves. They were compute primitives. The real value emerged in operating systems, developer tools, and applications built on top.
Today, LLMs increasingly feel similar. OpenAI, Anthropic, Google, etc. are building cognitive compute layers. Most “AI startups” look like early PC software – wrappers around a new primitive.
Agents then feel like early operating systems: orchestration layers around probabilistic compute, adding memory, tool access, execution loops.
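The "orchestration layer" framing can be made concrete with a minimal sketch: a loop that feeds a growing transcript to a model, lets it invoke tools, and stops when it emits a final answer. Everything here is illustrative, not any real framework's API: `stub_model` stands in for an LLM call, and the `CALL`/`FINAL` protocol is a made-up convention.

```python
# Minimal agent loop sketch: orchestration around a (here faked) model,
# adding memory, tool access, and an execution loop. All names illustrative.

def stub_model(prompt: str) -> str:
    """Stand-in for an LLM call; a real system would hit an API here."""
    if "add 2 and 3" in prompt and "TOOL_RESULT" not in prompt:
        return "CALL calculator: 2+3"   # model decides to use a tool
    return "FINAL: the answer is 5"     # model answers after seeing the result

def calculator(expr: str) -> str:
    # Tool: deliberately tiny, only handles "a+b".
    a, b = expr.split("+")
    return str(int(a) + int(b))

def run_agent(task: str, max_steps: int = 5) -> str:
    memory = [task]                     # memory: a growing transcript
    for _ in range(max_steps):          # execution loop
        reply = stub_model("\n".join(memory))
        memory.append(reply)
        if reply.startswith("FINAL:"):
            return reply.removeprefix("FINAL:").strip()
        if reply.startswith("CALL calculator:"):    # tool access
            result = calculator(reply.split(":", 1)[1].strip())
            memory.append(f"TOOL_RESULT: {result}")
    return "gave up"

print(run_agent("Please add 2 and 3"))  # → the answer is 5
```

The point of the sketch is that the loop, memory, and tool plumbing are all deterministic scaffolding; only the model call is probabilistic. That scaffolding is exactly where "operating system"-style value would accumulate.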
If this analogy holds, the long-term value might not sit in the base models themselves, but in:
- workflow integration
- vertical domain systems
- data pipelines
- distribution
- orchestration layers
The open question: Is this closer to the Intel/AMD era (infrastructure shift), or something fundamentally different because the primitive is stochastic and language-native?
Curious how others see the analogy.
0xecro1•1h ago
derverstand•35m ago
That said, I’m not fully convinced that owning the primitive automatically means owning most of the value built on top of it in the long run.
Nvidia owns the GPU layer, but it doesn’t own the majority of the software built on top of GPUs. AWS owns infrastructure, but SaaS value still fragments heavily across vertical domains. Infrastructure providers often try to move up the stack, but specialization and domain depth tend to create space above them.
It’s possible that frontier labs will capture more horizontal value than chip companies ever did. But I’m not sure language-native compute completely collapses the stack. It might just reduce friction and lower the barrier for building vertical systems.
The interesting question to me is whether this really eliminates the ecosystem layer — or just reshapes it.