The future seems very grim. I find it highly unlikely that we will reliably solve alignment, and even if we do, it seems equally unlikely that whoever controls AI will act in humanity's interests rather than in pursuit of their own goals. Even if the AI revolution goes relatively well, it's hard to see a future where the economics of physical labor vs. robot labor don't play out to the detriment of most humans, or one where humans don't cease being actors and become mere observers of something far beyond their reach.
Truly solving alignment would mean alignment for the entirety of humanity.
shawnjharris•7mo ago
This isn't sci-fi. Agentic AI is already setting agendas, optimizing workflows, and nudging our choices. Drawing from Heidegger, Arendt, Borgmann, and Habermas, I trace how we got here and what we risk losing: agency, purpose, and the meaning of work.
The piece also dives into surveillance capitalism, instrumental convergence, and how enterprise AI adoption is accelerating this role reversal. I end with practical ways to stay human in the loop, because reclaiming purpose isn’t optional.
Would love your thoughts, critiques, and counterarguments. This is the future we're all stepping into.
ednite•7mo ago
Thanks for the essay, and keep them coming.
cookiengineer•7mo ago
The dystopian world in that trilogy is becoming more and more of a reality, and in the future humanity will more or less become a tool for programming, similar to how humans help a supercomputer solve "puzzles" that a machine cannot solve on its own.
In this context I always mention a quote from an old friend, who said: "The rat race of the economy will force humans to do programming in one form or another; there is no way to avoid it."
To me, the first level of software automation was the 2000s, when each and every company automated itself with Excel spreadsheets. You'd be mind-blown working for a corporate enterprise just by looking at the crazy use cases Excel has. That's the real reason companies cannot migrate away from Microsoft software.
Now we have the breakthrough of agentic coding tools, which will eventually be plugged into the machine via MCP.
And the next big thing will be whatever makes this possible in a training loop, where humans sit in the testing and discovery loop of "what to write next", similar to how the supercomputer in The Zero Theorem was portrayed (or your story's marketing AI).
In an economic context I highly recommend watching CGP Grey's "Humans Need Not Apply" (2014) video. Even where the predictions are off, the general truth in it remains, and that's what the job market is seeing now. To me, the current software engineering market reflects what CGP Grey predicted at the time, even if it might still be too early to go all-in on AI and LLMs.