A little about me: I started CS in 2020. I liked being able to instruct and interact with the world through code. I especially liked how code runs on hardware, and how software plus hardware can make machines learn patterns.
Coding in the pre-LLM era felt like real engineering. I don’t quite have the words for it, but things felt grounded: discrete math, fundamentals, networking protocols. It felt like hard engineering. A simple web server on a Sunday morning felt like an intellectual endeavour. After reading docs, architecture patterns, and obscure engineering blogs, you actually learned something that stayed with you, plus you had a small GitHub project to be proud of (at least from a sophomore CS student's perspective). Life was simpler.
Fast forward six years. People are obsessed with AI: obsessed with results, not with process or learning. Not a day goes by without talk of “cracked builders”, “token maxing”, or “AI first”. Non-technical people ship features, then break core infra.
I feel that coding agents (and AI tools that remove the difficulty from tasks where the difficulty was the learning) are, for many people, less about value and more about dopamine hits, quick stimulation, and attention-reward loops. Like short-form content: engagement over substance.
I’m not an AI doomer, but I think we need to do a better job of training people to use it properly.
AI has shifted the reward from learning fundamentals to simply optimizing output: does it work or not?
The issue is that most people use AI to skip learning instead of to augment it. Even first-time learners jump straight to results, because everyone else is doing the same, and not skipping feels like falling behind.
AI hasn’t just automated hard work, it has made it easy to avoid learning anything even once, and still ship outputs.
Ideal use: you don’t know something -> use AI to plan, gather material, and understand deeply.

Reality: ask AI to do it -> if it works, move on -> if not, “try again”.
I saw this myself in a take-home project:

- read the spec
- sketched a simple architecture
- used AI to improve the design
- refined the doc
- built one feature at a time
- reviewed and tested before committing
Then it got harder, with deadline pressure, token limits, and rate limits:

- started dumping full feature requests
- committed without review or testing
- bundled multiple changes into single prompts
Eventually:

- “fix it”
- “metrics wrong”
- “UI broken”
- “use cache instead of DB”
Why?

- lost context
- re-reading the code was too slow
- side effects piled up
- token limits
- cognitive and technical debt
*I wasn’t emotionally invested in the code; I only cared about one thing: does it work or not?*
It became a spiral: each unreviewed change added more debt. Still, I pushed through and met the deadline with limited resources.
---
I think there are two types of engineers.
One type wants their code to work in real systems, solve real problems, and amount to serious engineering. They care about ownership, tests, structure, correctness, and SLAs. If you don’t relate, you may never have felt that “I built this from scratch” sense of ownership.
The other type is reward-driven. They just want output. They don’t care much about learning or problem depth. They use AI as leverage to move faster toward other goals: money, power, and so on.
Some people are both. I think they’ll adapt best to the AI shift and likely move faster in their careers.
vrganj•44m ago
I no longer build intricate, satisfying systems that make my brain tingle in all the right ways and like nothing else ever has. I just argue with the machine until it produces something good enough. There's no grace, no elegance in it. No hard thinking, no moment of triumph.
It all feels brittle and poorly thought out, and I don't know if it'll scale. It all makes me feel very sad. I'm considering leaving the profession over it, but it's coming for all sorts of other intellectually interesting pursuits as well.