* actually writing more on my own - created a personal blog just to get myself to write more
* upleveling my thinking - think more about problems and framing
* leverage my experience - guide (or sometimes force) the AI assistant to leverage my experience to avoid problems
* learning new things - rather than let AI just replace things I can do, I use AI to help me learn new things/technology faster than I would have pre-AI
I wonder lately: doesn't all that new knowledge push out the old knowledge? As in, new things replace old things we know. I don't know of any studies on this, but do we have infinite capacity for knowledge?
What about retaining it? I catch myself asking AI about random things that pop into my head, reading the answer, maybe using that knowledge once, and later no longer remembering what it was. Maybe it sticks if you use the knowledge in practice from the get-go, but projects get so complicated that sometimes it seems like there is not enough space in my brain for the things AI is teaching me.
Retaining (again just speaking for myself) requires actually using / applying the knowledge at some point within some timeframe of learning it. Otherwise yeah it fades to the point of disappearing over time.
Another way of looking at what you said is that practicing the new knowledge takes the place of practicing the old knowledge. So it isn't the knowledge that is replaced, but the learning (imprinting).
2. we will think more about the verification loop - tasks will be chosen based on how easily they can be verified
3. the concept of the difference between "generation" and "verification" will be more mainstream [1]
4. spec driven development will become more common
5. scenario testing will become mainstream (rough sketch below)
I have a few more predictions like these.
[1] I wrote a blog post on this explaining why I keep this generation vs verification difference in many parts of life https://simianwords.bearblog.dev/the-generation-vs-verificat...
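To make prediction 5 concrete, here's a rough sketch of what I mean by a scenario test: it walks an end-to-end user story instead of poking at a single function (the ShoppingCart class is a made-up example):

    class ShoppingCart:
        def __init__(self):
            self.items = {}
        def add(self, sku, qty=1):
            self.items[sku] = self.items.get(sku, 0) + qty
        def remove(self, sku):
            self.items.pop(sku, None)
        def total_items(self):
            return sum(self.items.values())

    def test_scenario_add_then_change_mind():
        # Scenario: a shopper adds a book and two pens, then drops the pens.
        cart = ShoppingCart()
        cart.add("book")
        cart.add("pen", 2)
        cart.remove("pen")
        assert cart.total_items() == 1

A test like this is cheap to generate and, more importantly, cheap to verify, which is exactly the generation/verification asymmetry in [1].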
> 4. spec driven development will become more common
I do believe both of these. Recently someone created an open-source RAR alternative covering all of its versions using LLM agents, precisely because of that spec-driven and verification/easy-debugging (or compile-time) aspect.
On the other hand, I was making a GUI application (a rough scratchpad app) in Odin, and there were so many bugs that I had to explain each one. Even then it was a lottery, or rather just unpredictable: it would fix one thing and break another, or not fix it at all.
At the end of the day, there's perhaps just no great way for it to test GUI apps. There are many GUI tasks where LLMs still feel underwhelming, especially if you want to build a GUI in a niche language.
It can do it, but the workflow is so bad that it might just not be worth it. I do wonder whether GUI development becomes the one thing AI can't do, leaving those developers' jobs safe.
I was just scrolling Upwork randomly and I saw tons of Flutter & WordPress jobs.
Like how GC languages help me do more productive work by hiding irrelevant details about memory.
I don't think I think less when writing Clojure or Rust than I would writing raw assembly code, I just broaden the scope of my projects to fill up my thinking capacity.
Can you point a LLM at a body of code and tell it "give me a concise UML chart of what this does"? I'm not advocating humans writing UML, but some representation like that may be useful to AIs. Except that they don't really do graphs very well. We may need a specification language intended to be read and written by AIs, readable by humans but seldom written by them. Going directly from natural-language specifications to code triggers the LLM blithering problem of generating too much code.
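You can get something like this today: PlantUML is one existing text format that an LLM can emit cheaply and a human can skim. A sketch of the kind of output I'd want (the classes here are made up):

    @startuml
    class OrderService {
      +place_order(cart): Order
    }
    class PaymentGateway {
      +charge(order): Receipt
    }
    OrderService --> PaymentGateway : charges via
    @enduml

A few dozen lines like that are far cheaper to verify than the code itself, which is the whole point of an AI-first specification layer.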
To do the thing I hate and use an analogy: It's not like asking a furniture maker to start using power tools; it's like asking a furniture maker to start telling a robot to make the furniture, in English. Yes, the people who were already good at furniture-making will have an advantage in how to direct the robot - but the salient point is that it's a recipe for misery for many people.
Assembly is a stretch (albeit a few applications still need it), but otherwise that sentiment (and the people who actually believe it) says a lot to me about what makes today's PCs slower, more latency-ridden, and less enjoyable to use than the machines of the past. Today's world sucks.
I'm still concerned enough about the specifics to worry about OAuth background refresh tokens silently failing in a mission-critical real-time system.
I'm not coding it, but I'm still thinking it. That's the important part, ain't it? Is it dumb, or just clever delegation?
I like making things myself. I have self-navigating robotics projects I do on my own time, and I'm not gonna use AI to do them for me; the joy I get is figuring it out myself.
I will use AI if I'm stuck on something or need a specific algo written that I've spent enough time on and couldn't figure out.
You lose some ways of thinking if AI does all the coding, unless you study that code. But that's still different from creating the code yourself.
Same with authors or songwriters. Their brains are trained by creating stories and songs; that's why it's easier for them to come up with ideas.
If you are just a reader or listener your brain doesn’t get wired in the same way.
AI is a lesser problem for already experienced developers, because they only lose some abilities; new developers will never gain those abilities in the first place, which will limit their thinking, especially for edge cases that need creativity.
I do think that what people are being paid might get adjusted to whatever is happening.
First it was offshoring; now tech companies have a convenient excuse to lay people off, and they genuinely believe companies can be 5-10x smaller with AI and that 90% of code will be written by AI.
They then push it on engineers, and some adopt it, some don't. It becomes Goodhart's law: people just start spending tokens to look good and spearhead AI use because, hey, 1) the corporation is recommending it and 2) the points you talked about.
The AI bill blows up (Cloudflare spends $5 million per month, probably more, on AI bills IIRC), and with all of this, the company fires people.
The laid-off software engineers then all try to create yet another AI tool (...using AI) or compete even harder in a job market at one of its all-time lows. Combine this with the trillion-plus dollars of US stock market value attached to the AI bubble.
I do think you are paying a price in all of this. Job insecurity feels at an all-time high; from my understanding, people in this career are just scared of losing their jobs. Some are closer to retirement than others, but that's about it.
I think nobody is that happy, to be honest: the software engineer is worried about his job, the CEO is worried about being replaced or having his product replaced by AI, the AI company is worried about how it will ever be profitable, the investors are worried they bought into a bubble, and the government is worried about all these people, while other distractions (think UFO files, for example) and wars are happening, successfully diverting our attention from the real issues.
I don't know, but I think we are all paying a price, and I say this as someone (a young guy in his teens) who sometimes feels the most over-empowered by AI. I just feel like we lost something more critical along the way. We lost some sense of our humanity and peace as we embed this technology and have people thinking about it 24/7. To be honest, I sometimes feel I would've fared okay without the AI thing too; I don't care about my personal gains so much sometimes, and I think the world would probably have been net positive if AIs had plateaued or were never created.
And how are people forgetting to code by using LLMs? Do they just mean they forgot the syntax of a particular language? Or forgot how to architect features or how the development lifecycle works?
I've mostly used LLMs to build more complex things that would have been a lot to manage previously, or to build something completely new and learn how it works. I feel like I've only become a better engineer (and programmer too) because of LLMs.
That said, I have experience. I could absolutely see myself falling into this as a junior or even mid level dev. I'd no doubt not feel that feeling on my neck if it wasn't scarred from code review lashings early in my career by knowledgeable mentors.
That's still 2-3x the velocity, but you get a better result because you went deeper on the paths-not-taken when designing.
A junior has managers pushing them to do more, faster. You review the code but do you really understand it the same as if you struggled through it? Do you ever build the muscle memory of what works and what doesn't?
It is the thought process that builds skills. I've seen some projects trying to be deliberate about learning from the agent as it writes the code, but I'm not sure there is a substitute for struggling and learning by doing.
Don't vibe-code. It's a joke someone coined in the moment that the industry somehow decided shouldn't be a joke, and some people now think it's a feasible way of developing stuff. It's not.
Find a better way of working together with the agent, where what's important gets reviewed by a human and you "outsource" the rest, and you'll end up with code and a design that work the way you'd have programmed them yourself; you just get there faster. I probably end up reviewing maybe 90% of the code that the agent writes, but it's still a hell of a lot more pleasant writing/dictating a few prompts than typing tens of thousands of characters and constantly moving between files. Maybe I'm just tired of typing...
I am trying super hard to give the AI the tools to validate everything.
I finish by opening a draft PR and then I go through doing a deep review myself.
If I didn't already have 10+ years experience, it would be hard to learn and not atrophy with easy shortcuts being so available.
You still need people who know stuff in detail and can own the code... for now
If you try to get AI to do anything meaningful, it will be riddled with footguns and bizarre choices. Maybe if you burn hundreds of dollars' worth of tokens that might not be the case, but for someone who spends $10 a month, it's just not worth the headache.
Besides, for me these are hobby projects and writing code is still fun; I just make AI write the boring parts (good examples: saving and loading, parsing of data files, and settings-menu functionality), but I keep it away from anything that needs a human's judgement to create.
I code review everything that Claude produces, and I'd estimate about 90-95% of the time, my reaction is WOW it works but too much code dude, let's take 3 hours to handhold you through simplifying it until nothing more can be removed.
You can also tell it to periodically summarize the "lessons learned" from the recent session(s).
You can certainly steer them a bit to reduce the issue parent talks about, but they still go into that direction whenever they can, adding stuff on top of stuff, piling hacks/shim on top of other hacks/shims, just like many human developers :)
Tell it "Do not change any files yet, just listen." Then we discuss the problem. Then I have it write to a file it's understanding of the change.
I review that carefully. Then I let it implement. I approve each change after manually looking at it. I already know what it should be doing.
Make smaller changes and check each one carefully before and after.
Can relate, but the one thing I do differently is teach the AI how to clean up after herself through follow-up prompts, sessions, and refining AGENTS.md. Static code-quality analysis tools are also really good at keeping the agent on its toes.
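For example, the cleanup section of my AGENTS.md reads roughly like this (paraphrased, not any kind of standard):

    ## Cleanup rules
    - After every change, run the formatter and linter, and fix what they flag.
    - Delete any dead code and unused imports you introduced; don't comment them out.
    - Remove temporary files and debug prints before declaring a task done.
    - Run the static analysis suite and address any new findings.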
But the industry is changing around you fast.
If MIT-bred devs were already building crap in FAANG before, the trend has been getting nothing short of worse across the industry.
Expectations are rising; the field is becoming a rat race of which engineer can output the most mediocre/acceptable/good-enough features in the least time possible.
Let me make this clear: you're in an increasingly rare bubble, where you have a luxury that is disappearing in this industry, plain and simple.
I have the fortune of having stellar devs around me, people that contributed to projects and software you use every day.
They are also outputting an order of magnitude more than they ever did, and none of them is getting genuinely better at the craft, but it is what it is.
On the flip side, I'm working on stuff FAR more challenging than I would ever be able to do on my own.
My brain is melting because I can barely keep up with learning how to figure out if I'm even doing what I'm trying to do.
AI might be making me a worse coder, but I don't care. If it hasn't "solved" coding now, I'm pretty confident it will long before my career is over. I don't have a job because I can write code - that's a small part of it. I have a job because I can get things to work. Anyone can code things that don't work (especially AI).
AI is certainly making me a far better overall engineer. Instead of spending my time trying to make the compiler happy (or fixing dynamic type errors at runtime), I can spend my time trying to solve substantially harder problems that I would never even dare try without an entire team to back me up (i.e. never).
Coding - imo - is VERY low on the totem pole of engineering skills.
I don't care if the function is pretty. I care if the system is upholding invariants and performing as expected, and there's adequate testing in place to PROVE to me that it ACTUALLY works.
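As a toy example of what I mean by proving invariants instead of admiring functions, here's a property-based test using Python's hypothesis library (the transfer function is a made-up stand-in):

    from hypothesis import given, strategies as st

    def transfer(a, b, amount):
        # Move `amount` from account a to account b, refusing overdrafts.
        if amount < 0 or amount > a:
            return a, b
        return a - amount, b + amount

    @given(st.integers(0, 10**6), st.integers(0, 10**6), st.integers(-100, 10**6))
    def test_money_conserved_and_never_negative(a, b, amount):
        new_a, new_b = transfer(a, b, amount)
        assert new_a + new_b == a + b  # no money created or destroyed
        assert new_a >= 0              # no overdraft, ever

Hundreds of generated cases against a stated invariant tell me far more than a pretty function body ever will.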
High performance concurrent code has always blurred the line between sorcery and arcana... Go didn't really solve that. Rust/Tokio didn't. Zig didn't. C certainly hasn't.
It might be easier to prove to yourself, if you're the one doing all the writing, but at the end of the day, code is rarely just for you...
You probably should demand the same level of proof whether you wrote it yourself and just "trust yourself, bro," or whether a Chinese Room wrote it for you.
I feel like I'm living in a Brave New World, and - at least for the time being - I'm enjoying it, even if it feels like I'm sprinting as fast as I can and still unable to keep up.
This is not a good thing. You should understand what your code does. Writing code nobody can understand is not a flex.
Reviewing code is pain, reviewing requirements and giving feedback feels more productive. I have to confront the full shape of the problem and I usually discover a few cans of worms that make me rethink my approach.
And of course the hedonic treadmill (if that's even valid any more, IDK) has reset the baseline so that anything less than the quick gratification feels like nothing. It makes the stuff I used to absolutely love feel like more of a chore compared to just cranking out features with code only an AI can love.
Entering the workforce happens at an age where people have built (some more rudimentary than others) a level of understanding and self control regarding delayed gratification and Type II fun.
Did you have the kind of life where you were never really challenged to build that skillset, or is the mental stimulation so strong for you when you use AI that it overcomes executive function?
Do you really think phrasing a question like this will ever induce a productive response?
I didn't learn how to study until my 20s. I didn't have willpower over eating and exercise until my body changed around 30 and I suddenly got fat, and then talked with friends who teased me for being less skilled at something than the teenage versions of themselves.
What's the saying: someone who's never smoked doesn't have to learn how to quit smoking?
I personally think it can be a great tool for learning but it's so easy to fall into the trap of getting AI to do everything for you.
I've also used it for personal projects like a Chip8 emulator I wrote in C where I'd managed to run a few basic games and ran out of steam. Used AI to help me implement the rest.
I've been using AI coding tools a lot lately, though I'm always in the loop. I write most of the important code by hand, but I like to send Claude Code or Codex off to try to come up with a solution in parallel to compare.
Having reviewed so much of my hand-written code side by side with AI-written alternatives, I am still amazed that anyone admits to letting AI write all of their code. Either you're working on much simpler problems than I am, or you don't really care about anything other than making the tests go green and waiting for bug reports to come back so you can feed them back into the LLM again.
Sometimes the coding tools come back with better ideas than I came up with. Sometimes my idea is much better. Most often, with medium-to-high-complexity problems, if the AI comes up with a working solution, at best it has enough problems that an attentive human reviewer would have rejected it. At worst, it creates a mess of spaghetti code with maintenance time bombs ticking away. And that's for one change. I can't imagine what a codebase would look like if you completely deferred to AI tools for everything.
This quote is even weirder because they claim to have been doing this for two years! Two years ago, coding tools were much worse than they are today. Using AI to write all of your code 2 years ago would have been a weird choice.
When I read posts like this I don't know what to think. Is this real? Or is it exaggerated for effect?
I also roll my eyes a little bit at the idea that not writing code for 1-2 years means you forget how to code. I've been back and forth between 100% management and 100% IC in my career. While there is a warm-up time to get back into coding, you should not completely forget how to code after such a short time. The only reason this person feels like they've forgotten how to code is that they've made a choice not to code for 2 years and, apparently, they don't feel like making any effort to change this. For someone who claims to love writing code, I don't get it. Something doesn't make sense about this writing.
I find that AI fails at things that are truly creative. I have been thoroughly unimpressed with ideas it has had or things it’s written for me. There’s still a lot of room for human creativity.
This sounds a lot like "You can skip the fundamentals of basketball and just focus on dunking!"
I have a pet theory that perhaps the optimal way to use AI will be more like an "exoskeleton" that turns you into a super-human programmer. Something that plugs the deficiencies of the human programmer, rather than replacing you entirely.
There are things that I think are very cool; there are lots of projects that I've sort of wanted to do for the last decade that I have pushed off because they're reasonably high effort and I don't want them that much, so being able to have a pretend intern write it for me has been great.
On the other hand, I do think that using Claude/Codex to do all the coding at work has become a little soul sucking. Now instead of being paid to do fun software work, a lot of my work still boils down to babysitting interns.
When I do get to work on projects that are interesting, it's still fun, because I can justify writing TLA+ and using it as a guiding spec for my projects (a toy example below). The problem is that most work really isn't that interesting: a lot of it is glorified SQL queries, or CRUD, or "put thing into Kafka in one place, and take it out in another place". Those jobs can be tedious, but they aren't interesting, and now instead of even getting that, I yell at Codex to do it and awkwardly sit and wait.
I didn't think I'd miss writing stupid CRUD apps, but here we are.
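For the curious: a guiding spec doesn't have to be elaborate. Even a throwaway module like this (a made-up toy, not from any real project) gives an agent an unambiguous contract to code against:

    ---- MODULE Counter ----
    EXTENDS Naturals
    VARIABLE x
    Init == x = 0
    Next == x' = x + 1
    Spec == Init /\ [][Next]_x
    ====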
If coding a new feature, I do one step and check the code: doing a git diff, reading the changes, or just asking Codex to show me the changes (the rhythm is sketched below).
If writing an article, I ask for only one paragraph. I read the paragraph, and if it is OK, I accept it; if it doesn't reflect my thoughts, I keep working on that one paragraph.
If doing data analysis with AI, I do one step of the analysis and ask the AI to display intermediate results so I can see whether everything is going in a good direction and there are no hallucinations; additionally, I have follow-up prompts for the AI to verify the results. If all looks good, then I continue to the next step.
I don't like the situation where I ask AI to do all the code changes, the whole article, or the entire data analysis in one pass with one prompt. It is simply impossible to check whether the AI is correct, and the results are not satisfactory. You can easily see this when asking AI to write a deep article with one prompt: you can clearly see that it doesn't reflect your thoughts.
Maybe step-by-step is the way to use AI and not feel dumber.
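For the coding case, the rhythm I mean is nothing fancy, just plain git between steps, roughly:

    git add -p                   # stage hunks one at a time, reading each diff
    git diff --staged            # re-read the whole staged change in one piece
    git commit -m "step N: ..."  # commit only once the step reads correctly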
So, for example, AI once deleted my project. I was able to recover it, but through a series of mistakes I lost version control, and IMO I lost a good version. (I think after abandoning that project and coming back, I was able to accomplish it.)
Another example, the one that's biting me the most: I wanted to create a copy.sh/v86-based thing where you can edit the .img files of distros and save them, all within the browser. I was able to run v86 in a custom way, but I wasn't able to mount the images or find a proper way to make it work.
And although this is just an optional project, and I just thought, hey, it would be fun to edit .img files in the browser, it now leaves me disappointed.
I think that disappointment is partly frustration at the thing not working, and partly the realization that I might be dropping the idea altogether. I must admit this is a field in which I have absolutely no expertise at all, but it still feels disappointing, and I've kept thinking about it for some time now.
I wonder how many people feel this way when AI is unable to make their project: frustrated, disappointed, even a pinch of panic. I think it's just wrong how much we are relying on LLMs at this point. It feels like the whole economy is doing what I am doing, but with billions of dollars.
Another thing I feel is that young and elderly people end up much the same when vibe-coding (yes, specs can help, but LLMs are still autocorrect on steroids). We are forsaking both junior developers and the expertise built up by senior developers as we replace them with these LLMs.
Do you think that in 2026 maybe rapid progress can also come from using the same primitives faster?
I'm still figuring this out but I'm certainly open to the possibility.
You need to spend time on coding without agents and writing without AI as practice if nothing else.
You should not get complacent in offloading all detail oriented work to agents.
I unironically believe this is a very good habit. When it comes to writing, instead of starting with AI, finishing a chapter by hand first and then asking AI to review it strikes the best balance.
This is where I'm at. I feel like I need AI to review everything.
I do write the initial crude proof-of-concept prototypes (uncommented, hardcoded variables, etc.), and AI does the productionizing. It has really allowed me to command a team of agents instead of keeping track of a bunch of humans of varying work ethic, skill, and ability to maintain high code quality. And often AI is very good at maintaining the patterns used in the codebase, or even keeping them to industry best practices.
When using AI you will no longer be writing so much in programming languages; English, or whatever language you talk to the LLM in, will be the main language.
Most people, given a nail gun, can't build a house; that's where the skill is...
I'm not someone whose validation came from the lines of code, but from the resulting working system.
Based on the MIT and MSFT studies.
The work rhythm has ballooned, and as every coworker is now pushing out work (generally mediocre but acceptable, thanks to strong codebase fundamentals and them being good engineers), it is increasingly becoming a rat race of who delivers more. Companies don't even need to promote AI productivity, because engineers being engineers will engineer the minimum effort required to deliver whatever output makes stakeholders happy.
I am less and less fond of this work.
I'm sure there will be people with different experiences, but I've never worked as much as I have in the last two years, and I'm burned out. I genuinely feel I've regressed as an engineer, and I see the same in my coworkers, some of them contributors to the highest-impact OSS projects you can think of.
Every day, I lean more and more toward changing industries.
I love code and programming and solving product problems. But the job has changed dramatically.
If the pay-to-comfort ratio weren't that good, I would've done it already.
It's hard to give up 6/7k+ net per month in the southern Mediterranean. I'm way better off financially than most US devs making even more; there's no comparison.
Yes -- now let's talk about the correct form of fighting back.
It is not "I don't want to feel self-doubt so I will suppress that feeling."
It is, "The self-doubt is valuable -- it's pushing me to improve."
The AI is never going to be able to say what you really mean. But it may inspire you to push harder to improve your ability to do that.
1. a general coding AI: Completely broken. Should auto-comment, but never does anymore. Stopped a while back, nobody seems to know why.
2. another general AI: You have to @-mention it. It reacts to the message with an <eyes emoji>, but never actually posts a comment?
3. a security bot. Comments, when it thinks there's a problem, in the most obtuse way possible. "SAST findings". But the findings are behind a link, and none of us devs are given access.
I could lean on and press the various people shoving AI down my gullet to like … look at this, and the actual lived experience of devs trying to derive productivity from this mess? But IDK what's in it for me, really.
Even Claude, when it worked, would comment in the most sociopathic manner possible: an English-prose description of the problem, attached to an utterly unrelated line of code. Part of that is probably GitHub, which does not let you attach comments to arbitrary lines of code in a review; only the blesséd lines can have comments. And literally none of our AIs can format their complaint as a freaking suggested change (i.e., the GitHub feature); no, instead I get English prose.
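For anyone who hasn't seen it: the feature I mean is the fenced suggestion block in a review comment, which, when attached to the right diff lines, renders as a one-click applicable change (the replacement line here is made up):

    ```suggestion
    retries = 3  # the corrected line(s) that replace the commented range
    ```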
Honestly for all I know we failed to pay the bill or something inane, but it would be nice if the AI could format an error message, or something.
Also, I feel like it's fine to let AI write your code. I felt very much like the OP did. A couple of things help keep my sanity. One is that, as developers, I think our job has evolved into knowing which decisions an AI makes are good and which are bad, whether in code or design; there is nowhere a developer (or, for that matter, a knowledge worker) can hide from AI. In this world you will be forced to communicate with these tools, partly because as a community we have decided (for better or worse) that AI should bring non-trivial productivity gains to software development.
The other is something I still want to validate: for those of us who are mediocre at coding, it might be a gift, because it frees up time, and thus mind space, to consider what we are actually good at.
I love coding, it always felt like Legos for adults. Not that Legos aren't also Legos for adults.
But there's no fighting the fact that we won't be writing 99% of the code anymore, so I take pleasure in crafting the specs and requirements clearly; that's where I put the effort.
And then to avoid having to babysit the agents to get them to stick to the plan, I built a super robust external orchestrator that forces multiple review and fix rounds until I get the result I want.
I'll be fully open sourcing that soon also https://engine.build
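The core of it is just a bounded review-and-fix loop. Sketched in Python with hypothetical stand-in callables (this is not the actual engine.build API):

    from typing import Callable

    def build_until_clean(spec: str,
                          generate: Callable[[str], str],
                          review: Callable[[str, str], list],
                          fix: Callable[[str, list], str],
                          max_rounds: int = 5) -> str:
        code = generate(spec)             # first agent pass from the spec
        for _ in range(max_rounds):
            issues = review(code, spec)   # reviewer diffs the code against the spec
            if not issues:
                return code               # converged: nothing left to flag
            code = fix(code, issues)      # targeted fix round, then re-review
        raise RuntimeError("no convergence; hand it back to a human")

The bound matters: without max_rounds, the agents will happily loop forever, piling fixes on top of fixes.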
Today I'm forcing myself to learn SwiftUI and type each character with my own hands, while a part of me asks, "Why are you wasting your time instead of prompting it and getting the UI you want in minutes?" Well, even if I use AI, I must know the domain I'm operating in to create good products instead of useless slop. Even though I've been coding for 20 years now, I still need to be humble to grow in anything new. I can vibe-code full apps, but I'm not going to pretend that my experience isn't playing a massive role in guiding the models.
Don't let AI take away your joy of building stuff; it's totally fine not being "productive" and taking your time. Just force yourself to have at least 2 AI-free days every week.
After using LLMs for a while, I have to admit it's pretty nice, and I like using it. I've been vibecoding a few apps, and it's a good dopamine hit to immediately see your ideas come to life. However, based on my experience, it will bite you if you trust it blindly. Even in my vibecoded projects, it keeps adding "features" without me asking for them. Since they're just pet projects, I don’t really care as long as the end result is what I'm expecting, but I don’t think companies will be as flexible. I also don't think customers would like it if features changed or got added with every new fix or update.
So this could go in a bunch of different directions from here, but to summarize the current situation:
A lot of companies are heading in this direction.
Without proper engineering, AI will easily write more code and potentially change the application unintentionally.
We will have fewer junior engineers entering the market because of fear around AI and reduced hiring.
AI usage will hit a critical point where it is making massive amounts of changes, and the people "prompting" it might start getting overwhelmed.
We will end up with more features that people have to keep in their heads. I don’t think we can trust LLMs 100%, and because of that, developers will still need to know exactly what the application does.
Eventually, there will be a lot of bugs, and developers will complain that we need additional human resources.
Hiring starts again.
I think, right now, the toughest position is for new developers, and the best position is for people already in the market.
If you don't do this constantly, LLMs can certainly lead you right down the Dunning-Kruger path (though that's a big oversimplification of a whole collection of psychological features, from idée fixe to narcissism to fear of failure/criticism). If you really work at getting the LLM into the proper state, it will happily rip your work apart in a rather cruel and indifferent manner, like an unsympathetic corporate gatekeeper who relishes exposing your flaws in a public setting. Debate club is another tactic that's a bit less harsh: you have the LLM flip back and forth between defense and prosecution of your work.
I think this should be the default setting, but it doesn't encourage engagement; the average customer will think the LLM is a mean jerk if it starts off like that.
As for the topic at hand: I work with someone who, whenever you ask them a question, says "AI says..." I'm not a big fan of that.