The other 20% is writing: policies, SOPs, audits, grants, performance reviews, etc.
I could probably automate over half my job in n8n in a weekend… hmm… actually might try that.
I believe it's pretty close to the article's thesis, just more prosaic.
And yes, the AI works great for some programming tasks, just not for everything or completely unsupervised.
(Yeah, I know, there's lots of instances of execs who got paid huge amounts of money and delivered abysmal results...)
They just create even more slop currently, which will be the case until someone realizes they aren't needed to produce slop at all.
My friends and I inspect code indirectly now - telling agents to write reports about certain aspects of the code and architecture, etc.
> My friends and I inspect code indirectly now - telling agents to write reports about certain aspects of the code and architecture, etc.
Doesn't this take longer than reading the code?
I can see how some of this is part of the future (I remember an article about Python modules having a big docstring at the top fully describing the public functions, where the author just updates this doc and regenerates the code fully, never reading it - I find that quite convincing), but in the end I just want the most concise language for what I'm trying to express. If I need an edge case covered, I'd rather have a very simple test making that explicit than more verbose forms. Until we have formal specifications everywhere, I guess.
But maybe I'm just not picturing what you mean exactly by "reports".
And plenty of prolific programmers are writing publicly about their AI use.
I find people tend to omit that on HN and folks dealing with different roles end up yelling at each other because those details are missing. Being an embedded sw engineer writing straight C/ASM is, for instance, quite different from being a frontend engineer. AI will perform quite differently in each case.
Why is this supposed to be a good thing?
Developers use it for grokking a codebase, for implementing boilerplate, for debugging. They don't need juniors to do the grunt work anymore, they can build and throw away, the language and technology moats get smaller.
The value of low level managers, whose power came from having warm bodies to do the grunt work, diminishes.
The bean counters will be like when does it pay for itself. Will it? IDK, IDC.
I know there's an attempt to shift the development part from developers to other laypeople, but I think that's just going to frustrate everyone involved and probably settle back down into technical roles again. Well paid? Unclear.
But because time is money, I think all the benefits go to the dev. The exec still needs the dev regardless
ICs dislike this because it raises expectations and puts the spotlight on delivery velocity. In a manufacturing analogy, it’s the same as adding robots that enable workers to pack twice as many pallets per day. You work the same hours, but you’re more tired, and the company pockets the profits.
Software Engineers are experiencing, many for the first time in their careers, what happens when they lose individual bargaining power. Their jobs are being redefined, and they have no say in the matter - especially in the US where “Union” is a forbidden word.
It doesn't help that the west has a clear bias wherein moving "up" is moving away from the work. Many executives often don't know what good looks like at the detail level, so they can't evaluate AI output quality.
Yes, we have craftsmanship, but at the end of the day everything is ephemeral and impermanent and the world continues on without remembering us.
I think both the IC and executive are correct in superposition.
I think another part of it is that AI tools demo really well, easily hiding how imperfect and limited they are when people see a contrived or cherry-picked example. Not a lot of people have a good intuition for this yet. Many people understand "a functional prototype is not a production app," but far fewer people understand "an AI that can be demonstrated to write functional code is not a software engineer," because this reality is rapidly evolving. In that rapidly evolving reality, people are seeing a lot of conflicting information, especially if you consider that a lot of that information is motivated (e.g., "AI is bad because it's bad to fire engineers," which, frankly, will not be compelling to some executives out there). Whatever the new reality is going to be, we're not going to find out one step at a time. A lot of lessons are going to be learned the hard way.
Yes, and they work really well for small side projects that an exec probably used to try out the LLM.
But writing code in one clean discrete repo is (esp. at a large org) only a part of shipping something.
Over time, I think tooling will get better at the pieces surrounding writing the code though. But the human coordination / dependency pieces are still tricky to automate.
I'm (mildly) excited by LLMs because I love a new shiny tool that does appear to have quite some utility.
My analogy these days is a screwdriver. Let's ignore screw development for now.
The first screwdrivers, which we still use, are slotted and have a habit of slipping sideways and jumping (camming out). That's err before LLMs ... something ... something.
Fast forward and we have Phillips and Pozi and electric drivers. Yes there were ratchet jobs, and I still have one, but the cordless electric drilldriver is nearly as magical as the Dr Who sonic effort! That's your modern LLM, that is.
Now a modern drilldriver can wrench your wrist if you are not careful and brace properly. A modern LLM will hallucinate like a nineties raver on ecstasy but if you listen carefully and phrase your prompts carefully and ignore the chomping teeth and keep them hydrated, you may get something remarkable out of the creature 8)
Now I only use Chat at the totally free level but I do run several on-prem models using ollama and llama.cpp (all compiled from source ... obviously).
I love a chat with the snappily named "Qwen3.5-35B-A3B-UD-Q4_K_XL" but I'm well aware that it is like an old school Black and Decker off of the noughties and not like my modern De Walt wrist knackerers. I've still managed to get it to assist me in getting PowerDNS running with DNSSEC and LUA, and configuring LACP and port channel/trunking and that on several switch brands.
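For anyone curious what the ollama side of that looks like: it mostly boils down to a small Modelfile. The model tag, parameter values, and system prompt below are illustrative assumptions, not the exact setup described above:

```
# Hypothetical Modelfile - tag and parameters are assumptions, adjust for your hardware
FROM qwen3:32b
PARAMETER temperature 0.6
PARAMETER num_ctx 8192
SYSTEM """You are a concise assistant for network and DNS configuration questions."""
```

Then something like `ollama create netbot -f Modelfile` followed by `ollama run netbot` gets you an interactive chat loop against the local model.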
You?
Like what - the world's most advanced blowjob?
I really think a lot of folks were conned by a smooth operator and a polished demo, so now everyone has to suffer through having this nebulous thing rammed down our throats regardless of its real utility, because people at higher pay grades believe it has utility.
It feels like a lot of “AI is inevitable; you are failing to make this abundant future inevitable by your skepticism.”
Not even sure if determinism is a good axis to analyze this problem. Also smells extremely like concept creep - do you mean "moving up the abstraction stack" as "non determinism" too?
I think that the simple explanation for why executives are so hyped about AI is simply that they're not familiar with its severe current limitations. For example, Garry Tan seems to really believe he's generating 10KLOC of working code per day; if he'd been a working developer he would have known he isn't.
E.g. when Jensen Huang said that you need to pair your $250k engineer with $250k of tokens.
I understand that developers feel their code is an art form and are pissed off that their life’s work is now a commodity; but, it’s time to either accept it and move on with what has happened, specialize as an actual artist, or potentially find yourself in a very rough spot.
I'm not being precious here or protective of my "art" or whatever. But I do find it sort of hilarious and obvious that someone on a data science team might not understand the aesthetic value of code, and I suspect anyone else who has worked on such a team/ with such a team can probably laugh about the same thing - we've uh... we've seen your code. We know you don't value aesthetic code lol. Single variable names, `df1`, `df2`, `df3`.
I'm not particularly uncomfortable at the moment because understanding computers, understanding how to solve problems, understanding how to map between problems and solutions, what will or won't meet a customer's expectations, etc., is still core to the job as it always has been. Code quality is still critical as well - anyone who's vibe-coded >15KLOC projects will know that models simply cannot handle that scale unless you're diligent about how it should be structured.
My job has barely changed semantically, despite rapid adoption of AI.
So, yes; I understand where you’re coming from. But; that’s not what we do.
https://degoes.net/articles/insufficiently-polymorphic
> My job has barely changed semantically, despite rapid adoption of AI.
it's coming... some places move slower than others, but it's coming
lol this is not why people do "df1", "df2", etc, nor are those polymorphic names but okay.
> it's coming... some places move slower than others, but it's coming
What is coming, exactly? Again, as said, I work at a company that has rapidly adopted AI, and I have been a long time user. My job was never about rapidly producing code so the ability to rapidly produce code is strictly just a boon.
You were either a very talented baby or we’re justified in questioning your ability to assess the correctness of nitpicky formalisms.
Which parts of it exactly? I've considered for loops and if branches "commodities" for a while. The way you organize code, the design, is still pretty much open and not a solved problem, including by AI-based tools. Yes we can now deal with it at a higher level (e.g. in prompts, in English), but it's not something I can fully delegate to an agent and expect good results (although I keep trying, as tools improve).
LLM-based codegen in the hands of good engineers is a multiplier, but you still need a good engineer to begin with.
If you look at the evolution of agent-written code you see that it may start out fine, but when you start adding more and more features, things go horrifically wrong. Let's say the model runs into a wall. Sometimes the right thing to do is go back into the architecture and put a door in that spot; other times the right thing to do is ask why you hit that wall in the first place, maybe you've taken a wrong turn. The models seem to pick one or the other almost at random, and sometimes they just blast a hole through the wall. After enough features, it's clear there's no convergence, just like what happened in Anthropic's experiment. The agents ultimately can't fix one problem without breaking something else.
I wish they could write working code; they just don't.[1] But man, can they debug (mostly because they're tenacious and tireless).
[1]: By which I don't mean they never do, but you really can't trust them to do it as you can a programmer. Knowing how to code, like knowing how to fly a plane, doesn't mean sometimes getting the right result. It means always getting the right result (within your capabilities, but those are usually known in advance in the case of humans).
But I will insist that executives are more driven by FOMO than a teenager.
If you are not, you either have a boring job or do not have any ideas that are worth prototyping asynchronously. Or haven't tried AI in the last ~3 months.
When you analyze this as "Management loves AI" and "workers hate it," it goes right back to 'who owns the means of production?', and can be seen clearly through Marx's critique.
IC can refer to people leading without direct reports, making $500k+ in comp.
Meanwhile executives see the money related numbers go up.
Narrator: there is not
In my systems programming job ICs have mostly avoided it because we don't have time to learn a new thing with questionable benefits. A lot of my team are really, really good programmers and like that aspect of the job. They don't want to turn any part of it over to a machine. Now if a machine could save us from ever dealing with Jira...
That said, I have begun using AI for some things and it is starting to be useful. It's still 50/50 though, with many hallucinations that waste time but some cases where it caught very simple bugs (syntax or copy/paste errors). I think the experience of, say, systems programmers is very different from that of Python/web folks though. AI does a great job with my helper scripts in Python.
Management needs to take their own medicine though. They continue to refuse to leverage AI to do things it could actually be good at. I give a duplicate status to management 3x/week now. Why? AI could handle tracking and summarizing it just fine. It could also produce my monthly status for me.
It accomplished this not simply by eliminating my overpaid bullshit job as parasite attractor; but by putting an end to its pathetic semblance of a premise: building software to be used by, uh, someone? for, uh, something?
The various entities requesting the work (or, in later years, the layers of barely-sentient intermediaries between me and said entities) were hardly if ever clear on how exactly this was supposed to produce value; but now they're free, too! Free from having to even try to understand how answering that question is relevant - so in the end it worked out for them as well!
I am finally at liberty to do something worthwhile with my life, and while at this point I realize it'll take me some time to even remember what "worthwhile" even was (or whether such a thing still exists in your imaginary world of personalized sensory bubbles), I do sleep a rich REM sleep knowing society is now capable of digging its own grave without my assistance. Seriously, I was looking at my bank account and getting a little worried.
I am told that mine is a minority position: if you happen to be the kind of person who believes that more is better, no matter more of what, rest assured you and your eventual progeny will be quite safe - for a while, anyway - in your new role as AI trainer (or is it AI fodder, let's let the market decide!)
Well, turns out when we are all busy looking the part, it becomes impossible for anyone to actually play the part; but also nobody notices, so this is fine too!
Just one request on my part: if possible, do shut up while figuring out how to better turn yourself and our world into paperclips, alright? Besides the ones that you recognize as people, a whole bunch of other people do live on this here planetation - and I hear they find all the AI blather to be mighty annoying.
It’s like Marc Andreessen bloviating about how AI will replace everyone except him.
To be fair, some of this is understandable. At some level, you’re just going to see some things as a bullet point in a daily/monthly/quarterly report and possibly a 10 minute presentation. You’re implicitly assuming that the folks under you have condensed this information into something meaningful.
embedded/cloud/IoT --> AI --> quantum…
When the company originally known as C3 Energy changes its name to C3.quantum, you'll know we're on to the next buzzword.
Thoughts and ideas as in "I will implement this in this structure, with these tradeoffs, and it will work with these 4 APIs and have no extra features, and here's how I'm (or an LLM with tools is) going to run it and test it".
Thoughts and ideas not as in "build Facebook" - a lot of people think AI can do that; it won't (but might pretend to), and it will just lead to failure.
My competitive edge did not diminish, it expanded.
- You ask someone to do it
- You check their work and they made some mistakes, but it's good enough to use
- You ultimately don't know if they're doing the best at their job but you have regular performance check-ins to be safe
As ICs we can complain all we want about the quality of AI, but as far as your manager goes - you using AI is not that much different to them having an employee.
For non-technical people, the current meteoric rise of AI is due to the fact that AI is generally synonymous with "it can talk". It never _really_ registered with the wider audience that image recognition, various filters, or whatever classifiers they stumbled upon are AI as well. What we have now is AI in the truest sense. And executives are primarily non-technical.
As for the technical people, we know how it works, we know how it doesn't work, and we're not particularly amused.
For executives, that's writing code. For ICs, it's other stuff.
It makes me think of an executive I once reported to who “increased velocity” by changing the utilization rate on a spreadsheet from 75% to 80%.
Executives do not need actively functional systems from AI to help with their own daily work. Nothing falls over if their report is not quite right. So they are seeing AI output that is more complete for their own purposes.
But also, AI is good enough to accelerate software engineering. To the degree that there are problems with the output, well, that's why they haven't fired all the engineers yet. And executives never really cared about code quality -- that is the engineers' problem.
What I'm trying to build for my small business client right now is not engineering, but it still requires some remaining employees. He's already automated a lot of it. But I'm trying to make a full version of his little call center that can run on one box like an H200. Which we can rent for like $3.59/hr. Which, if I remember correctly, is approximately the cost of one of his Filipino employees.
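A quick back-of-the-envelope on that rental figure; the $3.59/hr comes from above, but the always-on utilization is my own assumption:

```python
# Rough cost sketch. The hourly rate is from the comment above;
# the round-the-clock hours are an assumption, not the client's numbers.
gpu_rate_per_hr = 3.59           # rented H200, $/hr
hours_per_month = 24 * 30        # assume the box runs 24/7
gpu_cost = gpu_rate_per_hr * hours_per_month
print(f"${gpu_cost:,.0f}/month")  # prints "$2,585/month"
```

So an always-on box lands around $2,585/month, which is the number you'd actually compare against a full-time employee's monthly cost.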
Where we are headed is that the executives are themselves pretty quickly going to be targeted for replacement. Especially those that do not have firm upper class social status that puts them in the same social group as ownership.
Executives see this as way to replace labor.
The labor sees themselves being replaced.
This is a story as old as the hills.
That said, the central point of the TFA is spot-on, though it could be made more generally, as it applies to engineering as well as management: uncertainty rises sharply the higher you climb the corporate and/or seniority ladder. In fact, the most important responsibility at higher levels is to take increasing ambiguity and transform it into much more deterministic roles and tasks that can be farmed out to many more people lower on the ladder.
The biggest impact of AI is that most deterministic tasks (and even some surprisingly ambiguous ones) are now spoken for. This happens to be the bread and butter of the junior levels, and is where most of the job displacement will happen.
I would say the most essential skill now is critical thinking, and the most essential personality trait is being comfortable with uncertainty (or as the LinkedInfluencers call it, "having a growth mindset.") Unfortunately, most of our current educational and training processes fail to adequately prepare us for this (see: "grade inflation") so at a minimum the fix needs to start there.