To me this article sums up the most frustrating part about software engineers believing themselves to be the part of the business with the most complex, unknowable work.
"Most non-technical leaders have never really engaged with the real work of software and systems management. They don’t know what it’s like to update a major dependency, complete a refactor, or learn a new language."
_Every_ function in a tech business has hidden complexity. Most parts of the business have to deal with human, interpersonal complexity (like sales and customer support) far more than engineers do. By comparison, engineering only has to deal with the complexity of the computer, which is at least deterministic.
As a result, lots of engineers never learn how to present to the business the risk of the kinds of complexity they deal with. They would prefer to ignore the human realities of working on a team with other people and grumble that the salesperson turned CEO just doesn't get them, man.
As a SWE-turned-product-manager, you're in an ideal place to teach SWEs about:
- how to present to the business the risk of the kinds of complexity they deal with
- the human realities of working on a team with other people
- why the salesperson turned CEO just doesn't get them, man
_Every_ function in a tech business has hidden complexity.
That's just the way the world is.
Now, software is hard because the complexity isn't as visible to most of the org, but also because software people tend to be less than good at explaining that complexity to the rest of the org.
Tech debt is real, but so is the downside of building a system that has constraints that do not actually align with the realities of the business: optimizing for too much scale, too much performance, or too much modularity. Those things are needed, but only sometimes. Walking that line well (which takes some luck!) is what separates good engineering leadership from great engineering leadership.
Hey, I resemble that remark!
Yeah, I get where you're coming from, but I really do feel that it's more of a communication issue, along with the abstract nature of software. I mostly do data-related stuff, and have often had the experience of finding "wrong" data that doesn't have a large impact, and every time I need to remind myself that it might not matter.
You can also see this in the over-valuation of dashboards vs data engineering. Stakeholders lurve dashboards but value the engineering of pipelines much less highly, even though you can't have dashboards without data.
I didn't get that message at all. If anything they're saying that the complexity of PM work is entirely knowable, but that many engineers do not bother to learn it, because they do not acknowledge the existence of that complexity in the first place.
And honestly, they have a point! Our industry is rife with this attitude and it sucks.
Look at how many posts about totally disparate fields are on HN all the time where coders suddenly become experts in [virology|epidemiology|economics|etc] without so much as a passing acknowledgment that there are real experts in the field.
We even do it to ourselves - the frequent invocations of "pffft I can build that in a weekend" dismissals of other software projects, for example.
Shit is complex, that complexity is knowable to anyone willing to roll up their sleeves and put in the work, but there is definitely a toxic tendency for people in our field to be wildly dismissive of that complexity in fields that aren't ours. And yeah, that includes dismissing complexity in parts of our own field that aren't directly the act of programming.
This statement is one big pile of BS. From a practical standpoint there is nothing deterministic about software systems and their interactions in the enterprise.
One of the hidden variables is the very machine that is supposed to bring order to the chaos: product managers and their interactions with sales teams and upper management.
This could be bias talking, though. Is it common for sales or support teams to be given milestones that are off by 50%?
However, I disagree that engineers only have to deal with the complexity of the computer; instead, I argue they have to translate the messiness of the organization and of every customer of the program and make it coherent to an inflexible computer.
And every exception and case added is a permutation that echoes through the system. (And, quite often, outside the system due to shadow processes that people use unintended consequences of the system to encode.)
That said, it's why I have so many of my senior engineers start learning the language of the business, so that they can deliver those messages without me. It's part of the toolkit I expect them to learn.
Engineers outside business structures can scramble themselves and produce value. It's messy and inefficient, but doable. Sometimes, it's a lot of value.
This requires human communication. Engineer-to-engineer brawls. It's nothing like the well-oiled JIRA machine. It's a mess, but it is undeniably human.
I think that deserves a little more respect.
The article talks about JIRA metrics. Teams that measure productivity, time spent, code produced, deadlines. Isn't that a poor attempt at making human teamwork _deterministic_? Don't get me wrong, I don't think that's completely useless, but it certainly can be detrimental to both engineering and business.
I'm not saying you do those things or preach that pure metric-based system. However, you are making a comment on an article about it, and I think that in the middle of your rant, you reversed the roles a little bit.
> You feel great, until…you realize that your new powder room doesn’t have running water; it was never connected to the water main.
> You ask the AI to fix it. In doing so, it breaks the plumbing in the kitchen. It can’t tell you why because it doesn’t know. Neural systems are inherently black boxes; they can’t recognize their own hallucinations and gaps in logic.
I've seen plenty of humans causing problems where they didn't expect to, so it's not like using humans instead of AI prevents the problem being described here. Besides, even when AI hallucinates, when you interact with it again it is able to recognize its error and try to fix the mistake it made, just like a human would.
The article correctly describes tech debt as a tug-of-war between what developers see as important in the long-term versus immediate priorities dictated by business needs and decisions. It's hard to justify spending 40 man-hours chasing a bug which your customers hardly even notice. However, this equation fundamentally changes when you are able to put a semi-autonomous agent on that task. It might not be the case just yet but in the long run, AI will enable you to lower your tech debt because it dramatically reduces the cost of addressing it.
"recognize" is a strong claim
> and try to fix the mistake it made,
or double down on it
> just like a human would.
probably not, because the human is also responding to emotional dynamics (short and long term) which the AI only pretends to mimic.
This feels beyond generous. I'm sure I'm not the only one who has led my AI assistant into a corner it couldn't get itself out of.
My team recently did a vibe coding exercise to get a sense of the boundaries of current LLMs. I kept running into a problem where the LLM would code a function that worked perfectly fine. I'd request a tweak and it would sometimes create a new, independent function, and then continually update the new, completely orphaned function. Naturally, the LLM was very confident that it was making the changes, but I was seeing no result. I was able to manually edit the needed files to get unstuck, but it kept happening.
It will confidently explain version A is broken because it isn't version B, and version B is broken because it isn't version A. There's no "awareness" that this cycle is happening and it could go on indefinitely.
Yup, because the error becomes part of the prompt. Even if you tell it to do something different, it's basically impossible for the model to recover.
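A minimal sketch of the mechanism, in Python (the message-list shape and the `call_model` helper are hypothetical stand-ins, not any particular vendor's API):

    # Each retry appends to the same history, so the model's broken attempt
    # stays in its own context and keeps anchoring the next guess.
    def call_model(messages):
        # Hypothetical stand-in for a real chat-completion call;
        # returns a placeholder so the sketch runs on its own.
        return f"<reply conditioned on {len(messages)} prior messages>"

    history = [{"role": "system", "content": "You are a coding assistant."}]

    def ask(prompt):
        history.append({"role": "user", "content": prompt})
        reply = call_model(history)
        history.append({"role": "assistant", "content": reply})  # mistakes accumulate here
        return reply

    ask("Write a function that resizes the sidebar.")
    ask("That didn't work, fix it.")  # the broken attempt is still in `history`

    # The usual escape hatch: throw away the failed exchanges and start fresh.
    history = history[:1]
    ask("Write a function that resizes the sidebar, in under 20 lines.")

Which is roughly why "open a new chat" so often works better than arguing with the model in the same thread.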
Many many people report experiences that directly contradict what you say next. The type of large language model people currently label as AI does not learn and when asked to fix a problem it takes another guess, usually also wrong, sometimes creating other problems.
I think about the AI vending machine business simulation. Yes, LLMs were able to keep it running for a long time, but they were also prone to nosediving failure spirals due to simple errors a human would spot immediately.
People point to an infinity in the distance where these LLMs cross some incredibly high threshold of accuracy that lets us thoughtlessly replace human workers and thinkers with them... I have not seen sufficient evidence to believe this.
The big difference is that (most) humans will learn from this mistake. An LLM, however, is always a blank slate so will make the same mistakes over and over again.
And more generally, because of how these models work, it would be very strange if they didn't make the same mistakes as humans.
AIs are a horrible fit for the current system. They aren't really better or worse, but instead completely alien. From one angle they look like a good fit and end up passing initial checks, and even doing a somewhat decent job, but then a request comes in from another angle that results in them making mistakes that we would normally only associate with a complete beginner.
The question becomes: can we modify the existing systems, or design new ones, where the alien AIs do fit in? I think so. Not enough to completely replace all humans, but 8 people with AI systems being able to do the job 10 people used to do still means 2 people were replaced, even if no one person's entire job was taken over entirely by the AI.
What remains unknown is how far this can extend, between getting better AI and getting better systems where the AI can be slotted in. (And then to grandparent's point, what new roles in these systems become available for AI that weren't financially viable to turn into full jobs for humans.)
I can promise you there are plenty of companies that have not drank the kool aid and, while they might leverage LLM tools, they aren't trying to replace developers or expect 10x out of developers using these tools.
Any company that pushed for this kind of thing is clearly run by morons and you'd be smart to get the heck out of dodge.
I've seen companies previously adopt rules like "Everybody needs to use VSCode as their editor" and it's normally a sign that somebody in charge doesn't trust their engineers to work in the way that they feel most productive.
The main claim is fine: If you disregard human expertise, AI can end up doing more harm than good.
Biggest weakness: Strong sense of 'us vs them', 'Agile Industrial Complex' as a term for people working outside engineering, derogatory implication that the 'others' don't have common sense.
Why not address that no one knows how things will play out?
Sure, we have a better idea of how complex software can be, but the uncertainty isn't reserved to non-engineers.
Look at HN, professional software developers are divided in their hopes and predictions for AI.
If we're the experts on software, isn't our job to dampen the general anxiety, not stoke the fire?
It’s a large system, too large for any person to understand. This system is poorly and partially documented, and then only years after it’s put into place. The exact behaviour of the system is a closely-guarded secret: there are public imitations, but they don’t act the same. This system is known to act with no regard for correctness or external consistency.
And this system, and systems like it, are being used, widely, to generate financial presentations and court documents and generate software itself. They’re used for education and research and as conversation partners and therapists.
I have anxiety and honestly, I think other people should too.
I feel unease about LLMs too, along with a sense of appreciation for them.
But this post does not seek to persuade, otherwise it would be written for non-engineers. This post panders.
Here's a report about the bug: https://www.catonetworks.com/blog/cato-ctrl-poc-attack-targe...
My own notes on that here: https://simonwillison.net/2025/Jun/19/atlassian-prompt-injec...
> Go back and tell the CEO, great news: we can save eight times as much money by replacing our CEO with an AI.
The funny ("funny"?) thing is, this is proposal is somehow missing in most discussions about AI. But seriously, the quality of decision making would probably not suffer that much if we replaced our elites with LLMs, and it still would be way cheaper, on the whole (and with mostly the same accountability). But apparently people in charge don't see themselves as fungible and would rather not replace themselves with AI; and since they are the ones in charge then this, tautologically, won't happen.
However, there’s probably a kernel of truth. I guess the org tree should grow like log(n_employees). If AI really makes the workers multiple times more effective, then the size of the company could shrink, resulting in shorter trees. Maybe some layers could be replaced entirely by AI as well.
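A rough back-of-the-envelope sketch of that intuition in Python (my own assumption: a fixed span of control, so layers ≈ ceil(log_span(n)); the headcounts are made up for illustration):

    import math

    def layers(n_employees, span_of_control=7):
        # Management layers needed if every manager has `span_of_control` reports.
        if n_employees <= 1:
            return 0
        return math.ceil(math.log(n_employees, span_of_control))

    # If AI made each worker 2x, then 4x as effective, headcount might shrink like:
    for n in (1000, 500, 250):
        print(n, "employees ->", layers(n), "management layers")
    # 1000 -> 4, 500 -> 4, 250 -> 3: logarithmic growth means the tree
    # only gets shorter slowly, even with big headcount reductions.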
There is also maybe an opportunity for a radically different shape to solve the “LLM can’t bear responsibility” problem. Like a collection of guilds and independent contractors coordinated by an LLM. In that case the responsibility stays in the components.
So the shareholders vote to switch from ChatGPT to DeepSeek :) and they don't even have to pay out the parting bonus to it! And it's nobody's fault, just the market forces of nature that the quarter was rough. It's a win-win-whatever situation, really.
I find it so odd that 'agile' is something that people choose to hate. What dysfunctions did 'agile' itself bring that had not been there before? Didn't managers before 2001 demand new features for their products? Did they use to empathise more with engineering work? If they hadn't yet learnt about t-shirt sizes, didn't they demand estimates in specific time periods (months, days, hours)? Didn't they make promises based on arbitrary dates and then press developers to meet those deadlines, expecting them to work overtime (as can be inferred from agile principle 8: "agile processes promote sustainable development ... Developers should be able to maintain a constant pace indefinitely")? What sins has 'agile' committed, apart from inadvertently unleashing an army of 'scrum masters' who discovered an easy way to game the system and 'break into tech' after taking a two-day course?
Because it adds hours of nearly pointless meetings to your workweek? The time spent in standups and retros and sprint planning and refinement that imo add nearly no value is shocking...
I worked in finance once with Agile, where there exists in the culture an infinite growth mindset. So I found that we were putting a metric on everything possible, expecting future "improvements", and people's salaries were dependent on it. Some companies probably don't suffer from this though.
> ...
> I worked in finance once with Agile ... I found that we were putting a metric on everything possible, expecting future "improvements" and people's salaries were dependent on it.
It's fascinating to me how different a meaning different people put into the word 'agile'. The 'agile' of the founders was about small teams, close collaboration with the customer, and frequent feedback based on the real emerging working software, which allowed the customer to adapt and change their mind early (hence 'agile', as in 'flexible'). What it contrasted itself to was heavy, slow-moving organisations, with large interdependent teams, multiple layers of management between developers and customers, and long periods of planning (presentations, designs, architecture, etc.) without working code to back it up. All this sounds to me like an intuitively good idea and a natural way to work. None of that resembles the monstrosity that people complain about while calling it 'agile'. In fact, I feel that this monstrosity is the creature of the pre-agile period that appropriated the new jargon.
Connect to the business.
I often see engineers focus on solving cool, hard problems, which is neat. I get it, it's fun.
But having an understanding of business problems, especially strategic ones, and applying technology as needed to solve them--well, if you do that you stand out and are more valuable.
These types of problems tend to be more intractable, cross departments, and are techno-social rather than purely technical.
So it may take some time to learn, but that's the path I'd advise you and other ICs to follow.
This is such excellent advice, and it will keep you relevant as an engineer, so that you know the thing you're building solves the actual problem.
Connecting to the business keeps you valued at your current job. You do things that are unusual in industry, to create great impact for your company.
But if you tell how you achieved those things in an interview, the decision maker, who is usually another technical person, will seriously raise their eyebrows. For your next job, it is best to nail industry-accepted techniques, i.e. coding and system design interviews.
Stick to business impact in your current role too much, and you become irrelevant to your next job (unless via referral or unconventional routes).
This holds true even up to senior leadership roles.
Why would they raise their eyebrows? Do you mean they wouldn't understand or care?
> via referral
Most of my jobs have been through referral.
I think it is a far superior way to get hired when compared to the morass of the normal technical interview. So I'd actually optimize for that.
One can only know so much. If, on top of all the intricacies of distributed systems, software engineering, databases, leadership, and a long list of other things, we also need to know about the "business", well, what are we then? How do we acquire so much knowledge without sacrificing our entire free time?
Sure, there are individuals out there who know a lot about many things, and those are probably the ones companies want to hire the most. But what about the rest of us?
No, they don't. They want specialists, as you point out.
This will feed into existing mistrust: that developers are whiny, just trying to pad out their resumes with the new shiny, tinkering as opposed to delivering value; and that the C-suite are clueless and don't understand engineering. But we've never had a tool before (except maybe outsourcing) that can present itself to either party as either good AND bad depending on your beholding eye. So I feel like the coming years could be politically disastrous.
One thing I find curious is how the biggest tech companies today got to where they are by carefully selecting 10x engineers, working hard on recruitment and trying to only select the very best. This has given them much comfort and a hugely profitable platform, but now some of them seek to undermine and reverse the very strategy that got them there, in order to justify their investment in the technology.
For the cynics, the question is, how long can the ship keep holding its course from the work already done combined with AI-generated changes? As we see with Twitter and Musk's ad-hoc firing strategy, the backend keeps trundling along, somewhat vindicating his decision. What's the canary for tech companies that spend the next few years firing devs and replacing them with AI?
Another curious thought is the idea that concepts of maintainability will fly out the window, that the C-suite will pressure engineering to lower pull request standards to accept more autonomous AI changes. This might create an element of hilarity where complex code bases get so unmaintainable to the eye that the only quick way of understanding them will be to use an AI to look at them for you. Which leads to what I think a long-term outcome of generative AI might be: that it ends up being a layer between all human interaction, for better, for worse.
I believe it's possible to see early seeds of this in recruitment, where AI is used at the recruitment end to filter resumes and some applicants are using AI to generate a "tailor-made" resume to adapt their experience to a given job posting. It's just AI talking to AI, and I think this might start to become a common theme in our society.
That would make for a very good movie.