Not that I disagree: I'm on record agreeing with the article months ago. Folks in labs have probably seen it coming for years.
Yes we’ve seen major improvements in software development velocity - libraries, OSes, containers, portable bytecodes - but I’m afraid we’ve seen nothing yet. Claude Code and Codex are just glimpses into the future.
Correlation doesn't say anything about the sensitivity/scaling. (I recognize that my original comment didn't quite make this point, though the correlation is definitely not 100%, so that point does still stand.)
Can you note the difference between the earth being lit by torches, candles, kerosene lamps and incandescent bulbs, versus LED lights? LED isn't glowing harder, it just wastes less energy.
A rocket stove, or any efficient furnace, can extract vastly more energy from the same fuel source than an open fire. I assume combustion engines have had significant efficiency improvements since they were first introduced. And electric motors are almost completely efficient - especially when fed by an efficient, clean/renewable source.
How about the computing power of a smartphone versus a supercomputer from 1980?
What is more energy efficient, a carpenter working with crude stones or with sharp chisels?
And we can, of course, put aside whether any measurement of economic value is actually accurate/useful... A natural disaster is technically good for many economic measures, since the destruction doesn't get measured and the wealth invested in rebuilding just counts as economic activity.
And, of course, then there are creeptocurrencies, which use an immense amount of energy to do something that was previously trivial. And worse when they're used in place of cash. But even there, some are more efficient than others - not that anyone who uses them actually cares.
If we use about 20 TW today, in a thousand years of 5% growth we’d be at about 3x10^34. I think the sun is around 3.8x10^26 watts? That gives us about 8x10^7 suns worth of energy consumption in 1000 years.
If we figure 0.004 stars per cubic light-year, we end up in that ballpark in a thousand years of uniform spherical expansion at C.
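For anyone who wants to sanity-check those figures, a quick back-of-the-envelope in Python, using only the numbers quoted above:

```python
import math

P0 = 20e12      # ~20 TW of current consumption (from the comment above)
growth = 1.05   # 5% annual growth
years = 1000

P = P0 * growth ** years          # power demand after 1000 years
sun = 3.8e26                      # rough solar luminosity in watts
print(f"{P:.1e} W ~= {P / sun:.1e} suns")   # ~3.1e34 W, ~8e7 suns

# Stars inside a sphere expanding at c for 1000 years,
# at ~0.004 stars per cubic light-year:
stars = 0.004 * (4 / 3) * math.pi * 1000 ** 3
print(f"~{stars:.1e} stars within 1000 ly")  # ~1.7e7 stars, same ballpark
```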
But that assumes millions (billions?) of probes traveling outward starting soon, and no acceleration or deceleration or development time… so I think your claim is likely true, in any practical sense of the idea.
Time to short the market lol.
AI capabilities are growing exponentially thanks to exponential compute/energy consumption, but also thanks to algorithmic improvements. We've got an existence proof that human-level intelligence can run on 20W of power, so we've got plenty of room to offset the currently-missing compute.
- Other input than a deck of cards. Terminals and teletypes were a revolution.
- Assembly was much better than hardware switches.
- Also, a proper keyboard input against some "monitor" software was zillions better than, again, a deck of cards/hardware toggles. When you can have a basic line editor and dump your changes to a paper tape or print your output, you have live editing instead of suffering batch jobs.
A Knowledge Pool is the reservoir of shared knowledge that a group of people have about a particular subject, tool, method, etc. In product strategy, knowledge pools represent another kind of moat, and a form of leverage that can be used to grow or maintain market share.
Usage: Resources are better spent on other things besides draining the knowledge pool with yet another new interface to learn and spending time and money filling it up again with retraining.
For instance, tons of people know how to use Adobe products like Photoshop, by way of deliberate inaction on the part of Adobe around product piracy outside of workplaces. With this large knowledge pool entering into the workforce, users were able to convince workplaces to adopt Adobe products that they were already familiar with.
That wouldn't be institutional knowledge, but a pool of knowledge that institutions can take actions (or inaction, as in the case above) to influence.
As a developer for almost 30 years now, if I think about where most of my code went, I would say, quantitatively, to the bin.
I processed a lot of data, dumps and logs over the years. I collected statistical information, mapped flows, created models of the things I needed to understand. And this was long before any "big data" thing.
Nothing changed with AI. I keep doing the same things, but maybe the output has colours.
I think I've overall had just 2 or 3 projects where anyone has actually even tried the thing I've been working on.
This is why you need to find emotional significance for your life (traveling, family, art, etc...) outside of this claustrophobic work.
A chef reflecting on their life would hardly lament that every meal they'd ever crafted ended up in the bin (or the toilet).
I won't read intent into the text, because I haven't checked any other posts from the same guy.
That said, I think this revolution is not revolutionary yet. Not sure if it will be, but maybe?
What is happening is that companies are going back to a "normal" number of people in software development. Before, it was because of the adoption of custom software, later because of labour shortage, then we had a boom because people caught on to it as a viable career, but then it started scaling down again because one developer can (technically) do more with AI.
There are huge red flags with "fully automated" software development that are not being fixed, but for those outside the area of expertise they don't seem relevant. With newer restrictions related to cost and hardware, AI will be an even worse option unless there is some sort of magic that fixes everything related to how it writes code.
The economy (all around the world) is bonkers right now. Honestly, I saw some Jr Devs earning 6-figure salaries (in USD) and doing less than what me and my friends did when we were Jr. There is inflation and all, but the numbers don't seem to add up.
Part of it all is a re-normalisation, but part of it is certainly a lack of understanding of software and/or engineering.
Current tools, and I include even the likes of Kiro, Antigravity and whatever, do not solve my problems, they just make my work faster. Easier to look for code, find data and read through blocks of code I haven't seen in a while. Writing code, not so much better. If it is simple and easy it certainly can do it, but for anything more complex it seems that it is faster and more reliable to do it myself (and probably cheaper).
The following is just disingenuous:
>industrialisation of printing processes led to paperback genre fiction
>industrialisation of agriculture led to ultraprocessed junk food
>industrialisation of digital image sensors led to user-generated video
Industrialization of printing was the necessary precondition for mass literacy and mass education. The industrialization of agriculture also ended hunger in all parts of the world which are able to practice it and even allows for export of food into countries which aren't (Without it most of humanity would still be plowing fields in order not to starve). The digital image sensor allows for accurate representations of the world around us.
The framing here is that industrialization degrades quality and makes products into disposable waste. While there is some truth to that, I think it is pretty undeniable that there are massive benefits which came with it. Mass produced products often are of superior quality and superior longevity and often are the only way in which certain products can be made available to large parts of the population.
>This is not because producers are careless, but because once production is cheap enough, junk is what maximises volume, margin, and reach.
This just is not true and goes against all available evidence, as well as basic economics.
>For example, prior to industrialisation, clothing was largely produced by specialised artisans, often coordinated through guilds and manual labour, with resources gathered locally, and the expertise for creating durable fabrics accumulated over years, and frequently passed down in family lines. Industrialisation changed that completely, with raw materials being shipped intercontinentally, fabrics mass produced in factories, clothes assembled by machinery, all leading to today’s world of fast, disposable, exploitative fashion.
This is just pure fiction. The author is comparing the highest quality goods at one point in time, which people took immense care of, with the lowest quality stuff people buy today, which is not even close to the mean clothing people buy. The truth is that fabrics have become far better, far more durable and versatile. The products have become better; what has changed is the attitude of people towards their clothing.
Lastly, the author is ignoring the basic economics which separate software from physical goods. Physical goods need to be produced, which is almost always the most expensive part. This is not the case for software, distributing software millions of times is not expensive and only a minuscule part of the total costs. For fabrics industrialization has meant that development costs increased immensely, but per unit production costs fell sharply. What we are seeing with software is a slashing of development costs.
The Industrial Revolution created a flywheel: you built machines that could build lots of things better and for less cost than before, including the parts to make better machines that could build things even better and for less cost than before, including the parts to make better machines... and on and on.
The key part of industrialisation in the 19th-century framing is that you have in-built iterative improvement: by driving down cost, you increase demand (the author covers this), which increases investment in driving down costs, which increases demand, and so on.
Critically, this flywheel has exponential outputs, not linear. The author shows the Jevons paradox, and the curve is right there - note the lack of a straight line.
I'm not sure we're seeing this in AI software generation yet.
Costs are shifting in people's minds, from developer salaries to spending on tokens, so there's a feeling of cost reduction, but that's because a great deal of that seems to be heavily subsidised today.
It's also not clear that these AI tools are being used to produce exponentially better AI tools - despite the jump we saw around GPT-3.5, quantitative improvement in output seems to remain linear as a function of cost, not exponential. Yet investment input seems to be exponential (this makes it feel more like a bubble).
I'm not saying that industrialisation of the type the author refers to isn't possible (and I'd even say most industrialisation of software happened back in the 1960s/70s), or that the flywheel can't pick up with AI, just that we're not quite where they think it is.
I'd also argue it's not a given that we're going to see the output of "industrialisation" drive us towards "junk" as a natural order of things - if anything we'll know it's not a junk bubble when we do in fact see the opposite, which is what optimists are betting on being just around the corner.
Take this for example:
> Industrial systems reliably create economic pressure toward excess, low quality goods.
Industrial systems allow for low quality goods, but also they deliver quality way beyond what can be achieved in artisanal production. A mass produced mid-tier car is going to be much better than your artisanal car.
Scale allows you not only to produce more cheaply, but also to take quality control to the extreme.
I don't think this is true in general, although it may be in certain product categories. Hand-built supercars are still valued by the ultra-wealthy. Artisanal bakeries consistently make better pastries than anything mass produced... and so on
Perhaps an industrial car is better than your or my artisanal car, but I'm sure there are people who build cars by hand of very high quality (over the course of years). Likewise fine carpentry vs mass produced stuff vs IKEA.
Or I make sourdough bread and it would be very impractical/uncompetitive to start selling it unless I scaled up to make dozens, maybe hundreds, of loaves per day. But it's absolutely far better than any bread you can find on any supermarket shelf. It's also arguably better than most artisanal bakeries who have to follow a production process every day.
This has never been true for "artisanal" software. It could be used by nobody or by millions. This is why the economic model OP proposes falls apart.
I think automobiles are a bad example: I'd trust the reliability and quality of a mass produced Toyota or Honda over a hand-made Ferrari. (Of course there are bad mass produced cars as well.)
But reliability isn't the only measure of quality.
The whole premise of AI bringing democratization to software development and letting any layperson produce software signals a gross misunderstanding of how software development works and the requirements it should fulfill.
With my last side project, I became frustrated with my non-technical founder because he would have a lot of vague ideas and in his mind, he was sure that he had a crystal clear vision of what he wanted... But it was like, every idea he had, I was finding massive logical holes in them and finding contradictions... Like he wanted a feature and some other feature but it was physically impossible to have both without making the UX terrible.
And it wasn't just one time, it was constantly.
He would get upset at me for pointing out the many hurdles ahead of time... When in fact he should have been thanking me for saving us from ramming our heads into one wall after another.
What I want is software that can glue these things together. Each week, announce the fixture and poll the team to see who will play.
So far, the complete fragmentation of all these markets (fixtures, chat) has made software solutions uneconomic. Any solution's sales market is necessarily limited to a small handful of teams, and will quickly become outdated as fixtures move and teams evolve.
I'm hopeful AI will let software solve problems like this, where disposable code is exactly what's needed.
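As a rough illustration of how little code that kind of disposable glue needs, here's a minimal sketch, assuming a hypothetical fixtures.csv and a hypothetical incoming-webhook URL for the team chat:

```python
import csv
from datetime import date, timedelta

import requests  # assumes the team chat exposes a simple incoming-webhook URL

WEBHOOK_URL = "https://chat.example.com/hooks/our-team"  # hypothetical
FIXTURES_CSV = "fixtures.csv"  # hypothetical: columns date,opponent,venue

def next_fixture():
    """Return the first fixture within the coming week, or None."""
    today, week = date.today(), date.today() + timedelta(days=7)
    with open(FIXTURES_CSV, newline="") as f:
        for row in csv.DictReader(f):
            when = date.fromisoformat(row["date"])
            if today <= when <= week:
                return row
    return None

fixture = next_fixture()
if fixture:
    requests.post(WEBHOOK_URL, json={
        "text": f"Next game: {fixture['opponent']} at {fixture['venue']} "
                f"on {fixture['date']}. Reply YES/NO so we know who's playing."
    })
```

Cron it weekly and you're done; the economics only break down if you try to sell it to other teams.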
Quite the opposite is true. For a large proportion of people, they would increase both the number of years they live and their quality of life by eating less.
I think the days where more product is always better are coming to an end - we just need to figure out how the economy should work.
The mass production of unprocessed food is not what led to the production of hyper processed food. That would be a strange market dynamic.
Shareholder pressure, aggressive marketing and engineering for super-palatable foods are what led to hyper processed foods.
I think some people do instinctively feel like all different kinds of software have different shelf lives or useful lifetimes for different reasons.
But there's always so much noise it's not very easy to get the expiration date correct.
Mass production is pretty much a given when it comes to commodities, and things like long shelf life are icing on the cake.
The inversion comes when mass production makes the highly processed food more affordable than the unprocessed. After both have scaled maximally, market forces mean more than the amount of labor that was put in.
Strange indeed.
Another common misconception is that it is now easier to compete with big products, as the cost of building those products will go down. Maybe you think you can build your own office suite and compete with MS Office, or build a SAP with better features and quality. But what went into this software is not just code, but decades of feedback, tuning and fixing. The industrialization of software cannot provide that.
On the contrary, this is likely the reason why we can disrupt these large players.
Experience from 2005 just doesn't hold that much value in 2025 in tech.
That would be why a significant portion of the world's critical systems still run on Windows XP, eh?
But taking out features is difficult - even when they have near-zero value.
That's why it sometimes makes sense for new players to enter the market and start over - without the legacy.
This is indeed one of the value propositions in the startup I work in.
So the things you mention are indeed experience you need to get rid of as you move to other software stacks and other technologies.
Basically every company that does anything non-trivial could benefit from tailor-made software that supports their specific workflow. Many small companies don't have that, either they cannot afford their own development team, or they don't know that/how software could improve their workflow, or they are too risk-averse.
Heck, even my small family of 4 persons could benefit from some custom software, but only in small ways, so it's not worth it for me to pursue it.
Once we're at the point where a (potentially specialized) LLM can generate, securely operate and maintain software to run a small to medium-sized business, we'll probably find that there are far more places that could benefit from custom software.
Usually if you introduce, say, an ERP system into a company that doesn't use one yet, you need to customize it and change workflows in the company, and maybe even restructure it. If it were cheap enough to build a custom ERP system that caters to the existing workflows, that would be less disruptive and thus less risky.
Games have a ton of demand for code that isn't readily shareable but also needs to be done quickly.
I have some programming ability and a lot of ideas, but would happily hire someone to realize those ideas for me. The idea I have put the most time into took me the better part of a year to sort out all the details of, even with the help of AI; most programmers could probably have done it in a night, and with AI could write the software in a few nights. I would have my software for an affordable price, and they could stick it in their personal store so others could buy it. If I am productive with it and show its utility, they will sell more copies of it, so they have an incentive to work with people like me and help me realize my ideas.
Programming is going to become a service instead of an industry, the craft of programming will be for sale instead of software.
As someone who has worked in two companies that raised millions of dollars and had a hundred people tackling just half of this, tax software, you are in for a treat.
Edit: Just noticed I said "any business", that was supposed to be "any small business." Edited the original post as well.
Edit: And if I were using C or C++ above, my lack of capitalization would either provoke an error too OR passably continue forward, referencing the wrong variable and resulting in a similar error to your transposition.
Oh wait. It is already a thing.
What are the constraints with LLMs? Will an Anthropic, Google, OpenAI, etc, constrain how much we can consume? What is the value of any piece of software if anyone can produce everything? The same applies to everything we're suddenly able to produce. What is the value of a book if anyone can generate one? What is the value of a piece of art, if it requires zero skill to generate it?
The important thing is that goods =/= software. I, as an end user of software, rarely need specialized software. I don't need an entire app generated on the spot to split the bill and remember the difference if I have a calculator.
So, yes, we are industrializing software, but this reach that people talk about (I believe) will be severely limited.
The idea of automation creating a massive amount of software sounds ridiculous. Why would we need that? More games? They can only be consumed at the pace of the player. Agents? They can be reused once they fulfill a task sufficiently.
We're probably going to see a huge amount of customization where existing software is adapted to a specific use case or user via LLMs, but why would anyone waste energy to re-create the same algorithms over and over again.
I'm personally doing just that because I want an algorithm written in C++ in a LGPL library working in another language
I like the article except the premise is wrong - industrial software will be high value and low cost as it will outlive the slop.
...Or so think devs.
People responsible for operating software, as well as people responsible for maintaining it, may have different opinions.
Bugs must be fixed, underlying software/hardware changes and vulnerabilities get discovered, and so versions must be bumped. The surrounding ecosystem changes, and so, even if your particular stack doesn't require new features, it must be adapted (a simple example: your React frontend breaks because the nginx proxy changed its subdirectory).
I am certain cost can go down there, but that will only compete against SaaS where the marginal cost of adding another customer is already zero.
Yeah, that's a part of software that's often overlooked by software developers.
Are they, though? I am not aware of any indicators that software costs are precipitously declining. At least as far as I know, we aren't seeing complements of software developers (PMs, sales, other adjacent roles) growing rapidly, which would indicate a corresponding supply increase. We aren't seeing companies like Microsoft or Salesforce or Atlassian or any major software company reduce prices due to a supply glut.
So what are the indicators (beyond blog posts) this is having a macro effect?
If that wasn't the case, every piece of software could already be developed arbitrarily quickly by hiring an arbitrary amount of freelancers.
Low cost/low value software tagged as disposable usually means development cost was low, but maintenance cost is high; and that's why you get rid of it.
On the other hand, the difference between good and bad traditional software is that, while development cost is always going to be high, you want maintenance cost to be low. This is what industrialization is about.
Could you say more about what you think it would look like for LLMs to genuinely help us deal with complexity? I can think of some things: helping us write more and better tests, fewer bugs, helping us get to the right abstractions faster, helping us write glue code so more systems can talk to each other, helping us port things to one stack so we don't have to maintain polyglot piles of stuff (or conversely helping us not worry about picking and choosing the best stuff from every language ecosystem).
I partially agree. LLMs don't magically increase a human's mental capacity, but they do allow a given human to explore the search space of e.g. abstractions faster than they otherwise could before they run out of time or patience.
But (to use GGP's metaphor) do LLMs increase the ultimate height of the software mountain at which complexity grinds everything to a halt?
To be more precise, this is the point at which the cost of changing the system gets prohibitively high, because any change you make will likely break something else. Progress becomes impossible.
Do current LLMs help us here? No, they don't. It's widely known that if you vibe code something, you'll pretty quickly hit a wall where any change you ask the LLM to make will break something else. To reliably make changes to a complex system, a human still needs to really grok what's going on.
Since the complexity ceiling is a function of human mental capacity, there are two ways to raise that ceiling:
1. Reduce cognitive load by building high-leverage abstractions and tools (e.g. compilers, SQL, HTTP)
2. Find a smarter person/machine to do the work (i.e. some future form of AI)
So while current LLMs might help us do #1 faster, they don't fundamentally alter the complexity landscape, not yet.
https://bsky.app/profile/sunshowers.io/post/3mbcinl4eqc2q
What software developers actually do is closer to the role of an architect in construction or a design engineer in manufacturing. They design new blueprints for the compilers to churn out. Like any design job, this needs some actual taste and insight into the particular circumstances. That has always been the difficult part of commercial software production and LLMs generally don't help with that.
It's like thinking the greatest barrier to producing the next great Russian literary novel is not speaking Russian. That is merely the first and easiest barrier, but after learning the language you are still no Tolstoy.
They're explicitly saying that most software will no longer be artisanal - a great literary novel - and will instead become industrialized - mass-produced paperback garbage books. But also saying that good software, like literature, will continue to exist.
Perhaps a good analogy is the spreadsheet. It was a complete shift in the way that humans interacted with numbers. From accounting to engineering to home budgets - there are few people who haven't used a spreadsheet to "program" the computer at some point.
It's a fantastic tool, but has limits. It's also fair to say people use (abuse) spreadsheets far beyond those limits. It's a fantastic tool for accounting, but real accounting systems exist for a reason.
Similarly, AI will allow lots more people to "program" their computer. But making the programming task go away just exposes limitations in other parts of the "development" process.
To your analogy I don't think AI does mass-produced paperbacks. I think it is the equivalent of writing a novel for yourself. People don't sell spreadsheets, they use them. AI will allow people to write programs for themselves, just like digital cameras turned us all into photographers. But when we need it "done right" we'll still turn to people with honed skills.
It's the article's analogy, not mine.
And, are you really saying that people aren't regularly mass-vibing terrible software that others use...? That seems to be a primary use case...
Though, yes, I'm sure it'll become more common for many people to vibe their own software - even if just tiny, temporary, fit-for-purpose things.
I think there are some people with limited, or no, programming experience who are vibe coding small apps out of nothing. But I think this is a tiny fraction of people. As much as the AI might write code, the tools used to do that, plus compile, distribute etc are still very developer focused.
Sure, one day my pastor might be able to download and install some complete environment which allows him to create something.
Maybe it'll design the database for him, plus install and maintain the local database server for him (or integrate with a cloud service.)
Maybe it'll get all the necessary database and program security right.
Maybe it'll integrate well with other systems, from email to text-import and export. Maybe that will all be maintainable as external services change.
Maybe it'll be able to do support when the printing stops working, or it all needs to be moved to a new machine.
Maybe this environment will be stable enough for the years and decades that the program will be used for. Maybe updating or adding to the program along the way won't break existing things.
Maybe it'll work so well it can be distributed to others.
All this without my pastor even needing to understand what a "variable" is.
That day may come. But, as well as it might or might not write code today, we're a long long way from this future. Mass producing software is a lot more than writing code.
We are now undergoing a Cambrian explosion of bespoke software vibe coded by a non-technical audience, and each one brings with it new sets of failure modes only found in their operational phase. And compared to the current state, effectively zero training data to guide their troubleshooting response.
Non-linearly increasing the surface area of software to debug, and inversely decreasing the training data to apply to that debugging activity will hopefully apply creative pressure upon AI research to come up with more powerful ways to debug all this code. As it stands now, I sure hope someone deep into AI research and praxis sees this and follows up with a comment here that prescribes the AI-assisted troubleshooting approach I’m missing that goes beyond “a more efficient Google and StackOverflow search”.
Also, the current approach is awesome for me to come up to speed on new applications of coding and new platforms I’m not familiar with. But for areas that I’m already fluent in and the areas my stakeholders especially want to see LLM-based amplification, either I’m doing something wrong or we’re just not yet good at troubleshooting legacy code with them. There is some uncanny valley of reasoning I’m unable to bridge so far with the stuff I’m already familiar with.
Missing the point. The barrier to making software has lowered substantially. This now makes mediocre devs less mediocre, and for a lot of businesses out there being slightly less mediocre is all they need most of the time. Needing decent devs 20-40% of the time is already a big win in terms of expenses: make small, quick, mediocre software and later bring in a decent dev for a couple of months to clean it up, as opposed to paying and keeping that dev for several years to make the software from scratch.
Yes, it is not very efficient, but neither are those Cobol apps in old banks. It's always about it being just good enough that it works, not beautifully crafted software that never breaks. The market can stay alive longer than you can keep a high-salary job as a very experienced dev when you are competing against 100 other similarly experienced devs for your job.
Unlike clothing, software always scaled. So, it's a bit wrongheaded to assume that the new economics would be more like the economics of clothing after mass production. An "artisanal" dress still only fits one person. "Artisanal" software has always served anywhere between zero people and millions.
LLMs are not the spinning jenny. They are not an industrial revolution, even if the stock market valuations assume that they are.
As such, the article's point fails right at the start when it tries to make the point that software production is not already industrial. It is. But if you look at actual industrial design processes, their equivalent of "writing the code" is relatively small. Quality assurance, compliance to various legal requirements, balancing different requirements for the product at hand, having endless meetings with customer representatives to figure out requirements in the first place, those are where most of the time goes and those are exactly the places where LLMs are not very good. So the part that is already fast will get faster and the slow part will stay slow. That is not a recipe for revolutionary progress.
Your point that most software uses the same browsers, databases, tooling and internal libraries is a weakness, a sameness that can be exploited by current AI, to push that automation capability much further. Hell, why even bother with any of the generated code and infrastructure being "human readable" anymore? (Of course, all kinds of reasons that is bad, but just watch that "innovation" get a marketing push and take off. Which would only mean we'd need viewing software to make whatever was generated readable - as if anyone would read to understand hundreds/millions of generated complex anything.)
> You get the exact same browser, database server and (whatsapp/signal/telegram/whatever) messenger client as basically everyone else.
Hey! I'm going to passionately defend my choice over a really minor difference. I mean, do you see how that app does their hamburger menu?! It makes the app utterly unusable! Maybe I'm exaggerating here, but I've heard things pretty close in "Chrome vs Firefox" and "Signal vs ..." threads. People are really passionate about tiny details. Or at least they think that's what they're passionate about.
Unfortunately, I think what they don't realize is that this passion often hinders the revolutionary progress you speak of. It just creates entrenched players and monopolies in domains where it should be near trivial to move (browsers are definitely trivial to jump ship from).
I think this is understating the cost of jumping. Basically zero users care about the "technological" elements of their browser (e.g. the render engine, JS engine, video codecs) so long as it offers feature equivalence, but they do care a lot about comparatively "minor" UX elements (e.g. password manager, profile sync, cross-platform consistency, etc) which probably actually dominate their user interaction with the browser itself and thus understandably prove remarkably sticky ("minor" here is in terms of implementation complexity versus the rest of a browser).
The most radical development in software tools, I think, would be more tools for non-professional programmers to program small tools that put their skills on wheels. I did a lot of biz dev around something that encompassed "low code/no code", but a revolution there involves smoothing out 5-10 obstacles of a definite Ashby character: if you fool yourself that you can get away with ignoring the last 2 requirements, you get just another Wix that people will laugh at. For now, AI coding doesn't have that much to offer the non-professional programmer, because a person without insight into the structure of programs, project management and a sense of what quality means will go in circles at best.
I think the thinking in the article is completely backwards about the economics. I mean, the point of software is that you can write it once and the cost to deploy a billion units is trivial in comparison. Sure, AI slop can put the "crap" in "app", but if you have any sense you don't go cruising the app store for trash; you find out about best-of-breed products or products that are the thin edge of a long wedge (like the McDonald's app, which is valuable because it has all the stores backing it).
The difference with software is that software is design all the way down. It only needs to be written once, similar to how a mass-produced item needs only be designed once. The copying that corresponds to mass production is the deployment and execution of the software, not the writing of it.
The user experience will be less constrained as the self-arrangement of pixels improves and users no longer run into designer constraints, usually due to the limited granularity some button widget or layout framework is capable of.
"Artisanal" software engineers probably never were their own self selected identity.
I have been writing code since the late 80s, when Windows and commercial Unix were too expensive and we all wrote shoddy but functional kernels. Who does that now? Most gigs these days are glue code to fetch/cache deps and template concrete config values for frameworks. Artisanal SaaS configuration is not artisanal software engineering.
And because software engineers were their own worst enemy over the last decade, living big as they ate others' jobs and industries, hate for the industry has gone mainstream. Something politicians have to react to. Non-SWEs don't want to pay middlemen to use their property. GenAI can get them to that place.
As an art teacher once said: making things for money is not the practice of a craft. It's just capitalism. Anyone building SaaS apps through contemporary methods is a Subway sandwich artist, not the old-timey, well-rounded farmer and hunter who also bakes bread.
And what do you feel is the role of universities? Certainly not just to learn the language right? I'm going through a computer engineering degree and sometimes I feel completely lost with an urge to give up on everything, even though I am still interested in technology.
A lot of engineers and programmers did not go to school.
The article is very clearly not saying anything like that. It's saying the greatest barrier to making throwaway comments on Russian social media is not speaking Russian.
Roughly the entire article is about LLMs making it much cheaper to make low quality software. It's not about masterpieces.
And I think it's generally true of all forms of generative AI, what these things excel at the most is producing things that weren't valuable enough to produce before. Throwaway scripts for some task you'd just have done manually before is a really positive example that probably many here are familiar with.
But making stuff that wasn't worth making before isn't necessarily good! In some cases it is, but it really sucks if we have garbage blog posts and readmes and PRs flooding our communication channels because it's suddenly cheaper to produce than whatever minimal value someone gets out of hoisting it on us.
I've worked with a lot of people involved in the process who happily request their software get turned into spaghetti. Often because some business process "can't" be changed, but mostly because decision makers do not know / understand what they're asking for in the larger scheme of things.
A good engineer can help mitigate that, but only so much. So you end up with industrial sludge to some extent anyway if people in the process are not thoughtful.
As Bryan Cantrill commented (quoting Jeff Bonwick, co-creator of ZFS): code is both information about the machine and the machine:
* https://www.youtube.com/watch?v=vHPa5-BWd4w&t=4m37s
Whereas an architect creates blueprints, which are information that gets constructed into a building/physical object, and a design engineer also creates documents, information that gets turned into machine(s), when a developer writes code they are generating information that acts like a machine.
Software has a duality of being both.
How does one code and not create a machine? Produce a general architecture in UML?
What software developers produce is not a machine by itself. It's at most a blueprint for a machine that can be actualized by combining it with specific hardware. But this is getting a bit too philosophical and off track: LLMs can help produce source code for a specific program faster, but they are not very good at determining whether a specific program should be built at all.
"The thing that is remarkable about it is that it has this property of being information—that we made it up—but it is also machine, and it has these engineered properties. And this is where software is unlikely anything we have ever done, and we're still grappling on that that means. What does it mean to have information that functions as machine? It's got this duality: you can see it as both."
It's not about software and hardware needing each other, but rather about the strange 'nature' of software.
He has made the point before:
> We suffer -- tremendously -- from a bias from traditional engineering that writing code is like digging a ditch: that it is a mundane activity best left to day labor -- and certainly beneath the Gentleman Engineer. This belief is profoundly wrong because software is not like a dam or a superhighway or a power plant: in software, the blueprints _are_ the thing; the abstraction _is_ the machine.
* https://bcantrill.dtrace.org/2007/07/28/on-the-beauty-in-bea...
(Perhaps @bcantrill will notice this and comment.)
> If the hardware is not present, then the software will be just bytes on a storage device.
And what do you mean by "hardware" and what is meant by 'running software'? If you see a bunch of C or Python or assembly code, and you read through it, is it 'running' in your brain? Do you need 'real' CPUs or can you run software on stuff that is not made of silicon but the carbon between your ears?
Instead of people downloading / purchasing the same bits for a particular piece of software, which is cookie-cutter like a two-piece from Men's Wearhouse, we can ask an LLM for a custom bit of code: everyone getting a garment from Savile Row.
I think the author's analogies are on point.
Design engineers can leave little details out and let contractors figure out the details. Software has no such luxury.
Software has design, edge case finding, and actually constructing the process.
Design is only 1/3 of the process in construction.
It’s always a choice between taking more time today to reduce the cost of changes in the future, or get result fast and be less flexible later. Experience is all about keeping the cost of changes constant over time.
You get tech debt when rushing to implement stuff while having an incomplete representation of the problem, and then trying to patch the wrong solution instead of correcting it.
There is a difference between writing for mainstream software and someone's idea/hope for the future.
Software that is valued high enough will be owned and maintained.
Like most things in our world, I think ownership/stewardship is like money and world hunger, a social issue/question.
High-level languages are about higher abstractions for deterministic processes. LLMs are not necessarily higher abstractions but instead about non-deterministic processes, a fundamentally different thing altogether.
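To make that contrast concrete with a toy example (my illustration, not the article's): a classical abstraction behaves like a pure function, while an LLM-backed "abstraction" behaves like a sampler, so identical inputs need not give identical outputs.

```python
import random

def deterministic_abstraction(x: int) -> int:
    # Like a compiler or SQL engine: same input, same output, every run.
    return x * x

def nondeterministic_abstraction(x: int) -> int:
    # Stand-in for an LLM call: a plausible output drawn from a distribution.
    return random.choice([x * x, x * x + 1, x + x])

print([deterministic_abstraction(3) for _ in range(3)])     # [9, 9, 9]
print([nondeterministic_abstraction(3) for _ in range(3)])  # varies run to run
```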
First, the core of the argument - that 'Industrialization' produces low quality slop - is not true: industrialization is about precisely controlled and repeatable processes. A table cut by a CNC router is likely dimensionally more accurate than one cut by hand; in fact, many of these industrial processes and machines have trickled back into the toolboxes of master craftsmen, where they increased productivity and quality.
Second, from my experience of working at large enterprises, and smaller teams, the 80-20 rule definitely holds - there's always a core team of a handful of people who lay down the foundations, and design and architect most of the code, with the rest usually fixing bugs, or making bullet point features.
I'm not saying the people who fall into the 80% don't contribute, or somehow are lesser devs, but they're mostly not well-positioned in the org to make major contributions, and another invariable aspect is that as features are added and complexity grows, along with legacy code, the effort needed to make a change, or understand and fix a bug grows superlinearly, meaning the 'last 10%' often takes as much or more effort than what came before.
This is hardly an original observation, and in today's ever-ongoing iteration environment, what counts as the last 10% is hard to define, but most modern software development is highly incremental, often is focused on building unneeded features, or sidegrade redesigns.
One of the things that happened around 2010, when we decided to effect a massive corporate change away from both legacy and proprietary platforms (on the one hand, away from AIX & Progress, and on the other hand, away from .Net/SQL Server), was a set of necessary decisions about the fundamental architecture of systems, and which -- if any -- third party libraries we would use to accelerate software development going forward.
On the back end side (mission critical OLTP & data input screens moving from Progress 4GL to Java+PostgreSQL) it was fairly straightforward: pick lean options and as few external tools as possible in order to ensure the dev team all completely understand the codebase, even if it made developing new features more time consuming sometimes.
On the front end, though, where the system config was done, as well as all the reporting and business analytics, it was less straightforward. There were multiple camps in the team, with some devs wanting to lean on 3rd party stuff as much as possible, others wanting to go all-in on TDD and using 3rd party frameworks and libraries only for UI items (stuff like Telerik, jQuery, etc), and a few having strong opinions about one thing but not others.
What I found was that in an organization with primarily junior engineers, many of whom were offshore, the best approach was not to focus on ideally "crafted" code. I literally ran a test with a senior architect once where he & I documented the business requirements completely and he translated the reqs into functional tests, then handed over the tests to the offshore team to write code to pass. They mostly didn't even know what the code was for or what the overall system did, but they were competent enough to write code to pass tests. This ensured the senior architect received something that helped him string everything together, but it also meant we ended up with a really convoluted codebase that was challenging to holistically interpret if you hadn't been on the team from the beginning. I had another architect, who was a lead in one of the offshore teams, who felt very strongly that code should be as simple as possible: descriptive naming, single-function classes, etc. I let him run with his paradigm on a different project, to see what would happen. In his case, he didn't focus on TDD and instead just on clearly written requirements docs. But his developers had a mix of talents & experience and the checked-in code was all over the place. Because of how atomically abstract everything was, almost nobody understood how pieces of the system interrelated.
Both of these experiments led to a set of conclusions and approach as we moved forward: clearly written business requirements, followed by technical specifications, are critical, and so is a set of coding standards the whole group understands and has confidence to follow. We setup an XP system to coach junior devs who were less experienced, ran regular show & tell sessions where individuals could talk about their work, and moved from a waterfall planning process to an iterative model. All of this sounds like common sense now that it's been standard in the tech industry for an entire generation, but it was not obvious or accepted in IT "Enterprise Apps" departments in low margin industries until far more recently.
I left that role in 2015 to join a hyperscaler, and only recently (this year) have moved back to a product company, but what I've noticed now is that the collaborative nature of software engineering has never been better ... but we're back to a point where many engineers don't fully understand what they're doing, either because there's a heavy reliance on code they didn't write (common 3P libraries) or because of the compartmentalization of product orgs where small teams don't always know what other teams are doing, or why. The more recent adoption of LLM-accelerated development means even fewer individuals can explain resultant codebases. While software development may be faster than ever, I fear as an industry we're moving back toward the era of the early naughts when the graybeard artisans had mostly retired and their replacements were fumbling around trying to figure out how to do things faster & cheaper and decidedly un-artisanally.
This whole article was interesting, but I really like the conclusion. I think the comparison to the externalized costs of industrialization, which we are finally facing without any easy out, is a good one to make. We've been on the same path for a long time in the software world, as evidenced by the persistent relevance of that one XKCD comic.
There's always going to be work to do in our field. How appealing that work is, and how we're treated as we do that work, is a wide open question.
Maybe there'll be an enormous leap again but I just don't quite see the jump to how this gets you to 'industrial' software. It made it a lot faster, don't get me wrong, but you still needed the captain driving the ship.
The question is more what becomes of all the rowers when you’re switching from captain + 100 rowers to captain + steam engine
They’re not all going to get their own boat and captain hat
To me, it feels like pair programming (which is quite intense anyway) with a frustrating newbie who can program but misses the big picture no matter how many times you try. It works better for certain types of software - for web it's great; I have had much worse results when doing data pipeline stuff at work.
Why not? Anyone can load up Claude Code and start trial-and-erroring until they get something that works and has similar reliability to accepted software… what is the stat, about 1 bug per 10 lines of code on average?
I am meeting a lot of non-coders telling me about the projects they are getting AI to do for them - stuff to help with land-title something or other, stuff to work on avalanche forecasts - whatever their area of expertise, they are unchained and writing programs using AI that they couldn't before.
Everyone is the captain now
You still need to understand the code that AI is generating to fix the problems that you can't vibe a solution to. You still need to understand the process of developing software to know when something isn't working even if it looks like it is. You still need other people to trust the software that you created. None of those things comes naturally to vibe coders. They're essentially teaching themselves software engineering in a very back-to-front way.
Maybe the amateurs aren’t going to be writing a new distributed database but CRUD apps must be easier than ever
[1] https://www.coderabbit.ai/blog/state-of-ai-vs-human-code-gen...
Everyone wants to be, but I don't think there will be enough seats. There are people doing boilerplate and simple CRUD stuff - they're not going to switch to farming. Reckon this will lead to more competition for the same number of senior seats.
This sounds weird, or wrong. Do anonymous stats need cookies at all?
That's the first example I can think of off the top of my head.
In fact, getting out of the two quadrant mindset, and seeking the third, is part of the learning process for developing modern industrial products. And where I work, adjacent to a software development department, I think the devs are aware of this as well. They wanted to benefit from further automation -- the thing I think they're coping with is that it seems to be happening so quickly.
People use software for specific features, but most software has lots of features people never use or need. A lot of modern software is designed to handle lots of users, so it needs to be scalable, deployable, etc.
I don't need any of that. I just need the tool to do the thing I want it to do. I'm not thinking about end users, I just need to solve my specific problem. Sure there might be better pieces of software out there, which do more things. But the vibe coded thing works quite well for me and I can always fix it by prompting the model.
For example, I've vibe coded a tool where I upload an audio file, the tool transcribes it and splits it into 'scenes' which I can sync to audio via a simple UI and then I can generate images for each scene. Then it exports the video. It's simple, a bit buggy, lacks some features, but it does the job.
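For what it's worth, the skeleton of a tool like that is mostly orchestration. Here's a rough sketch of the flow just described; every helper below is a hypothetical stub, not any real API:

```python
from dataclasses import dataclass

@dataclass
class Scene:
    start: float   # seconds into the audio
    end: float
    text: str
    image_path: str | None = None

def transcribe(audio_path: str) -> list[Scene]:
    """Hypothetical: call a speech-to-text model and chunk the transcript
    into scenes with timestamps."""
    raise NotImplementedError

def generate_image(scene: Scene) -> str:
    """Hypothetical: send scene.text to an image model, return a file path."""
    raise NotImplementedError

def render_video(audio_path: str, scenes: list[Scene], out_path: str) -> None:
    """Hypothetical: lay each image over its [start, end] span and mux the audio."""
    raise NotImplementedError

def build(audio_path: str, out_path: str = "out.mp4") -> None:
    scenes = transcribe(audio_path)
    for scene in scenes:
        scene.image_path = generate_image(scene)
    render_video(audio_path, scenes, out_path)
```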
It would have taken me weeks to get to where I am now without having written one manual line of code.
I need the generated videos, not the software. I might eventually turn it into a product which others can use, but I don't focus on that yet, I'm solving my problem. Which simplifies the software a lot.
After I'm finished with this one, I might generate another one, now that I know exactly what I want it to do and what pitfalls to avoid. But yeah, the age of industrial software is upon us. We'll have to adapt.
Most commercial software is nowadays integrated into the real world in ways that can't be replicated by code alone. Software which isn't like this can be easily replaced, yes, but that kind of software already had free alternatives.
This paper about LLM economics seems relevant:
https://www.nber.org/papers/w34608
Quote: "Fifth, we estimate preliminary short-run price elasticities just above one, suggesting limited scope for Jevons-Paradox effects"
Physical goods like clothes or cars have variable costs. The marginal unit always costs > 0, and thus the price to the consumer is always greater than zero. Industrialization lowered this variable cost, while simultaneously increasing production capacity, and thus enabled a new segment of "low cost, high volume" products, but it does not eliminate the variable cost. This variable cost (eg. the cost of a hand made suit) is the "umbrella" under which a low cost variant (factory made clothes) has space to enter the market.
Digital goods have zero marginal cost. Many digital goods do not cost anything at all to the consumer! Or they are as cheap as possible to actively maximize users because their costs are effectively fixed. What is the "low value / low cost" version of Google? or Netflix for that matter? This is non-sensical because there's no space for a low cost entrant to play in when the price is already free.
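A crude way to see the "price umbrella" point, with made-up numbers:

```python
def unit_cost(fixed: float, variable: float, units: int) -> float:
    """Average cost per unit: fixed cost amortised over volume, plus variable cost."""
    return fixed / units + variable

# Illustrative numbers only:
garment = unit_cost(fixed=5_000_000, variable=40.0, units=1_000_000)
software = unit_cost(fixed=5_000_000, variable=0.001, units=1_000_000)

print(f"factory-made garment: ~${garment:.2f}/unit (floor = the $40 variable cost)")
print(f"software copy:        ~${software:.3f}/unit (floor ~ zero once dev cost is amortised)")
```

The physical good's price can never fall below its variable cost, while the digital good's floor is effectively zero, so there is no umbrella for a cheap entrant to slip under.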
In digital goods, consumers tend to choose on quality because price is just not that relevant of a dimension. You see this in the market structure of digital goods. They tend to be winner (or few) take all because the best good can serve everyone. That is a direct result of zero marginal cost.
Even if you accept the premise that AI will make software "industrialized" and thus cheaper to produce, it doesn't change the fact that most software is already free or dirt cheap.
The version of this that might make sense is software that is too expensive to make at all, because the market size (eg. number of consumers * price they would pay) is less than the cost of the software developer's / entrepreneur's time. But by definition those are small markets, and nothing like the huge markets that were enabled by physical-good industrialization.
Also, most consumers don't choose on quality; they choose on price. This is why free mobile games became huge and paid mobile games are a dying breed. In the physical world, it's why Shein and Alibaba nearly became trillion-dollar companies.
And, I think, consumers would like to balance both cost and quality. The problem is that cost is obvious while quality is purposefully obfuscated. You really can't tell what is or is not quality software without spending an unreasonable amount of time and having an unreasonable amount of knowledge. Same with most modern physical goods.
You don't need to explain this. Literally everyone here knows what you said, and everyone - including you - knew that he also knew that.
This is pointless and annoying nitpicking.
I am not threatened by LLMs. I would like it if I could code purely in requirements. But every time I get frustrated and just do it myself, because I am faster.
- Tailored suit: This is a high-cost and high-value thing. Both quality and fit are much better than fast-fashion.
In a similar sense, maybe LLMs will produce the frameworks or libraries in the future, akin to the "fabric" used by the tailors. But in the end, craftsmen or women are the ones architecting and stitching these together.
Verbatim [1]:
> Will traditional software survive?
> Ultraprocessed foods are, of course, not the only game in town. There is a thriving and growing demand for healthy, sustainable production of foodstuffs, largely in response to the harmful effects of industrialisation. Is it possible that software might also resist mechanisation through the growth of an “organic software” movement? If we look at other sectors, we see that even those with the highest levels of industrialisation also still benefit from small-scale, human-led production as part of the spectrum of output.
> For example, prior to industrialisation, clothing was largely produced by specialised artisans, often coordinated through guilds and manual labour, with resources gathered locally, and the expertise for creating durable fabrics accumulated over years, and frequently passed down in family lines. Industrialisation changed that completely, with raw materials being shipped intercontinentally, fabrics mass produced in factories, clothes assembled by machinery, all leading to today’s world of fast, disposable, exploitative fashion. And yet handcrafted clothes still exist: from tailored suits to knitted scarves, a place still exists for small-scale, slow production of textile goods, for reasons ranging from customisation of fit, signalling of wealth, durability of product, up to enjoyment of the craft as a pastime.

Something as "simple" as reverse ETL - a lot of value is locked within that - & you can even see players such as Palantir etc trying to bring a unified data view with a fancy name etc
It's also the same reason Workday, Salesforce etc charge a lot of money.
I wonder if this will lead to more "forks for one person" where you know of open source software that's close to what you want, except for one thing, so you point a coding agent at it.
I've recently started to contribute to open source, mostly just to add some tiny feature or such
I did not realise before this that getting your changes upstreamed can take a long time. Now I have even more respect for the likes of Asahi Linux. The code reviews usually improve the change a lot, but it's not like coding at a company, where in the best case you are able to get multiple changes deployed a day on repos your team doesn't even own.
But I feel it's worth the effort. For as long as my changes are not upstreamed I can keep using the forked version or just tolerate not being able to do something.
But the major issue I keep running into is how painful the build setup is. This is where Nix really comes to the rescue, though. I can import the nixpkgs setup for the given project, point it to my fork/branch, and rebuild the software into my cachix for use in dev shells and containers.
I wish more people used Nix. I would love to see it getting adopted as a backend for other package managers. Just maintain a binary cache for your distro and it would be more or less the same.
But I suppose there are arguments against a monoculture as well.
I use pointy-clicky software for visualizing relationships and correlations. I use draggy-droppy software for building complex data workflows (including ML elements). I use desktop publishing software rather than LaTeX. I suffer industrial interfaces for (most of) my own software (where I am the chief customer) because it's easy for me to sling simple server-side UIs, but there are standalone servers out there which make creating apps (with RFID, QR, accelerometer support) just like, yes JUST like, desktop publishing (especially as you get closer to industrial control applications); one of my faves has a widget which is a "choose your own adventure" dashboard widget so that the users can create a dashboard customized just for them, yes, Inception (granted, that widget does need some configuration which requires a keyboard).
Granted, behind every one of those slick interfaces is a drippy gob of data. I have to create a CSV in the correct format for the visualizer, or surrender to their integrated "partner solutions"; or for a different visualizer I have to create some network services which it consumes for "enrichment". My data workflow tool has generic python "actions" so you can create custom tasks (it presents the data to your scripts using pandas). On very rare occasions I have a need to hack on DTP docs to format obscene amounts of repetitive data; but a lot of the time it's back to making a CSV and putting that in a spreadsheet / database which the software can then utilize for quaintly-named "mail merge". The UI software which I refer to integrates with SQL databases and authorization / access management engines, somebody still needs to set those up; and I stumbled across it in the first place because somebody needed a little (surprisingly little) help connecting to an obscure HTTP resource.
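That CSV-massaging step is usually a few lines of pandas; a sketch with hypothetical column names, just to show the shape of the chore:

```python
import pandas as pd

# Raw export from one system; "timestamp", "sensor", "reading" are hypothetical columns.
raw = pd.read_csv("raw_export.csv", parse_dates=["timestamp"])

# Reshape into whatever layout the visualizer insists on: one column per sensor,
# one row per hour.
tidy = (
    raw.pivot_table(index="timestamp", columns="sensor", values="reading", aggfunc="mean")
       .resample("1h").mean()
       .reset_index()
)

tidy.to_csv("for_visualizer.csv", index=False)
```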
To short-circuit a bunch of off-track commentary: LLMs are not speaking English to other LLMs to design new LLMs, AFAIK. I have never seen a debate or article about whether it is better for LLMs to utilize English or Russian for this task. I just don't like the menu that is offered on this ship Titanic, and I'm uncomfortable with the majority of the passengers being incarcerated belowdecks; I'll book different passage, thanks.