Software engineering often isn't just answering "how" to accomplish something, but also the five other questions starting with 'w'. Software ate the world a very long time ago, so people being allergic to code while trying to run a business only sounds more and more absurd as time marches on.
What are we really trying to solve for here? Everything that could be automated reasonably well already has been. In most cases you do not want a stochastic result, but exact code to be reused. We use libraries in our code and we have reproducibility of results. The code that needs to be written for most applications is minimal and only grows large as the business refines what it wants. This code is unique and virtually worthless to any other business. The code mirrors the organization. We've known all this for decades. It's very confusing to me to keep hearing this insistence that we need to automate software engineering.
I don't agree with this at all. Part of the pitch of code is packing really nasty semantics into a single interface. This is inherently reflected in the services we provide to clients. This interface should naturally correspond to a UI, if that's what you offer. If you can only express the interface for your service in code, you've failed.
Unless of course you primarily operate a code-centric product, like an SDK.
Correspondingly, it's very easy to imagine someone disgruntled at having to deal with code (I imagine hotwiring a car, in the context you provided). People know their lane; you can't force them to change it.
> In these roles, their responsibilities would shift from writing code and debugging to higher-level oversight, decision-making, and strategic planning—until these responsibilities can be automated too.
When they spoke on Dwarkesh's podcast they seemed to think this would take 30 years. Not sure why we coders are so quick to be automated but the rest aren't.
In the third paragraph of the podcast, Dwarkesh already told you that he has invested.
>>> (disclosure - I’m an angel investor),
At that point the co-founders and Dwarkesh himself will agree on everything, and they will say anything to get more VC money, even if the timelines are unrealistic. (Because that is the scam.)
That's...ridiculous. I'm a product manager, and AI is already chipping away at my job.
Two months ago I said to one of my devs, "Our dashboard here looks very bland. What if we had a more visual display of the pipeline statuses across the top of the screen?" He said he thought that was a good idea, and I went to lunch. I came back and started sketching up some ideas for how to lay out the statuses. I had barely gotten started with that when he called me over to show me what Cursor had come up with when asked: it was better than what I was sketching, for sure.
We're (white collar work) going to be 90% automated in less than ten years, and I feel like I'm being conservative saying ten years.
But that's what they were saying about a simple paragraph of coherent writing five years ago. And what they were saying about structured output three years ago. And now I can ask for a coherent breakdown of the functionality that might be required for a ticket tracking system, with a list of use cases and screens to support them, and user personas, and expect that the result will be a little generic, but coherent. I can give Claude a picture of a UI and ask for suggestions for improvement, and half the ideas will be interesting.
> [...]
> We think this is essentially a data problem, not an algorithms problem.
This is extremely hand-wavy. How are you going to instrument the various thought processes and non-verbal communication that goes into building successful software? A huge part of it is intuition about what makes sense to other humans. It's related to the idea of common sense, but in the software world there's this layer of unforgiving determinism and rigidity that most humans don't want to deal with. I just don't see how AI crosses that chasm.
What experienced software engineers have is a sense of taste - this looks like good code and/or design, that doesn't. But they don't have data; they have, at best, a couple of anecdotes. It's more a sense of "that was harder to work with than it should have been; that approach seems to have drawbacks". But you only get a few examples of that in a career.
And there are very few outfits compiling usable data that could shape the approaches that software engineers use.
So I don't think how humans got there was primarily data.
How much real-world data do you think went into the evolution of the human brain and all its learning algorithms?

Having 40 years of experience building software gives you no more insight into that than having 40 years of experience using language gives you insight into where your language skills come from.
This is not a profession where you've figured it all out after 5 years and can rest on your laurels. I think it takes literally 15+ years for most highly determined and intelligent people to even approach that state where they can manage complexity effectively. Most people never seem to get there.
But most of the lessons learned during that time go towards quality, not quantity or speed. The current trend with LLMs (and eventually maybe AI) seems to be doing what humans can do worse but significantly faster and cheaper. Unfortunately, not everyone needs or cares about safety, security or correctness.
I am afraid software will mirror the evolution of physical products from the industrial revolution to present day. It feels like the quality of consumer grade products is constantly decreasing.
And I don't see it being improved by whatever an LLM churns out, at least not "in-depth".
>Today we’re announcing Mechanize, a startup focused on developing virtual work environments, benchmarks, and training data that will enable the full automation of the economy.
>Compensation
>$200K – $475K • 1% – 2%
Just imagine for a moment you are a software engineer capable of doing what they say, rivaling the raw intellectual capabilities of every mathematician, economist, and physicist known to man, and you end up actually building something that directly leads to automating the entire fucking economy of the world.
And there's still someone who says to you: "best I can do is only 1% - 2%"
At best, they don't believe actual AI will be created and they're just helping a scam.
At worst, they are actively working to make their own job redundant and when they're fired, they will own nothing of what they built. All the money from their work will go to the owners who fired them.
Except that's the case with literally every meaningful information-work job. Your goal is to obsolete yourself. If you do it well then your success becomes your calling card for your next job. Your career is a series of such jobs.
2) Even if your claim were correct, you'd amass experience on one job (task) that you could then leverage on the next job (task). If AI became reality, all that would become irrelevant. Your middle-class self would quickly realize you're only worth as much as your lower-class neighbors who swing a shovel or flip burgers for a living, all the while your upper-class bosses get richer thanks to your work.
In a well-functioning society, improvements in economic efficiency and productivity raise the standard of living of everyone in that society. Thus if you innovate yourself out of a job, you'd still benefit and would have access to any training needed for your next job.
Sadly, I am becoming increasingly convinced I do. [1]
You're falling for one simple trap. Yes, even if things are (mostly) improving for everyone, they are improving massively faster for those already rich.
On top of that, I am not even sure they are improving. Some people are paying a third or even half of their salary just to have a place to live. I've even heard that in some cities people are struggling to feed their kids and keep the heating on.
1% of the global economy seems fine to me. If you actually believe in their vision, money is going to be worthless anyway.
Or do you mean it in the sense that everyone would already have everything they could ever want so the net utility of additional money would be 0?
It is possible to increase productivity while also centralizing the profit. That's what communists would call "exploitation of labor".
Today I set up a remote network with a couple of switches, a router and the rest. From the outside. The customer had already got the router onto the internet (good skills) and a LAN. I had a router (pfSense) with six 2.5 GbE connections.
I turned it into a ten-VLAN setup, with access ports and trunks and so on at layer 2, without disconnecting myself.
It's quite hard visualising a network with VLANs, and even harder working out how to pivot from the current setup to another. Anyone who has had to change the default VLAN across a site knows what I'm on about. (A rough sketch of the kind of check involved is below.)
Just in case anyone here is in any doubt, networks are quite tricky. On a par with programming.
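For anyone curious what "working out how to pivot" means concretely, here's a minimal sketch in Python of the kind of sanity check I mean before touching anything; the port names, VLAN IDs, and plan format are all made up for illustration:

    # Hypothetical sketch: sanity-check a VLAN migration plan so the port
    # your remote session enters through never loses its management VLAN.
    MGMT_VLAN = 10        # VLAN carrying the remote management session
    UPLINK = "eth0"       # port the session enters through

    # Each step maps port -> set of tagged VLANs after the step is applied.
    plan = [
        {"eth0": {1, 10}, "eth1": {1}},            # tag mgmt VLAN on uplink first
        {"eth0": {10, 20}, "eth1": {20}},          # then move eth1 to VLAN 20
        {"eth0": {10, 20, 30}, "eth1": {20, 30}},  # then add VLAN 30 to the trunk
    ]

    def check_plan(plan, uplink, mgmt_vlan):
        """Fail loudly if any intermediate step would cut the session off."""
        for i, step in enumerate(plan, start=1):
            if mgmt_vlan not in step.get(uplink, set()):
                raise RuntimeError(
                    f"step {i} drops VLAN {mgmt_vlan} from {uplink}: "
                    "you would disconnect yourself"
                )
        print(f"all {len(plan)} steps keep VLAN {mgmt_vlan} on {uplink}")

    check_plan(plan, UPLINK, MGMT_VLAN)

The real work is the ordering: every intermediate state has to keep your own path alive, which is exactly the part that's hard to visualise.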
Ever? Forever is a long time. Or do you mean with today's AI, assuming no improvements are made?
Not now
> The market potential here is absurdly large: workers in the US are paid around $18 trillion per year in aggregate. For the entire world, the number is over three times greater, around $60 trillion per year.
No they don't.
If you try to talk about automating SE and you aren’t clearly explaining how people who know nothing about engineering will directly interact with this automation to get what they need, then you aren’t saying anything.
Even with vibe coding there is a certain skillset involved.
Artificial intelligence. Many people expect advances in artificial intelligence to provide the revolutionary breakthrough that will give order-of-magnitude gains in software productivity and quality. I do not. To see why, we must dissect what is meant by “artificial intelligence” and then see how it applies.
Parnas has clarified the terminological chaos:
Two quite different definitions of AI are in common use today.
AI-1: The use of computers to solve problems that previously could only be solved by applying human intelligence.
AI-2: The use of a specific set of programming techniques known as heuristic or rule-based programming. In this approach human experts are studied to determine what heuristics or rules of thumb they use in solving problems. . . . The program is designed to solve a problem the way that humans seem to solve it.
The first definition has a sliding meaning. . . . Something can fit the definition of AI-1 today but, once we see how the program works and understand the problem, we will not think of it as AI anymore. . . . Unfortunately I cannot identify a body of technology that is unique to this field. . . . Most of the work is problem-specific, and some abstraction or creativity is required to see how to transfer it.
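To make the AI-2 definition concrete, here's a toy rule-based sketch in Python; the triage domain and the rules themselves are invented purely for illustration, not taken from Parnas:

    # Toy AI-2 (heuristic/rule-based programming): rules elicited from
    # "experts" are encoded directly, and the program solves the problem
    # the way a human says they would.
    rules = [
        (lambda t: "crash" in t, "severity: critical"),
        (lambda t: "slow" in t,  "severity: performance"),
        (lambda t: "typo" in t,  "severity: cosmetic"),
    ]

    def triage(ticket_text):
        """Apply each expert rule in order; first match wins."""
        text = ticket_text.lower()
        for condition, verdict in rules:
            if condition(text):
                return verdict
        return "severity: unknown (no rule matched)"

    print(triage("App crash on startup"))  # -> severity: critical
    print(triage("Dashboard is slow"))     # -> severity: performance

Even a toy makes the point of the definition visible: the "intelligence" is entirely in the hand-written rules, which is why Parnas treats AI-2 as just another programming technique.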
Great read: https://worrydream.com/refs/Brooks_1986_-_No_Silver_Bullet.p...

Regardless, I consider software dev far more dimensional (i.e., nuanced) than Dota 2, even if a lot of patterns recur on a smaller scale in the code itself. If they weren't able to crack Dota 2, why should I believe that software eng is just around the corner?
What if I told you that's not the key question, and the "more data" approach has obviously and publicly hit a wall that requires causal reasoning to move past?
builder.ai was the first casualty among companies claiming to use AI to replace software engineering teams to build products. Now bankrupt. [0]
It could also cost a lot more money than initially estimated, compared with other industries, if (by their own admission) it is the last profession to be automated.
Let's see in the next 10 years whether some of these 'startups' are still around on this 'mission', and whether software engineering will be fully automated and dead for humans within the decade, which is the narrative they are all hyping to get more VC money.
[0] https://finance.yahoo.com/news/builder-ais-shocking-450m-fal...