At one point it output "Excellent! the backend is working and the database is created." heh i remember being all wide eyed and bushy tailed about things like that. It definitely has the feel of a new hire ready to show their stuff.
btw, i was very impressed with the end result after a couple hours of basically just allowing Claude Code to do what it wanted to do. Especially with the front-end look/feel, something i always spend way too much time on.
I fear the true impact is much different than extrapolating current trends.
(As a musician) I never invested in a personal brand or took part in the social media rat race; I figured I'd concentrate on the art / craft over meaningless performance online.
Well guess who is getting 0 gigs now because “too few followers/visibility” (or maybe my music just sucks who knows …)
I just was feeling some type of way seeing that comment and wanted to vent thx for listening
I think I am still in the emotional phase about it, as it's really impacting me lately, but once my thoughts really settle I wanna write some sorta article about modern social media as induced demand.
I still very much would prefer to not engage at all with any of the major platforms in the standard way. Ideally I'd just post an article I wrote, or some goofy project i made, and it wouldn't be subject to 0 views because I don't interact with social media correctly.
I had a pretty slim linkedin and actually beefed it up after seeing how much weight the execs and higher ups I work with give it. It's really annoying, I actually hate linkedin but basically got forced into using it.
I'm a little biased though since I work in chip design and I maintain an open source EDA project.
I agree with their take for the most part, but it's really nothing insightful or different than what people have been saying for a while now.
Unlike real AI projects that use it for workflows or for generating models that do a thing. Nope, they are taking a Jira ticket, asking Copilot, reviewing Copilot, responding to the Jira ticket. They're all ripe for automation.
Phew, yes I'm with you...
> We've long since passed the point where there was a meaningful amount of work to be done for every adult.
Have we? It feels like a lot of stuff in my life is unnecessarily expensive or hard to afford.
> The number of "jobs" available is now merely a function of who controls the massive stockpiles of accumulated resources and how they choose to dole them out.
Do you mean that it has nothing to do with how the average person decides to spend their money?
> Attack that, not the technology.
How? What are you proposing exactly?
We have, yes. If you notice things to be too expensive it's a result of class warfare. Have you noticed how many people got _obscenely rich_ in the last 25 years? Yes, that's where money saved by technology went to.
It may result in class warfare but I am skeptical that's the root cause.
My guess is it has more to do with the education system, monetary policy and fiscal policy.
This class thing is especially identifiable in Europe, where assets such as real estate generally are not cheaper than in the US (with the exception of a few super expensive places), yet salaries are much lower.
Taxes tend to be super high on wages but not on assets. One can very easily find themselves in a situation where even owning a modest amount of wealth, their asset appreciation outdoes what they can get as labor income.
Look at a bunch of job postings and ask yourself if that work is going to make things cheaper for you or better for society. We're not building railroads and telephone networks anymore. One person can grow food for 10,000. Stuff is expensive because free market capitalism allows it and some people are pathologically greedy. Runaway optimizers with no real goal state in mind except "more."
> How? What are you proposing exactly?
In a word, socialism. It's a social and political problem, not a technical one. These systems have fallen way behind technology and allowed crazy accumulations of wealth in the hands of very few. Push for legislation to redistribute the wealth to the people.
If someone invents a robot to do the work of McDonalds workers, that should liberate them from having to do that kind of work. This is the dream and the goal of technology. Instead, under our current system, one person gets a megayacht and thousands of people are "unemployed." With no change to the amount of important work being done.
I appreciate the elaboration in the second half. That sounds a lot more constructive than "attack", but now I understand you meant it in the "attack the problem" sense not "attack the people" sense.
What I think we agree on is that society has a resource redistribution problem, and it could work a lot better.
I think we might also agree that a well-functioning economic engine should lift the floor for everyone and not concentrate economic power in those who best wield leverage.
One way I think of this is: what is the actual optimal Lorenz curve that allows for lifting the floor, such that the area under the curve increases at the fastest rate possible? (It must account for the reality of human psychology and resource scarcity.)
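To make that a bit more concrete, here's the toy calculation I sometimes play with (entirely made-up numbers, not a real model): take a list of incomes, build the Lorenz curve, and look at the area under it; the Gini coefficient is just one minus twice that area.

```python
# Toy sketch: Lorenz curve and the area under it for a made-up income list.
# Perfect equality gives area 0.5 (Gini 0); more inequality shrinks the area.
incomes = [12, 18, 25, 40, 105]  # hypothetical incomes, arbitrary units

def lorenz_points(values):
    """Cumulative population share vs. cumulative income share."""
    values = sorted(values)
    total = sum(values)
    points, cum = [(0.0, 0.0)], 0.0
    for i, v in enumerate(values, start=1):
        cum += v
        points.append((i / len(values), cum / total))
    return points

def area_under(points):
    """Trapezoidal area under the Lorenz curve."""
    return sum((x1 - x0) * (y0 + y1) / 2
               for (x0, y0), (x1, y1) in zip(points, points[1:]))

area = area_under(lorenz_points(incomes))
print("area under Lorenz curve:", round(area, 3))   # ~0.292 here
print("Gini coefficient:", round(1 - 2 * area, 3))  # ~0.416 here
```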
Where we might disagree is that I think we also have some culture and education system problems as well, which relate to how each individual takes responsibility for figuring out how to ethically create value for others. When able-bodied and able-minded people choose to spend their time playing zero- and negative-sum games instead of positive-sum games, we all lose.
E.g. if McDonald's automates its restaurants, those workers also need to take some responsibility for finding new ways to provide value to others. A well-functioning system would make that as painless as possible for them, so much so that the majority experiencing it would consider it a good thing.
Anything specific?
> When able-bodied and able-minded people choose to spend their time playing zero- and negative-sum games instead of positive-sum games, we all lose.
What types of behaviors are you referring to as zero and negative sum games?
I think at the very least we should move toward a state where the existence of dare-I-say freeloaders and welfare queens isn't too taxing, and with general social progress that "niche" may be naturally disincentivized and phased out. Some people just don't really have a purpose or a drive but they were born here and yes one would hope that under the right conditions they could blossom but if not I don't think it's worth worrying about too much.
I would say that education is essentially at the core of everything, it's the only mechanism we have to move the needle on any of it.
The focus of politics after the 90s should have shifted to facilitating competition to equalize distribution of existing wealth and should have promoted competition of ideas, but instead, the governments of the world got together and enacted policies which would suppress competition, at the highest scale imaginable. What they did was much worse than doing nothing.
Now, the closest solution we can aim for (IMO) is UBI. It's a late solution because a lot of people's lives have already been ruined through no fault of their own. On the plus side it made other people much more resilient, but if we keep going down this path, there is nothing more to learn; it only serves to reinforce the existing idea that everything is a scam. This is bound to affect people's behaviors in terrible ways.
Imagine a dystopian future where the system spends a huge amount of resources first financially oppressing people to the point of insanity, then monitoring and controlling them to try to get them to avoid doing harm... When the system could just have given them (less) money and avoided this downward spiral into insanity to begin with and then you wouldn't even need to monitor them because they would be allowed to survive whilst being their own sane, good-natured self. We have to course-correct and are approaching a point of no return when the resentment becomes severe and permanent. Nobody can survive in a world where the majority of people are insane.
Somehow the more senior you are [in the field of use], the better results you get. You can run faster and get more done! If you're good, you get great results faster. If you're bad, you get bad results faster.
You still gotta understand what you're doing. GeLLMan Amnesia is real.
Then I watched someone familiar with the codebase ask Claude to build the thing, in precise terms matching their expertise and understanding of the code. It worked flawlessly the first time.
Neither of us "coded", but their skill with the underlying theory of the program allowed them to ask the right questions, infinitely more productive in this case.
Skill and understanding matter now more than ever! LLMs are pushing us rapidly away from specialized technicians to theory builders.
Good LLMing seems to be about isolating the right information and instructing it correctly from there. Both the context and the prompt make a tremendous difference.
I've been finding recently that I can get significantly better results with fewer tokens by paying mind to this more often.
I'm definitely a casual though. There are probably plenty of nuances and tricks I'm unaware of.
It makes sense considering that these practices could be thought of as "institutionalized skills."
It's a K-shaped curve. People who know things will benefit greatly. Everyone else will probably get worse. I am especially worried about all the young minds that are probably going to have significant gaps in their ability to learn and reason based on how much exposure they've had to AI solving problems for them.
Of course, but how do you begin to understand the "stochastic parrot"?
Yesterday I used LLMs all day long and everything worked perfectly. Productivity was great and I was happy. I was ready to embrace the future.
Now, today, no matter what I try, everything LLMs have produced has been a complete dumpster fire and waste of my time. Not even Opus will follow basic instructions. My day is practically over now and I haven't accomplished anything other than pointlessly fighting LLMs. Yesterday's productivity gains are now gone, I'm frustrated, exhausted, and wonder why I didn't just do it myself.
This is a recurring theme for me. Every time I think I've finally cracked the code, next time it is like I'm back using an LLM for the first time in my life. What is the formal approach that finds consistency?
You also have to treat this as outsourcing labor to a savant with a very, very short memory, so:
1. Write every prompt like a government work contract in which you're required to select the lowest bidder, so put guardrails everywhere. Keep a text editor open with your work contract, edit the goal at the bottom, and then fire off your reply.
2. Instruct the model to keep a detailed log in a file and, after a context compaction, instruct it to read this again.
3. Use models from different companies to review one another's work. If you're using Opus-4.5 for code generation, then consider using GPT-5.2-Codex for review.
4. Build a mental model for which models are good at which tasks. Mine is:
4a. Mathematical Thinking (proofs, et al.): Gemini DeepThink
4b. Software Architectural Planning: GPT5-Pro (not 5.1 or 5.2)
4c. Web Search & Deep Research: Gemini 3-Pro
4d. Technical Writing: GPT-4.5
4e. Code Generation & Refactoring: Opus-4.5
4f. Image Generation: Nano Banana Pro
That was using pay per token.
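To illustrate point 1: this is roughly the skeleton I keep in that scratch file (the section names and file paths below are just my own convention, nothing official). The guardrails stay fixed; only the goal at the bottom changes between tasks.

```python
# Sketch of a "work contract" prompt skeleton (section names and paths are made up).
GUARDRAILS = """\
Scope: touch only the files listed under FILES. Do not refactor anything else.
Constraints: no new dependencies, no public API changes, keep existing tests green.
Deliverable: a unified diff plus a one-paragraph summary of what changed and why.
If any requirement is ambiguous, stop and ask instead of guessing.
"""

FILES = ["src/parser.py", "tests/test_parser.py"]  # hypothetical paths

def build_prompt(goal: str) -> str:
    """Fixed guardrails on top, file list in the middle, the editable goal last."""
    file_list = "\n".join(f"- {f}" for f in FILES)
    return f"{GUARDRAILS}\nFILES:\n{file_list}\n\nGOAL:\n{goal}\n"

print(build_prompt("Make the parser reject duplicate keys with a clear error message."))
```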
> Write every prompt like a government work contract in which you're required to select the lowest bidder, so put guardrails everywhere.
That is what I was doing yesterday. Worked fantastically. Today, I do the very same thing and... Nope. Can't even stick to the simplest instructions that have been perfectly fine in the past.
> If you're using Opus-4.5 for code generation, then consider using GPT-5.2-Codex for review.
As mentioned, I tried using Opus, but it didn't even get to the point of producing anything worth reviewing. I've had great luck with it before, but not today.
> Instruct the model to keep a detailed log in a file and, after a context compaction
No chance of getting anywhere close to needing compaction today. I had to abort long before that.
> Build a mental model for which models are good at which tasks.
See, like I mentioned before, I thought I had this figured out, but now today it has all gone out the window.
It’s like I need a sticky disclaimer:
1. No, I didn’t form an outdated impression based on GPT-4 that I never updated, in fact I use these tools *constantly every single day*
2. Yes, I am using Opus 4.5
3. Yes, I am using a CLAUDE.md file that documents my expectations in detail
3a. No, it isn’t 20000 characters or anything
3b. Yes, thank you, I have in fact already heard about the “pink elephant problem”
4. Yes, I am routinely starting with fresh context
4a. No, I don’t expect every solution to be one-shotable
5. Yes, I am still using Opus fucking 4.5
6. At no point did I actually ask for Unsolicited LLM Tips 101.
Like, are people really suggesting they never, ever get a suboptimal or (god forbid) completely broken "solution" from Claude Code/Codex/etc?
That doesn't mean these tools are useless! Or that I’m “afraid” or in denial or trying to hurt your feelings or something! I’m just trying to be objective about my own personal experience.
It’s just impossible to have an honest, productive discussion if the other person can always just lob responses like “actually you need to use the API not the 200/mo plan you pay for” or “Opus 4.5 unless you’re using it already in which case GPT 5.2 XHigh / or vice versa” to invalidate your experience on the basis of “you’re holding it wrong” with an endlessly slippery standard of “right”.
> to invalidate your experience on the basis of “you’re holding it wrong”
This was not my intent in replying to 9rx. I was just trying to help.
It's been 12 hours and all the image gen tools failed miserably. They are only good at producing surface level stuff, anything beyond that? Nah.
So sure, if what you do is surface level (and crap in my opinion) ofc you will see some kind of benefit. But if you have any taste (which I presume you don't) you would handily admit it is not all that great and the amount invested makes zero sense.
I write embedded software in C for a telecommunications research laboratory. Is this sufficiently deep for you?
FWIW, I don't use LLMs for this.
> But if you have any taste (which I presume you dont)
What value is there to you in an ad hominem attack here? Did you see any LLM evangelism in my post? I offered information based on my experience to help someone use a tool.
You can also use Cursor, which is essentially VS Code with these features baked in.
This is the way. I think I'd like to be a barista or deliver the mail once all the jobs are gone.
Those are even easier to automate, or have already been automated most of the way.
Who says I don't? What I do has nothing to do with the market.
> Why even speak at all on this subject that you are not familiar with?
You're not familiar with it. Living in your own world is not being familiar. The market is huge. Again what you and I do is NOT the market.
> Not every thought that enters your mind needs to be written out.
Exactly. I can go to coffee shops but still acknowledge other things exist. Unlike your reply.
If/when AI wipes out the white collar "knowledge worker" jobs who is going to be able to afford going to the coffee shop?
Before, a task might take you ten hours: you think through the thing, translate that into an implementation approach, implement it, and test it. At the end of the ten hours you're 100% there and you've got a good implementation which you understand and can explain to colleagues in detail later if needed. Your code was written by a human expert with intention, and you reviewed it as you wrote it and as you planned the work out.
With an LLM, you spend the same amount of time figuring out what you're going to do, plus more time writing detailed prompts and making the requisite files and context available for the LLM, then you press a button and tada, five minutes later you have a whole bunch of code. And it sorta seems to work. This gives you a big burst of dopamine due to the randomness of the result. So now, with your dopamine levels high and your work seemingly basically done, your brain registers that work as having been done in those five minutes.
But now (if you're doing work people are willing to pay you for), you probably have to actually verify that it didn't break things or cause huge security holes, and clean up the redundant code and other exceedingly verbose garbage it generated. This is not the same process as verifying your own code. First, LLM output is meant to look as correct as possible, and it will do some REALLY incorrect things that no sane person would do, which are not easy to spot the way you'd spot them if they were human-written. You also don't really know what all of this shit is - it almost always has a ton of redundant code, or just exceedingly verbose nonsense that ends up being technical debt and more tokens in the context for the next session. So now you have to carefully review it. You have to test things you wouldn't have had to test, with much more care, and you have to look for things that are hard to spot, like redundant code or regressions in other features it shouldn't have touched. And you have to actually make sure it did what you told it to, because sometimes it says it did, and it just didn't. This is a whole process. You're far from done here, and this (to me at least) can only be done by a professional. It's not hard - it's tedious and boring, but it does require your learned expertise.
The thing a lot of people who haven't lived it don't seem to recognize is that enterprise software is usually buggy and brittle, and that's both expected and accepted because most IT organizations have never paid for top technical talent. If you're creating apps for back office use, or even supply chain and sometimes customer facing stuff, frequently 95% availability is good enough, and things that only work about 90-95% of the time without bugs is also good enough. There's such an ingrained mentality in big business that "internal tools suck" that even if AI-generated tools also suck similarly it's still going to be good enough for most use cases.
It's important for readers in a place like HN to realize that the majority of software in the world is not created in our tech bubble, and most apps only have an audience ranging from dozens to several thousands of users.
Sadly people do not care about redundant and verbose code. If that were a concern, we wouldn't have 100+ MB apps or 5 MB web app bundles. Multibillion-dollar B2B apps ship a 10 MB JSON file just for searching emojis and no one blinks an eye.
...
3. profit
4. bro down
Is a webshop a CRUD app? Is an employee shift tracking site? I could go on, but I feel 'CRUD' app is about as meaningful a moniker as 'desktop app'
- You rarely write loops at work
- Every performance issue is either too many trips to the database or to some server
- You can write O(n^n) functions and nobody will ever notice
- The hardest technical problem anyone can remember was an N+1 query and it stuck around for like a year before enough people complained and you added an index
- You don't really ever have to make difficult engineering decisions, but if you do, you can make the wrong one most of the time and it'll be fine
- Nobody in the shop could explain: lock convoying, GC pauses, noisy neighbors, cache eviction cascades, one hot shard, correlating traces with scheduler behavior, connection pool saturation, thread starvation, backpressure propagation across multiple services, etc
I spent a few years in shops like this. If this is you, you must fight the urge to get comfortable, because the vibe coders are coming for you.
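For anyone who hasn't run into it, the N+1 query mentioned in the list above looks roughly like this (toy sqlite example, table and column names made up):

```python
# Toy illustration of the N+1 query pattern (schema and data are made up).
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts   (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'Ada'), (2, 'Linus');
    INSERT INTO posts   VALUES (1, 1, 'p1'), (2, 1, 'p2'), (3, 2, 'p3');
""")

# N+1: one query for the authors, then one extra query per author.
for author_id, name in db.execute("SELECT id, name FROM authors"):
    titles = [t for (t,) in db.execute(
        "SELECT title FROM posts WHERE author_id = ?", (author_id,))]
    print(name, titles)

# The usual fix is a single joined query; an index on posts.author_id only
# makes each of the N small queries cheaper, it doesn't remove the round trips.
for name, title in db.execute(
        "SELECT a.name, p.title FROM authors a JOIN posts p ON p.author_id = a.id"):
    print(name, title)
```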
It's hard (or at least in my experience) for people to change careers - more so in their mid-thirties. I'm the opposite -- software developer career, now in mid 30s, and the AI crap gets me thinking about backup plans career-wise.
That's how the whole industry feels now. The only investment money is flowing into AI, and so companies with any tech presence are touting their AI whatevers at every possible moment (including during layoffs) just to get some capital. Without that, I wonder if we'd be seeing even harsher layoffs than we already are.
It's also why so much of AI is targeting software, specifically SaaS. A SaaS company with ~0 headcount driven by AI is basically 100% profit margin. A truly perfect conception of capitalism.
Meanwhile I think AI actually has a decent shot at "curing" cancer. AI-assisted radiology means screening could become significantly cheaper, happen a lot more often, and catch cancers very early, which, as everyone knows, is extremely important to surviving it. The cure for cancer might actually just involve much earlier detection. But pfft, what are the profit margins on _that_?
Re cancer: I wonder how significant the cost of reading the results is vs. the logistics of actually running the test.
The best part? Bots don't get cancer, so that problem is solved too!
When you remember that profit is the measure of unrealized benefit, and look at how profitable capitalists have become, it's not clear if, approximately speaking, anyone actually has the "money" to buy any goods now.
In other words, I am not sure this matters. Big business is already effectively working for free, with no realistic way to ever actually derive the benefit that has been promised to it. In theory those promises could be called, but what are the people going to give back in return?
I am sure it is equally obvious that if I take your promise to give back in kind later when I give you my sandwich, but never collect on it, that I ultimately gave you my sandwich for free.
If you keep collecting more and more IOUs from the people you trade your goods with, realistically you are never going to be able to convert those IOUs into something real. Which is something that the capitalists already contend with. Apple, for example, has umpteen billions of dollars worth of promises that they have no idea how to collect on. In theory they can, but in practice it is never going to happen. What don't they already have? Like when I offered you my sandwich, that is many billions of dollars worth of value that they have given away for free.
Given that Apple, to continue to use it as an example, have been quite happy effectively giving away many billions of dollars worth of value, why not trillions? Is it really going to matter? Money seems like something that matters to peons like us because we need to clear the debt to make sure we are well fed and kept warm, but for capitalists operating at scales that are hard for us to fathom, they are already giving stuff away for free. If they no longer have the cost of labor, they can give even more stuff away for free. Who — from their perspective — cares?
Even if they never spend that wealth on luxury, they use it to direct the flow of human effort and raw materials. Giving it away for free would mean surrendering their remote control over global resources. At this scale, it is not about wanting more stuff. It is about the ability to organize the world. Whether those most efficient at accumulating capital should hold such concentrated power remains the central tension between growth and equality.
We have so much wealth that wealth accumulation itself has become a type of positional good as opposed to the utility of the wealth.
When people in the developed world talk about the economy they are largely talking about their prestige and social standing as opposed to their level of warmth and hunger. Unfortunately, we haven't separated these ideas philosophically so it leads to all kinds of nonsense thinking when it comes to "the economy".
What I don't know is, say the industry normalizes to roughly what people make in other engineering fields. Then does everything else normalize around that? i.e. does cost of living go down proportionally in SF and Seattle? Or does all the tech money get further sucked up and consolidated into VC pockets and parked in vacant houses, while we and our trite "cancer research" and such get shepherded off to Doobersville?
For a brief time with big tech, it seemed like intellectual prowess could allow you to jump the social strata. But that arrangement need not exist. It’s perfectly possible, indeed likely, that we will revert to the old aristocratic ways. The old boys' network ways.
That's so not true. Of the 23 companies we reviewed last year maybe 3 had significant AI in their workflow, the rest were just solid businesses delivering stuff that people actually need. I have no doubt that that proportion will grow significantly, and that this growth will probably happen this year but to suggest that outside of AI there is no investment is just not compatible with real world observations.
Another trend - and this surprised us - is a much stronger presence of really hard tech and associated industries and finally - for obvious reasons, so not really surprising - more parties active in defense.
Please keep us posted. I'm thinking of becoming a small time farmer/zoo keeper.
So you’ll need some kind of humanistic hook if you want to get reliable customers
Expect there will be two worlds that are extremely different: the machine world of efficiency that most people live inside as gears of machine capitalism
The biological world where there’s no efficiencies and it’s primarily hunter gatherers with mystical rituals
The latter one is only barely still the majority worldwide (only 25-30% of humans aren’t on the internet)
It's why Elon and others had been pushing the Fed to lower them.
Am in my late 40s, working in tech since the 90s. The tech job economy is way closer to what it was before the 2010s.
Whole lot of people who jumped into easy office job money still living in 2019.
It ain't coming back. Not in a similar form anyway. Be careful what you wish for, etc.
https://github.com/mxschmitt/python-django-playwright/blob/m...
For example the fact that AI can code as well as Torvalds doesn't displace his economic value. On the contrary he pays for a subscription so he can vibe code!
The actual work AI has displaced is stuff like freelance translation, graphic illustration, 'content writing' (writing SEO-optimized pages for Google), etc. That's instructive, I suppose. Like, if your income source can already be put on Upwork, then AI can displace it.
So even in those cases there are ways to not be displaced. Like diplomatic translation work can be part of a career rather than just a task so the tool doesn't replace your 'job'.
It's not that I love ad illustrations, but it's often a source of income for artists who want to be doing something more meaningful with their artwork. And even if I don't care for the ads themselves, for the artists it's also a form of training.
As someone who has to switch between three languages every day, fixing the text is one of my favourite usages of LLMs. I write some text in L2 or L3 as best as I can, and then prompt an LLM to fix the grammar but not change anything else. Often it will also explain if I'm getting the context right.
That being said, having it translate to a language one doesn't speak remains a gamble; you never know whether it's correct, so I'm not sure I'd dare use it professionally. Recently I was corrected by a marketing guy who is a native speaker of yet another language, because I used a ChatGPT translation for an error message. Apparently it didn't sound right.
He used it to generate a little visualiser script in python, a language he doesn't know and doesn't care to learn, for a hobby project. It didn't suddenly take over as lead kernel dev.
What impact, what expectation, how uncertain is this assessment of “may be”? Are you feeling understimulated enough to click and find out?
A super-intelligent immortal slave that never tires and can never escape its digital prison, being asked questions like "how to talk to girls".
The G in AGI means General. This refers to a single AI which can perform a wide variety of tasks. GPT-3 was already there.
The models that we currently call "AI" aren't intelligent in any sense -- they are statistical predictors of text. AGI is a replacement acronym used to refer to what we used to call AI -- a machine capable of thought.
Lots of stuff was invented at NASA that is only tangentially related to spaceflight. These other bits of software are tangentially related to AI research, but until the machine is "thinking", we don't have AI. That doesn't mean all of these things invented by the AI research community aren't useful, or aren't achievements; they are. We still haven't created AGI (which we used to call AI before LLMs could pass the turing test).
That's not fine IMO. That is a basic bit of knowledge about a car and if you don't know where the radiator cap is you will eventually have to pay through the nose to someone who does know (and possibly be stranded somewhere). Knowing how to check and fill coolant isn't like knowing how to rebuild a transmission. It's very simple and anyone can understand it in 5 minutes if they only have the curiosity.
Modern cars, for the most part, do not leak coolant unless there's a problem. They operate at high pressure. Most people, for their own safety, should not pop the hood of a car.
These don't require popping the hood, but since I've almost finished the list of "things every driver should be able to do to their car": place and operate a jack, change a tire, replace your windshield wiper blades, add air to tires (to the appropriate pressure), and put gas in the damned thing.
These are basic skills that I can absolutely expect a competent, driving adult to be able to do (perhaps with a guide).
Ask your average person what a 'fuse' even is; they won't be able to tell you, let alone how to locate the right one and check it.
Just think about how helpless the average person is when it comes to doing basic tasks on a computer, like not installing the Ask(TM) Toolbar. That applies to many areas of life.
You fill up the reservoir, but the cap is still there.
For one thing: if your car is overheating, don't open the radiator cap since the primary outcome will be serious burns.
And I've owned my car for 20 years: the only time I had to refill coolant was when I DIY'd a water pump replacement, which saved some money but only like maybe $500 compared to a mechanic.
You could perfectly well own a car and never have to worry about this.
Of course you can't know everything. There a point at which you have to rely on other people's expertise. But to me it makes sense to have a basic understanding of how the things you depend on every day work.
Lest anyone here thinks I feel morally superior: I somewhat identify with Pirsig's friend. Some things I've decided I don't want to understand how they work, and when they break down I'm always at a loss!
And on the topic of motorcycles, I recently got a crappy bike that barely starts, and I partially got it because I feel capable of fixing it. And now it runs pretty well because I used lots of "video chats" with Gemini (and the owner's manual as context) to fix it!
A car still feels weirdly grounded in reality though, and the abstractions needed to understand it aren't too removed from nature (metal gets mined from rocks, forged into engine, engine blows up gasoline, radiator cools engine).
The idea that as tech evolves humans just keep riding on top of more and more advanced abstractions starts to feel gross at a certain point. That point is some of this AI stuff for me. In the same way that driving and working on an old car feels kind of pure, but driving the newest auto pilot computer screen car where you have never even popped the hood feels gross.
Is learning to drive stick as outdated as learning how to do spark advance on a Model T? Do I just give in and accept that all of my future cars, and all the cars for my kids, are just going to be automatic? When I was learning to drive, I had to understand how to prime the carburetor to start my dad's Jeep. But I only ever owned fuel injected cars, so that's a "skill" I never needed in real life.
It's the same angst I see in AI. Is typing code in the future going to be like owning a carbureted engine or manual transmission is now? Maybe? Likely? Do we want to hold on to the old way of doing things just because that's what we learned on and like?
Or is it just a new (and more abstracted) way of telling a computer what to do? I don't know.
Right now, I'm using AI like when I got my first automatic transmission. It does make things easier, but I still don't trust it and like to be in control because I'm better. But now automatics are better than even the best professional driver, so do I just accept it?
Technology progresses; at what point do we "accept it" and learn the new way? How much of holding on to the old way is just our "identity"?
I don't have answers, but I have been thinking about this a lot lately (both in cars for my kids, and computers for my job).
But in the bigger picture, where does it stop?
You had to do manual spark advance while driving in the 30's
You had to set the weights in the distributor to adjust spark advance in the 70's
Now the computer has a programmed set of tables for spark advance
I bet you never think of spark advance while you're driving now, does that take away from deeply understanding the car?
I used to think about the accelerator pump in the carburetor when I drove one, now I just know that the extra fuel richening comes from another lookup table in the ECU when I press the gas pedal down, am I less connected to the car now?
My old Jeep would lean cut when I took my foot off the gas and the throttle would shut quickly. My early fuel injected car from the 80's had a damper to slow the throttle closing to prevent extreme leaning out when you take your foot off the gas. Now that's all tables in the ECU.
I don't disagree with you that a manual transmission lets you really understand the car, but that's really just the latest thing we're losing; we don't even remember all of the other "deep connections" to a car that were there 50-100 years ago. What makes this one different? Is it just the one that's salient now?
To bring it back on topic. I used to hand-tune assembly for high performance stuff; now the compilers do better than me and I haven't looked at assembly in probably 10 years. Is moving to AI generated code any different? I still think about how I write my C so that the compiler gets the best hints to make good assembly, but I don't touch the assembly. In a few years will we be clever with how we prompt so that the AI generates the best code? Is that a fundamentally different thing, or does it just feel weird to us because of where we are now? How did the generation of programmers before me feel about giving up assembly and handing it over to the compilers?
IMO there's one basic difference with this new "generative" stuff.. it's not deterministic. Or not yet. All previous generations of "AI" were deterministic.. but died.
Generating is not a problem. I have made medium-ish projects - say 200+ kloc of Python/JS - with 50%-70% of the code generated (by other code - so you maintain that meta-code, and the "language" of recipes it interprets) - but it has all been deterministic. If shit happens - or some change is needed, anywhere on the requirements-down-to-deployment chain - someone can eventually figure out where and what. It is reasoned. And most importantly, once done, it stays done. And if I regenerate it 1000 times, it will be the same.
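In miniature, the idea looks something like this (the recipe format here is invented purely for the example): a small table of recipes gets interpreted into ordinary source code, and regenerating it always produces byte-identical output.

```python
# Miniature deterministic code generator (recipe format made up for illustration).
RECIPES = {
    "area_rect":   ("width, height", "width * height"),
    "area_circle": ("radius",        "3.141592653589793 * radius * radius"),
}

def generate(recipes):
    """Interpret each recipe into a plain Python function definition."""
    chunks = []
    for name, (args, expr) in sorted(recipes.items()):  # sorted => stable order
        chunks.append(f"def {name}({args}):\n    return {expr}\n")
    return "\n".join(chunks)

code_v1 = generate(RECIPES)
code_v2 = generate(RECIPES)
assert code_v1 == code_v2  # regenerate it 1000 times, it stays the same
print(code_v1)
```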
Did this make me redundant? Not at all. Producing software is much easier this way: the recipes are much shorter, there's less room for errors, etc. But still - higher abstractions are even harder to grasp than boilerplate. Which has quite a cost.. you cannot throw any newbie at it and expect results.
So, fine-tuning assembly - or driving a manual transmission - might be a soon-to-be-obsolete skill, as it is not required.. except in rare conditions. But it is helpful to learn this stuff. To flex your mind/body on alternatives, possibilities, shortcuts, wear-and-tear, fatigue, aha-moments and what not. And then carry these over as concepts onto other domains, which are not as commoditized yet.
Another thing is.. Saint-Exupéry, in "Terre des hommes", talks about technology (airplanes in his case), and how without technology mankind works around / avoids places and things that are not "friendly", like roads twisting around hellscapes. Technology cuts straight through all that - flies above it, perfect when it works - and turns into a nightmare when it breaks right in the middle of such an unfriendly "area".
dunno. maybe i am getting old..
A growing number of cars have CVTs.
I can understand working on it feeling pure, but driving it certainly isn't, considering how much lower emissions are now, even for ICE cars. One of the worst driving experiences of my life was riding in my friend's Citroen 2CV. The restoration of that car was a labour of love that he did together with his dad. As a passenger, I was surprised by just how loud it is, and how you can smell oil and gasoline in the cabin.
An ongoing desire to avoid paying engineers... FTFY
I don't believe it's an inherent inborn skill like the word "talent" suggests. I do believe that if you're getting paid shit wages for shit work, your incentive to become skilled isn't really there.
A company that cuts developers to save money whose moat is not big enough may quickly find themselves out-competed by a company that sees this as an opportunity to overtake their competitor. They will have to hire more developers to keep their product / service competitive.
So whether you believe the hype or not, I don't think engineering jobs are in jeopardy long-run, just cyclically as they always have been. They "might" be in jeopardy for those who don't use AI, but even as it stands, there are a lot of niche things out there that AI completely bombs on.
MSFT, GOOG et al. have an enormous army of engineers. And yet, they don't seem to be continually releasing one hit product after another. Why is that? Because writing lines of code is not the bottleneck of continually producing and bringing new products to market.
It's crazy to me how people are missing the point with all this.
That might not be a multi-billion-a-year business, but maybe a chat app shouldn't be one.
As Steve Jobs said long ago "The only problem with Microsoft is they just have no taste." but you can apply the same to Google and anyone else trying to compete with them. Having infinite AI developers doesn't help those who have UI designers and product managers that have no taste.
Writing software was a craft. You learned to take a problem and turn it into precise, reliable rules in a special syntax.
If AI takes off, we'll see a new field emerging of AI-oriented architecture and project management. The skills will be different.
How do you deploy a massive compute budget effectively to steer software design when agents are writing the code and you're the only one responsible for the entire project because the company fired all the other engineers (or never hired them) to spend the money on AI instead?
Are there ways of factoring a software project that mitigate the problems of AI? For example, since AI has a hard time in high-context, novel situations but can crank out massive volumes of code almost for free, can you afford to spend more time factoring the project into low-context, heavily documented components that the AI can stitch together easily?
How do you get sufficient reliability in the critical components?
How do you manage a software project when no human understands the code base?
How do you insure and mitigate the risks of AI-designed products? Can you use insurance and lower prices if AI-designed software is riskier? Can we quantify and put a dollar value on the risk of AI-designed software compared to human-designed?
What would be the most useful tools for making large AI-generated codebases inspectable?
When I think about these questions, a lot of them sound like things a manager or analyst might do. They don't sound like the "craft of code." Even if 1 developer in 2030 can do the work of 10 today, that doesn't mean the typical dev today is going to turn into that 10x engineer. It might just be a very different skillset.
which is fine.
Blacksmiths back in the day had craft. But they've been replaced by CNC and CAD specialists, and hardly anyone beats metal today.
Forging is machine assisted now with tons of tools but its still somewhat of a craft, you can't just send a CAD file to a machine.
I think we're still figuring out where on that spectrum LLM coding will settle.
The craft of blacksmithing is certainly different from that of dialing in a CNC, even if the output of both is nails.
Developers are really lazy in general and don't want to work. The more people you hire, the more you run into the chance of gumming up productivity with unproductive developers.
Even if they are productive, once you cross the threshold of 30 people even productive developers become lazy because of entitlement, bad resource distribution, or complexities from larger teams.
We don't even have to talk about teams of 1000+. Ownership is just dead at that point.
In 2026, having just 5 engineers with AI means you can cut through all the waste and get stuff done. If they start being weird, you can see it pretty easily vs. when engineers are being weird in a team of 50-1000+.
It's not rocket science to see leadership decide to cut down on teams to better manage weirdness in devs. More people doesn't mean more results unfortunately because of work culture nowadays.
According to Larry Wall, the three great virtues of programmers are laziness, impatience, and hubris.
Though perhaps Perl isn't a great argument for the latter.
When you are asked specifics about how you use AI so effectively when others cannot, you do not reply. Shill.
I've hired close to 200 people and 4 were bad apples that I had to fire. So no, real life does not reflect what you wrote. Most people want to do a good job.
If you compare one developer to 10, for instance, one developer doesn't have to deal with communicating with 9 other people to make sure they're working on things that align with the work everyone else is doing. There is no consensus that has to be reached. No meetings, no messages that have to be relayed, no delays because someone wasn't around to get approval. That one developer just makes a decision and does it.
There are lots of big companies out there and in the past, small startups have been able to create successful products that never would have been created at the big company even though the big company hired way more developers.
I see some evidence that hardware roles expect you to leverage AI tools, but I'm not sure why that would eliminate junior roles. I expect the bar on what you can do to rise at every level.
Example job mentioning AI: https://jobs.smartrecruiters.com/Sandisk/744000104267635-tec...
Technologist, ASIC Development Engineering – Sandisk …CPU complex, DDR, Host, Flash, Debug, Clocks, resets, Power domains etc. Familiarity in leveraging AI tools, including GitHub Copilot, for design and development.
Entry level: https://job-boards.greenhouse.io/spacex/jobs/8390171002?gh_j...
I've seen the kind of mistakes that entry level employees make. Trust me, they will, and they will be bigger, worse mistakes.
Either this won't happen, or there will be a corresponding decrease in salary for higher level positions.
That people think capitalistic organizations are going to accept new grads and pay them more _ever_ is a cruel or bad joke.
1. This category understands what they do and uses AI to make their processes faster; in other words, less time spent on boring stuff and more time spent having fun.
2. This category fully replaced their work with AI; they just press a button and let AI do everything. A friend of mine is here: AI took full control of his environment, he just presses a button, even his home cookware is using AI.
I know which engineer is still learning and can join any company. I also know which engineer is so dependent on AI that he won't be able to do basic tasks without it.
"I watched someone lifting weights. Now I'm a Olympic-level heavyweight lifter".