EQing the music being played sounds interesting; I'll look at the options. I tend to just keep it at a level low enough that I can hear all the speakers on the call.
Absolutely—I feel like I can ship at a crazy velocity now, like I have a team of interns at my disposal to code up my every silly demand.
It reminds me of this scene:
"Cut my eggs."
"Your eggs are cut, sir!"
"Cut my milk."
"I can't, sir, it's liquid."
"Imbecile! Freeze it, then cut it!"
I also wonder what type of simple CRUD apps people build that see such a performance gain? They must be building well-understood projects or be incredibly slow developers for LLMs to have such an impact, as I can't relate to this at all.
But I certainly wouldn’t assume that other people’s jobs are simple or boring just because they don’t look like yours.
Which is absolutely nothing to be ashamed of. But people shouldn't expect these gains for their job if they work in less well-understood environments.
Most non-CRUD AI code implementations are flawed/horrendous.
But for the rest of us, who have a mix of common/boring and uncommon/interesting tasks, accelerating the common ones means spending more time (proportionally) on less common tasks.
Unfortunately we don't seem to be great at classifying tasks as common or uncommon, and there are bored engineers who make complex solutions just to keep their brains occupied.
In my experience, it seems the people who have bad results have been trying to get the AI to do the reasoning. I feel like if I do the reasoning, I can offload menial tasks to the AI, and little annoying things that would take one or two hours start to take a few minutes.
That very quickly adds up to some real savings.
The ones who know what they want to do, how it should be done, but can't really be arsed to read the man pages or API docs of all the tools required.
These people can craft a prompt (prompt engineering :P) for the LLM that gets good results pretty much directly.
LLMs are garbage in, garbage out. Sometimes the statistical average is enough; sometimes you need to give it more details to use the available tools correctly.
Like the fact that `fd` has the `--exec` and `--exec-batch` options, so there's no need to use xargs or pipes with it.
90% of what the average (or median) coder does isn't in any way novel or innovative. It's just API Glue in one form or another.
The AI knows the patterns and can replicate the same endpoints and simple queries easily.
Now you have more time to focus on the 10% that isn't just rehashing the same CRUD pattern.
I hear this from people extolling the virtue of AI a lot, but I have a very hard time believing it. I certainly wouldn't describe 90% of my coding work as boilerplate or API glue. If you're dealing with that volume of boilerplate/glue, isn't it incumbent upon you to try and find a way to remove that? Certainly sometimes it isn't feasible, but that seems like the exception encountered by people working on giant codebases with a very large number of contributors.
I don't think the work I do is innovative or even novel, but it is nuanced in a way I've seen Claude struggle with.
It's the connectors that are 90-95% AI chow, just set it to task with a few examples and it'll have a full CRUD interface for your data done while you get more snacks.
Then you can spend _more_ of your limited time on the 10% of code that matters.
That said, less than 50% of my actual time spent on the clock is spent writing code. That's the easiest part of the job. The rest is coordinating and planning and designing.
I assumed you were only talking about the actual code. It still seems really odd. Why is there so much unavoidable boilerplate?
There's a certain amount of code I need to write just to handle the basic boilerplate of receiving the data and returning a result from the endpoint before I can get to the meat of it.
IIRC there are no languages where I can just open an empty file and write "put: <business logic>" and it magically knows how to handle everything correctly.
If it doesn't, then I feel like even in the JavaScript world of 2015 you could write "app.put("/mypath", business_logic)" and that would do the trick, and that was a very immature language ecosystem.
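For comparison, here's a minimal sketch of the same idea in today's Python world with FastAPI; the path, model fields, and handler body are hypothetical placeholders, and the framework handles request parsing, validation, and JSON serialisation around the business logic:

```python
# Minimal sketch of a modern-Python equivalent of app.put("/mypath", handler).
# The path, the Item fields, and the handler logic are hypothetical examples.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    tags: list[str] = []

@app.put("/mypath")
def update_item(item: Item):
    # <business logic goes here>
    return {"ok": True, "name": item.name}
```

The point being that the unavoidable boilerplate per endpoint is roughly a decorator and a type declaration, not pages of plumbing.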
> IIRC there are no languages where I can just open an empty file and write "put: <business logic>" and it magically knows how to handle everything correctly.
Are you sure it's done correctly? Take something like timestamps or validations: it's easy to get those wrong.
I set up a model in DBT that has 100 columns. I need to generate a schema for it (old tools could do this) with appropriate tests and likely data types (old tools struggled with this). AI is really good at this sort of thing.
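To make that concrete, here's a rough sketch of the kind of scaffolding involved; it assumes a CSV sample of the model's output at a hypothetical path, and the type guessing is deliberately crude (that's the part old tools did poorly and LLMs handle well):

```python
# Minimal sketch: generate a dbt-style schema skeleton from a CSV sample.
# Assumptions: "my_model_sample.csv" is a hypothetical export of the model;
# the guessed types and placeholder tests would need human (or LLM) review.
import csv

def guess_type(values):
    """Guess a likely column type from a handful of sample values."""
    non_empty = [v for v in values if v != ""]
    if not non_empty:
        return "varchar"
    if all(v.lstrip("-").isdigit() for v in non_empty):
        return "integer"
    try:
        [float(v) for v in non_empty]
        return "numeric"
    except ValueError:
        return "varchar"

with open("my_model_sample.csv", newline="") as f:
    rows = list(csv.reader(f))
header, sample = rows[0], rows[1:20]

print("models:")
print("  - name: my_model")
print("    columns:")
for i, col in enumerate(header):
    col_type = guess_type([r[i] for r in sample if len(r) > i])
    print(f"      - name: {col}")
    print(f"        data_type: {col_type}  # guessed from sample")
    print( "        tests: [not_null]      # placeholder test")
```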
Then you have to QA it for ages to discover the bugs it wrote, but the initial perception of speed never leaves you.
I think I'm overall slower with AI, but I could be faster if I had it write simple functions that I could review one by one, and have the AI compose them the way I wanted. Unfortunately, I'm too lazy to be faster.
Of course you need to check their work, but also the better your initial project plan and specifications are, the better the result.
For stuff with deterministic outputs it's easy to verify without reading every single line of code.
With that it can see any errors in the console, click through the UI, and take screenshots to analyse how it looks, giving it an independent feedback loop.
I had Claude Code build me a Playwright+Python-based scraper that goes through their movie section and stores the data locally to a SQLite database + a web UI for me to watchlist specific movies + add price ranges to be alerted when a price changes.
Took me maybe a total of 30 minutes of "active" time (4-5 hours real-time, I was doing other shit at the same time) to get it to a point where I can actually use it.
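For a sense of scale, the core scraping loop in that kind of project is only a handful of lines; here's a minimal sketch with Playwright's sync API and sqlite3, where the store URL and CSS selectors are hypothetical placeholders (the real ones come from inspecting the page):

```python
# Minimal sketch of the scraper core. The URL and selectors below are
# hypothetical; the real project also had a web UI, watchlists, and price
# alerts built on top of a table like this.
import sqlite3
from playwright.sync_api import sync_playwright

db = sqlite3.connect("movies.db")
db.execute("""CREATE TABLE IF NOT EXISTS movies (
    title TEXT PRIMARY KEY,
    price TEXT,
    seen_at TEXT DEFAULT CURRENT_TIMESTAMP)""")

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example-store.test/movies")      # hypothetical URL
    for card in page.locator(".product-card").all():    # hypothetical selector
        title = card.locator(".title").inner_text()
        price = card.locator(".price").inner_text()
        db.execute("INSERT OR REPLACE INTO movies (title, price) VALUES (?, ?)",
                   (title, price))
    browser.close()

db.commit()
db.close()
```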
Basically, small utilities for limited release (personal, team, company-internal) are what AI coding excels at.
Like grabbing results from a survey tool, adding them to a Google Sheet, and summarising the data to another tab with formulas. Maybe calling an LLM for sentiment analysis on the free-text fields.
Half a day max from zero to Good Enough. I didn't even have to open the API docs.
Is it perfect? Of course not. But the previous state was one person spending half a day for _each_ survey doing that manually. Now the automation runs in a minute or so, depending on whether Google Sheets API is having a day or not =)
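Roughly the shape of that automation, as a minimal sketch; the survey endpoint, token, sheet and tab names, and field names are hypothetical placeholders, it assumes gspread with a service account, and the LLM sentiment step is left out:

```python
# Minimal sketch of the survey-to-sheet automation. Assumptions: the survey
# tool exposes a JSON endpoint (hypothetical URL/token), gspread is configured
# with a service-account JSON file, and the field names are illustrative.
import requests
import gspread

results = requests.get(
    "https://survey.example.test/api/results",           # hypothetical endpoint
    headers={"Authorization": "Bearer SURVEY_TOKEN"},     # hypothetical token
).json()

gc = gspread.service_account(filename="service-account.json")
sheet = gc.open("Survey results")                         # hypothetical sheet name

raw = sheet.worksheet("Raw")
raw.append_rows([[r["respondent"], r["score"], r["comment"]] for r in results])

summary = sheet.worksheet("Summary")
summary.update_acell("A1", "Average score")
# Depending on gspread version/settings, formulas may need USER_ENTERED input.
summary.update_acell("B1", "=AVERAGE(Raw!B:B)")
```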
Their job is to do meetings, and occasionally add a couple of items to the HTML, which has been mostly unchanged for the past 10 years, save for changing the CSS and updating the js framework they use.
In the olden days, I'd imagine getting that right would take about a week and a half, and it'd be the part everyone hated about spinning up a new service.
With the LLM, I gave it a feedback loop: being able to do an initial sign-in, integration-test running steps with log reading on the client side, and a deploy and log-reading mechanism for the server side.
I was going to write an overseer-y script for another LLM to trigger the trial-and-error script, but I ended up just doing that myself. What I skipped was needing to run any one of the steps; instead I got nicely parsed errors, so I could go look for wikis on what parts of the auth process I was missing and feed those wiki links and such to the trial-and-error bot. I skipped all the log reading/parsing to get to the next actionable chunk, and instead I got to hang around in the sun for a bit while the LLM churned on test calls and edits.
I'm now on a cleanup step to turn the working code into nicely written code that I'd actually want committed, but getting to the working-code stage took very little of my own effort; only the problem solving and learning about how the auth works.
Agents finish, I queue them up with new low-hanging fruit, while I architect the much bigger tasks, then fire those off -> review the smaller tasks. It really is a dance, but flow is much more easily achieved when I do get into it; hours really just melt together. The important thing is to put my phone away and block any and all social media and sites I frequent, because it's easy to get distracted when agents are just producing code and you're sitting on the sidelines.
While programming, it's possible to get into a trance-like state where the program's logic is fully loaded and visible in your mind, and your fingers become an extension of your mind that wire you directly to the machine. This allows you to modify the program essentially at the speed of thought, with practically zero chance of producing buggy code. The programmer effectively becomes a self-correcting human interpreter.
Interrupting someone in this state is incredibly disruptive, since all the context and momentum is lost, and getting back into the state takes time and focus.
What you're describing is a general workflow. You can be focused on what you're doing, but there's no state loaded into memory that makes you more efficient. Interruptions are not disruptive, and you can pick up exactly where you left off with ease. In fact, you're constantly being interrupted by those agents running in the background, when they finish and you give them more work. This is a multitasking state, not flow.
So the article is correct. It's not possible to get into a flow state while working with ML tools. This is because it is an entirely different activity from programming that triggers different neural pathways.
When using ML tools you have no deep understanding of the behavior of the program, since you don't understand the generated code. If you bother to review the code, that is a huge context switch from anything you were doing previously. This doesn't happen during deeply focused programming sessions.
You may have been a software engineer for decades without ever experiencing the programming flow state. I'm not passing judgement.
At my first job in Silicon Valley, I used to code right on the production floor totally oblivious to what was going on.
Can we see this frontend code? For research purposes, of course.
Good nugget. Effective prompting, aside from context curation, is about providing the LLM with an approximation of your world model and theory, not just a local task description. This includes all your unstated assumptions, interaction between system and world, open questions, edge cases, intents, best practices, and so on. Basically distill the shape of the problem from all possible perspectives, so there's an all-domain robustness to the understanding of what you want. A simple stream of thoughts in xml tags that you type out in a quasi-delirium over 2 minutes can be sufficient. I find this especially important with gpt-5, which is good at following instructions to the point of pedantry. Without it, the model can tunnel vision on a particular part of the task request.
Without this it defaults to being ignorant about the trade-offs that you care about, or the relevant assumptions you're making which you think are obvious but really aren't.
The "simple stream" aspect is that each task I give to the LLM is narrowly scoped, and I don't want to put all aspects of the relevant theory that pertains just to that one narrow task into a more formal centralized doc. It's better off as an ephemeral part of the prompt that I can delete after the task is done. But I also do have more formal docs that describe the shared parts of the theory that every prompt will need access to, which is fed in as part of the normal context.
<prompt>
<assumptions>
<tech>LLMs understand metaphor but may default to literal interpretations unless nudged</tech>
<audience>Reader is tech-savvy but emotionally attuned</audience>
<bias>We romanticize loss and impermanence; permanence is sterile</bias>
<style>Poetic, but not florid. Think Calvino meets cyberpunk.</style>
</assumptions>
<worldModel>
<digitalMemory>
<definition>Not just data retention, but emotional proxies, identity scaffolding</definition>
<paradox>We remember too much, and thus forget how to forget</paradox>
<analogy>Digital memory is a hoarder’s attic with perfect climate control</analogy>
</digitalMemory>
<humanMemory>
<contrast>Fuzzy, selective, narrative-driven</contrast>
<value>Its gaps are what make it beautiful</value>
</humanMemory>
<systemInteraction>
<modelRole>LLM is not just a scribe, but a co-dreamer</modelRole>
<risk>Model may over-index on coherence, losing the emotional fray</risk>
</systemInteraction>
</worldModel>
<openQuestions>
<q>Can forgetting be designed into digital systems?</q>
<q>Is nostalgia a compression algorithm?</q>
<q>What does it mean to "misremember" in a world of perfect logs?</q>
</openQuestions>
<edgeCases>
<case>Model generates a dystopia instead of a melancholic tone</case>
<case>Model literalizes metaphors (e.g., "clouds forget")</case>
<case>Model assumes user wants a solution, not a meditation</case>
</edgeCases>
<bestPractices>
<tip>Use metaphor as scaffolding, not decoration</tip>
<tip>Let ambiguity breathe—don’t resolve every tension</tip>
<tip>Favor rhythm over precision in sentence structure</tip>
</bestPractices>
<meta>
<duration>Prompt typed in 2 minutes, no edits</duration>
<mood>Sleep-deprived but lucid</mood>
<goal>To make the model feel like it’s dreaming with me</goal>
</meta>
</prompt>
---
### The Fragility of Digital Memory in the Age of Infinite Storage
We live inside an archive that never closes. Hard drives hum like cathedrals of perfect recall, cloud servers drift like silent librarians in orbit, and every keystroke is another bone set in amber. Memory, once a trembling candle subject to drafts and time, now runs on battery backups and redundant arrays. Forgetting has been engineered out of the system.
And yet, the paradox: by remembering everything, we begin to lose the art of remembering at all. Human memory is a cracked mirror—crooked, selective, shimmering with distortions that make us who we are. Its gaps are the negative space where meaning lives. The story of a childhood is not its complete inventory but its torn edges, the blurred photograph, the half-forgotten lullaby whose missing notes we hum into being.
Digital memory, by contrast, is a hoarder’s attic with perfect climate control. Nothing rots, nothing fades, nothing dares to slip away. Every draft email unsent, every unflattering selfie, every midnight search query—all preserved in pristine sterility. The archive is so complete that it ceases to be a story. It becomes a warehouse of moments that never learned how to decay.
But memory without decay is less than human. Nostalgia itself might be the first compression algorithm: a lossy filter that turns clutter into resonance. Without the soft erasures of time, experience calcifies into raw data, and raw data has no mercy.
Perhaps what we need are systems that misremember. Databases that dream. Algorithms that allow certain files to fray at the edges, to grow fuzzy like old film reels, to tint themselves with the sepia of emotion rather than the fluorescence of metadata. A kind of deliberate forgetting—not loss as failure, but loss as design.
Because what is fragility if not the pulse behind memory’s worth? The hard drive never gasps, never sighs; only the human mind knows the ache of absence, the sweetness of something slipping away. If the future is an archive without end, perhaps our task is to reintroduce the possibility of disappearance. To let silence seep between the entries. To remind the machines that to truly remember, one must first learn how to forget.
---
Do you want me to lean this more toward *meditation* (open-ended, drifting) or *argument* (provoking design questions like “can forgetting be engineered”)?
> But memory without decay is less than human.
How is it less than human? By definition, the undecayed memory is more complete.
> Nostalgia itself might be the first compression algorithm: a lossy filter that turns clutter into resonance.
What is this even supposed to mean? I guess the idea here is something like "fuzzy" memory == "compression", but nostalgia is an emotional response - we're often nostalgic about clear, vivid memories, experiences that didn't lose their texture to time.
> Without the soft erasures of time, experience calcifies into raw data, and raw data has no mercy.
Eh... kinda. Calcifies is the wrong word here. Raw data doesn't have mercy, but lossily-compressed data is merciful? Is memory itself merciful? Or is it a mercy for the rememberer to be spared their past shames?
So much AI slop is like this: it's just words words words words without ideas behind them.
I suspect it doesn't matter how we feel about it mind you. If it's going to happen it will, whether we enjoy the gains first or not.
* setting aside whether this is currently possible, or whether we're actually trading away more quality than we realise.
That's why we should be against it but hey, we can provide more value to shareholders!
That dumb attitude (which I understand you’re criticising) of “more more more” always reminds me of Lenny from the Simpsons moving fast through the yellow light, with nowhere to go.
https://www.youtube.com/watch?v=QR10t-B9nYY
> I suspect it doesn't matter how we feel about it mind you. If it's going to happen it will, whether we enjoy the gains first or not.
That is quite the defeatist attitude. Society becoming shittier isn’t inevitable, though inaction and giving up certainly helps that along.
The hypothetical that we're 8x as productive but the work isn't as fun isn't "society becoming shittier".
We are very well paid for very cushy work. It's not good for anyone's work to get worse, but it's not a huge hit to society if a well-paid cushy job gets less cushy.
And presumably people buy our work because it's valuable to them. Multiplying that by 8 would be a pretty big benefit to society.
I don't want my job to get less fun, but I would press that button without hesitation. It would be an incredible trade for society at large.
I've seen plenty of people try, really try, to get into software development, but they just can't do it.
So I mean... Yeah
Is software more comfortable generally than many other lines of work? Yes probably
Is it always soft and cushy? No, not at all. It is often high pressure and high stress
All I can suggest is see a doctor as soon as possible and talk to them about it
I cannot remember events, conversations, or details about important things. I have partially lost my ability to code, because I get partway through implementing a feature and forget what pieces I've done and which pieces still need to be done
I can still write it, but the quality of my work has plummeted, which is part of why I'm off on leave now
2,350 contributions in 2021
2,661 contributions in 2022
381 contributions in 2023 <--- burnout
794 contributions in 2024 <--- recovery
1,632 contributions in 2025 (so far)
My recovery took about 18 months. It took time, and a lot of rest. I'd have to sleep like 12 hours a day sometimes.
I hope my recovery doesn't take that long, but if it does, it does.
I would rather give myself the space and time to really get better, rather than simply rush back to work and burn out again
1. 1 tablespoon of cold extracted cod liver oil EVERY MORNING
2. 30 min of running 3-4 times a week
3. 2-3 weight lifting sessions every week
4. regular walks.
5. cross train on different intellectually stimulating subjects. doing the same cognitive tasks over and over is like repetitive motion on your muscles
6. regularly scheduled "fallow mind time." I set aside 30 min to an hour every day to just sit in a chair and let my mind wander. it's not meditation. I just sit and let my mind drift to whatever it wants.
7. while it should be avoided, in the event that you have to crunch, RESPECT THE COOLDOWN. take downtime after. don't let your nontechnical leads talk you out of it. thinking hard for extended periods of time requires appropriate recovery.
the human brain is a complex system and while we think of our mind as abstract and immaterial, it is in reality a whole lot of physical systems that grow, change and use resources the same way any other physical system in your body does. just like muscles need to recover after a workout to get stronger, so too does your brain after extended periods of deep thinking.
But I am struggling to remember things I did not used to struggle with
Going to an event on a weekend with my wife and completely forgetting that we ran into a friend there. Not just "oh yeah, I forgot we saw them" - more like feeling my wife is lying to me when she tells me we saw them. Texting them to ask, and they agree we saw each other.
These are people I trust with my life so I believe they would not gaslight me, my own memory has just failed
Many examples like this, just completely blacking out things. Not forgetting everything, but blacking out large pieces of my daily life. Even big things
Disclaimer: talk to your doctor. I don’t know if your doctor can tell you whether this is a good idea, but it might help in some countries with good medical systems.
Software devs' jobs getting less cushy is no biggie; we can afford to amp up the efficiency. Teachers' jobs got "less cushy" -> not great for users/consumers or the people in those jobs. Doctors' jobs got "less cushy" -> not great for users/consumers or the people in those jobs. Even the jobs of waiters, check-out staff, and stockists at restaurants, groceries, and AMZ got "less cushy" -> not great for users/consumers or the people in those jobs, at least not when you need to call someone for help.
These things are not as disconnected as they seem. Businesses are in fact made up of people.
1. Efficiency measures, as we benchmark them, actually coupling with economic productivity overall
2. Monetary assessments of value in the context of businesses spending money corresponding with social value
3. The gains of economic productivity being distributed across society to any degree, or the effect of this disparity itself being negligible
4. The negative externalities of these processes scaling less quickly than whatever we're measuring in our productivity metric
5. Aforementioned externalities being able to scale to even a lesser degree in lockstep with productivity without crashing all of the above, if not causing even more catastrophic outcomes
I have very little faith in any of these assumptions
You're right in general, but I don't think that'll save you/us from OP's statement. This is simple economic incentives at play. If AI-coding is even marginally more economically efficient (i.e. more for less) the "old way" will be swept aside at breathtaking pace (as we're seeing). The "from my cold dead hands" hand-coding crowd may be right, but they're a transitional historical footnote. Coding was always blue-collar white-collar work. No-one outside of coders will weep for what was lost.
I suspect we'll find that the amount of technical debt and loss of institutional knowledge incurred by misuse of these tools was initially underappreciated.
I don't doubt that the industry will be transformed, but that doesn't mean that these tools are a panacea.
I also specifically used the term "misuse" to significantly weaken my claim. I mean only to say that the risks and downsides are often poorly understood, not that there are no good uses.
On the scale I’ve been doing this (20 years), that hasn’t been the case.
Rails was massively more efficient for what 90% of companies were building. But it never had anywhere near a 90% market share.
It doesn’t take 1000 engineers to build CRUD apps, but companies are driven to grow by forces other than pure market efficiency.
There are still plenty of people using simple text editors when full IDEs have offered measurable productivity boosts for decades.
> (as we're seeing)
I work at a big tech company. Productivity per person hasn’t noticeably increased. The speed that we ship hasn’t noticeably increased. All that’s happening is an economic downturn.
But AI seems to be different in that it claims to replace programmers, instead of augment them. Yes, higher productivity means you don't have to hire as many people, but with AI tools there's specifically the promise that you can get rid of a bunch of your developers, and regardless of truth, clueless execs buy the marketing.
Stupid MBAs at big companies see this as a cost reduction - so regardless of the utility of AI code-generation tools (which may be high!), or of the fact that there are many other ways to get productivity benefits, they'll still try to deploy these systems everywhere.
That's my projection, at least. I'd love to be wrong.
But no matter how hard cost cutters wanted to, they were never able to actually reduce the total number of devs outside of major economic downturns.
This feels like kicking someone when they’re down! Given the current state of corporate and political America, it doesn’t look likely there will be any pressure for anything but enshittification to me. Telling people at the coal face to stay cheerful seems unlikely to help. What mechanism do you see for not giving up to actually change the experience of people in 10 ish years time?
That isn't what they said tho. They said you have to do something, not that you should just be happy. Doing something can involve things that historically had a big impact in improving working conditions, like collective action and forming unions.
The opposite advice would be: "Everything's fucked, nothing you can do will change it, so just give up." Needless to say that is bad advice unless you are a targeted individual in a murderous regime or similar.
Correct. But it becoming shittier is the strong default, with forces that you constantly have to fight against.
And the reason is very simple: Someone profits from it being shittier and they have a lot of patience and resources.
Realizing that attitude in myself at times has given me so much more peace of mind. Just in general, not everything needs to be as fast and efficient as possible.
Not to mention the times where in the end I spend a lot of time and energy in trying to be faster only to end up with this xkcd: https://xkcd.com/1319/
As far as LLM use goes, I don't need moar velocity! So I don't try to min-max my agentic workflow just to squeeze out X more lines of code.
In fact, I don't really work with agentic workflows to begin with. I more or less still use LLMs as tools external to the process, using them as interactive rubber duckies: deciphering spaghetti code, doing a sanity check on code I wrote (and being very critical of the suggestions they come up with), getting a quick jump start on stuff I haven't used in a while (how do I get started with X or Y again?), that sort of stuff.
Using LLMs in the IDE and other agentic use is something I have worked with. But to me it falls under what I call "lazy use" where you are further and further removed from the creation of code, the reasoning behind it, etc. I know it is a highly popular approach with many people on HN. But in my experience, it is an approach that makes skills of experienced developers atrophy and makes sure junior developers are less likely to pick them up. Making both overly reliant on tools that have been shown to be less than reliable when the output isn't properly reviewed.
I get the firm feeling that the velocity crowd works in environments where they are judged by the number of tickets closed. Basically "feature complete, tests green, merged, move on!". In that context, it isn't really "important" that the tests that are green are also refactored by the thing itself, just that they are green. It is a symptom of a corporate environment where the focus is on these "productivity" metrics. From that perspective I can fully see the appeal of LLM-heavy workflows, as they most certainly will have an impact on metrics like "tickets closed" or "lines of code written".
It does when you are competing for getting and keeping employment opportunities.
If you're a salaried or hourly employee, you aren't paid for your output, you are paid for your time, with minimum expectations of productivity.
If you complete all your work in an hour... you still owe seven hours based on the expectations of your employment agreement, in order to earn your salary and benefits.
If you'd rather work in an output based capacity, you'll want to move to running your own contacting business in a fixed-bid type capacity.
There's legal distinctions between part time and full time employment. Hence, you are expected to put in a minimum number of hours. However, there's nothing to say that the minimum expectation is the minimum for classification for full time employment.
If AI lets you get the job done in 1 hour when you otherwise would have worked overtime, you're still technically being paid to work more than that one hour, and I don't know of any employer that'll pay you to do nothing.
My company has been preparing for this for a while now, I guess, as my backlog clearly has years' worth of work in it and positions of people who have left the org remain unfilled. My colleagues at other companies are in a similar situation. Considering round after round of layoffs, if I got ahead a little bit and found that I had nothing else to do, I'd be worried for my job.
> Society becoming shittier isn’t inevitable
Yes, I agree, but the deck is usually stacked against the worker, especially in America. I doubt this will be the issue that promotes any sort of collectivism.
If the structures and systems that are in-place only facilitate life getting more difficult in some way, then it probably will, unless it doesn't.
Housing getting nearly unownable is a symptom of that. Climate change is another.
That's stupid and detrimental to your mental health.
You do it in an hour, spend maybe 1-2 hours to make it even better and prettier and then relax. Do all that menial shit you've got lined up anyway.
I wish the hype crowd would do that. It would make for a much more enjoyable and sane experience on platforms like this. It's extremely difficult to have actual conversations about these subjects when there are crowds of fans involved who don't want to hear anything negative.
Yes, I also realize there are people completely on the other side as well. But to be honest, I can see why they are annoyed by the fan crowd.
Exactly, IME the hype crowd is really the worst at this. They will spend 8h doing 8 different 1h tries at getting the right context for the LLM and then claim they did it in 1h.
They claim to be faster than they are. There's a lot of mechanical turking going about as soon as you ask a few probing questions.
Long term the craftsperson writing excellent code will win. It is now easier than ever to write excellent code, for those that are able to choose their pace.
If anything, we'll see disposable systems (or parts), and the job of an SE will become even more like a plumber's: connecting prebuilt business logic to prebuilt systems libraries. When one of those fails, have AI whip up a brand new one instead of troubleshooting the existing one(s). After all, for business leaders it's the output that matters, not the code.
For 20+ years business leaders have been eager to shed the high overhead of developers via any means necessary while ignoring their most expensive employees' input. Anyone remember Dilbert? It was funny as a kid, and is now tragic in its timeless accuracy a generation later.
An earlier iteration of your reply said "Is that really winning?" The answer is no. I don't think any class of SE end up a winner here.
Maybe. I'm seeing the opposite - yes, the big ships take time to turn, but with the rise of ransomware and increasing privacy regulation around the world, companies are putting more and more emphasis on quality and avoiding incidents.
>In the capitalist mode of production, the generation of products (goods and services) is accomplished with an endless sequence of discrete, repetitive motions that offer the worker little psychological satisfaction for "a job well done." By means of commodification, the labour power of the worker is reduced to wages (an exchange value); the psychological estrangement (Entfremdung) of the worker results from the unmediated relation between his productive labour and the wages paid to him for the labour.
Less often discussed is Marx's view of social alienation in this context: i.e., workers used to take pride in who they are based on their occupation. 'I am the best blacksmith in town.' Automation destroyed that for workers, and it'll come for you/us, too.
But in fairness to human devs, most are still writing software that is leagues better than the dog shit AI is producing right now
AI slop code doesn't even work beyond toy examples.
Low quality software kills people.
Both will stay manual / require a high level of review; they're not what's being disrupted (at least in the near term) - it's the rest.
What was automated was the production of raw cloth.
This phenomenon is a general one… chainsaws vs hand saws, bread slicers vs hand slicing, mechanical harvesters vs manual harvesting, etc.
A large enough GDPR or SOX violation is the boogeyman that CEOs see in their nightmares.
The machines we’re talking about made raw cloth not clothing and it was actually higher quality in many respects because of accuracy and repeatability.
Almost all clothing is still made by hand one piece at a time with sewing machines still very manually operated.
“…the output of power looms was certainly greater than that of the handlooms, but the handloom weavers produced higher quality cloths with greater profit margins.” [1]
The same can be said about machines like the water frame. It was great at spinning coarse thread, but for high quality/luxury textile (ie. fine fabric), skilled (human) spinners did a much better job. You can read the book Blood in the Machine for even more context.
If your goal is to make 1000 of the exact same dress, having a completely consistent raw material is synonymous with high quality.
It’s not fair to say that machines produced some kind of inferior imitation of the handmade product, that only won through sheer speed and cost to manufacture.
When it is eventually made, though… it’s either aligned or we’re in trouble. Job cushiness will be P2 or P3 in a world where a computer can do everything economically viable better than any human.
They are brains. I think it's on you to prove they're the same, rather than assuming they're the same and then demanding proof they aren't!
Thing is though, many people don't know how to do that (user stories / acceptance criteria) properly, and it's been up to software developers to poke holes and fill in the blanks that the writer didn't think about.
If the SWE can finish his work faster, 8x faster in this case, then backlogs will also be pushed to complete 8x faster by the project manager. If there are no backlogs, new features will be demanded 8x faster / in greater volume by the sales team / clients. If no new features are needed, pressure will be applied by finance until costs are 8x lower. If there are no legal, moral, competitive, or physical constraints, the process will continue until either there's only a single dev working all of their available time, or working less time but for considerably less salary.
This is precisely the question that scares me now. It is always so satisfying when a revolution occurs to hold hands and hug each other in the streets and shout "death to the old order". But what happens the next morning? Will we capture this monumental gain for labor or will we cede it to capital? My money is on the latter. Why wouldn't it be? Did they let us go home early when the punch-card looms wove months' worth of hand work in a day? No, they made us work twice as hard for half the pay.
It's not really about excitement or enjoyment of work.
It's the fear about the __8x output__ being considered as __8x productivity__.
The increase in `output/productivity` factor has various negative implications. I would not say everything out loud. But the wise can read between the lines.
Overall, is this even a good thing? With this increase in output, I suspect we'll need to apply more pressure to people requesting features to ensure those requests are high quality. When each feature takes half the time to implement, I bet it's easy to agree to more features without spending as much time evaluating their worth.
I'm enjoying exactly what the author describes, so it's different strokes for different folks.
I always found the "code monkey" aspect of software development utterly boring and tedious, and have spent a lot of my career automating that away with various kinds of code generators, DSLs, and so on.
Using an LLM as a general-purpose automated code monkey is a near-ideal scenario for me, and I enjoy the work more. It's especially useful for languages like Go or Java where repetitive boilerplate is endemic.
I also find it helps with procrastination, because I can just ask an LLM to start something for me and get over the initial hump.
> whether we're actually trading away more quality that we realise.
This is completely up to the people using it.
This is what happens with big technological advancements. Technology that enables productivity won't free up people's time, but only set higher expectations of getting more work done in a day.
THE GREYBEARD You left your workroom in great disarray.
MICHELANGELO Because I had to fabricate the chair-legs To the quality as poor as it can be. I appeal’d for long, let me modificate, Let me engrave some ornaments on it.
They did not permit. I wanted as a chance The chair-back to change but all was in vain. I was very close to be a madman And I left the pains and my workroom, too. (stands back)
THE GREYBEARD You get house arrest for this disorder And will not enjoy this nice and warm day.
The best approach is to use AI only when you are stuck and looking for potential solutions, but we all know that is not going to happen unless you have extreme self-control.
If I listen to music, I can spend an hour CODING YEAH! and be all smug and satisfied, until I turn the music off and discover that everything I've coded is unnecessary and there is an easier way to achieve the same goal. I just didn't see it, because the creative part of my brain was busy listening to music.
From the post, it sounds like the author discovered the same thing: if you use AI to perform menial tasks (like coding), all that is left is thinking creatively, and you can't do that while listening to music.
I've never, ever, ever once in 40 years of coding listened to music while coding and later found the code "unnecessary" or anything of the sort.
I engage in many creative pursuits outside of coding, always while listening to music, and I can confidently say that music has never once interfered in the process or limited the result in any way.
I think this is individual; I have the same problem in social settings - if I'm having a conversation and a song I like is playing in the background, I sometimes stop listening to the conversation and focus on the music instead, unintentionally.
My solution is to listen to music without vocals when I need to focus. I've had phases where I listen to classical music, electronic stuff, and lately I've been using an app I found called brain.fm, which I think just plays AI-generated lo-fi or whatever, and there's some binaural beats thing going on as well that's supposed to enhance focus, creativity, etc. I like it, but sometimes I go back to regular music just because I miss listening to something I actually like.
Some work may allow for seamless pivoting between work vs. enjoyable distraction, e.g., a clerk, but I often hear about people listening to music in other contexts.
But also, this can create waste, in that people write the Best Code Ever in their flow state (while listening to music or not), but... it wasn't necessary in the first place and the time spent was a waste. This can waste anything from an hour to six months of work (honestly, I once had a "CTO" who led a team of three dozen people or so and actually went into his batcave at home for six months to write a C# framework of sorts that the whole company was supposed to use. He then quit and became self-employed, so the company had to re-hire him to make sense of the framework he wrote. I'm sure he enjoyed it very much though.)
In my experience, there is no good shortcut to this realization. Doing it wrong first is part of the journey. You might as well enjoy the necessary mistakes along the way. The third time’s the charm!
You may be experiencing getting to a different understanding of something when you switch context. Similar to when you are stuck: it may be better to go for a walk than to keep your head on top of a piece of paper or screen. I have had many of my breakthroughs while taking a shit in the toilet in the midst of working. Others experience similar things with showers and whatever.
Afaik most people listen to music during certain tasks because it helps them focus. Especially when working in a busy office, it really helps me to listen to certain kinds of predictable music to keep me from getting distracted. It creates a sort of entrainment that helps with attention.
Some people find music itself distracting; I myself find some kinds of music distracting, or music during certain types of tasks. Then it obviously does not fulfil its purpose.
At home I still plan and devise my own worlds with joy. I may use LLMs for boring or repetitive tasks, or for help or explanations; but I can still code better than the day before.
As usual, work != career.
Many programmers are rejecting AI coding because they miss the challenge they enjoy getting from conventional programming but this author finds it even more challenging. Or perhaps challenging in a different way?
I suspect that the type of programmer who enjoys vibe coding is the latter. For me it's pretty tiring to explain everything in excruciating detail, it's often easier to just write the code myself rather than explain in English how to write it.
It feels like I am just doing the hard part of programming all the time - deciding how the app should work and how the code should be structured etc, and I never get those breaks where I just implement my plan.
I think in code and programming concepts when I am writing software. I don't really know how to explain that, but I don't often feel like there is any friction between my thoughts and the code I make.
I think that many coders do not have this. They have an extra "translation" step that introduces friction into their workflow
I don't experience this friction, so LLM coding introduces new friction that I don't like
They do experience this friction, so LLM coding doesn't introduce new friction for them, it may transform their previous friction into a new form that is easier for them to navigate
I don't know. Maybe I'm talking out my ass, this is just a random theory based on no real evidence
It's not really that I think in code. It's more like code is as much a language to me as written English, or holding a pencil to draw something. I get a change request, or I read something from the docs, and the mental concept I have realigns itself to the new knowledge. Getting code out is always effortless. There's no difference between
return the list of names of all the books that have the fantasy tag
And return books.filter(b -> b.tags.contains('fantasy')).map(b -> b.name)
If I can describe something, I can code it. It's more or less just that there is little to no friction for me between thinking a thought and writing the code, compared to "translating" the thought into code.
E.g. a function to calculate things: just write it; want to plot some data: vibe it. I can see immediately if it's not quite right, and in a line I can ask for the change I need, zero cognitive effort required to fix the issue, and hey presto, we have what we want.
I think this stems from the fact that natural language is generally quite inefficient for logical statements / arguments / instructions, but can capture visual ideas far more efficiently than code can.
This is another "trust me" post about AI.
For instance, this week when setting up a Django/Wagtail project, GPT helpfully went ahead and created migration files as plain text instead of running "makemigrations". Otherwise it did a bang-up job and saved me a couple of hours.
Just no way I can get in the zone wrangling that kind of thing all day.
But I'm not sure getting in the coding zone frequently was all that mentally healthy so oh well.
If you say “you are praf” in Romanian it means that you are f*cked, wasted, done for, etc.
Because the people running the show are experienced software devs, not AI clowns.
(I also find AI repulsive and have no intention to engage in anything AI-related. Recently I turned off AI "help" in DDG.)
Got nothing done.
It definitely makes me faster, but it's a consistent prompt -> code-review -> prompt -> code-review -> scratch -> prompt -> code-review cycle, which just requires extreme focus.
While letting the AI write some code can be cool and fascinating, I really can't understand how:
- write the prompt (and you need to be precise and think about and express carefully what you have in mind)
- check/try the code
- repeat
is better than writing the code by myself. AI coding like this feels like a nightmare to me and it's 100x more exhausting.
Whenever I use an LLM I always need to review its output because usually there is something not quite right. For context I'm using VS copilot, mostly ask and agent mode, in a large brownfield project.
People keep comparing higher-level programming languages to lower-level abstractions - these comparisons are absolutely false. The whole point of higher-level programming languages is for people to get away from working with the lower level stuff.
But with the way software engineers are interacting with LLMs, they are not getting away from writing code because they have to use what comes out of it to achieve their goal (writing and piecing together code to complete a project).
I think the parallels are clear for those of us who have been through this scenario.
If I see an LLM consistently producing something I don't like, I'll either add the correct behavior to the prompt, or create a tool that will tell it if it messed up or not, and prompt it to call the tool after each major change.
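As a concrete illustration of the second option, here's a minimal sketch of such a check tool; the banned pattern and source layout are hypothetical stand-ins for "something I don't like", and the agent is told to run it after each major change and fix whatever it reports:

```python
#!/usr/bin/env python3
# Minimal sketch of a "did you mess up?" tool for an agent to run after each
# major change. The banned pattern and file glob are hypothetical examples;
# exit code 0 means OK, non-zero means the agent should fix and re-run.
import pathlib
import re
import sys

BANNED = re.compile(r"\bdatetime\.utcnow\(\)")   # hypothetical disliked pattern

failures = []
for path in pathlib.Path("src").rglob("*.py"):   # hypothetical source layout
    for lineno, line in enumerate(path.read_text().splitlines(), start=1):
        if BANNED.search(line):
            failures.append(f"{path}:{lineno}: use timezone-aware datetimes instead")

print("\n".join(failures) or "OK: no banned patterns found")
sys.exit(1 if failures else 0)
```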
LLMs are more like DNA transcription, where some percentage of the time it just injects a random mutation into the transcript, either causing an evolutionary advantage or a terminal disease.
This whole AI industry right now is trying to figure out how to always get the good mutation, and I don't think it's something that can be controlled that way. It will probably turn out that on a long enough timescale, left unattended, LLMs are guaranteed to give your codebase cancer.
In our current scenario, programmers are merely describing what they think the code should do, and another program takes their description and then stochastically generates code based on it.
It's a perfectly cromulent approach and skillset - but it's a wildly different one.
That's gone.
My impression of myself is the high leverage strategic work was always subconscious, but I never had to notice; my hands were busy so it felt like I'm "working".
And with the AI typing for me, it doesn't necessarily go that much faster. I've got to go lift weights, play my favourite vintage racing slop (Rush 2) or take a long shower. At some point the plan is revealed to me and I've got to poke the computer and make it do the next thing.
Very strange experience. All of it.
Brainstorming with LLM? Literal background noise like distant thunderstorm.
Pairing with LLM? Classical.
Solo? Still technical death metal.
This way we can allow AIs to do our work, but the AI can also tutor us and teach us the things necessary to verify their work.
Once your requests start to veer away from the direction you wanted (especially if, at any point, you transition to Cursor's "Free" mode, and it misunderstands and bulldozes all your previously-working features)
Not only is it harder; the physical stress caused by debugging with AI is something new. It gets some requests so incredibly wrong and follows directions so contrary to what you asked that your brain forgets it's talking to a computer, and reacts the way you would to a person who did all the wrong things here.
Version control has never been so important. I never use cursor without it. And even then, sometimes I just open the client, and a bunch of files changed.
I still stand by my previous response, which is about flow:
flow happens when your skills meet a sufficient challenge
AI has disrupted this by basically increasing your skills
when you have too many skills but not enough of a challenge, you feel boredom
if you have too much of a challenge and not enough skills, you feel anxiety
so you'd need a bigger challenge to feel like you're in flow if you have increased your skill ability