This is actually a really good point that I've noticed when using AI for side projects, i.e. on my own time. The allure of thinking "Oh, I wonder how it will perform with this feature request if I give it this amount of info."
Can't say I would put off sleep for it but I get the sentiment for sure.
With the latter, I find it a lot more challenging to cut my losses when it's on a good run (and often even when I know I could just write it by hand), especially because there's as much if not more intrigue about whether the tool can accomplish it or not. These are the moments when my mind has drifted to thinking about it in exactly the way you describe it here.
Previously, I'd have an idea and sit on it for a while. In most cases, I'd conclude it wasn't an idea worth investing in. If I decided to invest, I'd think of a proper strategy for approaching it.
With agentic development, I have an idea, waste a few hours chasing it, then switch to other work, often abandoning the thing entirely.
I still need to figure out how to deal with that; for now I just time-box these sessions.
But I feel I'm trading thinking time for execution time, and understanding time for testing time. I'm not yet convinced I like those tradeoffs.
Edit: Just a clarification: I currently work in two modes, depending on the project. In some, I use agentic development. In most, I still do it "old school". That's what makes the side effects I'm noticing so surprising. Agentic development pulls me down rabbit holes and makes me lose the plot and focus. Traditional development doesn't; its side effects apparently keep me focused and in control.
More importantly, as the problem becomes more complex, it matters more whether you know where the AI falls short.
Case study: Security researchers were having a great time finding vulnerabilities and security holes in Openclaw.
The Openclaw creators had a very limited background in security, so even though the AI built Openclaw almost entirely, the authors had to collaborate with security experts to secure the whole project.
That describes the majority of cases actually worth working on as a programmer in the traditional sense of the word. You build something to begin to discover the correct requirements and to picture the real problem domain in question.
That's one way. Another way is to keep the idea in your head (both actively and "in the background") for days/weeks, and then eventually sit down and write a document, and you'll get 99% of the requirements down perfectly. Then implementation can start.
Personally I prefer this hammock-style development, and to me it seems better at producing software that makes sense and solves real problems. Meanwhile, "build something to discover" is usually best when you're working with people who need to see something to believe there is progress, but the results are often worse and less well thought out.
It's better to have a solid, concrete idea of the entire system you want to build written down first, one that has ironed out the limitations, requirements, and constraints, before jumping into the code implementation or getting the agent to write it for you.
The build-something-to-discover approach is not for building robust solutions in the long run. Starting with the code before you know what you're solving, or getting the AI to generate something half-working that breaks easily and then changing it again until it becomes even more complicated, just wastes more time and tokens.
Someone still has to read the code and understand why the project was built on a horrible foundation and needs to know how to untangle the AI vibe-coded mess.
Before, I would narrow things down to only the most potentially economically viable, and laugh at idea guys who were married to the one single idea in their life as if it were their only chance, seemingly not realizing they were competing with people who get multiple ideas a day.
Back to the aforementioned epiphany: it reminds me of the world of Star Trek, where everything was developed out of curiosity and for its utility instead of for money.
> With agentic development, I have an idea, waste a few hours chasing it,
What's the difference between these 2 periods? Weren't you wasting time when sitting on it and thinking about your idea?
When you jump straight into execution because it’s easy to do so, you lose the distinction.
I prompt and sit there. Scrolling makes it worse. It's a good mental practice to just stay calm and watch the AI work.
If you're going to stay single-minded, why wouldn't you just write the code yourself? You're going to have to double check and rewrite the AI's shitty work anyway
Just like with CNC, though, you need to feed it the correct instructions. It's still on you for the machined output to do the expected thing. CNCs are also not perfect, and their operators need to know the intricacies of machining.
What domains do you work in? This description does not match my experience whatsoever.
What did you try to do where the LLM failed you?
It's about presenting externally as a "bad ass" while:
A) Constantly drowning out every moment of your life with low quality background noise.
B) Aggressively polluting the environment and depleting our natural resources for no reason beyond pure arrogance.
It seems perfectly fitting to me that Anthropic is using a wildly overcomplicated React renderer in their TUI.
React devs are the perfect use case for "AI" dev tools. It is perfectly tolerated for them to write highly inefficient code, and these frameworks are both:
A) Arcane and inconsistently documented
B) Heavily overrepresented in open-source
Meaning there are meaningful gains to be had from querying these "AI" tools for framework development.
In my opinion, the shared problem is the acceptance of egregious inefficiency.
This post should also link to the original source.
As per the submission guidelines [0]:
“Please submit the original source. If a post reports on something found on another site, submit the latter.”
[0] https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies...
And the initial gut reaction is to resist by organizing labor.
Companies that succumb to organized labor get locked into that speed of operating. New companies get created that adopt 'the new thing' and blow old companies away.
Repeat.
Yeah, like tech workers have similar rights to union workers. We literally have zero power compared to any previous group of workers. Organizing of labour can't even happen in tech, since tech has a large percentage of immigrant labour who have even fewer rights than citizens.
Also, there is no shared pain like union workers had; we've all been given different incentives and work under different corporations, so without shared pain it's impossible to organize. AI is the first shared pain we've had, and even this caused no resistance from tech workers. Resistance has come from the users, which is the first good sign. Consumers have shown more ethics than workers, and we have to applaud that. Any resistance to buying chatbot subscriptions has to be celebrated.
This isn't the place to kvetch about this; you will literally never see a unionization effort on this website because the accounts of the people posting about it will be [flagged] and shadowbanned.
I'm also curious as to what you do, where you do it, and who you work for that makes you feel like you have zero power.
The only winners here are CEOs/founders who make obscene money, liquidate/retire early while suckers are on the infinite treadmill justifying their existence.
"This time, its going to be the correct version of socialism."
On a separate note, I have the intensification problem in my personal work as well. I sit down to study, but, first, let me just ask Claude to do some research in the background... Oh, and how is my Cursor doing on the dashboard? Ah, right, studying... Oh, Claude is done...
Definitely not by posting on right-wing social media websites.
> I also worry that my desire to become a manager is in direct conflict with my desire to contribute to labor organization.
It is.
Those things don't excite you any more. Plus, the fact that you no longer exercise your brain at work any more. Plus, the constant feeling of FOMO.
It deflates you, faster.
But as far as output - we all have different reasons for enjoying software development but for me it's more making something useful and less in the coding itself. AI makes the fun parts more fun and the less fun parts almost invisible (at small scale).
We'll all have to wrestle with this going forward.
Many programmers became programmers because they found the idea of programming fascinating, probably back in their middle school days. Then they went on to become professionals. Then they burned out and, if they were lucky, transitioned to management.
Of course not everyone is like that, but you can't say it isn't common, right?
But I've found my way to what, for me, is a more durable and substantial source of satisfaction, if not excitement, and that is value. Excuse the cliche, but it's true.
My life has been filled with little utilities that I've been meaning to put together for years but never found the time for. My homelab is full of various little applications that I use, that are backed up and managed properly. My home automation does more than it ever did, and my cabin in the countryside is monitored and adaptive to conditions to a whole new degree of sophistication. I have scripts and workflows to deal with a fairly significant administrative load: filing and accounting are largely automated, and I have a decent approximation of an always up-to-date accountant and lawyer on hand. Paper letters and PDFs are processed like it's nothing.
Does all the code that was written at machine-speed to achieve these things thrill me? No, that's the new normal. Is the fact that I'm clawing back time, making my Earthly affairs orderly in a whole new way, and breathing software-life into my surroundings without any cloud or big-tech encroachment thrilling? Yes, sometimes - but more importantly it's satisfying and calming.
As far as using my brain - I devote as much of my cognitive energy to these things as I ever have, but now with far more to show for it. As the agents work for me, I try to learn and validate everything they do, and I'm the one stitching it all into a big cohesive picture. Like directing a film. And this is a new feeling.
Literal work junkies.
And what’s the point? If you’re working on your own project, then “just one more feature, bro” isn’t going to make the next Minecraft/Photopea/Stardew Valley/name your one-man wonder. If you’re working for someone, then you’re a double fool, because you’re doing the work of two people for the pay of one.
it's good that people so quickly see it as impulsive and addicting, as opposed to the slow creep of doomscrolling and algorithm feeds
at least I won't be vegetating at a laptop, or shirking other possible responsibilities to get back to a laptop
You never needed 1000s of engineers to build software anyway; Winamp and VLC were built by fewer than four people. You only needed 1000s of people because the executive vision is always to add more useless junk to each product. And now, with AI, that might be even harder to avoid. In the best case this would mean 1000s of do-everything websites in the future; in the worst case, billions of apps that each do one thing terribly.
The percentage of good, well-planned, consistent, and coherent software is going to approach zero in both cases.
So everything stays exactly the same?
Validation was always the hard part because great validation requires great design. You can't validate garbage.
My point here is not to roast Deltek, although that's certainly fun (and 100% deserved), but to point out that the bar for how bad software can be and still, somehow, be commercially viable is already so low it basically intersects the Earth's centre of gravity.
The internet has always been a machine that allows for the ever-accelerated publishing of complete garbage of all varieties, but it's also meant that in absolute terms more good stuff also gets published.
The problem is one of volume not, I suspect, that the percentages of good versus crap change that much.
So we'll need better tools to search and filter but, again, I suspect AI can help here too.
No, we get applications so hideously inefficient that your $3000 developer machine feels like it's running a Pentium II with 256 MB of RAM.
We get software that's as slow as it was 30 years ago, for no reason other than our own arrogance and apathy.
I do feel things in general are more "snappy" at the OS level, but once you get into apps (local or web), things don't feel much better than 30 years ago.
The two big exceptions for me are video and gaming.
I wonder how people who work in CAD, media editing, or other "heavy" workloads feel.
I’ve never seen a product/project manager questioning themselves: does this feature add any value? Should we remove it?
In agile methodologies we measure the output of the developers. But we don’t care whether that output carries any meaningful value to the end user/business.
I’ve seen many people (myself included) thinking the same: if I quit, or something happens to me, there will be no one who knows how this works or how to do this. It turned out the businesses always survived. There was a tiny inconvenience, but other than that: nothing. There is always someone willing to pick up or take over the task in no time.
I mean I agree with you, in theory. But that’s not what I’ve seen in practice.
To be fair, it is a hard question to contend with. It is easier to keep users happy when they don't know what they're missing than when they've lost something they now know they want. Even fixing bugs can upset users who have come to depend on the bug as a feature.
> In agile methodologies we measure the output of the developers.
No we don't. "Individuals and interactions over processes and tools". You are bound to notice a developer with poor output as you interact with them, but explicitly measure them you will not. Remember, agile is all about removing managers from the picture. Without managers, who is even going to do the measuring?
There are quite a few pre-agile methodologies out there that try to prepare a development team to operate without managers. It is possible you will find measurement in there, measuring to ensure that people can handle working without managers. Even agile itself recognizes in the 12 principles that it requires a team of special people to be able to handle agile.
The people making the buying decisions may not have a good idea of what maximises "meaningful value" but they compare feature sets.
Now comes the hard or impossible part: is it any good? I would bet against it.
Also using double equals to mutate variables, why?
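For illustration, here's a minimal hypothetical sketch in Python (the variable name is made up, not taken from whatever project that code came from) of why a double equals is a silent no-op rather than a mutation:

    # Hypothetical example: '==' compares, it does not assign.
    count = 0
    count == count + 1   # evaluates to False and the result is discarded; count is still 0
    count = count + 1    # actual assignment/mutation; count is now 1
    print(count)         # prints 1

If generated code is littered with lines like the second one, nothing ever actually changes at runtime, which would explain a lot.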
Many types of software have essential complexity and minimal features that still require hundreds/thousands of software engineers. Having just 4 people is simply not enough man-hours to build the capabilities customers desire.
Complex software like 3D materials modeling and simulation, or logistics software like factory and warehouse planning. Even the Linux kernel and userspace have thousands of contributors, and the baseline features (drivers, sandbox, GUI, etc.) that users want from a modern operating system cannot be delivered by a 4-person team.
All that said, there are lots of great projects with tiny teams. SQLite is 3 people. Foobar2000 is one person. The ShareX screenshot tool is, I think, 1 developer in Turkey.
So I wouldn’t use LLMs to produce significant chunks of code for something I care about. And publishing vibe coded projects under my own GitHub user feels like it devalues my own work, so for now I’m just not publishing vibe coded projects. Maybe I will eventually, under a ‘pen name.’
We just saw the productivity growth in the vibe coded GitHub outages.
Yeah, good luck with that.
Corporations have tried to reduce employee burnout exactly never times.
That’s something that starts at the top. The execs tend to be “type A++” personalities, who run close to burnout, and don’t really have much empathy for employees in the same condition.
But they also don’t believe that employees should have the same level of reward, for their stress.
For myself, I know that I am not “getting maximum result” from using LLMs, but I feel as if they have been a real force multiplier, in my work, and don’t feel burnt out, at all.
alpha sigma grindset
I think these companies have been manipulating social media sentiment for years in order to cover up their bunk product.
In the book, the researcher explains that when washing machines were invented, women faced a whole new expectation of clean clothes all the time, because washing clothes was much less of a labor. And statistics showed that women were actually washing clothes more often, and doing more work, after the washing machine was invented than before.
This happens with any technology. AI is no different.
That changes if you get it to write code for you. I tried vibe-coding an entire project once, and while I ended up with a pretty result that got some traction on Reddit, I didn't get any sense of accomplishment at all. It's kinda like doomscrolling in a way: it's hard to stop, but it leaves you feeling empty.
When washing machines were introduced, the number of hours spent on the chore of laundry did not necessarily decrease until 40 years after their introduction.
When project management software was introduced, it made the task of managing project tasks easier. One could create an order of magnitude or more of detailed plans in the same amount of time; poorly used, this decreased the odds of project success by eating up everyone's time. And the software itself has not moved the needle on the project success factors of completing within the budget, time, and resources planned.
What I personally find exhausting is Simon¹ constantly discovering the obvious. Time after time after time it’s just “insights” every person who smoked one blunt in college has arrived at.
Stop for a minute! You don’t have to keep churning out multiple blog posts a day, every day. Just stop and reflect. Sit back in your chair and let your mind rest. When a thought comes to you, let it go. Keep doing that until you regain your focus and learn to distinguish what matters from what is shiny.
Yes, of course, you’re doing too much and draining yourself. Of course your “productivity” doesn’t result in extra time but is just filled with more of the same, that’s been true for longer than you’ve been alive. It’s a variation of Parkinson’s law.
https://en.wikipedia.org/wiki/Parkinson%27s_law
¹ And others, but Simon is particularly prevalent on HN, so I bump into these more often.
Long story short, it was ugly and didn't really work as I wanted. So I'm learning Hugo myself now... The whole experience was kind of frustrating tbh.
When I finally settled in and did some hours of manual work, I felt much better because of it. I did benefit from my planning with Claude, though...
> Importantly, the company did not mandate AI use (though it did offer enterprise subscriptions to commercially available AI tools). On their own initiative workers did more because AI made “doing more” feel possible, accessible, and in many cases intrinsically rewarding.
Overheard a couple of conversations in the office how one IC spent all weekend setting up OpenClaw, another was vibe coding some bullshit application.
I see hundreds of crazy people in our company Slack just posting/reposting twitter hype threads and coming up with ridiculous ideas how to “optimize” workflow with AI.
Once this becomes the baseline, you’ll be seen as the slow one, because you’re not doing 5x work for the same pay.
These are internet cult victims.