LLMs are a very good example of that: they are a lot less efficient than the processes they replace in terms of energy. "We produce more with more energy" is the norm.
Someone recently said there will be "sustainable abundance" but this is magical thinking. There will be abundance for people in one class and death and poverty for people in the other class. But the abundance will not be "sustained" -- it will be fueled by the suffering of the under class.
In the industrial revolution, tractors and machine looms and steam engines did all the old 'menial' jobs. We just invented new jobs that pay better with all the extra wealth created by automation. The middle class grew and quality of life skyrocketed.
It was arguably the best thing to ever happen to the world.
Cheap raw materials do not affect the cost of everything else that must be done to prepare and deliver the product to the end user.
While the total volume of the Great Lakes is appropriately great, the areal recharge rate is much lower than one might expect. IIRC about half of their recharge is from groundwater, and so competes with aquifer withdrawals elsewhere.
Overuse is still a concern.
so far ime it's just 1000x more slop everywhere. fake emails at work, fake images on every app, fake text in every comment. and we are sooo productive because we can produce this crap faster than anyone can wade through it
But that's all a bunch of hype that may never come to pass. Some people don't want to hear any hype. Hoping is too much to take if you've been let down too hard before, or if the hype is for the rich and you're poor. It's there though, if you want to dream. Dream big.
Or will the cheapest provider gobble everyone else up and then raise prices to maximize marginal revenue / whatever everyone can bear? For example, imagine if Oracle (or whoever you want) somehow wins the AI price wars and completely owns every energy company, every housing company, every farm on earth, every bank, etc. You think they'll LOWER prices to make life easy for everyone? When they'll know exactly just how much you can pay each month to still survive and not riot?
If you want to go to conspiracy-theory land, the world is already controlled by 100 families, or corporations, or the Jews or the lizard people. We’re all just toys for them to play with.
At the end of the day, you have three options and a choice to make: corporations, government, or billionaires. Which one are you going to throw your lot in with to survive the upcoming upheaval?
AI is currently making energy more expensive. Shelter and these other commodities aren't made cheaper if population expands along with any capacity increases (like lanes on a highway). Lots of "ifs" in this statement that don't seem to match with observation of reality. The point of the discussion here is that AI in many ways is making workers less efficient.
"If", of course, but it's all but certain. The energy situation, today, is that we as a species depend on fossil fuels, and they are depleting. And we don't have a solution to replace them (renewables are not remotely replacing oil today, those who believe it say something like "we've done 1% of the job, it proves that we will reach 100%!").
AI is making us use more energy, and our use of energy is what is killing the world. Again, the consequences of abundant energy are global warming, mass extinction and political instability as the fossil fuels get less and less available.
One could simply work less. Even if you're full in the office, you can just use that extra time to learn something new.
So, fraud? If you put a fixed price based on 3 hours, that's of course fine. If you lie about how long work takes you, that's fraud.
Unless your bids are what you bill not what it takes, and you would bill the same 3 hours if it took you 4. In which case it's a fixed price under a different name.
You are absolutely correct from a business perspective. However, I just cannot shake the feeling that I am so 'over' this society that we have created. I'm near my breaking point, I swear. Each passing day, I just think living off the grid sounds better and better.
Again, I am not trying to dog you or anything. It's just that reading statements like yours reminds me how unfit I am for this society. It's a 'me problem', not a 'you/them problem.'
I know life today provides an abundance of boons. However, I sometimes wish I could live in a time where I could be the town blacksmith, cooper, tailor, etc.. My job would be to provide a role for my small community and we would all know and rely on each other. I'm not cut out for this hyper-optimized world.
Although the internet has globalized a lot of services, there's still local, labor intensive jobs that can never be scaled up like this.
To name a few:
- garbage man
- bus driver
- child care provider
- teacher
Tailors still exist. I went to one last week to get an inch off my pant legs.
https://www.kalzumeus.com/2006/08/14/you-can-probably-stand-...
I'm also stating if you choose to bill by the hour, not by the work product, you are legally and ethically required to bill by the actual time it takes.
- Initial build
- database integration
- accessibility
- speed
I'm not saying you're wrong to charge the full amount for the same job that used to take much longer. I fully support that, go get that bread! I'm just not down with writing 3h on an invoice when it took 15 mins.
But a response to the title: "_buzzword tech_ is making us work more" - it's rarely the tech making us work more, it's normally the behaviour and attitude of businesses trying to profit from the tech that makes life hard for everyone.
But such is that state of the UK that I had simply assumed the government had censored it. Remarkable how quickly expectations have shifted.
Tell me again how this isn't pure hell and the cuck chair?
Is this really how professionals work on such a problem today?
The times I've had to tune the responses, we'd gather bad/good examples, chuck them into a .csv/directory, then create an automated pipeline to give us a success-rate percentage for what we expect, then start tuning the prompt, inference parameters and other things in an automated manner. As we discover more bad cases, we add them to the testing pipeline.
Only if something was very wrong would you reach for model re-training or fine-tuning, or when you know up front the model won't be up for the exact task you have in mind.
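For what it's worth, a minimal sketch of that kind of eval loop. `run_prompt` is a stand-in for whatever inference API is in use, and the CSV layout (`input`/`expected` columns) is an assumption, not a standard:

```python
import csv

def success_rate(run_prompt, prompt_template, cases_path="cases.csv"):
    """Run every collected good/bad example through the prompt and
    report the fraction whose output matches what we expect.
    run_prompt(prompt_template, text) wraps the actual inference call."""
    hits = total = 0
    with open(cases_path, newline="") as f:
        for row in csv.DictReader(f):
            total += 1
            got = run_prompt(prompt_template, row["input"]).strip()
            if got == row["expected"].strip():
                hits += 1
    return hits / total if total else 0.0

# Tune the prompt and inference parameters, re-run, watch the percentage
# move; new bad cases found in production get appended to cases.csv.
```

The whole point is that the percentage, not vibes, tells you whether a prompt change helped.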
We've kept the LLM constrained to just extracting values with context, and we show the values to end-users in a review UI that shows the source doc and allows them to navigate to exactly the place in the doc where a given value was extracted. These are mostly numbers, but occasionally the LLM needs to do a bit of reasoning to determine a value (e.g., is this X, Y or Z type of transaction, where the exact words X, Y or Z will not necessarily appear). Any calculations that can be performed deterministically are done in a later step using a very detailed, domain-specific financial model.
This is not a chatbot or other crap shoehorned into the app. Users are very excited about this - it automates painful data entry and allows them to check the source - which they actually do, because they understand the cost of getting the numbers wrong.
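A sketch of the kind of record that makes that review UI possible. The field names here are illustrative, not the actual schema:

```python
from dataclasses import dataclass

@dataclass
class ExtractedValue:
    field: str                 # e.g. "transaction_type"
    value: str                 # raw value as extracted; no math done here
    page: int                  # where in the source doc it was found
    snippet: str               # surrounding text shown in the review UI
    needs_review: bool = True  # every value defaults to human review

# Deterministic calculations happen downstream in the financial model,
# never inside the extraction step.
record = ExtractedValue("transaction_type", "refinance", page=3,
                        snippet="...this refinance transaction closes on...")
```

Carrying the source location alongside every value is what lets users check the doc themselves instead of trusting the model.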
#andRepeat
And yes, I only talked about automation, but the same high-level issues apply to LLMs, but with different downsides: you need to check the LLM output which becomes a bigger topic, and then potentially your own skills stagnate as you rely on LLMs more and more.
I think individuals who get comfortable in their jobs don’t like automation arriving at their station because it upends the order of things just as they were feeling comfortable and stable. Being adaptable now is more important than ever.
> Products don't get better either, but that's more of a "shareholder value" problem than it is a specific technology problem.
This is broadly false. Your laptop is unquestionably better because it was constructed with the help of automated CNC machines and PCB assembly as opposed to workers manually populating PCBs.
Some companies can try to use automation to stay in place with lower headcount, but they’ll be left behind by competition that uses automation to move forward. Once that leap happens it becomes accepted as the new normal, so it never feels like automation is making changes.
I do actually plan on getting old, and as much as I would love to retire before I'm no longer adaptable, I'm not so sure my finances or my brain will comply.
>At home I save time because my dishwasher automates washing my dishes.
I don't think this fits my analogy, because you personally can go watch TV or read a book or exercise given the time that is saved by the dishwasher. At work, you must be at work doing something else, and the "something else" is seldom a real improvement. If I could automate my job and then go on a hike I'd be a lot more excited about it.
When you find an employer that is happy to pay people to not work, let me know because I also want to work there.
This was most employers during COVID :-)
I worked fewer hours, and still got more done than most of my team. Since I didn't come to office, no one knew. As long as I responded to emails/messages in a timely fashion, no one cared.
As someone on a salary, when the work is finished... I am too. What's overtime? My unvested shares are an incentive to save the place from immolation over the next N years. Where's this 'must be at work doing something else' in the contract, again?
"Where's the loyalty?" I hear someone ask. It passed with a family member and employers that had no compassion.
All this to say, I fully support your testing of the water. It's a strategy I've picked up/adapted, too. The poster above should enjoy the time saved by automation/hike. I shitpost.
We have a tendency to scream crisis while stock prices and market caps rapidly rise. Every little downturn is evidence for the cry, but that doesn't change the trend. They keep saying that the shareholders are the real customers, and they seem to be doing perfectly fine regardless of whether it's a hiring spree or a firing spree. Regardless, even, of a global pandemic.
There's 4 companies worth more than $3T, one more than $4T. 11 are worth more than $1T. It's only been 7 years since we broke that $1T barrier. Most of the growth has happened recently too. Even Apple has had bigger swings since the pandemic.
Idk, I don't think these companies are in trouble anywhere near what they claim. More concerning is this rapid growth in value without corresponding game changing products. Sure, we got AI but it hasn't changed the game like the iPhone did. I'd give up AI a lot sooner than I'd give up my smartphone, even if all it did was make calls, play music, and have a web browser. A pocket computer is very handy
Sure, some of it's made up, but the response is certainly genuine. I've never had to attend so many pointless Teams calls just to prove presence... until this started making the rounds.
I think it's broadly reasonable that you would only be paid for doing something someone else needs doing.
Also, almost everyone is a shareholder, directly or indirectly by being a taxpayer and shouldering the cost of pensions, which are invested in businesses.
Choosing to clean your own house instead of hiring a house cleaner, cooking your own food, doing your own landscaping, driving your own car, all of these are “classifying humans as a cost”.
I probably could afford a maid and landscaper, but I don’t because I would rather keep the money. When an employer does that, it is somehow different.
They will be less excited about the second order - a steady loss of revenue as whole professions are automated and people can't find a well paying job.
The third order will be even worse when no one has a job or money to buy anything.
People always point to the industrial revolution. But that created millions of jobs before it obsoleted millions of jobs - you needed workers to create tractors. This wave seems to be shaping up much more like what happened to the rust belt in the late 20th century, regions which still haven't recovered. However this time it'll hit pretty much everyone, everywhere.
Good luck with that capitalism.
Social/economic stratification (to a certain degree) makes sense as long as there is a reasonable amount of social mobility. AGI paired with advanced robotics seems as though it would all but eliminate social mobility. What would your options be? Politics, celebrity, or a small number of jobs where the human element is essential? I think the economic system needs to dramatically change if/when we reach that point (and ideally before, so people don't suffer in the transition).
Maybe you wouldn't, but you definitely should. Knowledge workers aren't paid for their labor (in the form of me trading my time and effort for wages), knowledge workers are paid for impact. I'm trading my ability to reason, decide, and create value for the company.
I'm valuable not because I sit at a desk and type for 8 hours. I'm valuable because the outputs of my thinking help move the company forward. My employer isn't buying 8 hours of my time , they're buying the outputs that come from expertise and judgement.
So if I automate something, the company still receives the same value they pay me for, whether I perform the task manually or build something that automates it. I work in ops, so if I use ansible and a script to automate patching 100 servers instead of doing it by hand, my employer gets the same result: patched systems. The automation didn't diminish my contribution, it proved it. I get paid the same either way.
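The ansible playbook itself is environment-specific, but the wrapper around it looks roughly like this. The hostnames and the ssh command are placeholders (the real call would be an ansible-playbook invocation), and `run` is injectable purely so the sketch can be exercised without real servers:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def patch_all(hosts, run=subprocess.run, workers=20):
    """Patch every host in parallel and return the list that failed.
    `run` defaults to subprocess.run; swap in a stub for testing."""
    def patch(host):
        result = run(["ssh", host, "sudo", "apt-get", "-y", "upgrade"],
                     capture_output=True)
        return host, result.returncode == 0

    with ThreadPoolExecutor(max_workers=workers) as pool:
        return [h for h, ok in pool.map(patch, hosts) if not ok]

# patch_all([f"web{i:03d}.example.internal" for i in range(100)])
```

Either way the employer gets patched systems; the automation just turns an afternoon of typing into a list of hosts to re-check.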
In essence, my salary is a retainer. It's payment to keep my expertise available, and working for my employers instead of someone else. It's not payment for activity or time.
I'm pretty sure your typical managers don't think so.
>In essence, my salary is a retainer. It's payment to keep my expertise available, and working for my employers instead of someone else.
>It's not payment for activity or time.
If the latter statement is true, then you must not have any mandatory hours to be present.
If you do have mandatory hours to be present, then the latter statement is not true.
Right, like drinking coffee at the kitchen in the office.
Speak for yourself, salary means I'm done when the work is. I encourage you to enjoy the hike, book, whatever. That said, I truly hate the induced demand LLMs offer.
My point is this: it's going to happen anyway. I refuse to over-extend [any more] to stave the inevitable. I'm in a good spot because I have a solid network (contacts/skills) and reasonable savings.
I'm sure the employer would be mad to know I'm posting right now, I don't care. Their fault for allowing me to automate!
Classic example is jeans. Modern jeans are ridiculously stretchy compared to "real" cotton denim because they contain tons of synthetic fibers. However, I run through jeans at an alarming pace - even compared to when I was a kid. They wear quickly, tear easily, and generally don't last.
[1] https://www.pbs.org/newshour/science/laundry-is-a-top-source...
This is a fundamentally flawed analogy, because the problems are inverted.
CNC and automated PCB assembly work well because creating a process to accurately create the items is hard, but validation that the work is correct is easy. Due to the mechanics of CNC, we can't manufacture something more precise than we can measure.
LLMs are inverted; it's incredibly easy to get them to output something, and hard to validate that the output is correct.
The analogy falls apart if you apply that same constraint to CNC and PCB machines. If they each had a 10% chance of creating a faulty product in a way that can only be detected by the purchaser of the final product, we would probably go back to hand-assembling them.
> Some companies can try to use automation to stay in place with lower headcount, but they’ll be left behind by competition that uses automation to move forward.
I suspect there will be a spectrum, as there historically has been. Some companies will use AI heavily and get crazy velocity, but have poor stability as usage uncovers bugs in a poorly understood codebase because AI wrote most of it. Others will use AI less heavily and ship fewer features, but have fewer severe bugs and be more able to fix them because of deep familiarity with the codebase.
I suspect stability wins for many use cases, but there are definitely spaces where being down for a full day every month isn't the end of the world.
I do think one primary difference between physical objects and software is we bother to have precise specifications that one can validate against, and I think that's what you're trying to get at. If all software had that then software could have an "easy" validation story too, I suppose.
I have mixed feelings about precise specifications in software. On the one hand, the hardware engineer in me thinks everything should have an exact specification. On the other hand, that throws away the "soft" advantage, which is important for some types of software. So there is a spectrum.
> I do think one primary difference between physical objects and software is we bother to have precise specifications that one can validate against
Having been on the hardware side and now on software (specifically ML), this is one of the biggest differences I've noticed. It's a lot harder to validate programs. But the part that concerns me more is the blasé or even defensive attitude. In physical engineering it often felt like "it's the best we can do for now", with people often talking about ideas and trying to make them work. It seemed of concern to management too. But in software it feels a lot more like "it gives the right output" and "it passes the test cases" (but test cases aren't always robust and don't have the same guarantees as in physical design), and call it done. The whole notion of Test Driven Development even seems absurd: tests are a critical part of the process, but to let them drive the process is absurd. It just seems people are more concerned with speed than velocity. A lack of depth, and I even frequently see denial of depth. In physical it seems like we're always trying to go deeper; in software it seems like we're always trying to go wider.

This isn't to say that's the case everywhere, but it is frequent enough. There are plenty of bad physical engineering teams and plenty of great software teams. But there are definitely differences in approaches and, importantly, differences in thresholds. The culture too. I've never had a physical engineer ask me "what's the value?", clarifying that they mean monetary value. I've had managers do that, but not fellow engineers. The divide between the engineering teams and the business teams was clearer. Which I think is a good thing. Engineers sacrifice profit for product; business sacrifices product for profit. The adversarial nature keeps balance.
Which I think we already see a fair amount of this in tech. Even as very tech literate people it can be hard to tell. But companies are definitely pushing to move fast and are willing to trade quality for that. If you're trying to find the minimum quality that a consumer is still willing to pay for, you're likely in a lemon market.
I mean, look at Microsoft lately. They can't even get Windows 11 right. There are clear quality-control issues that are ruining the brand, enough that us techies are joking that Microsoft is going to bring about the year of Linux, not because Linux has gotten better (also true) but because Microsoft keeps shooting itself in the foot. Or look at Apple with the new AirPods; they sound like shit. Same with Apple Intelligence and Liquid Glass. A big problem (which helps lemon markets come into existence and stay stable) is that competition is weak, with a very high barrier to entry. The market is centralized not only because of the momentum and size of existing players (still the major factor) but because it takes a lot of capital to even attempt to displace them. That's probably more money and more time than the vast majority of investors are willing to risk, and the only ones with enough individual wealth are already tied to the existing space.
I think you also have it exactly right about LLMs and AI. A good tool makes failures clear and easy to identify. You design failure modes, even in code! But these machines are designed for human preference. Our methods that optimize for truth, accuracy, and human sounding language simultaneously optimize for deception. You can't penalize the network for wrong outputs if you don't recognize they are wrong.
A final note: you say velocity, I think that's inaccurate. Velocity has direction. It's more accurate to say speed.
Look at all the other threads with people’s experiences. They aren’t unhappy with automation because they were comfortable. They are unhappy with automation because the reward for being more productive is higher expectations and no compensation.
People think the Luddite movement was smashing looms because they inherently hated technology. They smashed the looms because the factories were producing more and the result of that productivity was the workers becoming destitute.
If the machines and progress only bring about a worse life for individuals, those individuals are going to be against the machines
Now, if what you actually want is to be relatively more prosperous and have more status that's a game you can keep playing forever. But you really don't have to, to simply be better off than all people in the past with far less work.
All of my grandparents retired in their 50s with fat pensions and then lived into their late 80s without having ever stepped foot on a college campus.
The only place I can think of giving pensions at that age anymore is the military. And you aren’t getting a fat pension without being an officer which requires a degree
Everyone I grew up with or met via work that is my age or younger has 1-3 more degrees than their parents and grandparents and are significantly worse off when it comes to standard life milestones like buying a home or ever having children.
We are not becoming relatively more prosperous as a people. We have more bread and circuses and less roofs over our heads on average
For instance, I had to rename a collection of files almost following a pattern. I know that there are apps that do this and normally I’d reach for the Perl-based rename script. But I do it so irregularly that I have to install it every time, figure out how I can do a dry run first, etc. Meanwhile, with the Raycast AI integration that also supports Finder, I did it in the 10-15 seconds that it took to type the prompt.
There are a lot of tasks that you do not do often enough to commit them fully to memory, but every time you do them it takes a lot of time. LLM-based automation really speeds up these tasks. Similar for refactors that an IDE or language server cannot do, some kinds of scripts etc.
On the other hand LLMs constantly mess up some algorithms and data structures, so I simply do not let LLMs touch certain code.
It’s all about getting a feeling for the right opportunities. As with any tool.
> On the other hand LLMs constantly mess up some algorithms and data structures, so I simply do not let LLMs touch certain code.
See, these two things seem at odds to me. I suppose it is, to a degree, knowledge that you can learn over time: that an LLM is suitable for renaming files but not for certain other tasks. But for me, I'd be really cautious about letting an AI rename a collection of files, to the point that the same restrictions apply as would apply to a script: I'd need to create the prompt, verify the output via a dry run or test run, modify as necessary, and ultimately let the AI loose and hope for the best.
Meanwhile, I probably have a script kicking around somewhere that will rename a batch of files, and I can modify it pretty quickly to match a new pattern, test it out, and be confident that it will do exactly what I expect it to do.
Is one of these paths faster than the other? I'm not sure; it's probably a wash. The AI would definitely be faster if I was confident I could trust it. But I'm not sure how I can cross that threshold in my mind and be confident that I can trust it.
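The script in question is essentially this: a dry-run-by-default batch rename, where the regex pattern is whatever the current job needs (the pattern shown in the comment is a made-up example):

```python
import re
from pathlib import Path

def batch_rename(directory, pattern, replacement, apply=False):
    """Rename files whose names match `pattern` (a regex) using
    `replacement`. Dry run by default: returns the (old, new) plan
    and touches nothing until apply=True."""
    plan = []
    for path in sorted(Path(directory).iterdir()):
        new_name = re.sub(pattern, replacement, path.name)
        if new_name != path.name:
            plan.append((path.name, new_name))
            if apply:
                path.rename(path.with_name(new_name))
    return plan

# Inspect the plan first, then run again with apply=True:
#   batch_rename("scans", r"IMG_(\d+)", r"frame_\1")
```

That's the whole trust argument in miniature: the dry run shows exactly what will happen before anything is touched, which is precisely the step that's hard to get from a one-shot AI rename.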
Ultimately you are right, the buck needs to stop somewhere. But at least in my experience, the more quality/test checks you add to LLM workflows, the higher the rate of success.
Why? I never understand this level of caution, since don't we all use version control? Just feed it the prompt, and if it messes up, undo the changes.
This assumes you're working with text files.
What if you're working with ~100MiB (each!) frames from a scan of a 35mm movie?
(Note: This isn't fictional. I've worked with file-sets like this in film restoration many times.)
Too many people are trying to jump to the end when they don't even have their day-to-day managed or efficient. Getting efficient today tends to carry forward into a number of business workflows.
Checking the LLM output is required when it's not consistent, in many cases maintaining the benefit requires the human to know more on the subject than the LLM.
There are definitely many things which when automated loses out on some edge cases. But most folks don't need artisanal soap.
Without automation we would all be living in poverty.
The folks at the top know how susceptible we are to being nerd-sniped and how readily we will build these things for them.
The bigger problem I see is not automation, it is the exploitation of addictive behaviours to “capture attention”.
Bread is already so cheap as to not notice the price most of the time. But other goods and services are absolutely not that cheap. And there’s certainly higher quality that could be achieved, especially in areas like medicine. It is a lack of imagination to not see all the ways in which cheaper goods and services could improve our lives.
Instead all these automation tools are and will be used to cut corners and optimize on cost. Quality, peace-of-mind, and increased free time will be the sales pitch used to placate us plebes. But we all know what the executive dipshits will really care about.
Although, maybe going against the hedonic treadmill is against our nature. There’s always a nicer house in a better neighbourhood to work for. But I at least want more people to have the choice to work fewer hours through higher wages. That might not come for free with economic growth, but it certainly won’t come without it.
I think the concern is that true human+ AGI and advanced robotics would obsolete so many roles that it doesn't matter if things can be made more efficiently, because nobody will have any money at all. If/when AI can do my job better than me, it isn't giving me leverage, it is removing all leverage I have as someone who puts food on the table through labor.
In the interim period before that happens then sure, the automation is great for some people who can best leverage it.
But honestly, if we have this level of automation it feels like it would be very hard to predict how society will evolve. I would expect our current model of work-to-live to become untenable, and we’d move to something else. I doubt that transition will be easy.
Yes, it led to more work. What would take half a day could now be done in an hour. So we now had to produce 4x more.
I spent 4 years there automating left and right. Everyone silently hated me. One of the problems with my automation was that it allowed for more and more Q/A. And the more you check for quality issues, the more issues you'll find. Suddenly we needed to achieve 4x more, and that meant finding 4x more problems. The thing about automation is that it doesn't speed up debugging time. This leads to more stress.
One senior guy took me aside and said management would not reward me for my efforts, but will get the benefit of all my work.
He was right.
Eventually, I left because I automate things to make my life easier. If it's not making my life easier (or getting me more money), why should I do it?
Since then, whenever I get a new job, I test the waters. If the outcome is like that first job, I stop working on process improvements, and look for another job.
Business is stupid. They value busy-ness over productivity.
Also my experience with that first job. I would get the work done quicker than others, and leave around 5pm (most stayed beyond 6pm).
The message was clear: "There's always work to do. If you're getting work done early, you need to do more!"
I got worse ratings than people who achieved less. It also explains why coworkers refused to learn how to automate things.
Again: I automate to make my life easier. If it isn't working, I shouldn't do it.
Not stupid, just entitled to all of your innovation and productivity while you're on the clock (if waged) and off the clock (if you're salaried). If you've shown yourself to be an outlier - that's great for the business - and congratulations, you've set yourself a new baseline. Isn't class economics just delightful[1]?
The only employees who have a more direct linkage between productivity and income are sales folk, and it's boom or bust there. If you're an engineer that somehow doubles your employers profits, don't dream they'll double your salary, a once-off bonus is the best you can hope for, at the next evaluation cycle.
1. From each, according to his ability. To each, according to "market" rates, and his negotiation skills.
AI actually has some ability to improve things, at least when I think about manufacturing and farming. When you produce at such a massive scale, you could never individually inspect every potato or widget, or target every weed, etc. You could produce WAAAY more, but more bad products went out the door. Now you can inspect every individual thing. May not extend to every industry though.
When it was done, there were no bugs. Not a single issue. They asked the embedded guys how they had accomplished it. They said "we didn't know bugs were allowed".
Many people have never authored or even been involved with a high quality piece of software, so they just don't know what it looks like, or why you'd want it.
You'd think that someone in the exec team would have some personal pride and ownership in the code and would want to flush out bugs and improve quality. But nah.
The requests to my team are:
- build what product says
- close out 90% of the defects you find, by priority order
- deliver in the priority of feature > security > accessibility
- once delivered, move on to something else; we only have time to work for 3 months on an initiative before we move on
These requirements don't end up with a well-working product. They end up with gaps in the product, defects that are obvious, and a non-accessible site. Things take time to polish and be made right, but that's not what is requested. Wanting to iterate and measure isn't important because it's not more features.
We already do that on many levels -- compilers, linters, pre-commit hooks etc. Well, AI can just red-team and create new tests. The great thing about red-teaming vs blue teaming is that false positive and hallucinations don't hurt the final product. So you can let it go wild.
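One way to see why hallucinated red-team tests are harmless: vet every generated case against a trusted reference (or just current production behavior) before it ever enters the suite. This is a sketch of the idea, not any particular tool:

```python
def vet_generated_tests(tests, reference):
    """Keep only generated tests that the trusted reference passes.
    A hallucinated test fails here and is silently discarded, so it
    never pollutes the suite. Each test is an (args, expected) pair;
    `reference` is the known-good function."""
    kept = []
    for args, expected in tests:
        try:
            if reference(*args) == expected:
                kept.append((args, expected))
        except Exception:
            pass  # a test that crashes the reference is junk too
    return kept
```

Since rejection costs nothing, the generator really can "go wild": only the cases that encode true behavior survive to be run against new code.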
I hear this so often these days and I quite do not understand this part. If I trust an LLM to do "X", that means I have made a determination that the LLM is top-notch at "X" (if I had not made this determination, letting LLMs do X would be lunacy), and henceforth I do not give a flying hoot about knowing "X". If my "X" skills deteriorate, it is the same as when we got equipment to tend our corn fields and my corn-picking skills deteriorated. Of course I am being facetious here, but you get the point.
> potentially your own skills stagnate as you rely on LLMs more and more.
There were some papers from microsoft that highlighted this point https://www.microsoft.com/en-us/research/wp-content/uploads/...
First things were made by hand, slowly - they were expensive and you could make a living making things.
Now those things are made in factories.
And they are 99% automated - like where software is going.
And what's left is to be a mindless factory worker doing repetitive things all day for a living wage.
But hey, you are so productive - now you make 100k items in a day. Must feel nice.
Which is great, and has unblocked so much productivity, but I do miss some of the grunt work. I feel like it helped spawn new ideas and gave you some time to think through implementation.
Yes, and this is a meme I have in my mind of LLM engineers talking to each other, with a speech balloon: "If we could just get the right regex done in a few seconds we'd win the entire global programming community."
On SMH as a semiotic: funny how IMHO evolved into IMO and SMDH evolved into SMH. Some of us refuse to be humble or self-deprecating.
As jaded as that may be, I believe LLMs for many will become our bigger shovels.
I am now able to single-handedly create webapp MVPs, one of which is getting traction. If anything actually takes off, there will certainly be a need for a real dev to take over. Also, my commits are not "vibe coded." I have read every single LoC, and found so many issues that I am stunned that "vibe coding" is actually a thing. I do let the models run wild on prototypes, though.
I think that I happen to be in some magical sweet spot as a person who knows the words, kept up with tech, but not the syntax of framework xyz.
I thought this sweet spot was very transient, and I am very happy that the tools appear to be reaching a plateau for now, so I still have at least another year of being useful.
Since agentic dev tools arrived, I am having the time of my life while gladly working 60hrs per week.
I realize that I am an outlier, but is anyone else in this same boat? If you have product ideas, is this not the best time ever to build? All of our ideas are being indirectly subsidized by billions of VC & FAANG dollars. That is pretty freaking cool.
Yep. I have a computer science background but have always been "the most technical product management/marketing guy in the room". Now I'm having lots of fun building a SaaS and a mobile app to my standards, plus turning out micro-projects like pwascore.com in a day or two.
It turns out that I love designing/architecting products, just not the grind-y coding bits. Because I create lots of tests, use code analysis tools, etc., I'm confident that I'm creating higher quality code than (for example) what most outsourced coders are creating without LLMs.
I'm seeing the amount of change needed to produce new features with these AI coding tools constantly increasing, due to the absence of a proper foundation, and due to people's willingness to accept that, with the idea that "we can change it quickly."
It has become acceptable to push those changes in a PR and present them as your own, filled with filler comments that are instant tech debt, because they just repeat the code.
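An illustrative (made-up) snippet of the kind of filler comment meant here: it restates the code, so it adds nothing and must be re-edited whenever the code changes.

```python
# A tiny, hypothetical example of comment-as-tech-debt vs. a useful comment.
users = [{"name": "a", "active": True}, {"name": "b", "active": False}]

# Filler: "loop over the users and append active users to the list"
# says nothing the code doesn't already say, and goes stale on any edit.
active = []
for user in users:
    if user["active"]:
        active.append(user)

# A comment worth keeping explains *why*, not *what*:
# billing only counts users active this cycle (per the invoicing rules).
active = [u for u in users if u["active"]]
```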
And while I actually don't care who writes the code, I do expect the PR author to properly understand the code and most importantly, the impact on the codebase.
In my role as a mentor I now spend a lot of time looking at things written and wonder: Did the author write this, or did they AI? Because if the code is wrong, this question changes how the conversation goes.
It also impacts the kind of energy I'm willing to put into educating the other person about how things can be improved.
It forces a change in coding practice.
Which is a great idea until your superior asks why you're holding back the vibe coders and crippling their 100x productivity by rejecting their PRs instead of just going with the flow.
https://chatbotkit.com/reflections/why-ai-coding-agents-crea...
The tl;dr is that AI works like a multiplier on both sides of the equation. Not only will we work more, but we will get even more stressed because things will be moving at increasing speed -- perpetually, until we hit some limit, of course.
It seems relatively obvious to me that if a society has work as its cultural core then no amount of productivity increase will get rid of work - it would destabilize the entire society before it could do so.
I just wrote this comment in another thread, but it fits here too:
The development, production and use of machines to replace labour is driven by employers to produce more efficiently, to gain an edge and make more money.
You, as an employee, are just means to an end. "The company" doesn't care about you and you will not reap the benefits of whatever efficiency improvements the future brings.
No one is 100% correct all the time. Leaning on an AI model because it glazes you 24/7 and tells you that you are correct 100% of the time doesn't mean it's right about you; it's just a seductive trap to fall into, and the models are very good at telling you the next thing you want to hear.
Personally, in the times I've had the most time off, I find that I am more productive, but that doesn't matter to any employer.
I guess you missed the part where people worked 7 days a week and 10 hours per day, and we didn't have ~20% of the population retired.
Unless we break social contracts, in 30 years ~40% of the population will be retired in large parts of The West and China.
If you're still working 40 hours a week, doing basically nothing but posting on HN, going to the gym, having lunch for hour+ breaks, for most of the work day - you might think nothing has changed.
But for 10-20% more of the population to not be working, there's a huge number of hours that aren't being worked.
It's just that most of the gains are going to one group of people.
Most of us will be in that group by that time...
On the other hand, LLMs in hands of a misinformed team member who doesn't actually give any fucks whatsoever, is like a time bomb waiting to torpedo a project.
I thought it was threat of being fired and left without means to pay for rent and food?
The ruling class will work you as much as possible – to the point of death – unless stopped via labor co-ordination, mass strikes, and force.
The 40-hour week w/ fair overtime, fair breaks, etc. could be enforced and expanded to include software engineering.
The Wired article seems to be mostly focused on situations where employees are compensated for working this lifestyle. Aside from that - they discuss how AI _founders_ are doing this to keep up with things. The former surprises me (a little - it wouldn't surprise me to see companies doing this _somewhere_ in the US pre-ChatGPT). The latter doesn't really surprise me at all. Typical founder hustle culture.
A better title would be "AI is making startup founders hustle harder [and they are trying to normalize this workload across their (small but growing) companies]" -- NOT "AI Is Making [All Of] Us Work More".
Note also, compilers automated the process of machine instruction generation - quite a bit more reproducibly than 'prompt engineers' are able to control the output of their LLMs. If you really want the LLMs to generate high-quality programming code to feed to the compiler, the overnight build might have to come back.
Also, in many fields the processes can't be shut down just because the human needs to sleep, so you become a process caretaker, living your life around the process - generally this involves coordinating with other human beings, but nobody likes the night shift, and automating that makes a lot of sense. Eg, a rare earth refinery needs to run 24/7, etc.
Finally, I've known many grad students who excelled at gaming the 996 schedule - hour long breaks for lunch, extended afternoon discussions, tracking the boss's schedule so when they show up, everyone looks busy. It's a learned skill, even if overall this is kind of a ridiculous thing to do.
Every moment I don't spend prompting, I'm falling behind.
The system insidiously guilts you for not leveraging it constantly. Allowing AI to sit there, just waiting, feels like a waste. It's subtle, but corrosive. Any downtime becomes a missed opportunity, and rest turns into inefficiency. Within this framework, leisure becomes a moral failure.
I don't feel that way.
What I feel is that in 2025 we're still the bottleneck because we make decisions. In 2026 we'll automate the QA part, and then we'll be able to fan out and fan in a lot of solutions at scale. Those who remove the bottlenecks in business beat everyone else. That is why FAMGA and the big tech companies are at the top of Wall Street. Biggest bang for the buck.
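One way to picture that fan-out/fan-in pattern, with made-up `generate_candidate` and `qa_score` functions standing in for a model and for automated QA:

```python
from concurrent.futures import ThreadPoolExecutor

def generate_candidate(seed: int) -> str:
    """Stand-in for a model producing one candidate solution."""
    return f"solution-{seed}"

def qa_score(candidate: str) -> int:
    """Stand-in for automated QA; higher is better (toy scoring)."""
    return len(candidate) + int(candidate.split("-")[1])

# Fan out: produce many candidate solutions in parallel.
with ThreadPoolExecutor(max_workers=4) as pool:
    candidates = list(pool.map(generate_candidate, range(8)))

# Fan in: automated QA picks the winner, so the human decision
# moves to the very end of the pipeline instead of every step.
best = max(candidates, key=qa_score)
print(best)
```

The point of the sketch is structural: once QA is automated, the number of candidates you explore is limited by compute, not by human attention.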
Founders have been doing stupid signalling for ages to seem like they are more worthy of VC funding. A single anecdote in a podcast about a badly written Wired article based on a few anecdotes from hustle culture founders does not make something true.
Working 80 hour weeks for low pay and high expected upside has ALWAYS been SV software culture.
The individual leverage of an experienced software developer has never been higher.
The dream of automation was always to fix that. We did that, and more. We have long had the technology to provide for people. But we invent tons of meaningless, unnecessary jobs and still cling to the "jobs" model because that's all we know. It's the same reason vacuum cleaners didn't reduce the amount of cleaning work to be done. We never say "great, I can do less now because I have a thing to do it for me." That thing just enables me to fixate on the next thing "to be done." The next dollar to be gained.
A McDonalds robot should free the people of doing that kind of work. But instead those people become "unemployed" and one individual gets another yacht and creates a couple "marketing" jobs that don't actually provide any value in a holistic humanitarian sense.
Those are cold comfort if compensation isn't enough or the job ruins your health, but I think their absence becomes important if you talk about popular UBI or "end of work" scenarios.
That's why I think even if we had some friendly tech company that did All The Jobs for free using automation and allowed everyone to live a comfortable life without even needing an income, and even if we changed the culture so that this was totally fine, it would still be a dystopia, or at least risk drifting into one very quickly. While everyone could live a happy, fully consumption-oriented life, they'd have zero influence over how to live that life: if the company does everything for you that needs doing, it also has all the knowledge and power to set the rules.
I'm not doubting you, btw... I've seen others here on HN also saying that they burn through money with AI, I guess I'm just missing something.
In fact, the geek in me absolutely wants to know what's going on, because you have probably found something that I would love to know about! :)