Yes, I spend time on writing prompts. Like "Never do this. Never do that. Always do this. Make sure to check that." To tell the AI my coding preferences. But those prompts are forever. I wrote most of them months ago, so now I just capitalize on them.
I let the AI implement features on its own, then look at the commit diffs and use Vim to fine-tune them.
I wrote my own tool for it. But I guess it is similar to Cursor, Aider and many other tools that do this. Also to what Microsoft offers via the AI "edit" tool I have seen in GitHub Codespaces. Maybe that is part of VS Code?
I have not tried them, but I guess Aider, Cursor and others offer this? The one I did try is Copilot in "edit" mode on GitHub Codespaces, and it seems similar.
That said, when I had to write a Terraform project for a backend earlier this year, that’s when generative AI really shined for me.
I do full stack projects, mostly Python, HTML, CSS, Javascript.
I have two decades of experience. Not just my work time during these two decades but also much of my free time. As coding is not just my work but also my passion.
So seeing my productivity double over the course of a few months is quite something.
My feeling is that it will continue to double every few months from now on. In a few years we can probably tell the AI to code full projects from scratch, no matter how complex they are.
There is a lack of training data: Apple's docs aren't great or really thorough, and much of the documentation is buried in WWDC videos. Following Stack Overflow posts requires understanding how the APIs evolved over time to avoid confusion, which trips up newcomers as well as code generators. Stack Overflow is also littered with incorrect or outdated solutions to iOS/Swift coding questions.
With Swift it was somewhat helpful, but not nearly as much. I eventually stopped using it for Swift.
I read each line of the commit diff and change it if it is not how I would have done it myself.
For me it’s been up to 10-100x for some things, especially starting from scratch
Just yesterday, I did a big overhaul of some scrapers that would have taken me at least a week to get done manually (maybe 2-4 hrs/day for 5 days, ~15 hrs). With the help of ChatGPT, I was done in less than 2 hours.
So not only was it less work, it was also a much shorter delivery time
And a lot less stress
Personally, I do try to keep a comment at the top of every major file, with bullet points explaining the main functionality implemented and the why
That way, when I pass the code to a model, it can better “understand” what the code is meant to do and can provide better answers
(A lot of times, when a chat session gets too long and seems like the model is getting stuck without good solutions, I ask it to create the comment, and then I start a new chat, passing the code that includes the comment, so it has better initial context for the task)
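As a sketch of what such a header can look like (the file name, bullets, and function are all hypothetical, just to show the shape of the practice):

```python
# scraper_orders.py
#
# Summary comment kept at the top of the file so an LLM (or a future
# reader) gets the "why" along with the "what" as initial context.
#
# What this module does:
# - Fetches order pages from the vendor portal and parses them into dicts
# - Retries transient HTTP failures with exponential backoff
# - Caches raw responses on disk so reruns don't hammer the server
#
# Why it is built this way:
# - The portal has no API, so we scrape server-rendered HTML
# - Caching exists because the portal rate-limits aggressively

def parse_order_row(cells: list[str]) -> dict:
    """Turn one table row (a list of cell texts) into an order record."""
    return {"id": cells[0], "status": cells[1], "total": float(cells[2])}
```

Pasting a file that starts like this into a fresh chat tends to get better answers than pasting the bare code, since the model doesn't have to reverse-engineer the intent.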
If the training data often contains certain mistakes, the model is more likely to reproduce them.
Unless there are preprogrammed rules to prevent them.
As a side note, most good coding models now are also reasoning models, and spend a few seconds “thinking” before giving a reply
That’s by no means infallible, but they’ve come a long way even just in the last 12 months
But, it did require passing tests
Most of the changes in the end were relatively straightforward, but I hadn’t read the code in over a year.
The code also implemented some features I don’t use super regularly, so it would’ve taken me a long time to load everything up in my head and fully understand it enough to confidently make the necessary changes
Without AI, it would also have required a lot of Google searches to find documentation and instructions for setting up some related services that needed to be configured
And, it would have also taken a lot more communication with the people depending on these changes + having someone doing the work manually while the scrapers were down
So even though it might have been a reduction of 15hrs down to 1.5hrs for me, it saved many people a lot of time and stress
But, from my experience now, I’d happily use AI to build the tests
At the end of the day: 1) a human is the ultimate evaluator of the code results anyway, 2) the thing either works or it doesn’t
Have you tested them across different models? It seems to me that even if you manage to cajole one particular model into behaving a particular way, a different model would end up in a different state with the same input, so it might need a completely different prompt. So all the prompts would become useless whenever the vendor updates the model.
Like, just stop and think about it for a second. You're saying that AI has doubled your productivity. So, you're actually getting twice as much done as you were before? Can you back this up with metrics?
I can believe AI can make you waaaaaaay more productive in selective tasks, like writing test conditions, making quick disposable prototypes, etc, but as a whole saying you get twice as much done as you did before is a huge claim.
It seems more likely that people feel more productive than they did before, which is why you have this discrepancy between people saying they're 2x-10x more productive vs workplace studies where the productivity gain is around 25% on the high end.
I see it happening right in front of my eyes. I tell the AI to implement a feature that would take me an hour or more to implement and after one or two tries with different prompts, I get a solution that is almost perfect. All I need to do is fine-tune some lines to my liking, as I am very picky when it comes to code. So the implementation time goes down from an hour to 10 minutes. That is something I see happening on a daily basis.
Have you actually tried? Spent some time writing good prompts, used state-of-the-art models (o3 or Gemini 2.5 Pro), and let the AI implement features for you?
So, even if AI helps you write code twice as fast, it does not mean that it makes you twice as productive in your job.
Then again, maybe you really have a shitty job at a ticket factory where you just write boilerplate code all day. In which case, I'm sorry!
But working on features that can fit within a timebox of "an hour or more" takes up very little of my time.
That's what I mean, there are certain contexts where it makes sense to say "yeah, AI made me 2x-10x more productive", but taken as a whole just how productive have you become? Actually being 2x productive as a whole would have a profound impact.
Compared to back then, the amount of work is about the same, or maybe a bit more. But the big difference is the amount of data being processed and kept, which has increased exponentially since then and is still increasing.
So I expect the same with AI, maybe the work is a bit different, but work will be the same or more as data increases.
I understand your point but it lacks accuracy in that mainframes, paper and filing cabinets are deterministic tools. AI is neither deterministic nor a tool.
You keep repeating this in this thread, but as has been refuted elsewhere, this doesn't mean AI is not productive. A tool it definitely can be. Your handwriting is non-deterministic, yet you could write reports with it.
> Now it is true that the needs of human beings may seem to be insatiable. But they fall into two classes --those needs which are absolute in the sense that we feel them whatever the situation of our fellow human beings may be, and those which are relative in the sense that we feel them only if their satisfaction lifts us above, makes us feel superior to, our fellows. Needs of the second class, those which satisfy the desire for superiority, may indeed be insatiable; for the higher the general level, the higher still are they. But this is not so true of the absolute needs-a point may soon be reached, much sooner perhaps than we are all of us aware of, when these needs are satisfied in the sense that we prefer to devote our further energies to non-economic purposes.
[…]
> For many ages to come the old Adam will be so strong in us that everybody will need to do some work if he is to be contented. We shall do more things for ourselves than is usual with the rich to-day, only too glad to have small duties and tasks and routines. But beyond this, we shall endeavour to spread the bread thin on the butter-to make what work there is still to be done to be as widely shared as possible. Three-hour shifts or a fifteen-hour week may put off the problem for a great while. For three hours a day is quite enough to satisfy the old Adam in most of us!
* John Maynard Keynes, "Economic Possibilities for our Grandchildren" (1930)
* http://www.econ.yale.edu/smith/econ116a/keynes1.pdf
An essay hypothesizing four reasons why the above did not happen (we haven't spread the wealth around enough; people actually love working; there's no limit to human desires; leisure is expensive):
* https://www.vox.com/2014/11/20/7254877/keynes-work-leisure
We probably have more leisure time (and fewer hours worked: five versus six days) in general, but it's still being filled (probably especially in the US where being "productive" is an unofficial religion).
If you run your own LLM, and you don't update the training data, that IS deterministic.
And, it is a powerful tool.
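A toy sketch of why that's true: with fixed weights and greedy (argmax) decoding, generation is a pure function of the prompt. The lookup table below stands in for a real model's next-token scores; the point is only that nothing here involves randomness.

```python
# Stand-in for a frozen model's next-token logits (purely illustrative).
NEXT_TOKEN_LOGITS = {
    "the": {"cat": 2.0, "dog": 1.5, ".": 0.1},
    "cat": {"sat": 3.0, "ran": 1.0, ".": 0.5},
    "sat": {".": 4.0},
}

def generate(prompt: str, max_tokens: int = 10) -> str:
    """Greedy decoding: no sampling, so the output is fully determined."""
    tokens = prompt.split()
    for _ in range(max_tokens):
        logits = NEXT_TOKEN_LOGITS.get(tokens[-1])
        if logits is None:
            break
        # Always pick the highest-scoring token, rather than sampling
        # from the distribution (which is what temperature > 0 does).
        tokens.append(max(logits, key=logits.get))
        if tokens[-1] == ".":
            break
    return " ".join(tokens)

assert generate("the") == generate("the")  # same prompt, same output, every run
```

Real inference stacks can still introduce nondeterminism through sampling temperature, batching, or floating-point reduction order, but those are implementation choices, not something inherent to the model.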
The average American spends almost 3 hours per day on social media. [1]
The average American spends 1.5 hours per day watching streaming media. [2]
That’s a lot of washed clothes right there.
[1] https://soax.com/research/time-spent-on-social-media
[2] https://www.nielsen.com/news-center/2024/time-spent-streamin...
As an example, I have a pretty good paying, full-time white collar job. It would be much more challenging if not impossible to find an equivalent job making half as much working 20 hours a week. Of course I could probably find some way to apply the same skills half-time as a consultant or whatever, but that comes with a lot of tradeoffs besides income reduction and is less readily available to a lot of people.
Maybe the real exception here is at the top of the economic ladder, although at that point the mechanism is slightly different. Billionaires have pretty infinite flexibility on leisure time because their income is almost entirely disconnected from the amount of "labor" they put in.
If you need a supercomputer to run your AGI then it's probably not worth it for any task that a human can do, because humans happen to be much cheaper than supercomputers.
Also, it's not clear that AGI would necessarily be better than existing AIs: a 3-year-old child has general intelligence indeed, but it's far less helpful than even a sub-billion-parameter LLM for any task.
We won’t need jobs so we would be just fine.
Like, you could have "AGI" if you simply virtualized the universe. I don't think we're any closer to that than we are to AGI; hell, something that looks like a human's output is a lot easier and cheaper to model than to virtualize.
Actual AGI presumably implies a not-brain involved.
And this isn't even broaching the subject of "superintelligence", which I would describe as "superunbelievable".
- Assuming god comes to earth tomorrow, earth will be heaven
- Assuming an asteroid strikes earth in the future we need settlements on mars
etc. Pointless discussion, gossip, and BS required for human bonding, like on this forum or in a Bierhaus
AI automating software production could hugely increase demand for software.
The same thing happened as higher level languages replaced manual coding in assembly. It allowed vastly more software and more complex and interesting software to be built, which enlarged the industry.
Let's think this through
1: AI automates software production
2: Demand for software goes through the roof
3: AI has lowered the skill ceiling required to make software, so many more can do it with a 'good-enough' degree of success
4: People are making software for cheap because the supply of 'good enough' AI prompters still dwarfs the rising demand for software
5: The value of being a skilled software engineer plummets
6: The rich get richer, the middle class shrinks even further, and the poor continue to get poorer
This isn't just some kind of wild speculation. Look at any industry over the history of mankind. Look at Textiles
People used to make a good living crafting clothing, because it was a skill that took time to learn and master. Automation makes it so anyone can do it. Nowadays, automation has made it so people who make clothes are really just operating machines. Throughout my life, clothes have always been made by the cheapest overseas labour that capital could find. Sometimes it has even turned out that companies were using literal slaves or child labour.
Meanwhile the rich who own the factories have gotten insanely wealthy, the middle class has shrunk substantially, and the poor have gotten poorer
Do people really not see that this will probably be the outcome of "AI automates literally everything"?
Yes, there will be "more work" for people. Yes, overall society will produce more software than ever
McDonalds also produces more hamburgers than ever. The company makes tons of money from that. The people making the burgers usually earn the least they can legally be paid
Is it that straightforward? What about theater jobs? Vaudeville?
In live theater it would be mostly actors, some one time set and costume work, and some recurring support staff.
But then again, there are probably more theaters and theater production by volume.
Reducing the amount of work done by humans is a good thing actually, though the institutional structures must change to help spread this reduction across society as a whole, instead of having mass unemployment plus no retirement before 70 and 50-hour work weeks for those who do work.
AI isn't a problem, unchecked capitalism can be one.
Obesity, mineral depletion, pesticides, etc.
So in a way automation did make more work.
https://firmspace.com/theproworker/from-strikes-to-labor-law...
But you can't have labor laws that cut the amount worked by half if you have no way to increase productivity.
The example they gave was that search engines + digital documents cut junior-lawyer headcount by a lot. Prior to digital documents, a fairly common junior-lawyer task was: "We have an upcoming court case. Go to the (physical) archive and find past cases relevant to the current case. Here are the things to check for." This task would be assigned to a team of juniors (3-10 people). Now one junior with a laptop suffices. As a result, the firm can also manage more cases.
Seems like a pretty general pattern.
I skipped over junior positions for the most part
I don’t see that not working now
FB has long wanted to have a call center for its ~3.5B users. But that call center would automatically be the largest in history and cost ~$15B/yr to run, which is cost-ineffective in the extreme. But with FB's internal AIs, they're starting to think a call center may be feasible. Most of the calls are going to be 'I forgot my password' and 'it's broken' anyway. So having a robot guide people through the FAQs in 50+ languages is perfectly fine for ~90% (Zuck's number here) of the calls. Then the harder calls can actually be routed to a human.
So, to me, this is a great example of how the interaction of new tech and labor is a fractal not a hierarchy. In that, with each new tech that your specific labor sector finds, you get this fractalization of the labor in the end. Zuck would have never thought of a call center, denying the labor of many people. But this new tech allows for a call center that looks a lot like the old one, just with only the hard problems. It's smaller, yes, but it looks the same and yet is slightly different (hence a fractal).
Look, I'm not going to argue that tech is disruptive. But what I am arguing is that tech makes new jobs (most of the time); it's just that these new jobs tend to deal with much harder problems. Like, we're pushing the boundaries here, and that boundary gets more fractal-y, and it's a more niche and harder working environment for your brain. The issue, of course, is that, as with a grad student, you have to trust that the person working at the boundary is actually doing work and not just blowing smoke. That issue, the one of trust, I think is the key issue to 'solve'. Cal Newport talks a lot about this now, and about how these knowledge-worker tasks really don't produce much for a long time, and then they have these spats of genius. It's a tough one, and not an intellectual enterprise but an emotional one.
Like, if AI is so good, then it'll just eat away at those jobs and get asymptotically close to 100% of the calls. If it's not that good, then you've got to loop in the product people and figure out why everyone is having a hard time with whatever it is.
Generally, I'd say that calls are just another feedback channel for the product. One that FB has thus far been fine without consulting, so I can't imagine its contribution can be all that high. (Zuck also goes on to talk about the experiments they run on people with FB/Insta/WA, and woah, it is crazy unethical stuff he casually throws out there to Dwarkesh)
Still, to the point here: I'm still seeing AI mostly as a tool/tech, not something that takes on an agency of its own. We, the humans, are still the thing that says 'go/do/start', the prime movers (to borrow a long-held and false bit of ancient physics). The AIs aren't initiating things, and it seems to a large extent we're not going to want them to. Not out of a sense of doom or lack-of-greed, but simply because we're more interested in working at the edge of the fractal.
"I'm still seeing AI mostly as a tool/tech, not something that takes on an agency of its own."
I find that to be a highly ironic thing. It basically says AI is not AI. Which we all know it is not yet, but then we can simply say it: The current crop of "AI" is not actually AI. It is not intelligence. It is a kind of huge encoded, non-transparent dictionary.
A customer who wants to track the status of their order will tell you a story about how their niece is visiting from Vermont and they wanted to surprise her for her 16th birthday. It's hard because her parents don't get along as they used to after the divorce, but they are hoping that this will at the very least put a smile on her face.
The AI will classify the message as order tracking correctly, and provide all the tracking info and timeline. But because of the quick response, the customer will write back to say they'd rather talk to a human and ask for a phone number they can call.
The remaining 20% can be resolved by neither human nor robot.
Is this implying it's because they want to wag their chins?
My experience recently with moving house was that most services I had to call had some problem that the robots didn't address. Fibre was listed as available on the website but then it crashed when I tried "I'm moving home" - turns out it's available in the general area but not available for the specific row of houses (had to talk to a human to figure it out). Water company, I had an account at house N-2, but at N-1 it was included, so the system could not move me from my N-1 address (no water bills) to house N (water bill). Pretty sure there was something about power and council tax too. With the last one I just stopped bothering, figuring that it's the one thing that they would always find me when they're ready (they got in touch eventually).
No it isn't. Attempts to do this are why I mash 0 repeatedly and chant "talk to an agent" after being in a phone tree for longer than a minute.
Actually, now that I think about it, yeah.
The whole purpose of the bots is to deflect you from talking to a human. For instance: Amazon's chatbot. It's gotten "better": now when I need assistance, it tries three times to deflect me from a person after it's already agreed to connect me to one.
Anything they'll allow the bot to do can probably be done better by a customer-facing webpage.
A high quality bot to guide people through their poorly worded questions will be hugely helpful for a lot of people. AI is quickly getting to the point that a very high quality experience is possible.
The premise is also that the bots are what enable the people to exist. The status quo is no interactive customer service at all.
There is zero chance he wants to pay even a single person to sit and take calls from users.
He would eliminate every employee at Facebook if it were technically possible to automate what they do.
Other than on-call roles like Production Engineers, whose absence there would make the company fail within a day?
https://libcom.org/article/phenomenon-bullshit-jobs-david-gr...
Automation is one way to do that.
The sad part is, do you think we'll see this productivity gain as an opportunity to stop the culture of over working? I don't think so. I think people will expect more from others because of AI.
If AI makes employees twice as efficient, do you think companies will decrease working hours or cut their employment in half? I don't think so. It's human nature to want more. If 2 is good, 4 is surely better.
So instead of reducing employment, companies will keep the same number of employees because that's already factored into their budget. Now they get more output to better compete with their competitors. To reduce staff would be to be at a disadvantage.
So why do we hear stories about people being let go? AI is currently a scapegoat for companies that were operating inefficiently and over-hired. It was already going to happen. AI just gave some of these larger tech companies a really good excuse. They weren't exactly going to admit they made a mistake and over-hired, now were they? Nope. AI was the perfect excuse.
As all things, it's cyclical. Hiring will go up again. AI boom will bust. On to the next thing. One thing is for certain though, we all now have a fancy new calculator.
https://impact.economist.com/projects/responsible-innovation...
(I know this is not the commonly accepted meaning of Parkinson's law.)
But if model development and self hosting become financially feasible for the majority of organizations then this might really be a “democratized” productivity boost.
There’s little sign of any AI company managing to build something that doesn’t just turn into a new baseline commodity. Most of these AI products are also horribly unprofitable, which is another reality that will need to be faced sooner rather than later.
To paraphrase Lee Iacocca: We must stop and ask ourselves, how much videogames do we really need?
Yes... basically in life, you have to find the definition of "to matter" that you can strongly believe in. Otherwise everything feels aimless, the very life itself.
The rest of what you ponder in your comment is the same. And I'd like to add that baselines shifted a lot over the years of civilization. I like to think about one specific example: painkillers. Painkillers were not used during medical procedures in a widespread manner until some 150 years ago, maybe even later. Now it's much less horrible to participate in those procedures, for everyone involved really, and the outcomes are better just for this factor alone, because the patient moves around less while anesthetized.
But even this is up for debate. All in all, it really boils down to what the individual feels like it's a worthy life. Philosophy is not done yet.
Perhaps my initial estimate of 5% of the workforce was a bit optimistic, say 20% of current workforce necessary to have food, healthcare, and maybe a few research facilities focused on improving all of the above?
Does society as a whole even have a goal currently? I don't really think it does. Like do ideologists even exist today?
I wish society was working towards some kind of idea of utopia, but I'm not convinced we're even trying for that. Are we?
The work brings modest wealth over time, allows me and my family to live in a long-term safe place (Switzerland) and builds a small reserve for bad times (or inheritance, early retirement etc.; this is Europe, no need to save up for kids' education or potentially massive healthcare bills). Don't need more from life.
I’m in America so the paychecks are very large, which helps with private school, nanny, stay at home wife, and the larger net worth needed (health care, layoff risk, house in a nicer neighborhood). I’ve been fortunate, so early retirement is possible now in my early 40s. It really helps with being able to detach from work, when I don’t even care if I lose my job. I worry for my kids though. It won’t be as easy for them. AI and relentless human resources optimization will make tech a harder place to thrive.
Who benefits from the situation? You or I, who don't have to make a U-turn to get gas at this intersection? Perhaps, but that is not much benefit compared to the opportunity cost of 3 prime corner lots squandered on the same single use. The clerk at the gas station, for having a job available? Perhaps, although maybe their labor in aggregate would have been employed in other, less redundant uses that could benefit our society more than selling smokes and putting $20 on 4 at 3am. The real beneficiary of this entire arrangement is the fisherman: the owner or shareholder who ultimately skims from all the pots, thanks to having what is effectively a modern version of a plantation sharecropper, spending all their money in the company store and on company housing, with a fig leaf of being able to choose from any number of minimum-wage jobs, spend their wages in any number of national chain stores, and rent any number of increasingly investor-owned properties. Quite literally all owned by the same shareholders, when you consider how people diversify their investments across these multiple sectors.
I recently retired from 40 years in software-based R&D and have been wondering the same thing. Wasn't it true that 95% of my life's work was thrown away after a single demo or a disappointingly short period of use?
And I think the answer is yes, but this is just the cost of working in an information economy. Ideas are explored and adopted only until the next idea replaces it or the surrounding business landscape shifts yet again. Unless your job is in building products like houses or hammers (which evolve very slowly or are too expensive to replace), the cost of doing of business today is a short lifetime for any product; they're replaced in increasingly fast cycles, useful only until they're no longer competitive. And this evanescent lifetime is especially the case for virtual products like software.
The essence of software is to prototype an idea for info processing that has utility only until the needs of business change. Prototypes famously don't last, and increasingly today, they no longer live long enough even to work out the bugs before they're replaced with yet another idea and its prototype that serves a new or evolved mission.
Will AI help with this? Only if it speeds up the cycle time or reduces development cost, and both of those have a theoretical minimum, given the time needed to design and review any software product has an irreducible minimum cost. If a human must use the software to implement a business idea then humans must be used to validate the app's utility, and that takes time that can't be diminished beyond some point (just as there's an inescapable need to test new drugs on animals since biology is a black box too complex to be simulated even by AI). Until AI can simulate the user, feedback from the user of new/revised software will remain the choke point on the rate at which new business ideas can be prototyped by software.
There is a lot of value in being the stepping stone to tomorrow. Not everyone builds a pyramid.
“Creative destruction is a concept in economics that describes a process in which new innovations replace and make obsolete older innovations.”
https://en.wikipedia.org/wiki/Creative_destruction
I think about this a lot with various devices I owned over the years that were made obsolete by smartphones. Portable DVD players and digital cameras are the two that stand out to me; each of them cost hundreds of dollars but only had a marketable life of about 5 years. To us these are just products on a shelf, but every one of them had a developer, an assembly line, and a logistics network behind them; all of these have to be redeployed whenever a product is made obsolete.
It mattered enough for someone to pay you money to do it, and that money put food on the table and clothes on your body and a roof over your head and allowed you to contribute to larger society through paying taxes.
Is it the same as discovering that E = mc^2 or Jonas Salk's contributions? No, but it's not nothing either.
Would we have fewer video games? If all our basic needs were met and we had a lot of free time, more people might come together to create games together for free.
I mean, look at how much free content (games, stories, videos, etc) is created now, when people have to spend more than half their waking hours working for a living. If people had more free time, some of them would want to make video games, and if they weren’t constrained by having to make money, they would be open source, which would make it even easier for someone else to make their own game based on the work.
Creativity means play, as in not following rules, adding something of yourself.
Something a computer just can't do.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5219933
> Our main finding is that AI chatbots have had minimal impact on adopters’ economic outcomes. Difference-in-differences estimates for earnings, hours, and wages are all precisely estimated zeros, with confidence intervals ruling out average effects larger than 1%. At the occupation level, estimates are similarly close to zero, generally excluding changes greater than 6%.
The cost, in money or time, for getting certain types of work done decreases. People ramp up demand to fill the gap, "full utilization" of the workers.
It's a very old claim that the next technology will lead to a utopia where we don't have to work, or where we work drastically less. Time and again we prove that we don't actually want that.
My hypothesis (I'm sure it's not novel or unique) is that very few people know what to do with idle hands. We tend to keep stress levels high as a distraction, and tend to freak out in various ways if we find ourselves with low stress and nothing that "needs" to be done.
I worry more that an idle humanity will cause a lot more conflict. “An idle mind’s the devil’s playground” and all.
It's always possible that risk would be transitional. Anyone alive today, at least in Western-style societies, likely doesn't know a life without high levels of stress and distraction. It makes sense that change would cause people to lash out; maybe people growing up in that new system would handle it better (if they had the chance).
Many shows and movies can play a similar role.
I think we would/will see a lot more of that. Even in transitional periods where people can multitask more now as ai starts taking over moment to moment thinking.
Where are all the articles that HN loves about kids these days not being bored anymore? What about Google's famous 20% time?
Idle time isn’t just important, it’s the point.
Take 7 hours out of the day because an LLM makes you that much more productive, and I expect people wouldn't know what to do with themselves. That could be wrong, but I'd expect a lot more societal problems than we already have today if, a year from now, a large number of people only worked 4 or 5 hours a week.
That's not even getting to the Shopify CEO's ridiculous claim that employees will get 100x more work done [1].
There are probably plenty of goods that are counter examples, but time utilization isn't one of them, I don't think.
I suspected this would be the case with AI too. A lot of people said things like "there won't be enough work anymore" and I thought, "are you kidding? Do you use the same software I use? Do you play the same games I've played? There's never enough time to add all of the features and all of the richness and complexity and all of the unit tests and all of the documentation that we want to add! Most of us are happy if we can ship a half-baked anything!"
The only real question I had was whether the tech sector would go through a prolonged, destructive famine before realizing that.
I'm kind of ok with doing more work in the same time, though if I'm becoming way more effective I'll probably start pushing harder on my existing discussions with management about 4 day work weeks (I'm looking to do 4x10s, but I might start looking to negotiate it to "instead of a pay increase, let's keep it the same but a 4x8 week").
If AI lets me get more done in the same time, I'm ok with that. Though, on the other hand, my work is budgeting $30/mo for the AI tools, so I'm kind of figuring that any time that personally-purchased AI tools are saving me, I deduct from my work week. ;-)
>very few people know what to do with idle hands
"Millions long for immortality that don't know what to do with themselves on a rainy Sunday afternoon." -- Susan Ertz
We are currently a long way from that kind of change, as current AI tools suck by comparison to a literal 1,000x increase in productivity. So, in well under 100 years, programming could become extremely niche.
Yes, but.
There are more jobs in other fields adjacent to food production, particularly in distribution. The middle class didn't exist back then, and retail workers are now a large percentage of workers in most parts of the world.
Food is just a smaller percentage of the economy overall.
I would have assumed that if 90% of people are farming, it's largely subsistence, and any trade happened on a much more local scale, potentially without any proper currency involved.
That said, there have been areas where 90% of the working population was at minimum helping with the harvest, up until the Middle Ages.
We increased production and needed fewer farmers, but we now have so few farmers that most people have very little idea of what food really is, where it comes from, or what it takes to run our food system.
Higher productivity is good to a point, but eventually it risks becoming too fragile.
Screwworm, a parasite that kills cattle in days, is making a comeback. And we are less prepared for it this time, because previously (in the 1950s-1970s) we had a lot more labor in the industry to manually check each head of cattle. Bloomberg even called it out specifically:
> Ranchers also said the screwworm would be much deadlier if it were to return, because of a lack of labor. “We can’t fight it like we did in the ’60s, we can’t go out and rope every head of cattle and put a smear on every open wound,” Schumann said.
https://www.bloomberg.com/news/features/2025-05-02/deadly-sc...
It’s somewhat arbitrary where you draw the line historically, but it’s not just about maximum productivity; it's worth remembering that crops used to fail from drought etc. far more frequently.
Small hobby farms are also a thing these days, but that’s a separate issue.
In my experience they're very productive by poundage yield, but horribly unproductive when it comes to inputs required, chemicals used, biodiversity, soil health, etc.
The difference vs. historic methods is so extreme that you could skip pesticides, avoid harming soil health or biodiversity, etc., without any issues here and still be talking 1,000x.
Though really growing crops for human consumption is something of a rounding error here. It’s livestock, biofuels, cotton, organic plastics, wood, flowers, etc that’s consuming the vast majority of output from farms.
Two things worth noting, though: pounds of food say little about the nutritional value to consumers. I don't have good links handy so I won't make any specific claims; it's just worth considering whether weight is the right metric.
As far as human labor hours go, we've gotten very good at outsourcing those costs. Farm labor hours ignore all the hours put into off-farm inputs (machinery, pesticides and fertilizers, seed production, etc.). We also leverage an astronomical amount of (mostly) diesel fuel to power all of it. The human labor hours are small, but I've seen estimates that a single barrel of oil is comparable to 25,000 hours of human labor, or 12.5 years of full employment. I'd be interested to do the math now, but I expect we have seen only a fraction of that 25,000x multiplier materialize in the reduction of farm hours worked over the last century (or back to the industrial revolution).
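For what it's worth, the 25,000-hour figure is roughly consistent with a pure energy-content comparison. A back-of-envelope sketch (the barrel energy and human power output below are rough assumptions on my part, not authoritative figures):

```python
# Rough energy-equivalence check: how many hours of sustained human
# labor match the energy content of one barrel of crude oil?
# Prices, extraction costs, and engine efficiency are all ignored.
BARREL_ENERGY_KWH = 1700    # assumed: approx. energy in one barrel of crude
HUMAN_POWER_KW = 0.075      # assumed: ~75 W of sustained human work output

hours_per_barrel = BARREL_ENERGY_KWH / HUMAN_POWER_KW
work_years = hours_per_barrel / 2000  # ~2000 working hours per year

print(f"{hours_per_barrel:,.0f} hours, about {work_years:.1f} work-years")
```

That lands in the same ballpark as the 25,000 hours / 12.5 years quoted above, though it says nothing about what a barrel actually costs.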
That’s just wildly wrong by several orders of magnitude, to the point I question your judgment to even consider it a valid possibility.
Not only would the price be inherently much higher, but if everyone including infants worked 50 hours per week, we'd still produce less than 1/30th of the current world's output of oil. And going back, we've been extracting oil at industrial scale for over 100 years.
To get even close to those numbers you’d need to assume 100% of human labor going back into prehistory was devoted purely to oil extraction.
The earlier commenter was talking about the massive reduction in the amount of human labor required to cultivate land, and the relative productivity of the land.
That comparison comes down to amount of work done. Whether that work is done by a human swinging a scythe or a human driving a diesel powered tractor is irrelevant, the work is measured in joules at the end of the day. We have drastically fewer human hours put into farm labor because we found a massive multiplier effect in fossil fuel energy.
I'm not sure where solar panels came in, but sure, they can also be used to generate power and produce joules of work if that's your preferred source of energy.
Nah, it's not 100% but it says a lot about the nutritional value.
> inputs
You can approximate those with price. A barrel of oil might be a couple hours.
But you do have that option, right? Work 20 hours a week instead of 40. You just aren't paid for the hours that you don't work. In a world where workers are exchanging their labor for wages, that's how it's supposed to work.
For there to be a "better option" (as in, you're paid money for not working more hours) what are you actually being paid to do?
For all the thoughts that come to mind when I say "work 20 hours a week instead of 40" -- that's where the individual's preference comes in. I work more hours because I want the money. Nobody pays me to not work.
Not really. Lots of kinds of work don't hire part-timers in any volume, period. There are very few jobs where the only tradeoff for working fewer hours is a reduction in compensation proportional to the reduction in hours, or even just a reduction in compensation that's disproportionate to the hours cut.
>But you do have that option, right? Work 20 hours a week instead of 40. You just aren't paid for the hours that you don't work. In a world where workers are exchanging their labor for wages, that's how it's supposed to work.
Look the core of your opinion is the belief that market dynamics naturally lead to desirable outcomes always. I simply don’t believe that, and I think interference to push for desirable outcomes which violate principles of a free market is often good. We probably won’t be able to agree on this.
No. If society wants to disincentivize overworking by introducing overtime, that's fine by me. I'm not making any moral judgement. You just seem to live in a fantasy world where people aren't exchanging their labor for money.
> Look the core of your opinion is the belief that market dynamics naturally lead to desirable outcomes always.
I didn't say that, and I don't believe that. If you're just going to hallucinate what I think, what's the point in replying?
For my relatives in Germany going part time seems easier and more accepted by companies.
That's the capitalist system. Unions successfully fought to decrease the working day to 8 hrs.
Workers are often looking to make more money, take more responsibility, or build some kind of name or reputation for themselves. There's absolutely nothing wrong with that, but that goal also incentivizes them to work harder and longer.
There's no one size fits all description for workers, everyone's different. The same is true for the whole system though, it doesn't roll up to any one cause.
It actually does, but due to the skewed distribution of the rewards gained from that tech (automation), it doesn't work for common folks.
Let's take a simple example: you, me, and 8 other HN users work in Bezos' warehouse. We each work 8h/day. Suddenly new tech comes in which can do the same tasks we do, and each unit of that machine can do the work of 2-4 of us alone. If Bezos buys 4 units and sets each to work at 2x capacity, they replace 8 of us, and each of those 8 now has 8h/day x 5 days x 4 weeks = 160h of leisure per month.
Problem is, the 8 of us still need money to survive (food, rent, utilities, healthcare, etc.). So, according to tech utopians, the 8 of us can now use those 160 hours of free time to focus on more important and rewarding work. (See, in the context of all the AI peddlers, how using AI will free us to do more important and rewarding work!) But to survive, my "rewarding work" turns out to be gig work, or something of the same effort or more hours.
So in practice, the owner controlling the automation gets more free time to attend interviews and political/social events, while the people being automated away fall downward and have to work harder to maintain their survival. Of course, I hope our over-enthusiastic brethren who are paying LLM providers for the privilege of training their own replacements figure out the equation soon, and don't get sold on the "free time to do more meaningful work" line the same way Bezos' warehouse gave some of us some leisure while the automation was coming online and still needed a failsafe for a while. :)
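The hypothetical above, spelled out in a few lines of Python (all the numbers are the comment's made-up scenario, not real figures):

```python
# Toy model of the warehouse example: 10 workers at 8h/day, 5 days/week,
# 4 weeks/month; 4 machines, each run at 2x one worker's capacity.
workers = 10
hours_per_day, days_per_week, weeks_per_month = 8, 5, 4

machines = 4
capacity_per_machine = 2  # worker-equivalents per unit at 2x capacity

displaced_workers = machines * capacity_per_machine
freed_hours_each = hours_per_day * days_per_week * weeks_per_month

print(displaced_workers, "workers displaced,",
      freed_hours_each, "hours of 'leisure' each per month")
```

Which is exactly the point: the 160 freed hours land on the 8 displaced workers as unpaid time, not on the owner as shared surplus.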
Add to that progress in robotics and we may reach a point where humans are not needed anymore for most tasks. Then the capitalists will have fully automated factories but nobody who can buy their products.
Maybe capitalism had a good run for the last 200 years and a new economic system needs to arise. Whatever that will be.
What could have been a single paragraph turns into five separate bulleted lists and explanations and fluff.
Your responsibility now is that of an AI response mechanic. And someone else ingesting your AI’s output is making sure their AI’s output on your output is reasonable.
This obviously doesn’t scale well but does move the “doing” out of human hands, replacing that time with a guardrail responsibility.
If a truck has a lifetime of 20 years, that's 20 years' worth of paying a security guard for it.
You really think it could take 20 years' worth of human effort in labor and materials to make a truck more secure? The price of the truck itself in the first place doesn't even come close to that.
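As a sanity check on that comparison, here's the rough arithmetic. The hourly guard cost and truck price are purely illustrative assumptions, not real figures:

```python
# Rough cost of guarding one truck around the clock for its lifetime,
# compared to the truck's own price. All inputs are illustrative.
GUARD_COST_PER_HOUR = 20       # assumed fully loaded hourly cost
HOURS_PER_YEAR = 24 * 365      # round-the-clock coverage
LIFETIME_YEARS = 20
TRUCK_PRICE = 150_000          # assumed purchase price of a new truck

lifetime_guard_cost = GUARD_COST_PER_HOUR * HOURS_PER_YEAR * LIFETIME_YEARS
ratio = lifetime_guard_cost / TRUCK_PRICE
print(f"${lifetime_guard_cost:,} in guard wages, about {ratio:.0f}x the truck")
```

Even with conservative inputs, the lifetime guard cost is an order of magnitude above the truck's price, which is the commenter's point: almost any one-time security investment is cheaper.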
This seems observationally true in the tech industry, where the world’s best programmers and technologists are tied up fiddling with transformers and datasets and evals so that the world’s worst programmers can slap together temperature converters and insecure twitter clones, and meanwhile the quality of the consumer software that people actually use is in a nosedive.
This statement is incredibly accurate
> Indeed, the reported productivity benefits were modest in the study. Users reported average time savings of just 2.8 percent of work hours (about an hour per week).
"In the 1970s when office computers started to come out we were told:
'Computers will save you SO much effort you won't know what to do with all of your free time'.
We just ended up doing more things per day thanks to computers."
There are two metrics in the study:
> AI chatbots save time across all exposed occupations (for 64%–90% of users)
and
> AI chatbots have created new job tasks for 8.4% of workers
There's absolutely no indication anywhere in the study that the time saved is offset by the new work created. The percentages for the two metrics are so vastly different that it's fairly safe to assume it's not the case.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5219933
https://youtube.com/watch?v=ZP4fjVWKt2w
It’s early. There are new skills everyone is just getting the hang of. If the evolution of AI was mapped to the evolution of computing we would be in the era of “check out this room-sized bunch of vacuum tubes that can do one long division at a time”.
But it’s already exciting, so just imagine how good things will get with better models and everyone skilled in the art of work automation!
Right on point
As shown by never-shrinking backlogs
Todo lists always grow
The crucial task ends up being prioritizing, i.e. figuring out what to prioritize at the current moment