
Ask HN: How to build a 2D wave-like line graph that responds to keyboard events?

1•absoluteunit1•1m ago•0 comments

Tandy Corporation, Part 4 – By Bradford Morgan White

https://www.abortretry.fail/p/tandy-corporation-part-4
1•rbanffy•2m ago•0 comments

I Asked Four Former Friends Why We Stopped Speaking-Here's What I Learned (2023)

https://www.vogue.com/article/reconnecting-with-ex-friends
1•mooreds•2m ago•0 comments

Qwen-Image – a 20B MMDiT model for next-gen text-to-image generation

https://twitter.com/Alibaba_Qwen/status/1952398250121756992
1•tosh•3m ago•0 comments

Show HN: Modos Developer Kit Live on Crowd Supply

https://www.crowdsupply.com/modos-tech/modos-paper-monitor
1•alex-a-soto•4m ago•0 comments

OpenAI Transparency Letter

https://www.openai-transparency.org/
2•fzliu•5m ago•0 comments

Castro Podcasts – iPad and Device Sync

https://castro.fm/blog/device-sync-and-ipad
1•dabluck•6m ago•0 comments

Evaluation Algorithms for Parametric Curves and Surfaces

https://www.mdpi.com/2227-7390/13/14/2248
1•PaulHoule•10m ago•0 comments

Squashing my dumb bugs and why I log build IDs

https://rachelbythebay.com/w/2025/08/03/scope/
1•zdw•11m ago•0 comments

LLMs Aren't Just for Sissies

https://mattsayar.com/llms-arent-just-for-sissies/
1•MattSayar•12m ago•0 comments

Staan: European Search Index and API

https://staan.ai
1•maelito•12m ago•0 comments

Robin Berjon: Web Standards

https://protocol.ecologies.info/interviews/berjon-web_standards/
1•ntnsndr•13m ago•0 comments

JavaOne 2026 Dates Announced

https://inside.java/2025/08/04/javaone-returns-2026/
1•Sharat_Chander•14m ago•1 comments

A proof is that which is convincing

https://substack.com/inbox/post/170099481
1•mathattack•15m ago•0 comments

Updated Portal Map Editor in Battlefield 6 Runs on Godot Engine

https://80.lv/articles/updated-portal-map-editor-in-battlefield-6-runs-on-godot-engine
1•pjmlp•15m ago•0 comments

AI Embiggens the Big Clouds, Especially Microsoft

https://www.nextplatform.com/2025/08/01/ai-embiggens-the-big-clouds-especially-microsoft/
1•rbanffy•16m ago•0 comments

Firefox Has a New Home

https://windowsreport.com/firefox-has-a-new-home-mozilla-launches-dedicated-firefox-com-download-hub/
2•gwerbret•16m ago•0 comments

Leading phone repair and insurance firm collapses after paying ransomware demand

https://www.tomshardware.com/tech-industry/cyber-security/leading-phone-repair-and-insurance-firm-collapses-after-paying-crippling-ransomware-demand-cutting-100-employees-to-just-eight-wasnt-enough
2•speckx•17m ago•0 comments

What We're Optimizing ChatGPT For

https://openai.com/index/how-we%27re-optimizing-chatgpt
3•meetpateltech•19m ago•1 comments

Zed Shaw's Utu: Saving the internet with hate · weblog.masukomi.org

https://weblog.masukomi.org/2018/03/25/zed-shaws-utu-saving-the-internet-with-hate/
2•janandonly•23m ago•0 comments

You Should Probably Leave Substack

https://leavesubstack.com/
2•aintitthetruitt•23m ago•0 comments

Musk says he's bringing back Vine's archive

https://techcrunch.com/2025/08/04/elon-musk-says-hes-bringing-back-vines-archive/
2•thm•26m ago•1 comments

Tiger Mask Donation Phenomenon

https://en.wikipedia.org/wiki/Tiger_Mask_donation_phenomenon
2•thunderbong•27m ago•0 comments

Lyft Partners with Baidu to Deploy Autonomous Rides Across Europe

https://www.lyft.com/blog/posts/lyft-partners-with-baidu-to-deploy-autonomous-rides-across-europe
4•thm•28m ago•0 comments

What's in a GIF – Bits and Bytes

https://giflib.sourceforge.net/whatsinagif/bits_and_bytes.html
1•cristoperb•29m ago•0 comments

Ask HN: What's the Path for Innovation in Silicon?

1•mikewarot•29m ago•1 comments

Ghost CMS 6.0 Released: New Features – But Self-Hosting Users Miss Out

https://nixsanctuary.com/ghost-cms-6-0-released/
1•linuxlibre•30m ago•0 comments

Perplexity accused of scraping websites that explicitly blocked AI scraping

https://techcrunch.com/2025/08/04/perplexity-accused-of-scraping-websites-that-explicitly-blocked-ai-scraping/
3•01-_-•31m ago•1 comments

South Korea Selects 5 Elite Teams for National AI Project

https://www.businesskorea.co.kr/news/articleView.html?idxno=248777
2•01-_-•33m ago•0 comments

Taking an open model from discovery to a production-ready endpoint on Vertex AI

https://cloud.google.com/blog/products/ai-machine-learning/take-an-open-model-from-discovery-to-endpoint-on-vertex-ai
1•mariuz•33m ago•0 comments

AI promised efficiency. Instead, it's making us work harder

https://afterburnout.co/p/ai-promised-to-make-us-more-efficient
114•mooreds•2h ago

Comments

ctoth•51m ago
Bit of a dunk but ... It must have saved you whatever time you were going to spend on this article, because I detect a bunch of LLMisms and very little content.

I'm not sure that Claude saves me time -- I just spent my weekend working on a Claude Code Audio hook with Claude which I obviously wouldn't have worked on elsewise, and that's hardly the gardening I intended to do ... but man it was fun and now my CC sessions are a lot easier to track by ear!

Bukhmanizer•46m ago
I think a lot of people on HN would disagree with this article, but I've yet to see many people say it's made their coworkers more productive. That is, do people feel like they're getting better, more reviewable PRs?

Personally, I’ve been seeing the number of changes for a PR starting to reach into the mid-hundreds now. And fundamentally the developers who make them don’t understand how they work. They often think they do, but then I’ll ask them something about the design and they’ll reply: “IDK Claude did that.”

By no means am I down on AI, but I think proper procedures need to be put into place unless we want a giant bomb in our code base.

aantix•40m ago
I don't think "IDK Claude did that" is a valid excuse. Immediate rejection.

AI may be multi-threaded, but there's still a human global interpreter lock in place. :D

If you put the code up for review, regardless of the source, you should fundamentally understand how it works.

This raises a broader point about AI and productivity: while AI promises parallelism, there's still the human in the middle who is responsible for the code.

The promise of "parallelism" is overstated.

Hundreds of PRs should not be trusted. Or at least not without the C-suite understanding such risks. Maybe you're a small startup looking to get out the door as quickly as possible, so... YOLO.

But it's going to be a hot mess. A "clean up in aisle nine" level mess.

ryandrake•33m ago
It's insane that any company would just be OK with "IDK Claude did that" any more than a 2010 version of that company would be OK with "IDK I copy pasted from StackOverflow." Have engineering managers actually drunk this Kool-Aid to the point where they're OK with their direct reports just chucking PRs over the wall that they don't even understand?
throwawaysleep•30m ago
Depends on your incentives. People anecdotally seem far more impressed with buggy stuff shipped fast than good stuff shipped slowly.

Lots of companies just accept bugs as something that happens.

dontlikeyoueith•20m ago
Depends on your problem space too.

Calendar app for local social clubs? Ship it and fix it later.

B2B payments software that triggers funds transfers? JFC I hope you PIP people for that.

Imustaskforhelp•29m ago
It is even funnier when you realize that Claude and all AI models are trained on data that includes Stack Overflow.

So I guess if you asked Claude why it did that, the truth of it might be "IDK I copy pasted from StackOverflow"

The same stuff pasted with a different sticker. Looks good to me.

liveoneggs•26m ago
Of course they are okay with it. They changed the job function to be just that with forced(!) AI adoption.
f1shy•21m ago
This is exactly how I see it. It's not about the tool, it's how it is used. In 1990 that would have been “IDK I got it from a BBS” and in 1980 “got it from a magazine“. It doesn't matter how you got there, you have to understand it. BTW I had a similar problem when I was a manager in HW development, where the value of a resistor had no documented calculation. I would ask: where did it come from? If the answer was “I tried and it worked”, or “tested in lab until I found it”, or in the 2000s “I ran many simulations and it was the best value”, I would reject and ask for proper calculations, with WCA.
Andrex•17m ago
As vibe coding becomes more commonplace you'll see these historical safeguards erode. That is the danger IMO.

You're right, saying you got something off SO would get you laughed out of programming circles back in the day. We should be applying the same shame to people who vibe code, not encourage it, if we want human-parseable and maintainable software.

vkou•8m ago
> That is the danger IMO.

For whom is this a danger?

If we're paid to dig ditches and fill them, who are we to question our supreme leaders? They control the purse strings, so of course they know best.

j45•19m ago
The same developers submitting Claude-generated PRs could take 1-2 minutes to ask for an explanation of what they're submitting and how it works. They might even learn something.

Stack Overflow at least had some provenance for copying and pasting. Models may not. Provenance remains a thing, and its absence can add risk to the code.

joseda-hg•15m ago
I don't think it's common, but I've definitely seen it

I've also seen "ask ChatGPT if you're doing X right", with people basically signing off on whatever it recommends without checking

At this point I'm pretty confident I could trojan horse whatever decision I want from certain people by sending enough screenshots of ChatGPT agreeing with me

calebkaiser•14m ago
I don't think this is an AI specific thing. I work in the field, and so I'm around some of the most enthusiastic adopters of LLMs, and from what I see, engineering cultures surrounding LLM usage typically match the org's previous general engineering culture.

So, for example, by and large the orgs I've seen chucking Claude PRs over the wall with little review were previously chucking 100% human written PRs over the wall with little review.

Similarly, the teams I see effectively using test suites to guide their code generation are the same teams that effectively use test suites to guide their general software engineering workflows.

siva7•4m ago
If it pushes some nice velocity metric, most managers would be ok. Though you have to word it a bit differently of course.
throwanem•3m ago
"Look, the build is green and CI belongs to another team, how perfectionist do you need us to be about this?" is the sort of response I would generally expect, and also in the case where AI was used.
SoftTalker•32m ago
As someone who doesn't use AI for writing code, why can't you just ask Claude to write up an explanation of each change for code review? Then at least you can look at whether the explanation seems sane.
threetonesun•31m ago
It will fairly confidently state changes are "correct" for whatever reason it makes up. This becomes more of an issue with things that might be edge cases or vague requirements, in which case it's better to have AI write tests instead of the code.
ahoef•31m ago
Claude also doesn't know, because Claude dreamt up changes that didn't work, then "fixed" them, "fixed" them again, and in the process left swathes of code that are never reached.
thegeomaster•29m ago
This can be dangerous, because Claude doesn't truly understand why it did something. Whatever it writes is a post-hoc justification, which may or may not be accurate to the "intent". This is because these are still autoregressive models -- they have only the context to go on, not prior intent.
zahlman•16m ago
Indeed. Watching it (well, Anthropic, really) cheat at Baba Is You and then try to give a rationalization for how it came up with the solution (qv. https://news.ycombinator.com/item?id=44473615) is quite instructive.
zahlman•19m ago
Because the explanations will often not be sane; when they are sane, they will focus on irrelevant details and be maddeningly padded out unless you put inordinate effort into trying to control the AI's writing style.

Ask pretty much any FOSS developer who has received AI-generated PRs (both code and explanations) on GitHub about their experiences (and when you complain about these, the author will almost always use the same AI to generate responses). It's a huge time sink if you don't cut them off. There are plenty of projects out there now that have explicit policy documentation against such submissions, and even boilerplate messages for rejecting them.

bcrosby95•2m ago
AI is not a human. If it understands things, it doesn't understand them like you or I do. This means it can misunderstand things in ways we can't understand.
NegativeLatency•30m ago
> I don't think "IDK Claude did that" is a valid excuse.

It's not, and yet I have seen that offered as an excuse several times.

grogenaut•27m ago
Did you push back?
f1shy•19m ago
At least I would not accept it from my team. It's borderline infuriating. And I would promptly insinuate that, if that is the answer, next time I don't need you; I'll ask Claude directly and you can stay home!
Herring•5m ago
Agreed, but maybe check what their workload is like otherwise. Most engineers I've worked with want to do a good job and ship something useful. It's possible they're offloading work to the LLM because they're under a lot of pressure. (And in this case you can't make them stay home.)
corytheboyd•19m ago
> The promise of "parallelism" is overstated.

100% my takeaway after trying to parallelize using worktrees. While Claude has no problem managing more than one context instance, I sure as hell do. It’s exhausting, to the point of slowing me down.

ToucanLoucan•18m ago
> If you put the code up for review, regardless of the source, you should fundamentally understand how it works.

Inb4 the chorus of whining from AI hypists accusing you of being a coastal elitist intellectual jerk for daring to ask that they might want to LEARN something.

I am so over this anti-intellectual garbage. It's gotten to such a ridiculous place in our society and is literally going to get tons of people killed.

zahlman•15m ago
I understand and agree with your frustration, but this is not what discourse here is supposed to look like.
j-bos•16m ago
> I don't think "IDK Claude did that" is a valid excuse. Immediate rejection.

I strongly agree; however, managers^x do not, and want to see reports of the massive "productivity" gains.

Izikiel43•5m ago
You tell them Clippy's Revengeance PR caused an outage worth millions of dollars because of the push for productivity, and they shouldn't bother you for a couple of months.
dingnuts•5m ago
Sane CTOs think "Claude did that" is invalid. I assure you: those leaders exist. Refuse to work for idiots who think bots can be held accountable. You must understand every line of code yourself.

"Claude did that" is functionally equivalent to "idk I copied that from r/programming" and is totally unacceptable for a professional

bcrosby95•4m ago
This.

I know many who have it from on high that they MUST use AI. One place even has bonuses tied not to productivity, but to how much they use AI.

Meanwhile managers ask: if AI is writing so much code, why aren't they seeing it in topline productivity numbers?

kibwen•4m ago
> I don't think "IDK Claude did that" is a valid excuse. Immediate rejection.

That will work, but only until the people filing these PRs go crying to their managers that you refuse to merge any of their code, at which point you'll be given a stern reprimand from your betters to stop being so picky. Have fun vibe-reviewing.

throwawaysleep•40m ago
A lot depends on how you define productive.

At one of my jobs, the PRs are far less reviewable, but now the devs write tests when they didn't used to bother. I've always viewed reviewing PRs as work without recognition, so I never did much of it anyway, but now that there are passing tests, I often approve without more than a cursory skim.

So yes, it has made it more productive for me to get their work off my plate.

skydhash•36m ago
But are the tests actually useful? You can have a test suite that is actually harmful if it's not ensuring business rules or domain correctness. Anything else makes for a brittle dev ecosystem.
whstl•38m ago
> That is, do people feel like they’re getting better, more reviewable PRs?

No, like you I’m getting more PRs that are less reviewable.

It multiplies what you’re capable of. So you’ll get a LOT of low quality code from devs who aren’t much into quality.

alexander2002•38m ago
AI is like a giant calculator: you need a formula to make it work for your use case
asciii•30m ago
That's a great analogy. I recently read about integrating AI similarly to the use of calculators in math class -- learn how to do the basic operations (+, -, /, *) first and then use the calculator to scale, so you get some theoretical grounding
SamInTheShell•26m ago
Accurate. It's autocomplete on steroids.
liveoneggs•23m ago
Calculators are mostly deterministic and AI is explicitly not.

When computers give different answers to the same questions it's a fundamental shift in how we work with them.

uludag•34m ago
I've been at the same company both before and after the AI revolution. I've felt something similar. People seem to be more detached, more aloof in their work. I feel like we're discussing our code less and are less able to have coherent big-picture plans concerning the code at-large.
qsort•32m ago
It's extremely hard to measure productivity correctly and self-reports are worthless. I don't think AI tools are a net negative in the average case (people are definitely indexing too much on that goddamn METR article) but "i'm 10x more productive, source trust me bro" is equally nonsense.

Using AI tooling means, at least in part, betting on the future.

didericis•2m ago
> Using AI tooling means, at least in part, betting on the future.

It means betting on a particular LLM centric vision of the future.

I’m still agnostic on that. I think LLMs allow for the creation of a lot of one off scripts and things for people that wouldn’t otherwise be coding, but I have yet to be convinced that more AI usage in a sufficiently senior software development team is more valuable than the traditional way of doing things.

I think there’s a fundamental necessity for a human to articulate what a given piece of software should do with a high level of specificity, and that can’t ever be avoided. The best you can do is piggyback off of higher-level languages and abstractions that guess what the specifics should be, but I don’t think it’s realistic to think all combinations of all business logic and UI can be boiled down to common patterns that an LLM could infer. And even if that were true, people get bored/like novelty enough that they’ll always want new human-created stuff to shove into the training set.

pfisherman•26m ago
> Personally, I’ve been seeing the number of changes for a PR starting to reach into the mid-hundreds now. And fundamentally the developers who make them don’t understand how they work.

Could this be fixed by adjusting how tickets are scoped?

chocolatemario•25m ago
This just sounds like a low quality bar, or PRs that are too large in scope for AI. In any case, it sounds like taking a single stab at it with an LLM and calling it good. I’m not really an AI-for-everything type of dev, but there are some tasks that AI excels at. If you’re doing it for feature work on a high-churn product without a tight grip on the reins, I fear for your product’s future.
ratelimitsteve•16m ago
>I’ll ask them something about the design and they’ll reply: “IDK Claude did that.”

I would want someone entirely off of my team if they did that. Anyone who pushes code they don't understand at least well enough to answer "What does that do?" and "Why did you do it that way?" deserves for their PR to be flat out rejected in whole, not just altered.

rafaelmn•11m ago
I've seen the worst impact on mid/junior level devs. Where they would have had to struggle through a problem before - AI is like a magic shortcut to something that looks like it works. And then they submit that crap and I can't trust them again - they will give me AI code without fully understanding what it does. It robbed them of the learning process and made them even less useful, while making it seem to them that they were achieving something. I'm seeing these kinds of people getting removed from the workforce fast - you can probably prompt the AI better on your own, have one less layer of indirection, and it will be faster.
nlawalker•11m ago
> I’ll ask them something about the design and they’ll reply: “IDK Claude did that.”

"Then your job is to go ask Claude and get back to me. On that note, if that's what I'm paying you for now, I might be paying you too much..."

I'm really interested to see how the intersection of AI and accountability develops over the next few years. It seems like everyone's primary job will effectively be taking accountability for the AI they're driving, and the pay will be a function of how much and what kind of accountability you're taking and the overall stakes.

snitzr•44m ago
I feel like the only thing propping up the US economy right now is AI hype.
leptons•23m ago
"AI" is sucking up all the investment money, so unless you are working in "AI", you aren't likely to get funding. It's hurting the economy more than helping.
kachapopopow•41m ago
AI has allowed me to work on projects I was simply too lazy to do or hadn't attached any importance to. It also allows me to skip the entire process of contacting a designer or doing design work myself (which might actually be a bad thing). The thing I know for sure is that none of the smart home additions I had sitting in a box would be finished if AI didn't exist.
SoftTalker•26m ago
If they were so unimportant why do them at all?

It's like buying a trinket just because it's cheap. It's still ultimately wasteful.

kachapopopow•15m ago
Because, why not? Just because I don't attach importance to them doesn't mean they don't make life more convenient for me or those around me. It's just a good motivation tool in general.

Also if you buy an ultimately useless trinket, well that's just life. Everything we do can be considered 'ultimately' useless.

GaggiX•41m ago
This is another article based on the study that had a population of 16 people, most of whom had never used the tool before.

Edit: "amirhirsch" user probably explained this better than me in an above comment.

randysalami•40m ago
Work harder, create datapoints, democratize knowledge. Except that knowledge will be confined eventually and doom the futures of many people. Use AI now to get ahead of your peers by feeding it questions and evaluating responses. Then in 10 years, Insert Field Here will be dominated by models trained by yesterday’s experts. New members of the field will not be able to compete with the collective knowledge of 1000s of their predecessors. Selling the futures of our youth for short-term gains. It’s quite sad and it is what’s happening.

It’s a shame too, because it really could have been something so much more amazing. I’d imagine higher education would shift to how it used to be: a pastime for bored elites. We would probably see a large reduction in the middle class and its eventual destruction. First they went for manufacturing with its strong unions; now they go for the white-collar worker, who has little solidarity with his common man (see the lack of unions and ethics in our STEM field, most likely because we thought we could never be made redundant). Field by field the middle class will be destroyed, with the lower class in thrall to addictive social media, substances, and the illusion of selection into the influencer petty-elite (who remain compliant because they don’t offer value proportional to the bribes they receive). The elites will have recreated the dynamic that existed for most of human history. Final point: see the obsession of current elites with using artificial insemination to create a reliable and durable contingent of heirs, something previous rulers could only dream about.

It disgusts me and pisses me off so much.

WillAdams•39m ago
I will let you know tomorrow.

The front end jan.ai now has a feature:

>Interface for uploading (or specifying) a folder, then running the prompt on all files in the folder

https://github.com/menloresearch/jan/issues/4909#event-18973...

Hopefully that will allow me to batch process checks/invoices to get them named appropriately, we'll see.

amirhirsch•38m ago
The blogosphere (am I dating myself?) keeps bringing up the METR study (https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...) without really understanding the result. The guy with experience had a huge boost. You are reading the results wrong if your conclusion is this blog.

And that was before Claude Code.

georgeburdell•33m ago
The article itself mentions the J-shaped curve (sacrifice productivity now while you learn the tools, then gain that and more later on). It’s really just poor (or perhaps AI) writing
kebman•38m ago
Here's the kicker: AI was supposed to automate the boring parts so we could “focus on high-leverage, strategic, needle-moving, synergistic core competencies.” Instead, we’re stuck in a recursive loop of prompt engineering, hallucination triage, output validation, re-prompting, Slack channel FOMO, and productivity theater. We’ve basically replaced “doing the work” with “managing the tool that kinda tries to do the work but needs babysitting.” Congrats—we’ve invented Jira for thought. And here's the kicker.
throwawaysleep•32m ago
Or we are all just dev leads now managing junior dev swarms.
leptons•19m ago
Those "junior dev swarms" will never become seniors, so you're perpetually handholding and always getting junior-dev results. It isn't a step forward in any way.
vasco•34m ago
I've created hundreds of small scripts that I wouldn't have bothered with before, where I'd either have done some manual checks or simply not had the information. Just on "small script" productivity it has already saved me a lot of time.

The problem is people trying to make the models do things that are too close to their limit. You should use LLMs for things they can ace already, not waste time trying to get them to invent some new algorithm. If I don't 0-3 shot a problem, then I will just either do it manually or not do it.

It's similar to giving up on a Google search: you try a few times, and if nothing useful comes up in the first few prompts, you don't keep at it the whole afternoon.

paulcole•32m ago
I'm in a similar boat to you.

I'm not a programmer but from time to time would make automations or small scripts to make my job easier.

LLMs have made much more complex automations and scripts possible while making it dead simple to create the scripts I used to make.

timeinput•13m ago
Agreed, and after you've worked with them for a bit you can start to predict where they are going to fail and do something silly like delete your code base, and where they'll have no trouble and succeed at adding that feature you were after.

A good code review, and an edit to remove the excess verbosity, and you've got a feature done real fast.

Ask it for something at or above its limit and the code is very difficult to debug, difficult to understand, has potentially misleading comments, and more. Knowing how to work with these overly confident coworkers is definitely a skill. I feel it varies significantly from model to model as well.

It's often difficult to task other programmers with work at or above their limits too.

pfisherman•33m ago
> But here’s the kicker: we were told AI would free us up for “higher-level work.” What actually happened? We just found more work to fill the space. That two-hour block AI created by automating your morning reports? It’s now packed with three new meetings. The 30 minutes you saved on data analysis? You’re using it to manage two more AI tools and review their outputs.

This is basically the definition of increased productivity and efficiency. Doing more stuff in the same amount of time. What I tell people who are anxious about whether their job might be automated away by AI is this:

We will never run out of problems that need solving. The types of problems you spend your time solving will change. The key is to adapt your process to allocate your time to solving the right kinds of problems. You don’t want to be the person taking an hour to do arithmetic by hand when you have access to spreadsheets.

marknutter•12m ago
> We will never run out of problems that need solving. The types of problems you spend your time solving will change. The key is to adapt your process to allocate your time to solving the right kinds of problems. You don’t want to be the person taking an hour to do arithmetic by hand when you have access to spreadsheets.

And this has always been the case throughout all of human history.

pftburger•32m ago
AI efficiency gains don’t benefit employees, they benefit _employers_, who get more output from the same salary. When you’re salaried, you’re selling 8 hours of time, not units of work. AI that makes you 20% faster doesn’t mean you work 20% fewer hours or get a 20% raise. It means your employer gets 20% more value from the same labor cost.

Marx: workers sell their capacity to work for a fixed period, and any productivity improvements within that time become surplus value captured by capital.

AI tools are just the latest mechanism for extracting more output from the same wage. The real issue isn’t the technology—it’s that employees can’t capture gains from their own efficiency improvements. Until compensation models shift from time-based to outcome-based, every productivity breakthrough just makes us more profitable to employ, not more prosperous ourselves.

It’s the Industrial Revolution all over again and we’re the Luddites

Herring•12m ago
American workers don't care about "socialism". Look who they elected president. He won the popular vote.

Eventually they will have to care when things get bad enough -- and it's definitely trending that way fast [1]. But not today and not tomorrow.

[1] https://data.worldhappiness.report/chart

dragonwriter•8m ago
> AI efficiency gains don’t benefit employees—they benefit employers who get more output from the same salary.

So, they also benefit developers that become solopreneurs.

So they improve the next-best alternative for developers, compared to working as employees.

What happens when you improve the next-best alternative?

> AI tools are just the latest mechanism for extracting more output from the same wage.

The whole history of software development has been rapid introduction of additional automation (because no field has been more the focus of software development than itself), and looking at the history of developer salaries, that has not been a process of "extracting more output from the same wage". Yes, output per $ wage has gone up, but real wages per hour or day worked for developers have also gone up, and done so faster than wages across the economy generally. It is true and problematic that the degree of capitalism in the structure of the modern mixed economy means that the gains of productivity go disproportionately to capital, but it is simply false to say that they go exclusively to capital across the board, and it is particularly easy to see that this has specifically been false in the case of productivity gains from further automation in software development.

esafak•28m ago
Come on, what did you think was going to happen? The historical record has consistently shown that humans have not worked less when given tools that increased productivity; they simply produced more.

This is what the whole four-day workweek movement is about: to reclaim some of that productivity increase as personal time. https://en.wikipedia.org/wiki/Four-day_workweek

The economist Keynes predicted a century ago that the workweek would drop to 15 hours due to rising productivity. It has not happened, for social reasons.

I don't know what's going to happen when humans become redundant; that's an incipient issue we'll have to grapple with.

SoftTalker•14m ago
Humans will probably be cheaper than machines for a long time for some things. They reproduce themselves, they self-repair (up to a point), they are quite dexterous, they can be powered fairly cheaply by stuff that grows out of the earth, and they learn by example without explicit programming. They will do the menial but deceptively demanding high-dexterity tasks that need to be done: laundry, dishes, housecleaning. New construction will largely be done by machines, but repairs are more unpredictable, so human mechanics, plumbers, etc. will continue to be in demand.

Software development as a career will evaporate in the next decade, as will most "knowledge" work such as general medicine, law, and teaching. Surgeons and dentists will continue a bit longer.

Bottom line, most of us will be doing chores while the machines do all the production and creative work.

darth_avocado•27m ago
AI is bringing efficiency, and it is making us work harder. It’s because AI marginally improves certain workflows, but management is using that to fire employees and offload their work onto the remaining ones. You get a 1.2x improvement in efficiency but 2x the work.
turnsout•26m ago
It was ever thus. In each technological revolution—from the Industrial Revolution to the personal computing revolution—the promise is that the technology will make work so efficient and productive that we can all work less.

Unfortunately, it is always a deliberate lie by the people who stand to gain from the new technology. Anyone who has thought about it for five seconds knows that this is not how capitalism works. Productivity gains are almost immediately absorbed and become the new normal. Firms that operate at the old level of productivity get washed out.

I simply can't believe that we're still falling for this. But let's hold out hope. Maybe AGI is just around the corner, and literally everyone in the world will spend our time sipping margaritas on the beach while we count our UBI. Certainly AI could never accelerate wealth concentration and inequality, right? RIGHT?

isoprophlex•25m ago
At least AI gave the author an easy think piece without having to write anything in their own style.

"But here's the kicker"

"It's not x. It's y."

"The companies that foo? They bar."

Em-dashes galore.

I'm either hypersensitized, seeing ghosts, or this article got the "yo claude make it pop" treatment. It's sad, but anything overly polished immediately triggers some "is this slop or an original thought actually worth my time" response.

nathan_compton•19m ago
I find music from the late 80s and 90s almost unlistenable, as digital audio processing created a very samey sound for everything. I think it's instructive that even the iconoclastic Devo produced absolutely dogshit-sounding recordings from this era.

New technology often homogenizes and makes things boring for a while.

bko•25m ago
I don't get these articles. First the author claims that AI made us more productive but now the time we saved is spent on more work (!)

> But here’s the kicker: we were told AI would free us up for “higher-level work.” What actually happened? We just found more work to fill the space. That two-hour block AI created by automating your morning reports? It’s now packed with three new meetings.

But then a few sentences later, she argues that tools made us less productive.

> when developers used AI tools, they took 19% longer to complete tasks than without AI. Even more telling: the developers estimated they were 20% faster with AI—they were completely wrong about their own productivity.

Then she switches back to saying it saves us time, but at the cost of cognitive debt!

> If an AI tool saves you 30 minutes but leaves you mentally drained and second-guessing everything, that’s not productivity—that’s cognitive debt.

I think you have to pick one, or just admit you don't like AI because it makes you feel icky for whatever reason so you're going to throw every type of argument you can against it.

bsenftner•19m ago
It's sloppy clickbait writing.
bityard•14m ago
There was also no advice for the main problem posited in the first couple of paragraphs. That is, being asked to do more with less time.

The right answer to this is: speak up for yourself. Dumping your feelings into HN or Reddit or your blog can be a temporary coping mechanism, but it doesn't solve the problem. If you are legitimately working your tail off and not able to keep up with your workload, tell your manager or clients. Tactfully, of course. If they won't listen, then it's time to move on. There are reasonable people/companies to work with out there, but they sometimes take some effort to find.

downrightmike•23m ago
A doctor can review 50 xrays a day, AI comes in and flags one for re-review, the doctor now can only do 49 reviews a day.
nathan_compton•23m ago
This isn't surprising to me. My experience is that AI is best suited for a single person to get started rapidly with a new project or, if used artfully, to quickly orchestrate refactoring that the user has planned. I've personally found AI to be good for my productivity (haven't measured, however, my work is not conducive to that kind of thing) but I've also found I use AI primarily to look up documentation and to type code out. I still think about software design as much as I ever have, whether its the initial design step or refactoring.
6gvONxR4sf7o•22m ago
We've had technological progress that rapidly shifts the number of person-hours per <output> for generations. We don't have to guess. We've seen this play out many times already.

At first, we spend our time one way (say eight hours, just to pick a number). Then we get the tools to do all of that in six hours. Then when job seeking and hiring, we get one worker willing to work six hours and another willing to work eight, so the eight-hour worker gets the job, all else equal. Labor is a marketplace, so we work as much as we're willing to in aggregate, which is roughly constant over time, so efficiency will never free up individuals' time.

In the context of TFA, it means we just shift our time to "harder" work (in the sense of work that AI can't do yet).

Oras•18m ago
My experience with AI coding is that it might be slower to develop in the short term, but it saves a ton of time in the long term.

Here is an example.

I decided to create a new app, so I write down a brief of what it should do, ask AI to create a longer readme file about the platform along with design, sequence diagram, and suggested technologies.

I review that document, see if there is anything I can amend, then ask AI for the implementation plan.

Up until this point, this has probably increased the time I usually spend describing the platform in writing. But realistically, designing and thinking about systems was never that fast. I would have to think about use cases, imagine workflows in my mind, and do pen-and-paper diagrams, which I don't think any of the productivity reports are covering.

firefoxd•18m ago
In my team, it's making one dev more prolific, and everybody else work harder.

The most junior dev on my team was tasked with setting up a repo for a new service. The service is not due for many, many months, so this was an opportunity to learn. What we got was a giant PR with hundreds of new configurations no one had heard of. It's not bad; it's just that we don't know what each conf does. Naturally we asked him to explain or give an overview, and he couldn't, because he had fed the whole thing to an LLM and it spat out the repo. He even had fixes for bugs we didn't know we had in other repos. He didn't know either. But it took the rest of the team digging in to figure out what's going on.

I'm not against using LLM, but now I've added a new step in the process. If anyone makes giant PRs, they'll also have to make a presentation to give everyone an overview. With that in mind, it forces devs to actually read through the code they generate and understand it.

vkou•6m ago
I think the simpler expectation here is the same one you should have for non-AI code.

Don't allow giant PRs without a damn good reason for them. Incremental steps, with testing (automated or human) to verify their correctness, that take you from a known-good-to-known-good state.

soiltype•17m ago
sic semper operarius ("thus ever the worker").

you will never be given your time back by an employer. you have to take it. you might be able to ask for it, but it won't be freely given, whether or not you become more efficient. LLM chatbots and agents are, in this sense, just another tool that changes our relationship to the work we do (but never our relationship to work).

mmanfrin•13m ago
Modern cotton gin.
ubicomp•11m ago
Just like with "automatic checkout systems" at a grocery store: passing the labor onto the individual instead of the expert. We don't even get the same infrastructure professionals get. Entering a PLU for a piece of fruit is a mind-blurring whisk of hands over a dial pad for a pro, and a mind-numbingly arduous, piss-poor series of taps for the ill-positioned entrant.
tcrow•7m ago
This is admittedly much easier with greenfield projects, but if you can keep the AI focused on tight, modular development, meeting service specs, and not have the AI try to address cross-cutting concerns, you get much better outcomes. It does put more responsibility on humans for proper design and specification, but if you are willing to do that work, the AI can really assist in the raw development aspect during implementation.