Cursor, Windsurf, and Roo Code / Cline are fine, but nothing feels as thorough and useful to me as Claude Code.
The Codex CLI from OpenAI is not bad either; there's just something satisfying about the LLM straight-up using the CLI.
The more interesting question is whether this is true across the economy as a whole. In my view the answer is clearly no. Are we already at the point where additional software adds no value at the margin? No.
So though any particular existing business might stop hiring or even cut staff, it won't matter if more businesses are created to do yet more things in the world with software. We might even end up in a place where across the economy, more dev jobs exist as a result of more people doing more with software in a kind of snowball effect.
More conservatively, though, you'd at least expect employment to settle at equilibrium with current jobs, if indeed there is new demand for software to soak up.
I know about the context window part and Cursor RAG-ing the codebase, but isn't IDE integration a true force multiplier?
Or does Claude Code do something similar with "send to chat", smart autocomplete (Cursor's TAB feature), etc.?
I fired it up, but it seemed like just Claude in a terminal, with a lot more manual copy-pasting expected?
I tried all the usual suspects in AI-assisted programming, and Cursor's TAB is too good to give up vs Roo / Cline.
I do agree Claude's the best for programming, so I'd love to use its full-featured version.
it completely blew my mind. i wrote maybe 10 lines of code manually. it’s going to eliminate jobs.
that's the part i'm not sold on yet. it's a tool that allows you to do a year's work in a week - but every dev in every company will be able to use that tool, thus it will increase the productivity of each engineer by an equal amount. that means each company's products will get much better much faster - and it means that any company that cuts head count will be at risk of falling behind its competitors.
i could see it getting rid of some of the infosec analysts, i guess. since it'll be easier to keep a codebase up to date, the folks that run a Nessus scan and cut tickets asking teams to upgrade their codebase will have less work available.
Exaggerations like this really don't help your credibility
Brings a crazy new meaning to "fail fast" though
You should never have to copy/paste something from Claude Code...?
I currently use cursor with Claude 4 Sonnet (thinking) in agent mode and it is absolutely crushing it.
Last night I had it refactor some Django / React / Vite / Postgres code for me to speed up data loading over a websocket, and it managed to:
- add binary websocket support via a custom hook
- add missing indexes to the model
- clean up the data structure of the payload
- add MessagePack and gzip compression (roughly the piece sketched below)
- document everything it did
- add caching
- write tests
- write and use scripts while doing the optimizations to verify that the approaches it was attempting actually sped up the transfer
All entirely unattended. I just walked away for 10 minutes and had a sandwich.
The best part is that the code it wrote is concise, clean, and even stylistically similar to the existing codebase.
If Claude Code can improve on that, I would love to know what I'm missing!
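For readers wondering what the MessagePack + gzip piece might look like, here is a minimal sketch of the server side, assuming Django Channels and the msgpack package. The consumer and field names are invented for illustration, not the commenter's actual code:

```python
# Sketch: send a binary websocket frame serialized with MessagePack, then gzipped.
# Hypothetical names; assumes Django Channels and the `msgpack` package.
import gzip

import msgpack
from channels.generic.websocket import AsyncWebsocketConsumer


class DataFeedConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        await self.accept()

    async def send_rows(self, rows):
        # rows: list of dicts, e.g. [{"ts": 1712000000, "value": 42.0}, ...]
        packed = msgpack.packb(rows, use_bin_type=True)
        compressed = gzip.compress(packed)
        # Passing bytes_data (instead of text_data) sends a binary frame,
        # which the client-side hook decodes with the reverse steps.
        await self.send(bytes_data=compressed)
```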
Apple builds both the hardware and the software so it feels harmonious and well optimized.
Anthropic builds both the model and the tool, and it just works. Sonnet 4 in Cursor is good too, but if you've got the $20 plan you're often crippled on context size (not sure if that's true with Sonnet 4 specifically).
I had actually heard about the OpenAI Codex CLI before Claude Code and had the same thought initially, not understanding the appeal.
Give it a shot and maybe you'll change your mind. I only tried it because of the hype, and for once the hype was right.
I still use the Cursor auto complete but the rest is all Claude Code.
Even without the extension Claude is directly modifying and creating files so you never have to copy paste.
- It generates slop in high volume if not carefully managed. It's still working, tested code, but it can easily be illogical. This tool scares me if put in the hands of someone who "just wants it to work".
- It has proven to be a great mental-block remover for me. A tactic i've often used in my career is to build the most obvious, worst implementation i can when i'm stuck, because i find it easier to find flaws in something and iterate than to build a perfect impl right away. Claude makes it easy to straw-man a build and iterate on it.
- All the low stakes projects i want to work on but i'm too tired to after real work have gotten new life. It's updated library usage (Bevy updates were always a slog for me), cleaned up tooling and system configs, etc.
- It seems incapable of seeing the larger picture of why classes of bugs happen. E.g. on a project i'm Claude Code "vibing" on, it's made a handful of design mistakes that started to cause bugs. It will happily try to fix individual issues all day rather than re-architect toward a less error-prone API, despite being capable of actually fixing the API woes if prompted to. I'm still toying with the memory though, so perhaps i can get it to reconsider this behavior.
- Robust linting, formatting, and testing tools for the language seem necessary. My pet peeve is how many spaces the LLM will add in. Thankfully cargo-fmt clears up most LLM gunk there (a minimal post-edit gate is sketched below).
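A minimal sketch of that kind of guardrail: a post-edit gate that rejects an LLM change unless formatting, lints, and tests all pass. The script itself is hypothetical; the cargo invocations are the standard ones, and it assumes cargo is on PATH in the project directory:

```python
# Minimal "gate" to run after an LLM edits a Rust codebase:
# reject the change unless formatting is clean, lints pass, and tests pass.
import subprocess
import sys

CHECKS = [
    ["cargo", "fmt", "--check"],              # fail on stray-whitespace gunk
    ["cargo", "clippy", "--", "-D", "warnings"],  # treat lints as hard errors
    ["cargo", "test"],                        # behavior still has to pass the suite
]

def gate() -> int:
    for cmd in CHECKS:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"rejected: {' '.join(cmd)} failed", file=sys.stderr)
            return result.returncode
    print("ok: edit passes formatting, lints, and tests")
    return 0

if __name__ == "__main__":
    sys.exit(gate())
```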
It absolutely aced an old take-home test I had though - https://jamesmcm.github.io/blog/claude-data-engineer/
But note that the problems it got wrong are troubling, especially the off-by-one error on the first attempt, as that's the sort of thing a human might not be able to validate easily.
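To illustrate the class of bug with a made-up example (not the one from the post): an off-by-one that runs cleanly and returns plausible-looking output, so only a targeted assertion catches it:

```python
# Hypothetical illustration of the off-by-one class of bug:
# both versions run without error and return plausible-looking output.
def pairwise_buggy(xs):
    # Silently drops the last pair: range should stop at len(xs) - 1.
    return [(xs[i], xs[i + 1]) for i in range(len(xs) - 2)]

def pairwise(xs):
    return [(xs[i], xs[i + 1]) for i in range(len(xs) - 1)]

xs = [1, 2, 3, 4]
assert pairwise(xs) == [(1, 2), (2, 3), (3, 4)]
assert pairwise_buggy(xs) == [(1, 2), (2, 3)]  # one pair short, no error raised
```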
I've been avoiding LLM-coding conversations on popular websites because so many people tried it a little bit 3-6 months ago, spotted something that didn't work right, and then wrote it off completely.
Everyone who uses LLM tools knows they're not perfect: they hallucinate sometimes, their solutions to some problems will be laughably bad, and all the other things that come with LLMs.
The difference is some people learn the limits and how to apply them effectively in their development loop. Other people go in looking for the first couple failures and then declare victory over the LLM.
There are also a lot of people frustrated with coworkers using LLMs to produce and submit junk, or angry about the vibe coding glorification they see on LinkedIn, or just feel that their careers are threatened. Taking the contrarian position that LLMs are entirely useless provides some comfort.
Then in the middle, there are those of us who realize their limits and use them to help here and there, but are neither vibe coding nor going full anti-LLM. I suspect that’s where most people will end up, but until then the public conversations on LLMs are rife with people either projecting doomsday scenarios or claiming LLMs are useless hype.
No surprises here.
It feels very akin to the Uber vs. Lyft situation: two companies with very different perceptions pursuing identical business models.
It's early days and nobody knows how things will go, but to me it looks like in the next century or so humans are going the way of the horse, at least when it comes to jobs. And if our society doesn't change radically, let's remember that the only way most people have of feeding and clothing themselves is to sell their labor.
I'm an AI pessimist-pragmatist. If the thing with AI gets really bad for wage slaves like me, I would prefer to have enough savings to put AIs to work in some profitable business of mine, or to handle my healthcare when disease strikes.
We will manage. Hey, we can always eat the rich!
As long as they are not made out of silicon....
Note that we have only sold our labor for a couple of hundred years. We had civilization for many thousands of years before that.
Dinosaurs did not live before they lived. And they did not die because their mode of production changed. They did not have a mode of production.
Are you suggesting LLMs will exterminate humanity?
How is it early days? AI has been talked about since at least the 50s, and neural networks have been a thing since the 80s.
If you are worried about how technology will be in a century, why stop right here? Why not take the state of computers in the 60s and stop there?
Chances are, if the current wave does not achieve strong AI, then there will be another AI winter, and what people will research in 30 or 40 or 100 years is not something that our current choices can affect.
Therefore the interesting question is what happens short-term not what happens long-term.
There's no comparing the AI we have today with what we had 5 years ago. There's a huge qualitative difference: the AI we had five years ago was reliable but uncreative. The one we have now is quite unreliable but creative at a level comparable with a person. To me, it's just a matter of time before we finish putting the two things together, and we have already started. Another AI winter of the sort we had before seems to me highly unlikely.
You can't just judge humans in terms of economic value, given that the economy is something those humans made for themselves. It's not like there can be an "economy" without humankind.
The only problem is the transition period, where perhaps _some_ work disappears, creating serious problems for those holding those jobs.
As for being creative, we had GPT2 more than 5 years ago and it did produce stories.
And the current AI is nothing like a human being in terms of the quality of the output. Not even close. It's laughable, and to me it seems like ChatGPT specifically is getting worse and worse, and they put more and more lipstick on the pig by making it appear more submissive and producing more emojis.
When you have exponential growth, it's always early days.
Other than that I'm not clear on what you're saying. What is in your mind the difference between how we should plan for the societal impact of AI in the short vs the long term?
The crowd claiming exponential growth has been at it for not quite a decade now. I have trouble separating fact from CEOs of AI companies shilling to attract that VC money. VCs desperately want to solve the expensive-software-engineer problem; you don't get that cash by claiming AI will be 3% better YoY.
Let's take the development of CPUs, where for 30-40 years observable performance actually did grow exponentially (unlike the current AI boom, where it does not).
Was it always early days? Was it early days for computers in 2005?
I'm not sure. I think we can extrapolate that repetitive knowledge work will require much less labor. For actual AGI capable of applying rigor, I don't think it's clear that the computational requirements are achievable without a massive breakthrough. Also, for general-purpose physical tasks, humans are still pretty dang efficient at ~100 watts, and self-maintaining.
Neither side is obviously right.
If scaling holds up enough to make AGI possible in the next 5-10 years, slowing down China by even a few years is extremely valuable.
They’re going to do that anyway. They already are. The reason that they want to buy these cards in the first place is because developing these accelerators takes time. A lot of time.
These AI solutions are great, but I have yet to see any solution that makes me fear for my career. It just seems pretty clear that no LLM actually has a "mental model" of how things work that can avoid the obvious pitfalls amongst the reams of buggy C++ code.
Maybe this is different for JS and Python code?
Still, sometimes it can solve a problem like magic. But since it does not have a world model it is very unreliable, and you need to be able to fall back to real intelligence (i.e., yourself).
this is flatly false for two reasons. One is that all LLMs are not equal; the models and capacities are quite different, by design. Secondly, a large amount of standardized LLM testing probes sequence-of-logic or other "reasoning" capacity. Repeating the "stochastic parrots" fallacy is basically proof of not having looked at the battery of standardized tests that are common in LLM development.
And the testing does not always work. You can only be sure it will be really, really correct maybe 80% of the time, and that forces you to check everything. Of course, using LLMs makes you faster for some tasks, and the fact that they are able to do so much is super impressive, but that's it.
Yea, they still need to improve a bit - but i suspect there will be a point at which individual devs could be getting 1.5x more work done in aggregate. So if everyone is doing that much more work, it has potential to "take the job" of someone else.
Yea, software is needed more and more, so perhaps it'll just make us that much more dependent on devs and software. But i do think it's important to remember that productivity gains always have the potential to replace devs, and LLMs imo have huge potential for productivity.
At least for C++, I've found it does a very mediocre job of suggesting project code (it has a tendency to drop subtle bugs all over the place, so you basically have to review it as carefully as if you'd written it yourself). But for asking things in Copilot like "Is there any UB in this file?" (it won't be perfect, but sometimes it'll point something out), and especially for writing tests, I absolutely love it.
This is not an AI thing; plenty of "mid-level" C++ developers could have made that same mistake. New code should not be written in C++.
(I do wonder how Claude AI does when coding Rust, where at least you can be pretty sure that your code will work once it compiles successfully. Or Safe C++, if that ever becomes a thing.)
I'm able to use AI for Rust code a lot more now than 6 months ago, but it's still common to have it spit out something decent looking, but not quite there. Sometimes re-prompting fixes all the issues, but it's pretty frustrating when it doesn't.
This is the crux of an interview question I ask, and you'd be amazed how many experienced C++ devs require heavy hints to get it.
Will the ability to use AI to write such a solution correctly be enough motivation to push C++ shops to adopt rust? (Or perhaps a new language that caters to the blindspots of AI somehow)
There will absolutely be a tipping point where the potential benefits outweigh the costs of such a migration.
Now this isn't a viable way of working if you're paying for this token-by-token, but with the Claude Code $200 plan ... this thing can work for the entire day, and you will get a benefit from it. But you will have to hold its hand quite a bit.
To claim that all of the benefit isn't going to naturally accrue to a thin layer of people at the top isn't speculation at this point - it's a bald-faced lie.
When AI finally does cause massive disruption to white-collar work, what happens then? Do we really think that most of the American economy is just going to downshift into living off a meager universal basic income allotment (assuming we could ever muster the political will to create such a safety net)? Who gets the nice car and the vacation home?
Once people are robbed of what remaining opportunities they have to exercise agency and improve their life, it isn't hard to imagine that we see some form of swift and punitive backlash, politically or otherwise.
IMO Jensen and others don’t know where AI is going any more than the rest of us. Your imaginary dystopia is certainly possible, but I would warn against having complete conviction it is the only possible outcome.
Absent some form of meaningful redistribution of the economic and power gains that come from AI, the techno-feudalist dystopia becomes a more likely outcome (though not a certain outcome), based on a straightforward extrapolation of the last 40 years of increasing income and wealth inequality. That trend could be arrested (as it was just after WW2), but that probably won't happen by default.
Historically, increased productivity has almost literally never increased wages or benefits without worker uprisings
Seriously look into the history of labor and automation
Sustained rates of increase _require_ increasing productivity. So no, they didn't explicitly state it; what they explicitly said was that productivity only results in increased wages if a worker uprising forces it. But that's the logical requirement of that statement.
The pie doesn't have to grow when the pie is already massive, but only 1% of people are taking 90% of the pie
The point was not in the exact number
The point is that there is an answer to "Where could increased wages come from if we don't increase productivity"
We don't have to increase productivity to pay some of the population less and other parts of the population more
I'm not sure you really understood my original post. I never said or meant to imply that wages never grow ever
I was talking about how increases in productivity do not lead to proportionally increased wages
Look, here's an example. Let's say I'm a worker producing Widgets for $20/hour by hand, and I can produce 10 widgets an hour. The company sells widgets for $10 each
In one hour I have produced $100 worth of widgets. The company pays me $20, the company keeps $80
Now the company buys a WidgetMachine. Using the WidgetMachine I can now produce 20 widgets an hour
I now produce $200 worth of Widgets per hour. The company still pays me $20, the company has now earned $180
My productivity has doubled, but my wage hasn't
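Restating that arithmetic in one place, as a quick check:

```python
# The widget example above: productivity doubles, the wage share halves.
wage = 20.0   # $/hour, unchanged before and after the WidgetMachine
price = 10.0  # $ per widget

for label, widgets_per_hour in [("by hand", 10), ("with WidgetMachine", 20)]:
    value = widgets_per_hour * price  # $ of widgets produced per hour
    employer_keeps = value - wage
    wage_share = wage / value
    print(f"{label}: produces ${value:.0f}/hr, "
          f"employer keeps ${employer_keeps:.0f}, wage share {wage_share:.0%}")
# by hand: produces $100/hr, employer keeps $80, wage share 20%
# with WidgetMachine: produces $200/hr, employer keeps $180, wage share 10%
```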
So next year inflation adds $5 to the going wage. The company increases my income to $25 and starts charging a couple of cents more per Widget so they can absorb my wage increase without any change to their bottom line
My wage matches inflation, it still "grows" but it is completely divorced from my productivity
More importantly, my wage growing to match inflation doesn't help my buying power even remotely. If my wage only goes up to exactly match inflation, then all I'm ever doing is treading water. At best I can keep the exact same standard of living and lifestyle
Increases in "real wages" should have "real" impact on your life, it should let you have a better life than before
So I showed you an increase in worker wages. If there is not corresponding worker uprising, your original claim is false.
And, again, you keep ignoring that the plot I showed is already inflation-adjusted. It is a real increase, not a nominal increase.
No it isn't. This is an extremely naive understanding of how any of this works
You even say it yourself, with the silly graph you keep posting
That graph doesn't show productivity; it just shows "real inflation-adjusted wages", which you keep harping on about.
But in general a person is not increasing their productivity year over year. So why would their wage go up to match inflation if, as you say, wages only go up when productivity increases? That doesn't make sense
The reality is that people already provide their employers with vastly more productivity than they are paid for. Their employers capture the majority of the value from that productivity. If someone's wage goes up to match inflation, their productivity hasn't increased; inflation has increased the value of their current productivity
You seriously don't seem to understand how any of this works
You keep shifting the goal posts, you keep talking about unrelated things, and you keep not addressing the core claims, and you keep not responding to the main refutation of your own original claim. I'm done beating my head against this wall. I hope you have a nice day.
I'm amazed you've found something that all economists can universally agree on. Economists don't know how any of this works either. Economics is not a science, and you can find data to support basically an unlimited number of arguments.
https://www.epi.org/productivity-pay-gap/
Change 1979q4–2025q1:
Productivity +86.0%
Hourly pay +32.0%
Productivity has grown 2.7x as much as pay
Capital doesn't pay labor unless it has to.
Average wages = f(labor productivity, demand for labor, labor supply)
The AGI future is that demand for labor crashes.
Even AI that's only borderline competent at fuzzy 'file these files into these systems' admin work would wipe out a class of middle income jobs.
And given that's basically here today technologically, we're in the digestion and expansion phase.
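A toy version of that wage function, with entirely made-up numbers; it shows nothing beyond the direction of the effect when labor demand shifts down:

```python
# Toy linear labor market: the wage where labor supplied equals labor demanded.
# Entirely illustrative numbers; only the direction of the effect matters.
def equilibrium_wage(demand_at_zero: float, demand_slope: float,
                     supply_at_zero: float, supply_slope: float) -> float:
    # demand: jobs offered = demand_at_zero - demand_slope * wage
    # supply: workers available = supply_at_zero + supply_slope * wage
    return (demand_at_zero - supply_at_zero) / (demand_slope + supply_slope)

before = equilibrium_wage(100, 1.0, 20, 1.0)  # -> 40.0
after = equilibrium_wage(60, 1.0, 20, 1.0)    # demand crashes -> 20.0
print(f"wage before: {before}, after demand crash: {after}")
```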
Not in the last 40 years in real income terms: https://www.epi.org/productivity-pay-gap/
This trend is likely to accelerate (productivity skyrocketing, wages stagnant)
We’re on HN. AI makes it easier for you to disrupt your employer.
Did the introduction of Python drastically reduce software developer salaries?
To a first approximation, there are two AI coding futures:
1. AI coding increases software development productivity, decreasing the cost of software, stimulating more demand for the now more efficient development.
2. AI coding increases software development productivity such that the existing labor pool is too large for demand.
I'd hazard (1) in the short term and (2) in the long term.
I know of at least one major company that continually benchmarks market rates and uses those as default raises.
Unsurprisingly, they have an average tenure of 10+ years...
If by "owning class" you actually mean "all people with agency" then, yeah, I agree.
That applies to you, not to your employer - in your hands, "cheap AI software generation" is, well... cheap. On the other hand, your employer owns patents, copyrights, distribution channels, politicians and connections - those become more valuable as the coding skills get cheaper. The "owning class" are those who own most of the high value items enumerated above.
Where have I heard this before? The drawbacks of offshoring are well known by now and AI does not really mitigate them to any extent.
AI should improve code quality for these offshore teams. That leaves time zone issues, which may or may not be a problem. If it is, offshore to Latin America.
I remember, because the same type of people dooming about AI were also telling me, a university student at the time, that I shouldn't get into software development, because salaries would cap out at $50k/year due to competition with low-cost offshore developers in India and Bangladesh.
The endgame isn't more employees or paying them more. It's paying fewer people, or no skilled people at all, when possible.
That's a fairly massive disruption.
I think everything else you're saying is happening or has happened, but companies hiring less because of anticipated AI productivity gains is also occurring. Like the scuttlebutt I hear about certain FAANGs requiring managers to have 9-10 direct reports now instead of 7.
Employers would rather pay more to hire someone new who doesn't know their business than give a raise to an existing employee who's doing well. They're not going to pay someone more because they're more productive, they'll pay them the same and punish anyone who can't meet the new quota.
So if you want to have that discussion, that's fine, but it's totally separate from the original discussion about productivity and wages.
If you look at the graph you posted and carry the slope of the pre-70s trajectory forward, assuming the 70s-90s slump had not happened, would the graph end in the same place it currently does?
No. Not even close
> So if you want to have that discussion, that's fine, but it's totally separate from the original discussion about productivity and wages.
It's absolutely not a separate discussion; the end result is that the same "real wage" that used to provide a comfortable life is now poverty
You cannot just shrug your shoulders and say "well incomes are matching productivity so this is fine actually"
When you massively increase the supply of labor, you’re going to have downward pressure on wages.
What/who has financed productivity increases? Isn’t it tools and infrastructure etc. for the most part, paid for by asset owners? There are likely exceptions, but big picture.
Capital has earned more than labor since the 90s-2000s (depending on the country).
What you just said as a rebuttal was pretty much his point; you just didn't internalize what the productivity gains mean at the macro level, looking only at the select few who will continue to have a job
Great, let's see an example!
> To claim that all of the benefit isn't going to naturally accrue to a thin layer of people at the top isn't speculation at this point - it's a bald-faced lie.
Except that innovation has led to more jobs, new industries, more prosperity, and fewer working hours. The stark example of this: you aren't a farmer: https://modernsurvivalblog.com/systemic-risk/98-percent-of-a...
Your shirts aren't a week's or a month's income: https://www.bookandsword.com/2017/12/09/how-much-did-a-shirt...
Go back to the 1960s, when automation was new. It was an expensive, long-running failure for GM to put in those first robotic arms. Today there are people who have CNC shops in their garage; the cost of starting that business up is in the same price range as the pickup truck you might put in there. You no longer need accountants or payroll, and you're not spending as much time doing these things yourself: it's all software. You don't need a retail location or wholesale channels; build your website and app, and leverage marketplaces and social media. The reality is that it is cheaper and easier than ever to be your own business... and lots of people are figuring this out and thriving.
> Do we really think that most of the American economy is just going to downshift
No, I think my fellow Americans are going to scream and cry and hold on to dying ways of life; see coal miners.
There isn't a line of unemployed draftsmen out there begging for change because we invented AutoCAD: https://azon.com/2023/02/16/rare-historical-photos/
What happened to all the switchboard operators?
How about the computers: the people who used to do math at desks, with slide rules, before we replaced them with machines?
These are all white-collar jobs that we replaced with "automation".
Amazon existed before; it was called Sears. It was a catalog, so pictures, printing, and mailing in checks; we replaced all of that with a website and CC processing.
So I'm quite confident the future will be similar with AI. Yes, in theory, it could already replace perhaps 90% of the white collar work in the economy. But in practice? It will be a slow, decades-long transition as old-school / less tech savvy employers adopt the new processes and technologies.
Junior software engineers trying to break into high-paying tech jobs will be hit the hardest, IMO: employers are tech-savvy, the supply of junior developers is as high as ever, and juniors simply take too long to add more value than using Claude unless you have a lot of money to burn on training them.
We used to just call that lying.
> When AI finally does cause massive disruption to white collar work
It has to exist first. Currently you have a chat bot that requires terabytes of copyrighted data to function and shows sublinear increases in performance for exponential increases in cost. These guys genuinely seem to be arguing over a dead end.
> what happens then?
What happened when gasoline engines removed the need to have large pools of farm labor? It turns out people are far more clever than a "chat bot" and entire new economies became invented.
> that we see some form of swift and punitive backlash, politically or otherwise.
Or people just move onto the next thing. It's hilarious how small imaginations become when "AI" is being discussed.
AI will crash the price of manufactured goods. Since all prices are relative, the price of rivalrous goods will rise. A car will be cheap. A lakeside cabin will be cheap. A cottage in the Hamptons will be expensive. Superbowl tickets will be a billion dollars each.
>meager universal basic income allotment
What does a middle class family spend its money on? You don't need a house within an easy commute of your job, because you won't have one. You don't need a house in a good school district, because there's no point in going to school. No need for the red queen's race of extracurriculars that look good on a college application, or to put money in a "college fund", because college won't exist either.
The point of AI isn't that it's going to blow up some of the social order, but that it's going to blow up the whole thing.
Quite the opposite: persistent inflation has been with us for a long time despite automation. It's not driven by labor cost (even mainstream econ knows it); it's driven by monopolization, which corporate AI facilitates and shifts into overdrive.
> The point of AI isn't that it's going to blow up some of the social order, but that it's going to blow up the whole thing.
AI will blow up only what its controllers tell it to; that control is the crux of the problem. AI-driven monopolization allows a few controllers to keep the multitudes in their crosshairs and do whatever they want, with whomever they want, and J. Huang will make sure they have the GPUs they need.
> You don't need a house within an easy commute of your job, because you won't have one.
Remote work has been a thing for quite some time, but remote housing is still rare anyway: a house provides access not only to jobs and school but also to medical care, supply lines, and social interaction. There are places in Montana and the Dakotas that see specialist doctors only once a week or month, because the doctors fly in from places as far away as Florida.
> You don't need a house in a good school district, because there's no point in going to school... and college won't exist either.
What you're describing isn't a house, it's a barn! Can you lactate? Because if you can't, nobody is going to provide you with a stall in the glorious AI barn.
The biggest long term competitor to Anthropic isn't OpenAI, or Google... it's open source. That's the real target of Amodei's call for regulation.
> that 50% of all entry-level white-collar jobs could be wiped out by artificial intelligence, causing unemployment to jump to 20% within the next five years
I'm not a betting woman but I feel extremely confident taking the other end of this bet.
I am curious to hear why you think that?
So far, I've seen jobs lost to tariffs. I've yet to see a job lost to AI. Observations are not evidence, but so far there is no evidence I see that shows AI to be a stronger macro economic force than say recessions, tariffs (trade wars) or actual wars.
It makes a lot of mistakes in its own code, trivial ones, like creating functions and calling them with the arguments reversed.
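A hypothetical illustration of that failure mode, with invented names; the call runs without complaint, so only a test or a careful read catches it:

```python
# Hypothetical illustration: arguments reversed at the call site.
# Both parameters are the same type, so nothing flags the mistake.
def apply_discount(price: float, discount: float) -> float:
    """Return price reduced by a fractional discount (0.2 == 20% off)."""
    return price * (1.0 - discount)

total = apply_discount(0.2, 100.0)  # reversed! meant apply_discount(100.0, 0.2)
print(total)  # -19.8, silently nonsensical

# Keyword arguments make this class of mistake much harder to commit:
total = apply_discount(price=100.0, discount=0.2)  # 80.0
```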
The idea that it's going to blackmail me somehow by showing me what /looks/ like an email, sounds laughable.
If AI becomes sufficiently advanced and cheap, productivity will go up, and companies will as a result need to hire more AI.
Let's be clear: if an AI is developed that is equal to a human in intelligence and cheaper than a human to employ, capitalism in general is impacted in a major way.
If that happens, just pray that robotics doesn't become sufficiently advanced that jobs requiring crafting or manual work also get replaced.
Also to be clear I’m not advocating or saying whether or not any of these things will happen. I am simply saying that hypothetically if AI progresses in a certain way then the following consequence is inevitable.
Cf. OpenAI, who released o3 and didn't publish any model card or safety eval at all, their justification being "it's not GA, only the top paid subscription can use it". That's not how safety works.
The same happened with blue collar jobs.
Right now we're betting on the S&P 500 going up, which is mostly backed by the belief that machines are going to replace us soon.
the only thing I'm certain of is that I'd take advantage of this so-called "AI revolution". Maybe, just maybe, humans get replaced with humans + AI tools, for now at least.
AFAICT this is a complete article of faith. Or insofar as it's true, it's true because doing it in the open allows other stakeholders to criticize and shape its direction, which is precisely the dialogue Jensen seems allergic to (makes sense given his incentives, of course).