frontpage.

CyberChef Payment Cryptography Extensions

https://www.jacobmarks.com/2026/04/cyberchef-payment-cryptography.html
1•J8K357R•26s ago•0 comments

Show HN: AI CAD Harness

https://fusion.adam.new/install
1•zachdive•1m ago•0 comments

Show HN: My Private GitHub on Postgres

https://github.com/calebwin/gitgres
1•calebhwin•5m ago•0 comments

Artemis II Fault Tolerance

https://alearningaday.blog/2026/05/01/artemis-ii-fault-tolerance/
1•speckx•6m ago•0 comments

MetaModel – AI builds structured, formula-driven apps from plain English

https://www.metamodel.app
1•joegra•6m ago•0 comments

Show HN: Sanishne – shared bookmarks for teams (so links don't vanish in Slack)

https://sanishne.org/
1•flamestro•7m ago•0 comments

The X-Files Has Made Me Nostalgic for a Time I Never Experienced

https://midnightmurmurations.substack.com/p/the-x-files-has-made-me-nostalgic
2•Teever•10m ago•0 comments

Runway Forecaster – weekly cash flow forecasting, no spreadsheet

https://www.runwayforecaster.com/
1•thoneyville•11m ago•0 comments

Clipport – paste screenshots into SSH sessions from iTerm

https://github.com/arihantsethia/clipport
2•arihantsethia•11m ago•1 comments

Epidemiology of Snakebites

https://en.wikipedia.org/wiki/Epidemiology_of_snakebites
1•skilled•12m ago•0 comments

Show HN: N=1 – iOS app for structured longevity self-protocols

https://apps.apple.com/us/app/n-1-tracker/id6762523189
1•middleastbeast•14m ago•0 comments

Fossil: A Coherent Software Configuration Management System

https://fossil-scm.org/home/doc/trunk/www/index.wiki
1•whatisabcdefgh•14m ago•0 comments

Roger Sweet, Creator of the He-Man Action Figure, Dies at 91

https://www.nytimes.com/2026/04/29/arts/roger-sweet-dead-he-man.html
2•ChrisArchitect•15m ago•1 comments

AWS stops billing Middle East cloud customers as repairs to war damage drag on

https://arstechnica.com/gadgets/2026/05/amazon-stuck-with-months-of-repairs-after-drone-strikes-o...
2•johnbarron•16m ago•0 comments

Launched tool for contractors, then found the mod sticky banning "SaaS bros"

https://quotr-8r2q.vercel.app
1•atdl•18m ago•0 comments

Understand Anything

https://github.com/Lum1104/Understand-Anything
2•taubek•18m ago•0 comments

Mayo Clinic AI helps specialists detect pancreatic cancer up to 3 years early

https://newsnetwork.mayoclinic.org/discussion/mayo-clinic-ai-detects-pancreatic-cancer-up-to-3-ye...
1•sreekanth850•19m ago•1 comments

Flock cameras keep telling police a man who doesn't have a warrant has a warrant

https://www.youtube.com/watch?v=nHwxV0Sd9V8
21•johnbarron•19m ago•3 comments

Eka Robotics

https://www.wired.com/story/when-robots-have-their-chatgpt-moment-remember-these-pincers/
1•temur•20m ago•0 comments

Maintenant: One container to monitor your stack

https://github.com/kOlapsis/maintenant
2•kadrek•21m ago•0 comments

We stopped hiring engineers for coding ability

https://eliseai.com/blog/we-stopped-hiring-engineers-for-coding-ability
8•ohxh•21m ago•2 comments

What's in the latest eLxr Pro? An overview for enterprise Linux folks

https://www.windriver.com/blog/Advancing-the-Enterprise-Latest-eLxr-Pro
1•ohjeez•25m ago•0 comments

Open Source Email Signature Generator

2•jcobhams•26m ago•0 comments

AI Uses Less Water Than the Public Thinks

https://californiawaterblog.com/2026/04/26/ai-water-use-distractions-and-lessons-for-california/
3•hirpslop•26m ago•2 comments

Ailom: How AI Permanently Makes Everything Less Meaningful

https://www.jsanilac.com/ailom/
1•PieUser•26m ago•0 comments

As Tim Cook steps down, Apple hit record sales – but a chip shortage looms

https://techcrunch.com/2026/04/30/as-tim-cook-steps-down-apple-hit-record-sales-but-a-chip-shorta...
1•Brajeshwar•28m ago•0 comments

Waymos, robotaxis can now be ticketed by California police. But how exactly?

https://www.latimes.com/california/story/2026-05-01/california-can-ticket-robotaxis-that-violate-...
3•dangle1•28m ago•0 comments

AI model did better than ER doctors at diagnosing patients

https://www.npr.org/2026/04/30/nx-s1-5804474/ai-doctors-openai-patient-care-diagnosis
1•marojejian•30m ago•1 comments

Analyzing GPT-5.5 and Opus 4.7 with ARC-AGI-3

https://arcprize.org/blog/arc-agi-3-gpt-5-5-opus-4-7-analysis
1•meetpateltech•34m ago•0 comments

Earth is splitting open beneath the Pacific Northwest, scientists say

https://www.sciencedaily.com/releases/2026/04/260429232851.htm
2•unsnap_biceps•36m ago•0 comments

Uber Torches 2026 AI Budget on Claude Code in Four Months

https://www.briefs.co/news/uber-torches-entire-2026-ai-budget-on-claude-code-in-four-months/
178•lwhsiao•1h ago

Comments

PessimalDecimal•1h ago
Is this a submarine? https://paulgraham.com/submarine.html
MichaelNolan•1h ago
> 95% of Uber engineers now use AI tools monthly with 70% of committed code originating from AI.

Well, that’s to be expected when using AI tools becomes relevant in your performance evaluation.

Sherveen•1h ago
I don't understand this critique. (1) Did you previously think you weren't getting paid for doing what a company wants you to do, aka what THEY thought was productive? (2) Do you think all this AI generated code is useless?

Edit: y'all are some whiney folk, ain't ya?

danaw•59m ago
you're missing their point; LLM use is often a part of your evaluation at some of these larger companies and they expect you to use them heavily or you will get a lashing
RHSeeger•56m ago
I think the point was that, when you make a metric goal of "you must use AI this much", people will use AI even in ways that don't add to productivity.
bobsomers•56m ago
Not OP, but:

1. At my level, the company is not just paying me to do a task the way they want it done, they are paying for my experience to orchestrate the best way to do it. They want an outcome, and I'm responsible for figuring out how to get to that outcome with the right balance of cost, correctness, etc. But yes, the most dystopian reality is what you said.

2. It's not useless, but the AI-generated code is absolutely lower quality than what I would have written myself, and there is no desire to clean it up. Companies have always had a disastrously bad understanding of technical debt, and they finally have a tool they can shove down developers' throats that trades even more velocity for even less quality. They're going to take that trade every single time.

arcanemachiner•54m ago
To answer your second question: Yes, much of it is worse than useless. The tools need guidance to produce useful output. If you use it poorly, you will get garbage output that may do more harm than good.

And your response does not address the point being made in the comment you replied to: Many people are being evaluated by how many tokens they burn, which is about as good a metric as lines of code written.

txru•53m ago
Goodhart's Law isn't a problem immediately. If you want more code to be written, and the only feasible way to hit those goals is to use AI heavily, then you might run into the problems of AI-generated code, and an infrastructure that's poorly architected and much less understood than it would've been ten years ago.
skydhash•52m ago
GP is just saying that any metric will be gamed, and if some cost is associated with it, that cost will grow. Say you set a metric where the most productive devs are the ones with the most file changes; you can soon expect every function and structure to be its own file. Same if you say sales commissions are based on how much time you spend calling: expect the phone bills to grow a lot.
misterbwong•50m ago
I think parent is saying "% of code being generated by AI" is not a generally good, direct metric for business value. It's akin to the "we are pushing SO MUCH CODE" phase of early ai marketing.

If we're trying to measure the value of adopting a tool, it's probably better to measure the ROI of that tool rather than its usage %, especially when usage is basically mandated.

To directly answer your questions:

1. You're being paid to create value for the business, which "doing what they think is productive" is a proxy for. You're not being paid to use a tool a high % of the time.

2. It doesn't seem like parent even commented on the quality of the code generated. I think anyone that uses it regularly can agree that: a) the code is not useless, b) not all generated code is immediately production ready, and c) AI generation of code is an accelerant for software development.

miyoji•47m ago
1) I think if the company I work for spends too much effort on things that aren't going to make money, they won't be able to pay me anymore, no matter what they "think" is productive. That's not how executives at companies like this make decisions, though.

2) Mostly, yes.

jcgrillo•27m ago
> (1) ...getting paid for doing what a company wants you to do...?

At my previous company, when the thing they thought they wanted me to do (which was not the thing they actually wanted... but whatever) diverged from my values I quit. You can just do things.

> (2) Do you think all this AI generated code is useless?

Almost universally, yes. Especially in organizations that historically haven't been particularly careful about hiring and have a huge number of young, inexperienced people. There are exceptions but they're rare enough that throwing that particular baby out with the bathwater isn't a big loss.

miltonlost•1h ago
When managers and VPs all say, you must use AI or else you will not work here, then yes, people will use it.
fidotron•59m ago
It's actually incredible the extent to which non devs imposing KPIs on devs underestimate how badly this will get gamed, whether it's AIs, PR/line counting or whatever.
Nuzzerino•53m ago
Easily fixable with another KPI to measure the gaming itself :P
ambicapter•32m ago
Someone at my job uses AI tools to reformat his code...
i_love_retros•14m ago
My coworker said he does that too. Also have coworkers using AI to run git commands. Nothing fancy either- just pull, push, merge etc
SatvikBeri•8m ago
I actually do this, but that's mostly because our team reviewed all the existing autoformatters for the relatively obscure language we use, and either really hated the formatting or found that they actually introduced errors!
darth_avocado•27m ago
Gaming is one thing, fundamentally not understanding how engineering works will lead to shittier outcomes and cost the company in ways the management will never understand.

Management in the age of AI is falling for the doorman fallacy wrt engineering. If lines of code were the most valuable aspect of software engineering, my front end JavaScript intern would’ve been the most valuable person in the company. https://www.jaakkoj.com/concepts/doorman-fallacy

jimbokun•12m ago
I think PRs are a pretty good metric, IF

1. you sample a few to see that they are actually meaningful,

2. they go to prod and are validated without having to roll back.

Still needs to be managed. But it should be much easier for a manager to catch an engineer gaming PRs than something like AI use or lines of code.

NicuCalcea•1h ago
Can these AI-generated articles not be prompted to at least cite the primary sources? How do I know any of this is true?

Here's a much better article: https://aimagazine.com/news/why-uber-has-already-burned-thro...

fcarraldo•1h ago
The OP isn't a good article, but this one is about an entirely different subject?
NicuCalcea•1h ago
Ah sorry, it's one of those annoying websites that automatically load another article when you scroll down too far. Updated the link.
woah•1h ago
It's very easy to blow through hundreds of dollars a session using API tokens especially with the 1m context if you aren't careful about clearing old context.

At the same time the subscription will allow the same usage for hundreds of dollars a month.

Either Anthropic is absolutely hosing API users, massively subsidizing subscriptions, or a little bit of both.

internetter•1h ago
https://www.forbes.com/sites/annatong/2026/03/05/cursor-goes...

"Cursor estimated last year that a $200-per-month Claude Code subscription could use up to $2,000 in compute, suggesting significant subsidization by Anthropic. Today, that subsidization appears to be even more aggressive, with that $200 plan able to consume about $5,000 in compute"

jackp96•1h ago
Really curious how many people actually get close to that level of usage? Their general business plan only offers the $100 version, with pay-as-you-go above that.

If 95% of people are using $100 of value a month, the whales may not be hurting them that badly.

ageitgey•1h ago
Anthropic has a very "interesting" business model where you get subscription pricing as long as you are under 150 employees. When you hit 151, you have to start paying API prices overnight for everyone, and your total bill instantly multiplies.

They are getting you hooked on cheaper tokens, then raking it in once you reach scale. I'm sure Uber gets a break on list price, but I doubt they are anywhere near <150-employee subscription pricing.

rogerrogerr•55m ago
Strange pricing model for a company selling the idea of having fewer employees.
ambicapter•30m ago
Don't the incentives align? If you have fewer employees, then you pay less...
jcgrillo•39m ago
Yeah, it's basically the opposite of how "product-led growth" SaaS works. Generally pay-as-you-go pricing is expensive at scale, but attractive initially. So you start on a pay-as-you-go plan, but as you scale you end up transitioning off pay-as-you-go to a negotiated commit. I.e. you call sales and sign a contract. Anthropic basically flips that around backwards.
brightball•57m ago
I evaluated the pricing and could not justify the jump to Enterprise from Team. You lose the monthly subscription entirely when you jump to enterprise so you lose your ability to control costs.

You can cap per user, but without the rolling cap, are you really going to tell a member of your team, "No AI for the rest of the month"?

It's a risky deal as it's set up now, IMO.

internetter•1h ago
I know I'm responding to AI right now, but

> which means figuring out if the company can afford this level of productivity at scale.

If it was actually productive, then the revenue would increase and affordability wouldn't be a question.

sonofhans•1h ago
Yes, my thoughts exactly. Productivity by definition creates things, hopefully valuable things. Is all the extra burn on chatbots worth the cost? Has Uber somehow gotten dramatically more efficient and effective due to this massive budget overrun? Or have they just given people shiny and expensive ways to push the same work around?
orf•47m ago
Not every change a developer makes increases revenue, and the changes that do often have a lag time.
guywithahat•35m ago
This is my thought too. The eggheads in accounting set budgets, and we produce products within that budget. I could be twice as productive with twice as many people, and maybe 50% more productive with good AI, but if it's not budgeted for it's an issue (especially short-term before the product is released).
fg137•34m ago
I'd argue it's often the contrary -- since it's easy to ship features and fixes, people often ship things without questioning whether it makes business sense to support a use case, or whether the design is solid. Now you have exactly the same revenue but more things to maintain.
fragmede•3m ago
What if you're the SRE and the code fixes mean the site goes from 99.0% uptime to 99.9% up? How do you measure the revenue from that?
solenoid0937•6m ago
> If it was actually productive, then the revenue would increase and affordability wouldn't be a question.

Revenue has increased. Have you seen Meta's latest earnings? +33% revenue - in this economy.

Affordability is not a question. There is a reason companies like Meta have no issue with their engineers spending $1k/day on tokens. It's just not that much compared to how much they make per employee.

mkozlows•1h ago
This terrible unsourced article seems to be citing this information piece: https://www.theinformation.com/newsletters/applied-ai/uber-c...

... but the key fact about "$500-$2000" per engineer does not appear there, and seems to be fabricated.

bhagyeshsp•1h ago
Wonderful, so when will I see novel features in my Uber app?
danaw•55m ago
If you mean novel bugs, then probably at the next app update.
bhagyeshsp•8m ago
Hahaha.. good one :D
lattalayta•46m ago
You can now reportedly book a hotel from the Uber app...which is totally a useful feature that I'm sure everyone will start to use /s

https://investor.uber.com/news-events/news/press-release-det...

bhagyeshsp•8m ago
I didn't know this. There's a term for this--which every one of us now knows--enshittification.
KolmogorovComp•1h ago
Honest question, does Uber need that much R&D? And do they expect the ROI to be positive?
danaw•57m ago
i assume this also includes their self driving vehicle research and trucking, not just their consumer mobile app dev
jeffbee•1h ago
It's obvious that the word productivity has been used in this discussion to mean something other than the plain meaning of the word. If AI was productive, there would be no question about whether it could be afforded. If you're asking whether you can afford it then it isn't productive by definition.

They are using it to mean a mechanism that produces prodigious amounts of toxic waste. That does not conform to the historical understanding of the word.

abuani•1h ago
I take a peek every month or so at spend for my company and notice more and more people consuming $1k in tokens a month, and it is bewildering to me how. I use LLMs daily, and see anywhere from $200-$400 tops. This is using the most expensive models, in deep thinking mode. So I'm not a Luddite against the usage of them. I just can't figure out _how_ to burn that much money a month responsibly.

I genuinely challenge someone spending $5-$10k a month to demonstrate how that turns into $50-$100k in value. At a corporate level, I'd much rather hire a junior engineer who spends $100-$200/month and becomes productive than try to rationalize $100k/year in token spend.
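For concreteness, a rough breakeven sketch of the comparison being posed; the salary figure and the way productivity is valued here are assumptions for illustration, not numbers from the article:

    # Breakeven sketch: how much uplift heavy token spend must buy to pay for itself.
    # All inputs are illustrative assumptions.
    fully_loaded_engineer = 300_000          # $/year, assumed fully loaded cost
    token_spend_per_month = 7_500            # $/month, midpoint of the $5k-$10k range above
    token_spend_per_year = token_spend_per_month * 12

    breakeven_uplift = token_spend_per_year / fully_loaded_engineer
    print(f"Token spend: ${token_spend_per_year:,}/yr")
    print(f"Needs to make that engineer ~{breakeven_uplift:.0%} more productive to break even")
    # -> 30% with these assumptions; the $50-$100k "value" bar above is a stricter 10x test.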

CyberDildonics•1h ago
They keep forgetting to put "make no mistakes", "think deeply" and "get it right the first time" in their prompts.

When people have no ability to understand what they are doing, they will just rerun it endlessly hoping they get something passable. When that doesn't happen they burn money.

dpark•57m ago
I doubt most of this is from rerunning the same prompts over and over. This token burn is more likely from people using swarms of agents and orchestrators for “efficiency”.

“I’ve got 2 dozen agents churning through the backlog to build this feature that would take one agent an hour to implement.”

cyanydeez•49m ago
managers call meetings and agents call swarms.
entropicdrifter•59m ago
I'm on the same page. Do people not analyze the problems themselves? Are they just copy/pasting their entire ticket description into Claude Code and having it iterate until they land on something that works?

I don't get it.

swiftcoder•58m ago
> Are they just copy/pasting their entire ticket description into Claude Code and having it iterate until they land on something that works?

That is exactly what they are doing, yes

Verdex•51m ago
That's my take as well. I've had my unPRed branches grabbed up and blindly merged by an agent twice now. The guy doing it was shocked both times that his PR had my change sets in it.

Also one engineer is treating the code as assembly. I've asked some pointed questions about code in his PR and the response was "yeah, I don't know that's what the agent did".

Edit:

To everyone freaking out about the second guy: yeah, I think being unable to answer questions about the code you're PRing is ill advised. But requirement gathering, codebase untangling, and acceptance testing are all nontrivial tasks that surround code gen. I'm a bit surprised that having random change sets slurped up into someone else's rubber-stamped PR isn't the thing that people are put off by.

esafak•48m ago
To that last guy, as the manager I would say "What is it that you do here??"
npongratz•42m ago
That's just a straight-shooter with "upper management" written all over him.
Mistletoe•40m ago
“I’m the prompter.”
esafak•38m ago
I take the prompts to the AI so the manager doesn't have to! I have prompting skills!!

I just can't make the joke work. There really are people that think they can get paid to press the agent's on button. How long before their checks stop clearing and it "just works itself out naturally"?

storus•25m ago
That's literally how some Meta AI jobs looked a few years back - set up a few parameters, push a button, wait until training and evals are finished; repeat if needed. $500k+/year.
fragmede•20m ago
What color is your stapler?
throwup238•38m ago
He signs the TPS reports.
sikozu•48m ago
So he's being paid and is sitting there letting an AI tool do his work for him? Insanity.
robotresearcher•33m ago
We didn’t mind when typesetting was automated. Or when compilers were invented. Why is this different?
mrhottakes•22m ago
Do typesetters or compilers write the code for you? Or are you perhaps using a disingenuous analogy?
calmingsolitude•21m ago
Because he's paid to deliver code that works. Letting an AI agent do everything would be fine if it didn't make any mistakes, but that's far from reality.
vga1•16m ago
Resistance to technological change has been a thing since farming was invented. Socrates thought that writing would ruin everyone's memory, and that people who just rely on the written word would appear knowledgeable while actually knowing nothing.

The only difference is that this is happening to us.

hliyan•14m ago
Do typesetters inexplicably change the meaning of the book or document being typeset? Do compilers alter the behavior intended by the programmer, sometimes in ways that are not immediately obvious? Did the invention of typesetters lead to investments so massive, that the investors had to herald the end of handwriting (no equivalent analogy for compilers)?
steveBK123•31m ago
My friend is a CTO at a non-tech company and he's now dealing with code from non-SWEs trying to self serve with LLMs.

But it's like a kid running a lemonade stand. Total DIY weekend project quality stuff that they are demanding go live. Hardcoded credentials, no concept of dev/qa/prod environments, no logging, no tests, no source control.

I'm not really sure teaching basic SWE practices / SDLC / system design to people whose day job is like.. accounting makes sense compared to just accelerating developer productivity.

bonesss•8m ago
It's the same dilemma as of old: it's easier to teach a doctor UML than to teach a coder doctoring. But, critically, that's about having them shape doctor-facing IT systems, not perform their skilled jobs.

Bringing code does not help, but a validated user story with flow diagrams, a UI suggestion, and a valid ticket could. That's the gap to bridge.

Were I that CTO I'd explain that code carries liability; SWEs can end up in jail for malfeasance, and fines, penalties, and lawsuits are what await us for eff-ups. "Coders" get fired if their code doesn't work. Same speech to the devs: do exactly as much unsolicited Accounting as you wanna get fired for. Talk fences, good neighbours.

ravenstine•46m ago
And why wouldn't they? Companies are quite literally instructing them to do so. I work at such a company and have heard similar anecdotes from colleagues that work at other companies.
solenoid0937•18m ago
Why wouldn't you do this even if not instructed to do so?

I can do so much more with my spare time now. I throw agents at problems and get way more done.

$1k in tokens every day is easy to hit.

fnordpiglet•40m ago
To be fair, taking an average SWE at $160k/y, spending $1k/m to offload mechanical ticket work from their working set sounds like a bargain to me. They could be spending the time on design and planning, working on new things, and figuring out how to save costs through optimizations. In fact, for every soul-sucking mechanical task you offload, the better off you are overall.

It's not like AI is the first time this happened. CI/CD and extensive preflight, integration, and canary testing are also ways of saving engineer time and improving throughput at the cost of latency and compute resources. This is just moving up the semantic stack.

Obviously as engineers we say "awesome, more features and products!" but management says "awesome, fewer engineers!" Either way, pasting the ticket in and letting a machine do the work for a fraction of the cost was the right choice. There's no John Henry award.

swiftcoder•36m ago
> pasting the ticket in and letting a machine do the work for a fraction of the cost was the right choice

If it were producing equivalent outcomes, sure. So far I haven't personally seen strong evidence for that. LLMs do write code pretty competently at this point, but actually solving the correct problem, and without introducing unintended consequences, is a different matter entirely.

entropicdrifter•27m ago
This. LLMs are terrible at planning/architecture and maintaining clarity of vision across a project. There are lots of tools that mitigate these issues but they're going to keep coming up regardless because of the fundamental nature of LLMs.

If you're not doing the design of the solutions for problems as an engineer or at least making the decisions and owning the maintenance of that architecture/design, what even is your job at that point?

Aurornis•21m ago
> and offloading mechanical ticket work from their working set sounds like a bargain to me

Unfortunately the people who offload the work of understanding and interacting with tickets just end up offloading the consequences to everyone else who has to do extra work to make sure their LLM understands the task, review the work to make sure they built the right thing, and on and on.

The same thing happens when people start sending AI bots to attend meetings: The person freed up their own time, but now everyone else has to work hard to make sure their AI bot gets the right message to them and follow up to make sure what was supposed to happen in the meeting gets to them.

AnimalMuppet•16m ago
If someone sends a bot to a meeting, warn them the first time. Fire them the second, for exactly the reason that you said in your last paragraph: They're pushing their work onto other people.
entropicdrifter•38m ago
It's bizarre to me that people being paid to use their brains, with a job title including the word "engineer" (which essentially means "clever thought thinker" in Latin), are offloading all of their thinking to a bot instead of just using it as a way to ensure clean execution and faster understanding of the structures of underdocumented projects.
blmarket•3m ago
You should.

If it manages to produce a working solution - then it's great! Why would you waste your time on it?

If it fails - then it's also great! You find your value by solving the ticket, which can be a great example of where a human can still prevail over the AI (joke: AI companies might be interested in buying such examples).

(All assuming that your time cost is pricier than token spending. Totally different story if your wage is less than token cost)

dpark•59m ago
> responsibly

There’s your problem. You’re trying to be responsible instead of trying to burn tokens so you can have your name on top of some leaderboard for most wasteful AI users.

tcoff91•58m ago
The perverse incentives created by these AI leaderboards are crazy.
dpark•55m ago
The leaderboards are dumb, but I understand the point of telling people not to worry about tokens and just use it. They are trying to get people to try it, to discover new uses without asking “is this worth testing”. It’s basically early R&D budget. Eventually these companies will decide it’s time to transition into efficient usage.
tcoff91•13m ago
Yes I love that my employer says go wild with it. But I feel like the leaderboard is dumb.
gjulianm•58m ago
Several options for burning that amount of money without specifically looking to tokenmaxx:

- Agents that spawn other agents

- Telling agents to go look at the entire codebase or at a lot of documents constantly

- MCP/API use with a lot of noise

- Loops where the agent is running unattended.

I do think it's not really responsible use, and a loop where the agent is trying to fix CI for an hour on something that would take you five minutes (for example) is absurd. But people do that.
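A minimal sketch of that last pattern, an unattended fix-the-build loop; the propose_patch stub is a placeholder for whatever agent harness is in use. The point is that every retry re-sends the accumulated history, so token cost compounds with each iteration:

    # Sketch of an unattended "fix CI" loop; all names are hypothetical placeholders.
    import subprocess

    def run_tests() -> tuple[bool, str]:
        proc = subprocess.run(["pytest", "-x"], capture_output=True, text=True)
        return proc.returncode == 0, proc.stdout + proc.stderr

    def propose_patch(conversation: str) -> None:
        # Placeholder for the agent call that edits files based on the log so far.
        raise NotImplementedError

    def fix_until_green(max_iters: int = 50) -> None:
        history = []  # grows every round and is re-sent in full to the model
        for i in range(max_iters):
            ok, log = run_tests()
            if ok:
                return
            history.append(f"Iteration {i}: tests failed:\n{log}")
            propose_patch("\n".join(history))
        raise RuntimeError("gave up after burning max_iters rounds of tokens")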

_alternator_•36m ago
One of the new dynamics is a loop between a "code review" LLM and a "fix" LLM. It's super annoying because the code review LLM often finds more bugs on a follow-up review that were there from the beginning, but at least I can loop both until checks go green.
brokencode•58m ago
Really depends on the repo you’re working in.

If it’s very large, especially if the tool needs to refer to documentation for a lot of custom frameworks and APIs, you often end up needing very large context windows that burn through tokens faster.

If it’s smaller or sticks with common frameworks that the model was trained on, it’s able to do a lot more with smaller context windows and token usage is way lower.

some-guy•46m ago
I'm currently in repos where the context window required is so large that the output is almost always "wrong" for the problem at hand. Quite a few people at my company burn through tokens this way, and it certainly isn't providing value to the company.
AlotOfReading•32m ago
As always, improving accessibility for humans makes automation more effective. If the humans need to remember a PhD's worth of source code/documentation to contribute effectively, your codebase stinks.
bonesss•20m ago
I agree, in the general context of how I code.

The LLM hype train has me reflecting on what a spoiled existence working in a ‘proper’ language provides though…

React devs, JS devs, front-end devs working on large sites and frameworks might be triggering tens of files to be brought into context. What an OCaml dev can bring in through a 5 line union type can look very different in less token-efficient and terse languages.

ivirshup•8m ago
People at my company have started writing docs specifically for claude. They're quite useful for me too, but kinda disappointing they never wrote these docs for their colleagues.
conartist6•45m ago
So if the AI could do the same work on huge codebases with far fewer tokens, would it be good or bad for the AI companies do you think?
lukan•40m ago
It would be good for the first AI company offering this.
conartist6•28m ago
Or an anti-ai company of course too; one whose goal was to level the playing field between humans and AIs again
anon84873628•26m ago
Unquestionably good. They want a product that provides value anywhere it's tried so as to establish the reputation as a magic human replacement. Gaming consumption based pricing at this point would be quitting before the race is over. They can always tweak the pricing knobs later once the industry is fully hooked.
conartist6•12m ago
Right but what if the thing that made fewer tokens necessary also kneecapped the idea of making humans dependent on AI to write software.
quaintdev•36m ago
Begs the question of whether we should move to minimal microservices so that the whole project lives in the LLM's context. I hardly have to do anything when I'm working on a small project with an LLM.
giantg2•31m ago
Orchestration between those services and the integration testing for any reasonably complex change can still be quite large.
mlsu•31m ago
Why not take it a step further? Make each function in the codebase its own project. Then the codebase can fit into the context window easily. All you have to do is debug issues between functions calling each other.
andai•10m ago
Wait, is this a joke about Lambda?
Retr0id•30m ago
The whole service might fit in a context window but the details of the system around it will still be relevant.
Aurornis•29m ago
In my experience, the result is just more crawling across the separate microservices and additional reasoning to confirm how it all fits together.

The monolithic codebases are easier to crawl for any problem that can't be conveniently isolated to a single microservice.

phkahler•18m ago
A good API should be documented, and AI should not have to read the internal code to understand how to use it.
Aurornis•16m ago
Like I said, if your work is already contained neatly inside one microservice then it doesn't matter.

The same would be true in a monolith: The context to understand what's happening would be contained to a few files.

When the work starts crossing through domains and potentially requiring insight into how other pieces work, fail, scale, etc. then the microservice model blows up complexity faster than anything, even if you have the API documented.

hadlock•14m ago
I've done the opposite, moving multiple tightly coupled repos into a single monorepo. Saves the step of the llm realizing there's a bigger context, finding the repo, then also scanning/searching it. Especially for fixes that are simply one line each in two repos.
Aurornis•30m ago
The codebase and the topic you're working on are huge variables.

I don't use LLMs to write code (other than simple refactors and throwaway stuff) but I do use them heavily to crawl through big codebases and identify which files and functions I need to understand.

Some of the codebases I explore will burn through tokens at a rapid rate because there is so much complex code to get through. If I use the $20 Claude plan and Opus, I can sometimes go through my entire 5-hour allocation in a single prompt exploring the codebase, and it's justified.

Other times I'm working on simple topics, even in a large codebase, and it will sip tokens because it only needs to walk a couple files to get to what it needs to answer my questions.

andai•11m ago
On larger repos it spends a lot of time just finding the one line of code that needs to change. (I have the same problem, as a human!)
wolttam•58m ago
It turns out writing good prompts helps keep token usage down: with a vague prompt the model wastes tokens discovering context it needs that wasn't hinted at, whereas a good prompt gives solid leads to all the specifics needed to complete the task.

jp57•56m ago
Claude is a mediocre programmer that can do great things with great supervision, but it can't make mediocre human programmers into good ones, because they can't provide great supervision.

It will try and try and try, though.

cyanydeez•52m ago
I'd bet it's the LLM doom loop: vaguely ask it to do something, tab to news.ycombinator.com for 30 minutes, tab back, notice it misunderstood the prompt. Restart with a new, improved prompt, tab back to HN.

So yeah, probably the same thing people do anyway, except instead of compile time it's now generating time.

ajross•55m ago
> I'd much rather hire a junior engineer who spends $100-$200/month

I'd much rather hire a junior engineer at $1.20/hour too! Can you hook me up with your contract services provider?

Obviously I know you're talking about AI costs only. But the idea of doing that analysis without looking at the salary of the person running the tool seems to be completely missing the point.

Now, sure, there are legitimate arguments to be made about efficacy and efficiency and sustainability and best practices. But, no, $100k/year absolutely doesn't need to be "justified" if it works. That's cheaper than the alternative, and markedly so.

hvb2•49m ago
> But, no, $100k/year absolutely doesn't need to be "justified" if it works. That's cheaper than the alternative, and markedly so.

If you're trying to say that 100k is less than 200k, you're right.

I don't see how any of that won't need to be justified. You can spend a lot of money and not get enough of a return...

ajross•38m ago
FWIW, you're nitpicking a strawman. I put "justified" in scare quotes for a reason, qualified it with "if it works" (which is, quite literally, the definition of a justification) and put it immediately after a sentence enumerating a list of legitimate questions for debate (all of which would be part of any justification analysis).

You agree with me, basically.

The core point is that these very large AI bills are not actually large in context, as the pre-existing scale of expenses for software engineering are larger still and this at least promises to reduce those markedly.

To wit: argue about whether AI works[1] for software development, don't try to claim it's too expensive, it's clearly not.

[1] "Is justified" in the vernacular.

boringg•55m ago
Key word doing A LOT of lifting: "responsibly"
maxdo•55m ago
In your fictional world you hire a junior who will write code manually, right?

First, I interview people. Juniors' skills in manual coding dropped sharply this year, and these are people who started their schooling coding manually and switched mid-course. In two years there will be no such people.

Well, that will never come back unless we go back to caves, especially for juniors. A junior who writes good code is already a dying unicorn.

The outcome will be ... you will hire a junior ... who will burn more tokens, and the chances of mistakes with a less expensive model and fewer tokens are even higher.

krainboltgreene•52m ago
> well, that will never happened anymore in this world unless we will go back to caves

The bubble is an echo chamber.

maxdo•48m ago
I'm interviewing juniors. Their manual skills are dropping sharply, and that's for people who went to school in the manual age; maybe only last year did it stop being manual. Let's see what it will be in a year or two lol
sikozu•40m ago
At this point do you even need to hire any juniors at all? It seems like there's a heavy reliance on AI agents and LLMs, especially for juniors. Is hiring a warm body that sits in a chair and prompts at a computer a good use of money?
maxdo•37m ago
Yes and no. Everybody is trying to hunt the junior unicorn. They exist, but the ratio is 1 out of 30. For these people, AI is a real career elevator.
AntiUSAbah•11m ago
Puh not good signs at all.

I mean, even the normal people we get in interviews have no clue; like 80% are just ignorant.

I stopped an interview after 5 minutes: when I asked what ls -ahl does, he started telling me how he vibe/AI codes stuff and that's his workflow. Okay, if you don't know the basics, guess what? Anyone can replace you, or at least I'm not hiring you (I only told him that's not what we are looking for and thanked him).

we are doomed :D

maccard•54m ago
There was a tool posted called codeburn that showed a breakdown of what activity your usage was spent on. Mine was almost all coding but other people in the thread said >50% of their usage was conversation. I’m inclined to agree with you that someone who is reasonable with their compute usage is likely to be thinking things through rather than just burning tokens to get an LLM to solve the problem
embedding-shape•46m ago
> I just can't figure how _how_ to burn that much money a month responsibly.

Same but in regards to quotas. I'm on the 200 EUR ChatGPT plan, so presumably have the highest quota, using the "most expensive" models, on highest reasoning, in fast-mode (1.5x quota usage), and after a full day of almost exclusively doing programming with agents, I still get nowhere close to hitting my quota.

In fact, since I started using agents for coding, the only time I even got close, was when I was doing cross-platform development with the same as above, but on three computers at the same time, then I almost hit my weekly quota. But normally, I get down to ~20% of the quota but almost never below that. I don't see how I could either, I'm already doing lots of prompts and queries "for fun" basically.

jackdoe•40m ago
I am running a bunch of autoresearch loops that optimize various compilers, and it's pretty easy to burn through as much money as you want if you have a measurable goal and good tests.
embedding-shape•36m ago
> have a measurable goal and good tests

I have both of those, yet seemingly I'm not setting my goals in a way that supports "endless inference" like that. My goals eventually end, and that's when I move on. Optimization sure sounds like something you can throw a good amount of tokens/quota at, so yeah.

adi_kurian•38m ago
Codex quota is suspiciously high right now. Either way, the subscription plans are not sustainable, and perhaps less relevant to any discussion about corporate API use. The prosumer developer plans are an insane deal. It is a golden age right now and it will end. If you tried to use the APIs to achieve the same thing, you would be spending thousands upon thousands of dollars a month. My completely unfounded conjecture is that OpenAI is trying to grab developers back from Claude by burning $$$$.
embedding-shape•34m ago
> If you tried to use the APIs to achieve the same thing, you would be spending thousands upon thousands of dollars a month.

Yeah, obviously; not sure why anyone would be using APIs at this point. It seems bananas to spend more than 10 EUR per day when these "almost-endless" subscriptions exist.

> My completely unfounded conjecture is that OpenAI is trying to grab developers back from Claude by burning $$$$.

Unlikely. Since the codex TUI launched, OpenAI has pretty much had every developer's pocket already, as the agent is miles and leagues ahead of Claude Code, pretty much from inception. No other provider comes close to ChatGPT's Pro Mode either. I don't even think it's a quota/pricing thing: have the best models and people will flock by themselves.

fragmede•22m ago
> miles and leagues ahead of Claude Code, pretty much from inception.

Can codex run background tasks yet? CC's ability to run a process in the background and monitor its output for errors while another process accesses that first process is probably what got CC so popular for web development over codex to start with.

Aurornis•13m ago
> Same but in regards to quotas. I'm on the 200 EUR ChatGPT plan,

The API rates and monthly plan rates are not the same.

If you're using enough to justify the 200EUR plan (instead of the 100EUR plan), your use might actually be as high as some of the API bills discussed above.

jampekka•9m ago
I have to churn to get to my ChatGPT Plus $20 plan limits with gpt-5.5 xhigh. Starts to feel like I'm doing something wrong.
adastra22•7m ago
There are tools that let you extract what the API price would have been for your subscription plan usage. I typically have monthly runs that are on the order of $2k - $4k at API prices, despite paying a mere $200/mo to Anthropic.
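A minimal sketch of the arithmetic such tools do: multiply logged token counts by per-token API list prices. The rates below are placeholders, not Anthropic's actual rate card:

    # API-equivalent cost from raw token counts (prices are assumed, for illustration).
    PRICE_PER_MTOK = {"input": 15.00, "output": 75.00}  # $/million tokens, placeholder rates

    def api_equivalent_cost(input_tokens: int, output_tokens: int) -> float:
        return (input_tokens / 1e6) * PRICE_PER_MTOK["input"] \
             + (output_tokens / 1e6) * PRICE_PER_MTOK["output"]

    # e.g. a heavy month logged by a usage tool: 150M input tokens, 12M output tokens
    print(f"${api_equivalent_cost(150_000_000, 12_000_000):,.0f}")  # ~$3,150 at these rates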
_pdp_•45m ago
Do you run 20 Claude Code agents on Max for 8 hours a day? :)
lumost•40m ago
I spend 400-500 dollars per day during active development at this point. However with more aggressive task breakdowns I can spend ~5k per day.

These spend rates are in part due to operating on a larger code base. Operating on a larger code base means more time searching and understanding the code, tests, test output. They are also due to going all-in on agentic coding.

It can feel painfully slow to go back to coding by hand when for a dollar you can build the same functionality in a minute. Now do this with multiple sessions and you can see where the cost goes.

steveBK123•36m ago
Your reply answers how you are able to spend the money, not whether it is returning sufficient dollar value per dollar spent.

> I genuinely challenge someone spending $5-$10k a month to demonstrate how that turns into $50-$100k in value.

solenoid0937•12m ago
The problem with HN is that everyone here thinks like an engineer, not like a business owner.

$10k a month on tokens is just not that much when you're already making $2M per engineer. If their productivity has increased even 10% then the spend was well worth it.

Case in point, Meta made 33% more revenue this earnings report. Now you can nitpick and ask for attribution down to the dollar, but macro trends speak for themselves.

sailfast•37m ago
In addition to what folks are saying here about larger code bases and multiple features at once, there’s also the time requirement to be efficient. It takes time to be more efficient with token usage and it may not be worth it for some of these companies so… burn away until we start to get more data and then we’ll check in.
gordonhart•36m ago
On the OpenAI side, GPT-5.5 generates spend at a prolific rate that's even faster if you use it through an ACP connection in a tool like Zed. I used to never think about Codex rate limits and now I'm hitting mine every 5 hour block and spending ~$100/day on top of that in adhoc credit purchases.
bdangubic•35m ago
> I use llms daily

this is your “problem” - you are missing the “nightly” part. on my box LLMs run 24/7 :)

bigbuppo•34m ago
You're probably generating new code rather than analyzing old code for "improvement".
Salgat•30m ago
Do lots of deep research and code reviews on large legacy codebases. I've created lots of documentation to reduce token consumption but it's still a lot of token consumption.
readitalready•27m ago
I think companies are charged API prices vs individual prices. That alone is 10x for Anthropic. Not sure though.
xboxnolifes•19m ago
I don't use automated agent workflows or anything; I just use Claude as a pair programmer of sorts. A month or so ago I used Claude Opus 4.6 for 2-4 hours on API pricing and racked up $20 in spend, which surprised me since that was much higher than my usual.

I don't know about $10,000, but I can see hitting $1,000 pretty easily if you aren't looking at the costs.

crystal_revenge•17m ago
One thing that stands out is that it sounds like you're using LLMs for only one part of your process. You're having LLMs help you write code, but the code you're writing doesn't itself make use of LLMs.

My current job basically involves trying to improve processes that themselves make heavy use of LLMs. Once you have multiple agents in parallel running multiple experiments on improving the performance of primarily LLM driven tools it's not that hard to get your token usage pretty high.

stronglikedan•17m ago
I don't think it's about value. Tokenmaxxing is a thing now since that one CEO said he wants his $250k/yr devs to use $400-$500k/yr in tokens, so now it's all about how many agents you can have running concurrent tasks all day long.
o10449366•16m ago
I spent $24,096.47 in "API" costs with my $200 Claude Code Max subscription in April.

I'm building my own SaaS. I spent 6 months writing the code by hand before using Claude, and that was fine, but it's much faster to give the exact specs to Claude and have 3-4 sessions working in parallel with me. When you validate changes with exact test specs, there's much less correction you need to do. I always hit my weekly limit, and it's far cheaper for me to use this than to hire someone and spend time onboarding them.

rconti•13m ago
Many companies actively hide the cost from their employees.
BeetleB•10m ago
First: There's the obvious "If the company is letting me do it, I'll be wasteful." This includes not clearing/compacting the context often. Opus now has a 1M context window, and quality is good to at least 200K. So each query is burning a lot of tokens until you clear/compact.

People have already mentioned the size/complexity of the codebase. I'm new to my team and the codebase isn't huge, but it's large enough that there are plenty of parts I have little understanding about. When I'm given a task, then yes, I definitely go to Claude and ask it to find the relevant parts of code so I can understand the existing workflow before even attempting to change it.

The downside is that I don't build expertise. But the reality is that with Claude, I can get the work done in 1 day that would take me 5 days of struggling, and if everyone is doing it, I can't be left behind. So I take the middle route - I get it done in 2-3 days instead of 1 so I can at least spend some time with the code.

Especially with AI, the rate at which code changes in our codebase is insane. So I built a tool that takes a pull request, and tells the LLM to go deep and explain to me what that pull request does. (Note: I'm not the reviewer, I just want to keep tabs on the work that is going on in the team).

And this is just the beginning. I haven't actually spent time to come up with more ways to use the LLM to help me.

My usage is similar to yours, but if I were fairly experienced with the code base, I'd do a lot more. I haven't asked, but I suspect there are people in my team who go over $1K/month.

As always, the bottleneck is proper testing and reviews.
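A minimal sketch of that kind of PR-explainer, assuming the GitHub CLI is available; the prompt, helper names, and the ask_llm stub are hypothetical, not the commenter's actual tool:

    # Hypothetical "explain this pull request" helper (assumes the `gh` CLI is installed).
    import subprocess

    def ask_llm(prompt: str) -> str:
        # Placeholder: swap in whatever chat-completion client or agent you actually use.
        raise NotImplementedError

    def explain_pr(pr_number: int) -> str:
        diff = subprocess.run(
            ["gh", "pr", "diff", str(pr_number)],
            capture_output=True, text=True, check=True,
        ).stdout
        prompt = (
            "Explain in depth what this pull request does, why it might have been made, "
            "and any risks or follow-up work you see:\n\n" + diff
        )
        return ask_llm(prompt)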

bs7280•9m ago
I have anecdotal examples of Claude Code choosing a solution to a problem that is ridiculously token-inefficient.

One example - I was giving several agents different sub-problems to solve in a complex ML / forecasting problem. Each agent would write + run + read a Jupyter notebook. This worked OK, the notebooks would be verbose but it was fine... until one of them wrote out hundreds of thousands of rows to a cell output, creating a 500MB ipynb file. Claude tried several times to read it and used up my entire context limit.

The solution was to prescribe a better structure for doing the work (via CLI analysis scripts + folders to save research results to). But this required some planning, thought, and design work by me, the operator.

When I see people spending $10k a month in tokens, I can only assume they are taking lazy hands off approaches to solving problems with the expensive hammer that is claude code. EX: have claude read all your emails every day... the lazy solution is to simply do that, but a smarter solution is to first filter the email body HTML to remove the noise.
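As a small illustration of that last point, a sketch of pre-filtering an HTML email body before it ever reaches the model (stdlib only; the size comparison is a crude proxy for token savings, not a real tokenizer):

    # Strip an HTML email down to visible text before sending it to an LLM.
    from html.parser import HTMLParser

    class TextExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.chunks = []

        def handle_data(self, data):
            if data.strip():
                self.chunks.append(data.strip())

    def email_to_plain_text(html_body: str) -> str:
        parser = TextExtractor()
        parser.feed(html_body)
        return "\n".join(parser.chunks)

    raw = "<html><body><div style='margin:0'><p>Meeting moved to 3pm.</p></div></body></html>"
    clean = email_to_plain_text(raw)
    print(len(raw), "->", len(clean), "characters sent to the model")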

DeathArrow•9m ago
It really depends on the way you use AI. If you just prompt it for a task and either accept or reject the output, you won't spend much.

But if you are like me, you aggressively document and brainstorm before planning, you review that documentation with subagents, make modifications, you aggressively plan, you verify that plan with subagents, make modifications, have a large number of phases, plan again for each phase, write tests to cover 100%, implement each phase, do intermediate and final code reviews with subagents, apply fixes, write final documentation, and do all of this in parallel. If you have multiple tabs in your terminal each running Claude Code for 10-12 hours a day, then $5000 per day is not much.

If you use Anthropic or Open AI subscription and you spend $1000 per month, you are not using AI much.

hliyan•8m ago
The answer may be agentic loops that keep cycling through the same problem again and again until they land on a non-erroneous outcome. Some people boast of having multiple such agents working in parallel on different problems, tending to one while another is processing, perhaps not unlike the movie mad scientist who runs around the lab throwing switches while laughing maniacally at the prospect of his impending success.
kansface•6m ago
> I just can't figure how _how_ to burn that much money a month responsibly.

I always have a few agents (2-5) doing research and working on plans in parallel. A plan is a thorough and unambiguous document describing the process to implement some feature. It contains goals, non-goals, data models, access patterns, explicit semantics, migrations, phasing, requirements, acceptance criteria, phased and final. Plans often require speculative work to formulate. Plans take hours to days to a couple of weeks to write. Humans may review the plans or derived RFCs. Chiefly AI reviews the code (multiple agents with differing prompts until a fixed point is reached between them). Tests and formal methods are meant to do heavy lifting.

In my highest volume weeks, I ship low hundreds of thousands of lines of software not counting changes to deps.

> At a corporate level, I'd much rather hire a junior engineer

Any formulation of a problem sufficient for a truly junior engineer to execute is better given to an agent. The solution is cheaper, faster, and likely better. If the latter doesn't hold, 10 independent solutions are still cheaper and faster than a junior engineer.

There is no longer any likely path to teaching a junior engineer the trade.

paulsutter•5m ago
I'm working on some serious data analysis + realtime asynchronous code, and I use 200-400 million tokens a day with Claude Code alone (via ccusage). The complexity of the code seems to have a big impact on the number of tokens used. I use many fewer tokens on other sorts of coding projects.
Anon1096•5m ago
The fully loaded cost of a senior engineer is already well past 400k. +5k a month is not that much if it helps them be XX% more productive. Personally at a different big tech I'm in the mid 4 digits AI spend per month and it helps me a lot, basically all coding has been trivialized and I work on an extremely large codebase. I'm spending more time on things closer to direct value generation like data analysis and experiment tweaking rather than spending time moving a variable across 10 layers of abstraction and making sure code compiles.
iLoveOncall•4m ago
It's easily explained. People are losing their skill in real time and literally cannot develop anymore without AI. That's it.
wahnfrieden•4m ago
You are probably guiding them step by step and reading the results. Maybe you also sit and wait for the results.

Agents can iterate on a problem for hours if they can see their results and be given a higher level goal to evaluate their progress toward.

When you have an agent working for minutes or hours, never wait on it. Use that time to spin up another agent.

You can also spin up several agents in parallel to attempt the same item of work and compare their results to choose which to work off for next steps, instead of rolling the dice on a single option at a time and gambling that it's better to refine that first attempt instead of retrying from the start several more times.

If you want to use up more tokens to get more done (though more outside of your control and ability to review of course), that's how.

davidcann•59m ago
> 70% of committed code originating from AI.

How are they calculating that? They could be using my tool, Buildermark, but I don't think they are: https://buildermark.dev

ninjagoo•56m ago
According to [1], there are about 5500 people in Engineering at Uber. Using $1250 as the mid-point of the $ spend range, that comes to about $6.8 Million in engineering AI spend, ballpark, with the range being $2.75 Million - $12 Million. The article lists $3.4 Billion as the R&D spend.

The AI spend does not appear to be a significant chunk of R&D spending (0.3% in 4 months or 1% annualized). If they didn't plan for it, sure, it's not peanuts in the budget, but in context not that much.

The real question is, what did they get for that amount? The article claims that 70% of committed code is now AI-generated, so presumably the code passed review and tests. Did it accelerate the feature count? Did it reduce quality problems? Did it lead to other benefits?

Sadly the article is silent on the outcomes, besides the higher spend.

Maybe 4 months is too soon to assess the benefits. On the other hand, in an agile world ...

[1] https://www.unifygtm.com/insights-headcount/uber
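To make that arithmetic easy to check, a sketch using the same inputs; the headcount, the per-engineer range, and the assumption that the range covers the whole four-month period are all unconfirmed:

    # Ballpark of the claimed AI spend from the commenter's assumed inputs.
    engineers = 5_500                    # headcount estimate cited above
    per_engineer = (500, 1_250, 2_000)   # low / midpoint / high of the claimed $ range
    annual_rd = 3.4e9                    # R&D spend figure cited above

    for spend in per_engineer:
        total = engineers * spend
        print(f"${spend:>5}/engineer -> ${total / 1e6:.1f}M total "
              f"({total / annual_rd:.2%} of the $3.4B R&D figure)")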

mkozlows•54m ago
Everything in this article is purely fake. The numbers don't add up, don't match any reported info, and are just fiction.
yorwba•50m ago
The actual source https://www.theinformation.com/newsletters/applied-ai/uber-c... says "about 11% of real, live updates to the code in its backend systems are being written by AI agents built primarily with Claude Code, up from just a fraction of a percent three months ago" and "He wouldn’t disclose exact figures of the company’s software budget or what it spends on AI coding tools."
AndrewKemendo•54m ago
This continues to boggle my mind so hopefully somebody can explain how this is happening.

I’ve been using all these tools since they started popping out around 2021 personally and professionally. I probably built four or five products at this point with assistance, not to mention the thousands and thousands of back-and-forth conversations for research or search or rubber ducking or whatever.

I have never spent more than whatever the professional max plan is that is consistently $20 a month.

I asked a friend of mine who spent a couple hundred dollars in a few hours how they did it. The answer was they basically get groups of agents stuck in a loop, constantly generating verbose bullshit that is never interrogated and doesn't produce any inspectable artifact, no matter how expert you are.

The couple of stories I have heard of these massive crazy spends are people literally just assuming these things can complete an entire human task in one shot, so they continue to hit the “spin the wheel” button until they get something closer to what they want.

But I’ve yet to see that actually work, and it flies in the face of every instruction guide, documentation set, and prompt-engineering process that has been described over the last almost five years.

taf2•53m ago
i bet someone mentioned openclaw one too many times
dataranger•52m ago
we run an agentic pipeline in a different domain (data sourcing) and the only way the math works is to be ruthless about which stages actually need which model.

As a founder, the question I always have is "what is the marginal value per token relative to engineer-hours saved?" It's more of a gut feel at the moment, but it would be great to calculate.
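
Here's a rough way to put numbers on that gut feel; every input below is an illustrative assumption, not a real figure from our pipeline:

  # Illustrative break-even: is the token spend worth the engineer time it saves?
  COST_PER_MILLION_TOKENS = 15.0   # $ blended input/output rate (assumption)
  LOADED_ENGINEER_RATE = 120.0     # $ per engineer-hour, fully loaded (assumption)

  def marginal_value(tokens_spent: float, engineer_hours_saved: float) -> float:
      """Net dollar value: engineer time saved minus token cost."""
      token_cost = tokens_spent / 1e6 * COST_PER_MILLION_TOKENS
      return engineer_hours_saved * LOADED_ENGINEER_RATE - token_cost

  # Example: a stage that burns 4M tokens but saves an estimated 3 engineer-hours.
  print(marginal_value(tokens_spent=4e6, engineer_hours_saved=3))  # 360 - 60 = 300.0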

hyperpape•50m ago
I love how these articles drop, and all of a sudden HN is filled with people who think engineering productivity is simple to measure.

Yes, productivity means revenue (or cost reduction), and that is measurable.

However:

1. You spend money today to build features that drive revenue in the future, so when expenses go up rapidly today, you don’t yet have the revenue to measure.

2. It’s inherently a counterfactual consideration: you have these features completed today, using AI. You’re profitable/unprofitable. So AI is productive/unproductive, right? No. You have to estimate what you would’ve gotten done without AI, and how much revenue you would’ve had then.

3. Business is often a Red Queen’s race. If you don’t make improvements, it’s often the case that you’ll lose revenue, as competitors take advantage.

4. Most likely, AI use is a mixture of working on things that matter and people throwing shit against the wall “because it’s easy now.” Actually measuring the potential productivity improvements means figuring out how to keep the first category and avoid the second.

This isn’t me arguing for or against AI. It’s just me telling you not to be lazy and say “if it were productive you’d be able to measure it.”

jcgrillo•47m ago
If it were 10x productive you'd be able to measure it indirectly; you'd be unable to avoid measuring it. So the initial claims were clearly lies. The research question is:

  Is it >1.0x productive?
I agree that's very hard to measure. But given what this shit costs, it had better be answerable, and the multiple had better justify the cost.
dijit•43m ago
> HN is filled with people who think engineering productivity is simple to measure.

I think the prevailing (correct) consensus is that developer productivity is actually very hard to measure, and every time it's attempted, the measure is immediately made a target, which makes the whole thing pointless even if it had been a solid measurement (which it wasn't).

IDK where you're getting the idea here that measuring productivity of anyone who isn't a factory worker is easy.

causal•41m ago
I mean, the choice isn't between zero productivity and some productivity: it could be negative.

We doubt the productivity because we have enough experience with Claude Code to know that flooding your organization with that many tokens isn't just unproductive, it's actively harmful.

emp17344•24m ago
Minor shifts in productivity are hard to measure. Major jumps in productivity would be obvious. I think it’s clear that, if AI is affecting productivity, it’s to a minor degree at best.
jcgrillo•49m ago
AI token austerity when
tunesmith•47m ago
As it becomes more common for executives to think we can replace software engineering with agents, I wonder if they might be basing their decisions on unrealistic perceptions of the average software engineer. I guess I'm mulling two somewhat contradictory senses:

1. You get out of it what you put into it. A savvy CTO might be incredibly excited by everything they can do with agents, and improperly think that all the software engineers can do the same thing, when in reality your org's average software engineers might not have the creativity to even think of many cases where it could save them work. So by mandating agent usage, you might find that productivity hasn't improved while AI costs have increased.

2. When using AI, two gaps become more obvious. The first is: who tells the agent what to do? In many orgs, product isn't technically savvy enough to come up with a detailed spec/plan that an LLM can use, and many cog-in-the-machine developers aren't positioned to come up with the spec; they just want to implement it. By expecting work to be implemented by agent-using developers, you might instead find a lot of idle workers waiting for work to show up. The second is the QA/review cycle. You've introduced a big change to the org, but are you really saving cost or just shifting it?

I'm all for introducing LLM as optional to help existing developers increase velocity and quality, but I think the "let's restructure the org" movement is really dicey, especially for mid-size or smaller employers.

tills13•39m ago
> You get out of it what you put into it.

Beyond that, it's a force multiplier, and it doesn't care if the force is positive or negative. Someone with poor software engineering principles can use AI to make an absolute mess quickly.

saos•45m ago
Interesting. Some companies have rolled it out to every department with a small budget.

I wonder how this will end as AI becomes more expensive to use. If you can't quantify ROI then I guess you're cooked.

uncircle•43m ago
Now AI slop factories make the HN front page?
tribune•42m ago
Might as well get while the getting is good and Anthropic is subsidizing the cost of compute
dwa3592•41m ago
I am confused: what did they ship based on this spending? It is totally alright to spend that money if it made significant progress in some area.

Or did the engineers just chill and let Claude take over daily duties? (This is also a benefit for employees, in my opinion.)

cassianoleal•32m ago
> Uber's unexpected budget burn matters because it signals how valuable AI tools have become to engineering productivity

That's a bit of a logical leap with no demonstrable increase in productivity.

All this shows is that they're spending a lot more on AI than they budgeted for. Nothing else.

Cyphus•16m ago
I think the tech industry in general is taking advantage of the fact that software productivity is hard to quantify to say whatever it wants about AI productivity gains. Apparently we are past the point of having to justify anything and can just equate increased AI spend with success.
rconti•5m ago
Could be negative! All it shows is that Uber is probably incentivizing token usage just like so many other companies are.

You get what you measure.

geetee•31m ago
No mention of whether it actually improved outcomes.
2ndorderthought•9m ago
That's not the point, silly goose.
Cyphus•31m ago
> what started as an experiment in productivity became a runaway success

Successfully burning through cash and tokens, alright, but what have they gotten out of it?

trjordan•30m ago
> figuring out if the company can afford this level of productivity at scale

This is the thing that boggles my mind. They spent their budget. They have 4 months of data. What do they have to show for it?

I'm not a hater; I'm not a luddite. I have a $200 Max plan and I use it.

But are you saying that Uber made this tool available, urged everybody to use it, and is now confused about what happened when it worked? It's one thing if they decide AI isn't productive enough to be worth the cost.

Are they out of ideas on what to build next, or something?

bakugo•13m ago
> I'm not a hater; I'm not a luddite. I have a $200 Max plan and I use it.

I'm glad to see we've reached the point of AI discourse at which anything that might be construed as criticism must be prefixed by "I'm also part of the cult, I'm not a non-believer, but" to avoid being dismissed as a heretic.

zeafoamrun•5m ago
The personal Max and Teams plans actually are an amazing bargain compared to the pay-as-you-go API cost you get with Enterprise. I guess they really need the Enterprise features, though; otherwise they could just tell users to expense a $200 Max sub. Enterprises gonna Enterprise.
mattas•24m ago
Wonder how many tokens would be saved if everyone just put “be brief” in their prompts.

Also wonder if there is some perverse incentive for models to be verbose to juice tokens.

Painsawman123•23m ago
If they burned through their ML budget in four months while using heavily subsidized models, we're going to see companies burn through their ML budgets in less than a week once those subsidies are no longer in place and they have to pay per token used.
retired•16m ago
Have we reached the point yet where companies are spending millions a year on software licenses, cloud, and AI, and the return isn't worth it?

Years ago I did work for a company that was spending over a million on Oracle product licenses and I was part of the consultant team they hired to rip it all out and just go for simple maintainable code based on open source products. Not only did it transform into a codebase that the average newly hired developer could maintain, you also had the savings of not paying Oracle a significant portion of your revenue.

I feel like that will repeat itself in a few years time with the current cloud and AI train everyone is on.

I haven't been in a professional setting for a while, I just code for fun nowadays so perhaps I'm somewhat out of the loop.

phillipcarter•15m ago
> Monthly API costs per engineer ranged from $500 to $2,000 as adoption skyrocketed across the company.

That's...not exactly a lot per engineer. It sounds like they just didn't budget correctly. Especially if the net of that work is more features that would have otherwise required hiring more engineers, which would cost a lot more than $500 to $2000 a month.
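
For scale, even the top of that range is small next to a fully loaded engineer. A quick illustrative comparison (the fully loaded cost is an assumption, not a figure from the article):

  # Monthly AI tooling spend vs. a fully loaded engineer-month.
  ai_spend_per_eng_month = 2_000        # $ top of the reported range
  loaded_cost_per_eng_year = 300_000    # $ fully loaded comp (assumption)
  loaded_cost_per_eng_month = loaded_cost_per_eng_year / 12

  share = ai_spend_per_eng_month / loaded_cost_per_eng_month
  print(f"AI tooling is {share:.0%} of a fully loaded engineer-month")  # ~8%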

AntiUSAbah•10m ago
It's a lot. It's a lot for being able to generate that many tokens.

And I'm not talking about some genius 10x developer who is working with multiple git worktrees on X tasks in parallel at high quality.

Animats•10m ago
What is Uber developing? They're an app and a car allocator back end. Both work OK. Why are they spending so much?

They gave up on self-driving, so that's not it.

jitler•6m ago
> Both work OK

If only. The optimizations they do on their matching algorithm have made the UX so terrible that I regularly use Lyft instead now.

ookblah•10m ago
This is pointless without knowing what they are measuring. You could genuinely be moving faster, or you could be optimizing for engineers in a rat race to push more code because all their peers are now doing it, since those are the metrics you're measuring for "AI productivity".
pier25•6m ago
> the AI coding tools represent a meaningful chunk that nobody expected would require this much capital so quickly

Surprised Pikachu moment.

And it's going to become even more expensive when AI companies start charging to actually make a profit.

monooso•4m ago
> Uber's unexpected budget burn matters because it signals how valuable AI tools have become to engineering productivity.

This infers value from spend, which makes no sense. Burning the budget tells us engineers like the tool, not that it's producing value.

Show me how to make two dollars whilst spending one, and budget isn't a problem.

tzury•4m ago
What are the sources for the “facts” presented in this post?