Getting real tired of people new to AI thinking only recent LLMs are AI somehow. BoW was a pretty solid technique, and it only requires you to learn how to count to one.
If the chairman dictates DEI, DEI it is. Most software developers put up the proper flags in their Twitter "bios" and purged opponents. The same developers now queue to work for Zuckerberg's "male energy" company.
If the chairman and the industry dictate AI, AI it is. The same people who said girls and coal miners have to code now talk about efficiency, products and rationalize layoffs.
This is the product of an industry that has been dominated by bullshitters for at least two decades.
So which companies are betting so big that it might actually threaten them? Oracle maybe?
https://www.axios.com/2026/03/18/ai-enterprise-revenue-anthr...
Only with the blessing of shareholders. Frankly, Google's search box and ad tech have been carrying all of its failed bets, but at some point people will start questioning whether Google is returning enough cash given the results of its new investments. Google's management does not own the cash; it holds the cash on behalf of the owners.
On the other hand, how the stock does will matter to other employees because they’re shareholders and they have a stake in the outcome.
> These apps will win awards at the next all-hands. In two years they’ll be unmaintainable tech debt some poor soul inherits and rewrites from scratch.
Huge assumption/prediction that I think is actually just wrong. There's this weird assumption from a certain crowd, never justified or explained, that tech debt accrued by AI is now, and will forever be, impossible for AI to address, and will for some reason require humans to fix. Working at pace with agents I accrue tech debt every day, then go through the code nightly, again with agents, to clean and tidy everything up.
The more I see this view espoused, the more bizarre it seems. The assumption seems to be "if AI couldn't one-shot this perfectly the first time, then it's useless to have it go back over the codebase and identify and address issues." That doesn't match my personal experience at all: second or third passes over code with CC or Codex are almost always helpful and weed out critical issues. But I'm open to hearing from the rest of the HN crowd about their experiences with this.
I think human understanding of the surface area of a company is already very unwieldy. AI balloons the surface area. at some point using more AI to solve AI is reasonable! But to whatever extent a human needs to interface and manage this world, that's the accrued debt.
Most of the time a human works over code multiple times, and still produces tech debt.
Give an AI agent enough time, by prompting it multiple times, and explicit instructions to look for and address tech debt of various forms, and it will.
Maybe you can describe what the various forms of tech debt are that you are talking about?
There's no need for a model to improve its mental model of the problem, or its coding ability, to recognise the refactoring opportunities that already exist in the code. It only takes sufficient skill, and effort invested in refactoring. The way to get a model to invest that effort is to ask it. As many times as you're willing to.
> Maybe you can describe what the various forms of tech debt are that you are talking about?
Any. Whether or not you need to prompt much to address it depends on consistency. In general I have a simple agent whose instructions are just to look for opportunities to refactor, and to do one targeted refactor per run. All the frontier models know well enough what good looks like that it's unnecessary to give them more than that.
The best way of convincing yourself of this, is to try it. Ask Claude Code or Codex to "Explore the code base and create a plan for one concrete refactor that improves the quality of the code. The plan should include specific steps, as well as a test plan." Repeat as many times as you care to, or if in Claude Code, run /agents and tell Claude Code you want it to create an agent to do that. Then tell it to invoke it however many times you want to try.
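For the Claude Code route, the agent can also live in the repo as a subagent file rather than being recreated via /agents each time. A minimal sketch, assuming Claude Code's subagent convention of markdown files with YAML frontmatter under `.claude/agents/` (the agent name and prompt wording here are illustrative, not from the thread; omitting a `tools` field should let the agent inherit the default tool set, but check the current docs):

```markdown
---
name: refactor-scout
description: Explores the code base and performs one concrete, targeted refactor per run.
---

Explore the code base and create a plan for one concrete refactor that
improves the quality of the code. The plan should include specific steps,
as well as a test plan. Carry out exactly one targeted refactor per run,
then stop.
```

Saved as e.g. `.claude/agents/refactor-scout.md`, it can then be invoked from a session ("use the refactor-scout agent") as many times as you care to.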
In my experience, an agent will rarely recognise a common pattern and lift it into a new abstraction. That takes a human with taste and experience. For example, an agent will happily add a large number of branches in different places across the codebase where a strategy pattern or an enum would be better (depending on the language).
If you have a working prompt or harness that ameliorates this, I'd be glad to see it.
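To make the branches-vs-strategy point concrete, here is a minimal Python sketch (the shipping domain, names, and rates are invented for illustration) of the kind of lift being described:

```python
from enum import Enum
from typing import Callable

# Before: the scattered branching an agent tends to leave behind,
# typically duplicated at several call sites.
def shipping_cost_branchy(method: str, weight_kg: float) -> float:
    if method == "standard":
        return 4.0 + 0.5 * weight_kg
    elif method == "express":
        return 9.0 + 1.2 * weight_kg
    elif method == "overnight":
        return 25.0 + 2.0 * weight_kg
    raise ValueError(f"unknown method: {method}")

# After: the branches lifted into an enum plus a dispatch table,
# so adding a method means touching one place, not every call site.
class Shipping(Enum):
    STANDARD = "standard"
    EXPRESS = "express"
    OVERNIGHT = "overnight"

_RATES: dict[Shipping, Callable[[float], float]] = {
    Shipping.STANDARD: lambda w: 4.0 + 0.5 * w,
    Shipping.EXPRESS: lambda w: 9.0 + 1.2 * w,
    Shipping.OVERNIGHT: lambda w: 25.0 + 2.0 * w,
}

def shipping_cost(method: Shipping, weight_kg: float) -> float:
    return _RATES[method](weight_kg)
```

The behaviour is identical; the difference is where the next change lands.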
But if there aren't enough returns soon the money will eventually dry up for OAI and Anthropic and Google will not be trusted with their cash balance.
It's amazing how people here think that money is a plaything and this dance can go on forever. It can't and won't, and the fear-induced marketing doesn't work forever either.
Both more data and better data are very expensive. Procuring... Handling... All of the above...
You can spend bottomless piles of cash and, by not doing the right things, never get there. I can count on one hand the number of times I've seen business/investor incentives line up with R&D incentives.
There's no guarantee that there is enough or good-enough data, regardless of how much money you have.
Then we went through ~10 complete rewrites based on the learnings from previous attempts. As we went through these iterations, I became much more knowledgeable of the domain - because I saw failure points, I read the resulting code and because I asked the right questions.
Without AI, I would likely have given up after iteration 2, and certainly would not do 10 iterations.
So the nuance here is that iterating and throwing away the entire thing is going to become much cheaper, but not without an engineer being in the loop, asking the right questions.
Note: each iteration went through dual reviews by Codex and Opus at each phase, with every finding fixed and the review saying everything is perfect, the best thing on earth.
The problem is that vanishingly few people actually understand the code and are asking the agents to do all of the interpretation and reasoning for them.
This code that you've built is only maintainable for as long as you are still around at the company to work on it -- it's essentially a codebase that you're the only domain expert in. That's not a good outcome for companies either.
My prediction is that the companies that learn this lesson are the ones that are going to stick around. LLMs won't be in wide use for features but for throwaway busy-work type problems that eat lots of human resources and can't be ignored.
I'm positive that by now the last company's CEO mandates that nobody write a single line of code by hand, and there's likely some rigid process everyone has to follow.
Fun times ahead.
I was big on correctness, software safety (think medical devices, not memory) and formal proofs anyway, so I think I'm just going to take the pay cut and start selecting for those types of jobs. Your run of the mill SaaS or open source+commercial companies are all becoming a death march.
Most of them already were death marches to begin with, now they are firing squads
If anything there will be less tech debt, because it's easier than ever to clean up.
Unit and end-to-end tests are free now.
Telling an agent to go through your code base and find bugs is free now.
Telling an agent to generate all possible user interactions with your API and then performance test it is free now.
Everything is so easy and takes much less time than it used to.
Explain, please. I might agree with "discounted".
Not free because agents have a big tendency towards useless tests, so you need to verify them and make sure they're testing real things and the things that matter. I'll agree that it's a lot easier to generate thorough tests matching a spec, though.
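As an illustration of that difference (a hypothetical function, not anything from the thread): a test that merely restates the implementation passes trivially and catches nothing, while a spec-driven test actually constrains behaviour:

```python
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, never below zero."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be in [0, 100]")
    return max(price * (1 - percent / 100), 0.0)

# A "useless" agent-style test: it mirrors the implementation's own
# arithmetic, so it would still pass if the business rule were wrong.
def test_discount_mirrors_implementation():
    assert apply_discount(100.0, 10.0) == 100.0 * (1 - 10.0 / 100)

# A test of what actually matters: boundary behaviour from the spec.
def test_discount_boundaries():
    assert apply_discount(100.0, 0.0) == 100.0    # no discount
    assert apply_discount(100.0, 100.0) == 0.0    # full discount
    try:
        apply_discount(100.0, 150.0)              # out of range
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for percent > 100")
```

Verifying which kind of test the agent wrote is the part that isn't free.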
It probably won't pass a security review (It rather curiously generated a document.write call in a React project to print a component). I had to disable a number of accessibility related lint checks to get the code committed, as it didn't pass them. It has no tests.
The effort to meet the company's stated standards is genuinely likely to be greater than creating the thing.
The corporate world has always been 80% lies, fake KPIs and theatre. "Synergies", "disruptive innovation", "digital transformation", same shit since the 90s. Managers don't give a flying fuck about your clever moat. They wake up one day, get a spreadsheet from McKinsey saying "cut 15%" and boom - your undocumented wizardry gets deleted along with your badge. Nothing personal, just Excel doing what Excel does.
Yes, the corporate bullshittery has been turbocharged with AI now. But it's nothing new and nothing all that tragic. At the very least the same AI can help me finally release personal projects that have been collecting dust for years. Who knows what the future will bring. I'd be much more worried about an oil supply chokehold than about the AI turbo circus in the corporate world. No oil means not having enough food tomorrow, or medical supplies. My child might die because of this. But AI temporarily causing perturbations at work is just another round of corporate theatre. Been there many times.
Employment danger is real, but not apocalyptic. Some jobs will evaporate, sure. But even as the same article states, one thing ("AI know-how") has now replaced another ("domain knowledge siloing"). The corporate machine still needs warm bodies for the messy human parts: sales, talking to customers (customers hate talking to a robot, what a fucking surprise), covering ass. I would say covering ass is the most important one, along with delegating project management to someone lower on the corporate hierarchy, so upper management wouldn't have to work and could just keep asking for status updates. They will always need someone to type the actual AI requests. It's not like top management or a VP would ever do that, nor would they ever run it automatically, since AI can delete production (it has happened many times), and they don't want to be the scapegoats.
So yeah, the article is overdramatic trash for clicks. AI is just another round of that circus. The "famine" won't be real, it'll be a bunch of overpromises, just as usual. Same as it ever has been.
The buzzwords you cite are the vulnerabilities of corporations that predatory consultancies rely on to make sales. I don't know that the corporate world is "about" those things so much as it suffers from them.
It's been argued frequently that families and tech companies are structured like socialist states. Central planning, flatter structures, division of labor...I'm not starting down that thread or opening up that debate.
Not only is this not a capitalist structure, but capitalism itself doesn't really offer any ideas about structure or governance beyond encouraging the free movement of capital.
Not pulling any punches over there. It does feel like 95% of the "AI industry" consists of wrappers and associated tools.
Having looked at some of the project descriptions, I realised that they would need to invest far more manpower, special expertise and time if they wanted to implement them with a moderate chance of success.
I believe this is not uncommon in large organisations worldwide.
BTW, it’s great that somebody has drawn a comparison with China’s Great Leap Forward. Not many people know about it and it always serves as a stark reminder of how crazy state-ordered “progress” could be.
Not sure why I was downvoted. I read the post and the linked articles.
Don't be fooled, they lie and deceive too. Klarna who? Investor fraud is what they do. #shutdown
Then the KYC bypass allowing fraud rings to buy 20k worth of products with stolen identities is outrageous.
Then the IPO stuff… That entire thing is a mess, and there are so many issues with their F-1/A and how dishonest it is.
Though this is just my subjective opinion and shouldn’t be taken seriously.