frontpage.

Discuss – Do AI agents deserve all the hype they are getting?

4•MicroWagie•3h ago•1 comment

Ask HN: Anyone Using a Mac Studio for Local AI/LLM?

48•UmYeahNo•1d ago•30 comments

LLMs are powerful, but enterprises are deterministic by nature

3•prateekdalal•7h ago•5 comments

Ask HN: Non AI-obsessed tech forums

28•nanocat•18h ago•25 comments

Ask HN: Ideas for small ways to make the world a better place

18•jlmcgraw•21h ago•21 comments

Ask HN: 10 months since the Llama-4 release: what happened to Meta AI?

44•Invictus0•1d ago•11 comments

Ask HN: Who wants to be hired? (February 2026)

139•whoishiring•5d ago•520 comments

Ask HN: Who is hiring? (February 2026)

313•whoishiring•5d ago•514 comments

Ask HN: Non-profit, volunteers run org needs CRM. Is Odoo Community a good sol.?

2•netfortius•16h ago•1 comment

AI Regex Scientist: A self-improving regex solver

7•PranoyP•22h ago•1 comment

Tell HN: Another round of Zendesk email spam

104•Philpax•2d ago•54 comments

Ask HN: Is Connecting via SSH Risky?

19•atrevbot•2d ago•37 comments

Ask HN: Has your whole engineering team gone big into AI coding? How's it going?

18•jchung•2d ago•13 comments

Ask HN: Why LLM providers sell access instead of consulting services?

5•pera•1d ago•13 comments

Ask HN: How does ChatGPT decide which websites to recommend?

5•nworley•1d ago•11 comments

Ask HN: What is the most complicated Algorithm you came up with yourself?

3•meffmadd•1d ago•7 comments

Ask HN: Is it just me or are most businesses insane?

8•justenough•1d ago•7 comments

Ask HN: Mem0 stores memories, but doesn't learn user patterns

9•fliellerjulian•2d ago•6 comments

Ask HN: Is there anyone here who still uses slide rules?

123•blenderob•4d ago•122 comments

Kernighan on Programming

170•chrisjj•5d ago•61 comments

Ask HN: Anyone Seeing YT ads related to chats on ChatGPT?

2•guhsnamih•1d ago•4 comments

Ask HN: Does global decoupling from the USA signal comeback of the desktop app?

5•wewewedxfgdf•1d ago•3 comments

Ask HN: Any International Job Boards for International Workers?

2•15charslong•18h ago•2 comments

We built a serverless GPU inference platform with predictable latency

5•QubridAI•2d ago•1 comment

Ask HN: Does a good "read it later" app exist?

8•buchanae•3d ago•18 comments

Ask HN: Have you been fired because of AI?

17•s-stude•4d ago•15 comments

Ask HN: Anyone have a "sovereign" solution for phone calls?

12•kldg•4d ago•1 comment

Ask HN: Cheap laptop for Linux without GUI (for writing)

15•locusofself•3d ago•16 comments

Ask HN: How Did You Validate?

4•haute_cuisine•2d ago•6 comments

Ask HN: OpenClaw users, what is your token spend?

14•8cvor6j844qw_d6•4d ago•6 comments

Ask HN: Where is legacy codebase maintenance headed?

6•AnnKey•4w ago
I've seen a few anecdotes lately from people saying they use Claude Code on their legacy codebases and that, with relatively little supervision, it can work on complex problems. Then there's the claim that Claude Code writes most of their code, and that they no longer mentor their newcomers - instead, AI answers their questions and newcomers can start making meaningful changes within their first few days. To me it sounds almost too good to be true, so I'd love a reality check.

I've spent most of my career in legacy codebases: reading, tracing behavior, making careful changes, and writing tests to protect them. My sabbatical ends soon, though, and I'm both worried and excited to see what has happened in the meantime.

For those working on legacy codebases:

- Has the workflow really shifted to prompting AI, reviewing output, and maintaining .md instructions?

- Does your company allow Claude Code, Codex or similar tools? If not, what do you use?

- Do companies worry about costs and code privacy?

- Where do you think this is headed a year from now?

Concrete examples, good or bad, would be especially helpful. Thanks.

Comments

al_borland•4w ago
I work mostly in Ansible, and Copilot is completely incompetent at dealing with it. I've tried several of the available models (Claude, Gemini, various GPTs, Codex), and they've all been pretty bad.

For example, just this week I asked whether a "when" condition on a block is evaluated once for the block or applied to each task. I thought it was each task, but wanted to double-check. It told me the condition is evaluated once for the block, which was not what I was expecting. I set up a test and ran it; the AI was wrong. The condition is evaluated for every task in the block. This seems like a basic thing, and it got it completely wrong. This happens every time I try to use it. I have no idea how people trust it to write 80% of their code.
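A minimal sketch of the test described above (hypothetical playbook; the variable name is invented). Ansible copies a block-level "when" onto each task inside the block and re-evaluates it per task, so a task that changes the variable mid-block affects whether the tasks after it run:

```yaml
# Block-level `when` is inherited by every task and re-evaluated for
# each one, not checked once up front for the whole block.
- hosts: localhost
  gather_facts: false
  vars:
    flag: true
  tasks:
    - block:
        - name: Runs (flag is still true), then flips the flag
          set_fact:
            flag: false
        - name: Skipped, because the inherited `when` is re-evaluated here
          debug:
            msg: "never printed"
      when: flag | bool
```

If the condition were evaluated once for the block, the second task would also run; instead Ansible reports it as skipped.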

We recently got access to agent mode, which is now the default. Every time it tries to do anything, it destroys my code. When asked to insert a single task, it doesn't understand how to format YAML/Ansible, and I always have to fix its output after it's done.

I can’t relate to anything people are saying about AI when it comes to my job. If the AI was a co-worker, I wouldn’t trust them with anything, and would pray they were cut in the next round of layoffs. It’s constantly wrong, but really confident about it. It apologizes, but never actually corrects its behavior to improve. It’s like working with a psychopath.

In terms of training AI on our code base, that seems unlikely. We're not even allowed to give our entire team (fewer than 10 people) access to our code. We also can't use whatever AI tool is out there: only Copilot and the models it offers, and only through our work account with SSO so that various privacy rules apply (from my limited understanding). We don't yet have access to a general-purpose AI at work, though I believe one is in pilot.

I have no idea where it's heading. I have trouble squaring my experience with the anecdotes I read online, to the point where I question whether any of it is real, or whether it's all investors trying to keep stock prices going up. Maybe if I were working in a more traditional language, or in a greenfield environment that started with AI, it would be better. Right now, I'm not impressed at all.

raw_anon_1111•4w ago
I don’t use Ansible, but both Codex (and plain ChatGPT) and Claude Code are excellent with CloudFormation, Terraform, and the CDK. Sometimes with ChatGPT I have to tell it to “verify its code using the latest documentation” for newer features.
journal•4w ago
realistically, you can get a project started with few enough tokens to have a long conversation and generate something that looks like a 1.0. eventually you reach a point where every request becomes more expensive and caching doesn't help. you have to truncate/prune/hoist the context somehow; i summarize, and i get creative about it. i have absolutely no idea how anyone using agents is producing anything maintainable over a long iterative period.

this is the llm bitcoin moment: they will raise prices so high that running these agents the way you're used to now will leave you with no pants on. you need to aim for minimum context, not stuff it with everything irrelevant.
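The truncate/prune/summarize strategy the comment alludes to can be sketched roughly as follows (a hypothetical illustration, not any agent's actual implementation; the function names and the half-budget split are invented):

```python
# Hypothetical "summarize, then prune" context management: once the
# running transcript exceeds a token budget, keep only the most recent
# turns and collapse everything older into a single summary line.

def prune_context(turns, budget, summarize):
    """turns: list of strings (oldest first); budget: max total length;
    summarize: callable collapsing a list of old turns into one string."""
    def total(ts):
        return sum(len(t) for t in ts)

    if total(turns) <= budget:
        return turns  # still fits; no pruning needed

    kept = []
    # Walk backwards, keeping recent turns up to half the budget...
    for t in reversed(turns):
        if total(kept) + len(t) > budget // 2:
            break
        kept.append(t)
    kept.reverse()
    # ...and replace everything older with one summary entry.
    older = turns[:len(turns) - len(kept)]
    return [summarize(older)] + kept
```

Using character counts as a stand-in for tokens keeps the sketch self-contained; a real agent would count tokens with the model's tokenizer and summarize with another model call.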

jf22•4w ago
Yes the workflow has shifted.

I've handled the sloppiest slop with LLMs and turned the worst code into error-free, modern, tested code in a fraction of the time it used to take me.

People aren't worried about cost, because $1k in credits to get 6 months of work done is a no-brainer.

A year from now, semi-autonomous LLMs will produce entire applications while we sleep. We'll all be running multi-agent setups and basically writing specs and md files all day.