I've spent most of my career in legacy codebases: reading, tracing behavior, making careful changes, and writing tests to protect them. I've been on a sabbatical, though, which ends soon, and I'm both worried and excited about what has happened during this time.
For those working on legacy codebases:
- Has the workflow really shifted to prompting AI, reviewing output, and maintaining .md instructions?
- Does your company allow Claude Code, Codex or similar tools? If not, what do you use?
- Do companies worry about costs and code privacy?
- Where do you think this is headed, a year from now?
Concrete examples, good or bad, would be especially helpful. Thanks.
al_borland•15h ago
For example, I asked just this week whether a `when` condition on a block was evaluated once for the block or applied to each task. I thought it was each task, but wanted to double check. It told me it was evaluated once for the block, which was not what I was expecting. I set up a test and ran it; the AI was wrong. The condition is evaluated on every task in the block. This seems like a basic thing and it was completely wrong. This happens every time I try to use it. I have no idea how people are trusting it to write 80% of their code.
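To illustrate the per-task evaluation described above, here is a minimal playbook sketch (hypothetical variable and messages; not from the original post). Because Ansible copies the block's `when` onto each task rather than testing it once, a fact changed mid-block affects the tasks that follow:

```yaml
# Sketch only: `run_block` is a made-up variable for illustration.
# The block's `when` is inherited by each task and re-evaluated per task,
# so flipping run_block with set_fact skips the remaining tasks.
- hosts: localhost
  gather_facts: false
  vars:
    run_block: true
  tasks:
    - when: run_block
      block:
        - debug:
            msg: "first task runs (run_block is true)"
        - set_fact:
            run_block: false
        - debug:
            msg: "skipped: the condition is re-checked and is now false"
```

If the condition were evaluated once for the whole block, the final `debug` task would still run; in practice it is skipped, matching the test result described above.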
We recently got access to agent mode, which is the default now. Every time it has tried to do anything, it destroys my code. When I ask it to insert a single task, it doesn't understand how to format YAML/Ansible, and I always have to fix it after it's done.
I can’t relate to anything people are saying about AI when it comes to my job. If the AI was a co-worker, I wouldn’t trust them with anything, and would pray they were cut in the next round of layoffs. It’s constantly wrong, but really confident about it. It apologizes, but never actually corrects its behavior to improve. It’s like working with a psychopath.
In terms of training AI on our code base, that seems unlikely. We’re not even allowed to give our entire team (of less than 10 people) access to our code. We also can’t use whatever AI tool is out there. We can only use Copilot and the models it has, and only through our work account with SSO so it applies various privacy rules (from my limited understanding). We don’t yet have access to a general purpose AI at work, but they are in pilot I think.
I have no idea where it's heading. I have trouble squaring the reality of my experience with the anecdotes I read online, to the point where I question whether any of it is real, or whether it's all investors trying to keep stock prices going up. Maybe if I were working in a more traditional language, or in a greenfield environment that started with AI, it would be better. Right now, I'm not impressed at all.
raw_anon_1111•13h ago