A principal engineer once asked me why CodeWeave exists.
I realised I never properly articulated it.
Last year, I watched a DevOps team spend four hours debugging a GitHub Actions workflow.
The root cause?
A YAML indentation error confidently hallucinated by an AI tool.
Their platform engineer said something that stuck with me:
“AI tools are fast, but they are never right. We always have to fix them.”
I am a DevOps engineer.
I’ve been that person at 2am, staring at Terraform drift because an AI suggested a command that looked correct but quietly wiped part of the backend configuration.
The problem was not that AI was useless.
It was that someone still had to own the outcome.
Better engineers than me were already using AI.
So I asked a different question:
What if AI optimised for accountability instead of speed?
We built CodeWeave around that idea.
Not just generating infrastructure, but grounding decisions in official documentation, surfacing trade-offs, and exposing what’s missing.
Launching was the easy part.
I gave my email to every engineer who signed up.
When something broke, they messaged me.
One CTO pinged me at 11pm because a Kubernetes template was not generating network policies.
They didn’t want links to docs.
They wanted answers they could trust in production.
So I joined their calls.
Watched them use CodeWeave live.
Real environments. Real incidents. Real pressure.
That’s when the real pattern emerged.
Generic AI produced “working examples.”
But production infrastructure had gaps everywhere:
No RBAC
No monitoring
No disaster recovery
No cost visibility
Enterprise teams don’t want quick fixes.
They want infrastructure that survives audits, incidents, and growth.
So we rebuilt everything.
Security scoring.
Compliance checks.
Production-readiness validation.
Cost and blast-radius visibility.
Now when you generate Terraform, you see the financial impact.
When you build CI/CD pipelines, you see the security trade-offs.
We don’t have headlines or hype.
But yesterday, a platform team avoided a £40k cloud bill because CodeWeave flagged orphaned resources before they shipped.
That’s why CodeWeave exists.
Not to replace DevOps engineers, but to help them make decisions they are prepared to stand behind.
Curious how others here think about trust and accountability when using AI in production systems.
You can check out CodeWeave here: https://copilot.codeweave.co/