The flip side of that coin is that it similarly just(?) seems to be a fancy way of feeding the Terraform provider docs to the LLM, which was already available via `tofu providers schema -json` without all this HTTP business. IMHO, the fields in the provider binary that don't have populated "description" fields are a bug
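For reference, here's a minimal sketch of what that command already exposes. The JSON below is a hand-written sample mimicking the `provider_schemas` shape of the real output (the provider address and attributes are illustrative, not from an actual dump), and it flags the missing-description case mentioned above:

```python
import json

# Hand-written sample mimicking `tofu providers schema -json` output
# (structure assumed from the real format; heavily trimmed).
sample = json.loads("""
{
  "provider_schemas": {
    "registry.opentofu.org/hashicorp/aws": {
      "resource_schemas": {
        "aws_s3_bucket": {
          "block": {
            "attributes": {
              "bucket": {"type": "string", "description": "Bucket name."},
              "force_destroy": {"type": "bool"}
            }
          }
        }
      }
    }
  }
}
""")

# Collect attributes whose "description" field is missing or empty --
# the "bug" complained about above.
undocumented = []
for provider, schema in sample["provider_schemas"].items():
    for rname, rschema in schema.get("resource_schemas", {}).items():
        for aname, attr in rschema["block"].get("attributes", {}).items():
            if not attr.get("description"):
                undocumented.append(f"{rname}.{aname}")

print(undocumented)  # -> ['aws_s3_bucket.force_destroy']
```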
out of curiosity, what are you paying them for? most orgs that use tf don't
HashiCorp, even after huge discounts, wanted around 200-300K a year (I think the original offer was like 400 or 500K).
Spacelift is around 60-80K, I think, and we have more features, more runners, they fail less, and everyone is happier.
But as mdaniel notes in a sibling thread, this doesn’t seem to do much at this point.
No clue why you would say it's a major source of danger. We have plenty of mechanisms in place to prevent issues, and due to the nature of IaC and how we handle state, we could literally tear down everything and be back up and running in around 2 hours, even for a complex system with 10 components based on k8s.
(I write as someone who really likes Terraform, fwiw.)
leetrout•6mo ago
I try to avoid modules out of the gate, until I know the shape of a system and the lifecycles of things, and I've been pleasantly surprised by how well the AI agents get AWS things correct on the first try with HCL.
This should supercharge that workflow, since it should be able to pull the provider docs / code for the specific version in use from the lockfile.
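The pinned version an agent would want is sitting right in `.terraform.lock.hcl`. A minimal sketch of pulling it out, against a hand-written lockfile fragment (a real tool would use a proper HCL parser such as python-hcl2 instead of a regex, and real lock files also carry `hashes` blocks):

```python
import re

# Hand-written sample of a .terraform.lock.hcl fragment (assumed shape,
# trimmed -- real files also include a hashes = [...] block).
lockfile = '''
provider "registry.opentofu.org/hashicorp/aws" {
  version     = "5.31.0"
  constraints = "~> 5.0"
}
'''

# Naive regex parse: map each provider address to its pinned version.
pins = dict(re.findall(
    r'provider "([^"]+)" \{\s*version\s*=\s*"([^"]+)"',
    lockfile,
))

print(pins)  # -> {'registry.opentofu.org/hashicorp/aws': '5.31.0'}
```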
te_chris•6mo ago
What I enjoyed about using Cursor was that when shit went wrong, it could generate the gcloud CLI commands etc. to interrogate things, add the results to the agent feed, then continue.
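That loop is basically: run a read-only CLI command, capture stdout, append it to the conversation, continue. A minimal sketch, with `echo` standing in for the real `gcloud` invocation so the snippet runs anywhere (the `interrogate` helper is hypothetical, not any agent's actual API):

```python
import subprocess

def interrogate(cmd: list[str], context: list[str]) -> str:
    """Run a read-only CLI command and append its output to the agent context."""
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    context.append(f"$ {' '.join(cmd)}\n{result.stdout}")
    return result.stdout

context: list[str] = []
# In practice this would be something like
# ["gcloud", "compute", "instances", "describe", "my-vm"];
# `echo` keeps the sketch runnable without gcloud installed.
out = interrogate(["echo", "instance is RUNNING"], context)

print(out.strip())   # -> instance is RUNNING
print(len(context))  # -> 1
```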
Lucasoato•6mo ago
Ok, it's probably something a developer should know how to do, but who remembers every single command for the cloud providers' CLIs?
Querying the resources' actual state makes these AI infra tools so powerful; I found them useful even when I had to manage Hetzner-based Terraform projects.
te_chris•6mo ago