I'm building Tansive, an open-source platform to help teams securely integrate AI agents into real workflows.
I've been impressed by what AI agents can do, especially for routine tasks where the human toil is real and the probability of human error is high. But there are problems taking them to production.
For example:
- How do you prevent an agent from accidentally restarting production pods?
- How do you audit what it actually did when something goes wrong?
- When a workflow produces an undesirable outcome, how do you determine whether it was a bug in the tool, an incorrect prompt, a runaway agent, or a prompt injection attack?
- How do you verifiably make sure the agent didn't access Alice's records when responding to Bob's health question?
- How do you integrate agents with existing security policies and compliance requirements?
While DevOps gone wrong makes for dramatic examples, most automated business processes need the same controls and guardrails.
I built Tansive to address these problems.
Here’s what Tansive enables:
- Runtime focus – Rather than helping you build agents, Tansive focuses on their runtime execution: what they access, which tools they call, what actions they take, and who triggered them.
- Declarative Catalog – A repository of agents, tools, and their context and resources, partitioned by environment and segmented by namespaces, so policy rules can be defined over them. Written in YAML, so it's GitOps-friendly (a rough sketch follows this list).
- Runtime policy enforcement – For example, "this agent can restart pods, but only in dev," or "this finance agent can only reconcile certain accounts."
- Session pinning – Transform or restrict sensitive data via user-defined functions (e.g., "Bob's session cannot access Alice's data", or "if feature flag X is set, then inject a WHERE clause into all SQL queries the agent makes")
- Tamper-evident, hash-linked audit logs (also sketched after this list)
- Write tools in any language – whatever your team already uses – to integrate agent workflows into your systems.
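
To make the catalog, policy, and pinning ideas concrete, here's a rough sketch of a catalog entry. The field names are shorthand for this post rather than the exact schema - the docs have the real format:

```yaml
# Rough sketch only -- field names are shorthand for this post;
# see https://docs.tansive.io for the actual schema.
catalog:
  variants: [dev, prod]          # partitioned by environment
  skills:
    - name: restart-pods
      runner: bash               # tools can be written in any language
      allowed-variants: [dev]    # "can restart pods, but only in dev"
    - name: health-bot
      session:
        pin:
          patient_id: caller     # session pinning: every query this
                                 # session makes is constrained to
                                 # one patient's records
```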
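
And on the logs: "hash-linked" means each entry commits to the hash of the one before it, so editing or deleting any entry breaks the chain from that point forward. A minimal illustration of the idea (not the actual log format):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// Entry is one audit record; Hash covers the action plus the
// previous entry's hash, chaining the log together.
type Entry struct {
	Action   string
	PrevHash string
	Hash     string
}

func appendEntry(log []Entry, action string) []Entry {
	prev := ""
	if len(log) > 0 {
		prev = log[len(log)-1].Hash
	}
	sum := sha256.Sum256([]byte(prev + action))
	return append(log, Entry{Action: action, PrevHash: prev, Hash: hex.EncodeToString(sum[:])})
}

func main() {
	log := appendEntry(nil, "health-bot: get_records(patient=pinned)")
	log = appendEntry(log, "health-bot: summarize()")
	// Recompute the chain to verify nothing was altered after the fact.
	prev := ""
	for _, e := range log {
		sum := sha256.Sum256([]byte(prev + e.Action))
		ok := e.PrevHash == prev && e.Hash == hex.EncodeToString(sum[:])
		fmt.Printf("%-45s tamper-free=%v\n", e.Action, ok)
		prev = e.Hash
	}
}
```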
Demo video: https://vimeo.com/1099257866?share=copy - a real example of policy enforcement and session pinning in action.
(An agent can restart pods in dev but not in prod; a Health Bot pinned to one patient's ID cannot access another patient's records.)
I also spent time thinking about how to get teams to adopt AI-based automation. The biggest blocker I faced was that every tool had to be written in Python against a specific SDK - a non-starter for teams already working in other languages.
I realized that a generic agent that handles LLMs and tool calls, with the functionality in language-agnostic tools, would work much better. Teams can write tools in whatever they already use - Go or Java for services, JavaScript for support, bash for ops. And this fits well into any of today's popular agent frameworks.
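
For instance, a tool can be a small standalone Go program. The JSON-over-stdin/stdout contract below is a simplification for this post, but the point stands: no Python SDK in sight.

```go
// restart_pods.go - a sketch of a tool written in plain Go.
// The JSON-on-stdin/stdout contract here is simplified for this post;
// see the docs for how tools actually plug in.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type Args struct {
	Environment string `json:"environment"`
	Deployment  string `json:"deployment"`
}

func main() {
	var args Args
	if err := json.NewDecoder(os.Stdin).Decode(&args); err != nil {
		fmt.Fprintln(os.Stderr, "bad input:", err)
		os.Exit(1)
	}
	// By the time the tool runs, the runtime has already decided whether
	// this call is allowed (e.g., dev yes, prod no); the tool just works.
	result := map[string]string{
		"status": "restarted",
		"target": fmt.Sprintf("%s in %s", args.Deployment, args.Environment),
	}
	json.NewEncoder(os.Stdout).Encode(result)
}
```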
Transforms came from asking 'How do I use my existing scripts, but adapt the input coming from the LLM into a format those scripts can understand?'
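
In practice, a transform can be a thin shim in front of a script you already have. The script name and flags below are hypothetical, but the shape is the point: JSON from the LLM in, the familiar CLI flags out.

```go
// A transform as a thin shim: JSON arguments from the LLM in, the CLI
// flags an existing script expects out. Script name and flags are
// hypothetical.
package main

import (
	"encoding/json"
	"os"
	"os/exec"
)

func main() {
	var args struct {
		Account string `json:"account"`
		Month   string `json:"month"`
	}
	if err := json.NewDecoder(os.Stdin).Decode(&args); err != nil {
		os.Exit(1)
	}
	// reconcile.sh predates any agent; it keeps working unchanged.
	cmd := exec.Command("./reconcile.sh",
		"--account", args.Account,
		"--month", args.Month)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
```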
Why this matters:
AI agents are amazing, but the boring stuff - security boundaries, compliance, and predictable behavior - is what makes or breaks their adoption. Tansive seeks to address that gap.
Tansive is in early alpha (v0.1.0) - intended for preview, but functional enough to try in real workflows in non-prod.
This field is nascent, and my goal is to go after the easy but pressing problems first and build from there.
And I'd love feedback from anyone working in infra, anyone exploring AI agent security, integration, or compliance - or anyone just curious to kick the tires.
Happy to answer questions and hear what you think!
GitHub: https://github.com/tansive/tansive
Docs: https://docs.tansive.io