- How do you run multiple agents that might touch the same data?
- How do you define policies about what agents can/can't do?
- Has anyone successfully mixed different frameworks (LangGraph, CrewAI, and custom agents)?
How do you handle this in the system?
ps3544•3mo ago
1. For policies (what agents can/can't do): We've had a lot of success with an in-process, "pre-execution hook" model. Instead of having the agent's core logic be full of if/else permission checks, we use a lightweight policy engine. Before any critical action (like a tool call), the agent's intent is passed to the engine, which evaluates a set of simple, deterministic Python functions (our policies) against the current context (user role, budget, etc.). If any policy returns BLOCK, an exception is raised and the action is never executed. It's like a sub-millisecond, framework-agnostic version of OPA that doesn't require a sidecar.
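To make the shape of this concrete, here's a minimal sketch of a pre-execution hook. The names (Decision, Context, PolicyViolation, check) and the example policies are illustrative, not Clearstone's actual API; the point is that each policy is a small deterministic function evaluated against a context object before the tool call is allowed to run.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable


class Decision(Enum):
    ALLOW = "allow"
    BLOCK = "block"


@dataclass
class Context:
    """Snapshot of whatever the policies need: role, budget, intended tool call."""
    user_role: str
    budget_remaining: float
    tool_name: str
    tool_args: dict


class PolicyViolation(Exception):
    pass


# Policies are plain, deterministic functions: Context -> Decision.
def block_privileged_tools(ctx: Context) -> Decision:
    if ctx.tool_name == "send_email" and ctx.user_role != "admin":
        return Decision.BLOCK
    return Decision.ALLOW


def enforce_budget(ctx: Context) -> Decision:
    return Decision.BLOCK if ctx.budget_remaining <= 0 else Decision.ALLOW


POLICIES: list[Callable[[Context], Decision]] = [block_privileged_tools, enforce_budget]


def check(ctx: Context) -> None:
    """Pre-execution hook: raise before the tool call if any policy blocks it."""
    for policy in POLICIES:
        if policy(ctx) is Decision.BLOCK:
            raise PolicyViolation(f"{policy.__name__} blocked {ctx.tool_name}")


# Call site inside any agent loop, regardless of framework:
#   check(Context(user_role=role, budget_remaining=budget,
#                 tool_name=tool.name, tool_args=args))
#   result = tool.invoke(args)   # only reached if nothing blocked
```

Because the checks are in-process and just Python, they add microseconds, and the same hook can be dropped into a LangGraph node, a CrewAI task, or a hand-rolled loop.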
2. For managing shared data & mixing frameworks: The key insight for us was to treat this as a classic observability problem. The only way to debug race conditions or interoperability failures between a LangGraph agent and a CrewAI agent is to have a unified, high-fidelity trace of the entire system.
We built a simple tracing system with a @trace decorator that we can apply to functions in any framework. All traces are written to a single, local SQLite database. This gives us a "global view" of the system. When data gets corrupted, we can run a SQL query like SELECT * FROM spans WHERE attributes->>'data_id' = 'xyz' ORDER BY start_time_ns to see the exact sequence of reads and writes from all agents that touched that data.
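As a rough sketch of what that decorator can look like (the table schema, attribute handling, and per-call connection here are assumptions for illustration, not the actual Clearstone implementation), something like this writes one row per call into a local SQLite file, which is what makes queries like the one above work (the ->> JSON operator needs SQLite 3.38+):

```python
import functools
import json
import sqlite3
import time
import uuid

DB_PATH = "traces.db"


def _conn() -> sqlite3.Connection:
    conn = sqlite3.connect(DB_PATH)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS spans (
            span_id        TEXT PRIMARY KEY,
            name           TEXT,
            start_time_ns  INTEGER,
            end_time_ns    INTEGER,
            attributes     TEXT  -- JSON blob, queryable with ->> in SQLite 3.38+
        )
    """)
    return conn


def trace(**attrs):
    """Record one span per call; works on functions in any framework."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.time_ns()
            try:
                return fn(*args, **kwargs)
            finally:
                # One connection per call keeps the sketch simple; a real
                # tracer would batch writes or reuse a connection.
                conn = _conn()
                conn.execute(
                    "INSERT INTO spans VALUES (?, ?, ?, ?, ?)",
                    (str(uuid.uuid4()), fn.__qualname__, start,
                     time.time_ns(), json.dumps(attrs)),
                )
                conn.commit()
                conn.close()
        return wrapper
    return decorator


# Usage in any agent, LangGraph node or CrewAI task alike:
# @trace(data_id="xyz", agent="billing_crew")
# def update_invoice(...):
#     ...
```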
(For anyone interested, we've open-sourced this entire approach as a single toolkit called Clearstone. The repo is in my profile. It's in an early beta, but it has the policy engine and the local SQLite tracer we're using.)
pingu-73•3mo ago
The pre-execution hook model for policies makes a lot of sense. It's much simpler than what I was imagining (heavier coordination primitives), and in-process checks avoid that complexity.
I still have a few questions:
1. When you trace conflicts in SQLite, do you find they're usually not critical, or have you hit cases where they caused real problems? I'm curious whether there are scenarios where you need to prevent the race up front rather than just observe it.
2. How do you handle policies that depend on system-wide state? Do those still work with in-process hooks, or do you need something centralized?