I've been building LLM-based agents for a while and two things kept biting me:

1. Loops: an agent node would get stuck calling the same thing over and over, and I wouldn't notice until the API bill showed up. I lost $200+ on one run.
2. Bad outputs: the LLM would return garbage that didn't match what downstream code expected, and everything would just crash.
I looked around and couldn't find something simple that handled both. Most frameworks assume your node function just works. In practice it doesn't — LLM calls fail, JSON comes back broken, state gets weird.
So I built AgentCircuit. It's a Python decorator that wraps your agent functions with circuit breaker-style protections:
```python
from agentcircuit import reliable
from pydantic import BaseModel

class Output(BaseModel):
    name: str
    age: int

@reliable(sentinel_schema=Output)
def extract_data(state):
    return call_llm(state["text"])
```
That's it. Under the hood:

- Fuse: detects when a node keeps seeing the same input and kills the loop
- Sentinel: validates every output against a Pydantic schema
- Medic: auto-repairs bad outputs using an LLM
- Budget: per-node and global dollar/time limits so you never get a surprise bill
- Pricing: built-in cost tracking for 40+ models (GPT-5, Claude 4.x, Gemini 3, Llama, etc.)
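To make the circuit-breaker idea concrete, here's a rough sketch of how a fuse plus a sentinel can work at the function boundary. This is not AgentCircuit's internals: `make_fuse` and `max_repeats` are made-up names, and where a real Medic step would attempt an LLM repair, this sketch just raises.

```python
# Conceptual sketch only, NOT AgentCircuit's actual code: a decorator that
# hashes each input (the fuse) and validates each output (the sentinel).
import hashlib
import json
from functools import wraps

from pydantic import ValidationError

def make_fuse(schema, max_repeats=3):
    seen = {}  # input hash -> times seen

    def decorator(fn):
        @wraps(fn)
        def wrapper(state):
            # Fuse: if the node keeps getting the exact same input, trip.
            key = hashlib.sha256(
                json.dumps(state, sort_keys=True, default=str).encode()
            ).hexdigest()
            seen[key] = seen.get(key, 0) + 1
            if seen[key] > max_repeats:
                raise RuntimeError(f"fuse tripped after {seen[key]} identical calls")

            # Sentinel: validate the raw output against the Pydantic schema.
            raw = fn(state)
            try:
                return schema.model_validate(raw)
            except ValidationError as exc:
                # A Medic step would try an LLM-based repair here; this just fails.
                raise RuntimeError(f"output failed schema validation: {exc}") from exc

        return wrapper

    return decorator
```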
There's no server, no config files, no framework lock-in. It works at the function boundary so it composes with LangGraph, LangChain, CrewAI, AutoGen, or just plain functions.
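Because the decorated function is still just a callable, it slots into whatever orchestration you already have. A minimal plain-function sketch (`call_llm` and `run_pipeline` are stand-ins I made up; only `reliable(sentinel_schema=...)` comes from the example above):

```python
from agentcircuit import reliable
from pydantic import BaseModel

class Output(BaseModel):
    name: str
    age: int

def call_llm(text: str) -> dict:
    # Stand-in for your real model call.
    return {"name": "Ada", "age": 36}

@reliable(sentinel_schema=Output)
def extract_data(state: dict):
    return call_llm(state["text"])

def run_pipeline(text: str):
    # The decorated node is an ordinary function, so it drops into a
    # LangGraph node, a CrewAI tool, or a hand-rolled pipeline like this.
    return extract_data({"text": text})

print(run_pipeline("Ada Lovelace, 36"))
```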
GitHub: https://github.com/simranmultani197/AgentCircuit
PyPI: https://pypi.org/project/agentcircuit/