We built AgentShield, a Python SDK and CLI that adds a security checkpoint for AI agents before they perform potentially risky actions, such as making external API calls or executing generated code.
Problem: Agents that call arbitrary URLs or run unchecked code can cause data leaks, SSRF, system damage, and more.
Solution: AgentShield intercepts these actions:
- guarded_get(url=...): Checks the URL against policies (block internal IPs, plain HTTP, etc.) before making the request.
- safe_execute(code_snippet=...): Checks the code for risky patterns (os imports, eval, file access, etc.) before execution.
Each check is a simple API call that evaluates the action against configurable security policies, and default policies for common risks are included.
Get Started:
Install: pip install agentshield-sdk
Get API Key (CLI): agentshield keys create
Use in Python:

    from agentshield_sdk import AgentShield
    shield = AgentShield(api_key=...)
    await shield.guarded_get(url=...)
    await shield.safe_execute(code_snippet=...)
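Since guarded_get and safe_execute are awaited, they need to run inside an async context. A minimal, self-contained sketch (the API key, URL, and code snippet below are placeholders; what the calls return, or raise when a policy blocks the action, is covered in the README rather than shown here):

    import asyncio
    from agentshield_sdk import AgentShield

    async def main():
        shield = AgentShield(api_key="YOUR_API_KEY")  # placeholder key

        # The URL is evaluated against the configured policies before the GET is made.
        await shield.guarded_get(url="https://api.example.com/data")

        # The snippet is screened for risky patterns (os imports, eval, file access, ...)
        # before it is executed.
        await shield.safe_execute(code_snippet="print(2 + 2)")

    asyncio.run(main())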
Full details, documentation, and the complete README are at <https://pypi.org/project/agentshield-sdk/>
We built this because securing agent interactions felt crucial as agents become more capable. It's still early days, and we'd love your feedback on the approach, usability, and policies.
subhampramanik•4h ago
iamsanjayk•3h ago
AgentShield isn't a wrapper around the OpenAI package, so you wouldn't replace openai with it. Think of AgentShield as a separate safety check you call just before your agent actually tries to run a specific risky action.
So, you'd still use the openai library as normal to get your response (like a URL to call or code to run). Then, before you actually use httpx/requests to call that URL, or exec() to run the code, you'd quickly check it with shield.guarded_get(the_url) or shield.safe_execute(the_code).
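To make that concrete, here's a rough sketch of the pattern. The prompt handling is simplified, the model name is just an example, and I'm reading guarded_get as performing the GET itself once the URL passes the policy check (per the bullets in the post):

    from openai import AsyncOpenAI
    from agentshield_sdk import AgentShield

    client = AsyncOpenAI()  # uses OPENAI_API_KEY from the environment
    shield = AgentShield(api_key="YOUR_AGENTSHIELD_KEY")  # placeholder key

    async def fetch_url_chosen_by_agent(task: str):
        # 1. Use openai as normal to get the agent's proposed action.
        resp = await client.chat.completions.create(
            model="gpt-4o-mini",  # example model
            messages=[{"role": "user", "content": f"Reply with only the URL to fetch for: {task}"}],
        )
        url = resp.choices[0].message.content.strip()

        # 2. Instead of calling httpx/requests on the model's URL directly,
        #    route it through AgentShield so it is checked against policy first.
        return await shield.guarded_get(url=url)

    # The same idea applies to generated code: rather than exec(snippet),
    # call await shield.safe_execute(code_snippet=snippet).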
Currently, it focuses on securing the action itself (the URL, the code snippet) rather than wrapping the LLM call that generated it.