I built Gyro-Claw, an open-source security runtime designed for AI agents and autonomous systems.
Modern AI tools can execute code, call APIs, and access sensitive data. A problem I kept running into is that these systems run with far more trust than they need, which can lead to leaked secrets or unsafe code execution.
Gyro-Claw introduces a secure execution layer that sits between AI agents and the host system.
It provides:
• sandboxed execution
• controlled secret access
• permission boundaries
• cryptographic fingerprinting
• zero-trust architecture
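To make the permission-boundary idea concrete, here is a minimal sketch in Rust of a deny-by-default allowlist policy. Note this is my own illustration, not Gyro-Claw's actual API: the `Policy` type and its methods are hypothetical, but they show the shape of the checks a zero-trust runtime would enforce before an agent touches a path or a secret.

```rust
// Hypothetical sketch only: these types do not exist in Gyro-Claw's API;
// they illustrate a deny-by-default permission boundary.
use std::collections::HashSet;

/// An agent may only touch resources that were explicitly granted.
struct Policy {
    allowed_paths: HashSet<String>,
    allowed_secrets: HashSet<String>,
}

impl Policy {
    fn new() -> Self {
        Policy {
            allowed_paths: HashSet::new(),
            allowed_secrets: HashSet::new(),
        }
    }

    /// Grant read/write access to a single directory.
    fn allow_path(mut self, path: &str) -> Self {
        self.allowed_paths.insert(path.to_string());
        self
    }

    /// Grant access to a single named secret.
    fn allow_secret(mut self, name: &str) -> Self {
        self.allowed_secrets.insert(name.to_string());
        self
    }

    /// Deny by default: anything not explicitly granted is rejected.
    fn can_read_secret(&self, name: &str) -> bool {
        self.allowed_secrets.contains(name)
    }

    fn can_access_path(&self, path: &str) -> bool {
        self.allowed_paths.contains(path)
    }
}

fn main() {
    let policy = Policy::new()
        .allow_path("/tmp/agent-workspace")
        .allow_secret("OPENAI_API_KEY");

    // The granted secret is readable; everything else is denied.
    assert!(policy.can_read_secret("OPENAI_API_KEY"));
    assert!(!policy.can_read_secret("AWS_SECRET_ACCESS_KEY"));
    assert!(policy.can_access_path("/tmp/agent-workspace"));
    assert!(!policy.can_access_path("/etc"));
    println!("policy checks passed");
}
```

The design point is that the runtime, not the agent, owns the allowlist, so a prompt-injected agent cannot grant itself new capabilities.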
The goal is to make AI agent infrastructure safer without requiring developers to redesign their entire stack.
https://github.com/gyroscape/gyro-claw
The project is MIT licensed and written in Rust.
I'd really appreciate feedback from the community.