Modern AI apps often follow this pattern:

1. Service receives a request
2. Queries a database (PostgreSQL/Redis/MongoDB)
3. Sends data to an LLM API (OpenAI/Anthropic/Bedrock)
4. Consumes or returns the AI-generated response
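As a minimal sketch of that flow (the interfaces and stub implementations here are hypothetical, not part of our tool or any real client library):

```go
package main

import "fmt"

// Store and LLM are hypothetical interfaces standing in for a real
// database client and an LLM API client.
type Store interface {
	Fetch(key string) (string, error)
}

type LLM interface {
	Complete(prompt string) (string, error)
}

// handle mirrors the request flow above: query the database,
// send the data to the LLM, return the generated response.
func handle(db Store, llm LLM, key string) (string, error) {
	data, err := db.Fetch(key)
	if err != nil {
		return "", err
	}
	return llm.Complete("Summarize: " + data)
}

// In-memory stubs so the sketch runs without a real DB or API.
type memStore map[string]string

func (m memStore) Fetch(k string) (string, error) { return m[k], nil }

type echoLLM struct{}

func (echoLLM) Complete(p string) (string, error) { return "response to: " + p, nil }

func main() {
	out, _ := handle(memStore{"user:1": "profile data"}, echoLLM{}, "user:1")
	fmt.Println(out)
}
```

Note that the database read happens before the LLM call, which is exactly why DB access plus outbound AI traffic from the same process is the signal worth correlating.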
Security teams often don't know:

- Which services are making AI calls
- What databases they're accessing first
- Whether PII is being sent to third-party APIs
- What libraries and packages are being used for AI
Our eBPF-based tool attaches to network and filesystem syscalls to observe:

- Outbound connections to AI API endpoints (pattern matching on domains/IPs)
- Database protocol detection (PostgreSQL, MySQL, MongoDB wire protocols)
- Service-to-service communication within the cluster
- Libraries loaded by processes (PyTorch, Hugging Face, OpenCV, etc.)
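For illustration, the first two checks can be sketched in plain Go. The domain list here is illustrative (the agent's real patterns differ); the PostgreSQL constants are from the published wire protocol: a StartupMessage carries protocol version 3.0 (196608), and an SSLRequest carries the code 80877103.

```go
package main

import (
	"encoding/binary"
	"fmt"
	"strings"
)

// aiDomains is an illustrative suffix list, not the tool's actual config.
var aiDomains = []string{
	"api.openai.com",
	"api.anthropic.com",
	"amazonaws.com", // Bedrock runtime endpoints
}

// isAIEndpoint reports whether host matches a known AI API domain
// exactly or as a subdomain.
func isAIEndpoint(host string) bool {
	for _, d := range aiDomains {
		if host == d || strings.HasSuffix(host, "."+d) {
			return true
		}
	}
	return false
}

// looksLikePostgres inspects the first client packet for a PostgreSQL
// StartupMessage (protocol 3.0 = 196608) or SSLRequest (80877103).
// Both begin with a big-endian int32 length followed by an int32 code.
func looksLikePostgres(payload []byte) bool {
	if len(payload) < 8 {
		return false
	}
	length := binary.BigEndian.Uint32(payload[:4])
	code := binary.BigEndian.Uint32(payload[4:8])
	if length > 10000 { // startup packets are small; reject noise
		return false
	}
	return code == 196608 || code == 80877103
}

func main() {
	fmt.Println(isAIEndpoint("bedrock-runtime.us-east-1.amazonaws.com")) // true
	// An SSLRequest packet: length 8, code 0x04d2162f (80877103).
	ssl := []byte{0, 0, 0, 8, 0x04, 0xd2, 0x16, 0x2f}
	fmt.Println(looksLikePostgres(ssl)) // true
}
```

In practice the agent does this matching in userspace on events the kernel side has already filtered, so the heuristics stay cheap per-packet.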
Architecture:

- eBPF programs written in C in kernel space
- Go userspace agent processes events
- Results sent to an in-cluster exporter
- Next.js frontend for visualization
GitHub: https://github.com/aurva-io/AIOstack
Demo: https://aurva.ai
Questions for the community:

1. What classifications/buckets would you like to see for apps?
2. Which other protocols or services should we detect?
3. Performance overhead: what's acceptable in prod?