I built llm-authz-audit because I kept seeing the same security issues in LLM-powered applications: API keys hardcoded next to OpenAI calls, FastAPI endpoints serving chat completions with zero auth, user input concatenated straight into prompts, and shared conversation memory with no session isolation.
These aren't hypothetical — they're patterns I found repeatedly across open-source LLM projects and production codebases.
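To make that concrete, here's a condensed, hypothetical example of the kind of code those findings come from. The endpoint, key, and model name are placeholders I made up for illustration, not taken from any specific project:

```python
from fastapi import FastAPI
from pydantic import BaseModel
from openai import OpenAI

app = FastAPI()

# Hardcoded API key sitting right next to the client construction (placeholder value).
client = OpenAI(api_key="sk-proj-XXXXXXXXXXXXXXXX")

class ChatRequest(BaseModel):
    message: str

# No authentication dependency on an endpoint that reaches the LLM.
@app.post("/chat")
async def chat(req: ChatRequest):
    # User input concatenated straight into the prompt, with no sanitization
    # and no delimiters around the untrusted content.
    prompt = "You are a helpful assistant. Answer the user: " + req.message
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return {"reply": completion.choices[0].message.content}
```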
What it does:
It's a static analyzer (think eslint/semgrep but purpose-built for LLM security) that scans Python, JavaScript, and TypeScript codebases for authorization and security gaps. It ships with 13 analyzers and 27 rules covering the OWASP Top 10 for LLM Applications:
- Prompt injection risks (unsanitized input in prompts, missing delimiters)
- Hardcoded API keys (OpenAI, Anthropic, HuggingFace, AWS, generic)
- Unauthenticated LLM endpoints (FastAPI, Flask, Express)
- LangChain/LlamaIndex tools without RBAC
- RAG retrievals without document-level access controls
- Over-permissioned MCP server configs
- Shared conversation memory without user scoping (see the sketch after this list)
- Missing rate limiting, audit logging, and output filtering
- Credentials forwarded to the LLM via prompt templates

Would love feedback from anyone building or securing LLM applications.
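As a closing illustration, here's a minimal sketch of the direction several of those rules point: the key loaded from the environment, an auth dependency on the chat endpoint, and conversation memory keyed per user. The header check and session keying are my own illustration, not the tool's output or a prescribed fix:

```python
import os
from collections import defaultdict

from fastapi import Depends, FastAPI, Header, HTTPException
from pydantic import BaseModel
from openai import OpenAI

app = FastAPI()

# Key comes from the environment instead of the source tree.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Conversation memory keyed by user ID, so one user's session can't leak into another's.
conversations: dict[str, list[dict]] = defaultdict(list)

class ChatRequest(BaseModel):
    message: str

def current_user(authorization: str = Header(...)) -> str:
    # Placeholder auth check; a real app would verify a JWT or session token here.
    if not authorization.startswith("Bearer "):
        raise HTTPException(status_code=401, detail="Missing credentials")
    return authorization.removeprefix("Bearer ")

@app.post("/chat")
async def chat(req: ChatRequest, user_id: str = Depends(current_user)):
    history = conversations[user_id]
    # Untrusted input stays in its own user message rather than being
    # concatenated into the system instructions.
    history.append({"role": "user", "content": req.message})
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": "You are a helpful assistant."}, *history],
    )
    reply = completion.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return {"reply": reply}
```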