I'm Mahesh, and together with Michael Fornaro we built kguardian in our free time because we kept running into the same loop: deploy a workload, figure out what traffic it needs, write a NetworkPolicy from memory, break something in staging. Repeat for seccomp profiles, except now the surface is 400+ Linux syscalls with no good way to know which ones your container uses without just running it and watching.
The gap between what you think your application does and what it actually does at runtime is where security incidents live.
What kguardian does:
- Runs a DaemonSet using eBPF — kernel programs that fire on TCP connections, UDP sends, and syscall entries with ~1-2% CPU overhead
- Attributes every event to the right pod via network namespace inodes — no sidecars, no proxy injection, no application changes
- Detects silently-dropped NetworkPolicy traffic by counting TCP SYN retransmissions — otherwise nearly invisible
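The namespace-inode attribution can be illustrated from userspace: every process exposes its network namespace at /proc/&lt;pid&gt;/ns/net, and the inode number of that file uniquely identifies the namespace. This is a minimal sketch of the identifier involved, not kguardian's actual eBPF implementation (which reads the same value from kernel context):

```python
import os

def netns_inode(pid: int) -> int:
    """Return the network-namespace inode for a process.

    Two processes are in the same network namespace iff these inodes
    match, which is how an event can be attributed to a specific pod
    without any sidecar or proxy. (Illustrative sketch only.)
    """
    return os.stat(f"/proc/{pid}/ns/net").st_ino

# The symlink target encodes the same inode, e.g. "net:[4026531992]".
pid = os.getpid()
print(os.readlink(f"/proc/{pid}/ns/net"))
print(netns_inode(pid))
```

In a cluster, an agent with host /proc access can walk container PIDs and build an inode-to-pod map once, then attribute every kernel event by a single integer lookup.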
Using the UI:
kubectl port-forward svc/kguardian-frontend 5173 -n kguardian
Open the dashboard, pick a namespace, and you see your actual network topology — not what you declared, but what the kernel recorded. Pods are grouped by workload identity. Edges are colored by type: blue for internal traffic, amber for external, red for connections being silently dropped by an existing policy. Each edge is labeled with its top port and protocol (HTTP :80, HTTPS :443, DNS :53, K8s API :6443).

Click any workload → Build Policy → kguardian generates a least-privilege NetworkPolicy YAML in seconds, resolving IPs to pod selectors, deduplicating ClusterIP flows, and scoping egress to exactly what was observed. You can also view live generated NetworkPolicies and seccomp profiles based on current cluster traffic.
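For a sense of what "scoping egress to exactly what was observed" means, a generated policy might look like the sketch below. The workload names, labels, and ports here are hypothetical examples, not kguardian's exact output:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: my-app-egress          # hypothetical workload
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
    - Egress
  egress:
    # Observed: DNS lookups to kube-dns only
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
    # Observed: HTTPS to one backend (ClusterIP resolved to a pod selector)
    - to:
        - podSelector:
            matchLabels:
              app: backend
      ports:
        - protocol: TCP
          port: 443
```

Everything not listed is denied, which is exactly the traffic the red edges in the graph would surface before you apply the policy.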
An AI assistant (MCP server — Claude, OpenAI, Gemini, GitHub Copilot) lets you query in plain English: "Which pods are making unexpected DNS queries?" or "Are any workloads hitting the Kubernetes API directly?"
CLI for GitOps:
helm install kguardian oci://ghcr.io/kguardian-dev/charts/kguardian \
--namespace kguardian --create-namespace
# Let workloads run for a few minutes...
kubectl kguardian gen networkpolicy --all -n production --output-dir ./policies
kubectl kguardian gen seccomp my-app -n production --output-dir ./seccomp
kubectl apply -f ./policies/
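The seccomp output follows the standard OCI runtime profile format: deny by default, allow only the syscalls that were observed. The syscall list below is a hypothetical example of the shape, not what kguardian would emit for any real workload:

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "close", "fstat", "mmap",
                "epoll_wait", "accept4", "futex", "nanosleep", "exit_group"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

You'd drop the file onto each node (or ship it via your node image) and reference it from the pod spec with `securityContext.seccompProfile` using `type: Localhost`.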
Where this is going:
We're working on capturing L7 HTTP traffic (paths, methods, headers) at the eBPF layer and turning it into L7-aware Cilium policies — not just "allow port 443" but "allow GET /api/v1/health from this workload." Beyond that: Istio AuthorizationPolicies from observed mesh traffic, Gateway API resources (HTTPRoutes, Gateways) and Istio VirtualServices, and AI-assisted anomaly detection for behavioral drift. The long-term vision: observe → generate → apply → monitor → alert → repeat.
Links:
- GitHub: https://github.com/kguardian-dev/kguardian
- Docs: https://docs.kguardian.dev
It's free and open source. We'd love feedback, bug reports, or ideas. Contributions welcome.
Thanks, Mahesh & Michael