Hi folks, I threw together a small tool as an experiment: why. It was 100% vibe-coded over a few late nights, but it already seems to work fine and I’d love your feedback on whether it feels useful.
The main idea: type why <topic> and get the sort of diagnosis you'd expect from a seasoned Linux admin, with no manual journalctl, smartctl, systemd-analyze, ss, or kubectl spelunking. It runs in ~200 ms, evaluates 113 rule-based checks, speaks EN/PT, and can even auto-generate JSON snapshots for incident reports.
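To give a flavor of what a "rule-based check" means here, each one boils down to: run a standard tool, scan its output, emit a finding. A simplified Rust sketch, not the exact code (the names Finding, Severity, and check_smart_health are illustrative):

    use std::process::Command;

    #[derive(Debug)]
    enum Severity { Info, Warn, Crit }

    #[derive(Debug)]
    struct Finding {
        check: &'static str,
        severity: Severity,
        message: String,
    }

    // One rule-based check: shell out to smartctl and scan for a failed
    // SMART self-assessment. Real checks look at more than this one line.
    fn check_smart_health(device: &str) -> Option<Finding> {
        let out = Command::new("smartctl").args(["-H", device]).output().ok()?;
        let text = String::from_utf8_lossy(&out.stdout);
        text.contains("FAILED").then(|| Finding {
            check: "storage/smart",
            severity: Severity::Crit,
            message: format!("{device}: SMART overall-health check FAILED"),
        })
    }

Multiply that pattern by 113 and you get the check suite; a snapshot is just the resulting findings serialized to JSON.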
Highlights right now:
- why boot-critical → parses systemd-analyze blame + critical-chain and flags services over the time threshold (see the sketch after this list).
- why storage → SMART health, mdraid status, Btrfs device stats, ZFS pool states in one go.
- why security → SELinux/AppArmor posture, firewall services, listening sockets.
- why rca → builds a root-cause timeline from journal/dmesg (OOM killer, kernel panics, GPU resets, thermal throttling).
- why kube-node → kubelet/runtime status, PSI pressure, recent kubelet warnings, pods stuck in non-running states.
- Plus the existing stuff: why slow, why wifi, why gpu, why gaming, TUI watch mode, dependency checker, etc.
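Since someone will ask how boot-critical decides what's "slow": it reads systemd-analyze blame line by line. A stripped-down version (the real one also handles min/ms units; the threshold is configurable, and 5.0 below is just for illustration):

    use std::process::Command;

    // blame lines look like "5.123s NetworkManager.service"; this sketch
    // only handles the plain "<secs>s" form.
    fn slow_boot_services(threshold_secs: f64) -> Vec<(String, f64)> {
        let out = Command::new("systemd-analyze")
            .arg("blame")
            .output()
            .expect("systemd-analyze not available");
        String::from_utf8_lossy(&out.stdout)
            .lines()
            .filter_map(|line| {
                let mut parts = line.split_whitespace();
                let time = parts.next()?;
                let unit = parts.next()?;
                let secs: f64 = time.strip_suffix('s')?.parse().ok()?;
                (secs > threshold_secs).then(|| (unit.to_string(), secs))
            })
            .collect()
    }

Calling slow_boot_services(5.0) returns every unit that took longer than five seconds; the critical-chain pass then tells you which of those actually sit on the serialized boot path.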
It's MIT-licensed, the binary pulls in only a handful of Rust crates, and it piggybacks on standard system tools (systemd-analyze, smartctl, ss, kubectl, etc.). All rules live in rules.toml, so you can add your own.
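A custom rule is a few lines of TOML that deserialize into a plain struct. The field names below are simplified for the example (check the repo for the real schema), and the sketch assumes the serde and toml crates:

    // Illustrative only: field names are made up to show the shape;
    // the real rules.toml schema is documented in the repo.
    use serde::Deserialize;

    #[derive(Debug, Deserialize)]
    struct Rule {
        name: String,
        command: String,  // tool to run, e.g. "zpool status"
        pattern: String,  // substring that triggers the finding
        severity: String, // "info" | "warn" | "crit"
        message: String,  // what to print when it matches
    }

    fn main() {
        let example = r#"
            name = "zfs-degraded"
            command = "zpool status"
            pattern = "DEGRADED"
            severity = "crit"
            message = "a ZFS pool is degraded; run 'zpool status -v'"
        "#;
        let rule: Rule = toml::from_str(example).expect("valid rule");
        println!("{rule:?}");
    }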
Would anyone actually use this? Which subcommands feel missing?
GitHub: https://github.com/ajdramos/why