Package scanning: Paste an npm/package URL or raw code. We run quick static + dynamic checks: install/postinstall scripts, obfuscation/eval, exfil endpoints, suspicious APIs, and typosquat/reputation signals.
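The static side of those checks can be sketched in a few lines. This is a minimal, illustrative example (the function names, patterns, and heuristics here are assumptions, not the actual scanner): it flags npm lifecycle scripts that execute code at install time, plus a handful of regex signals for eval/obfuscation and raw-IP exfil endpoints.

```python
import json
import re

# Hypothetical heuristics for illustration; a real scanner would combine
# far more signals (AST analysis, sandbox traces, registry metadata).
LIFECYCLE_SCRIPTS = {"preinstall", "install", "postinstall"}

SUSPICIOUS_CODE = [
    (re.compile(r"\beval\s*\("), "eval call"),
    (re.compile(r"new\s+Function\s*\("), "dynamic Function constructor"),
    (re.compile(r"child_process"), "shell execution API"),
    (re.compile(r"https?://\d{1,3}(\.\d{1,3}){3}"), "raw-IP network endpoint"),
    (re.compile(r"(\\x[0-9a-fA-F]{2}){8,}"), "long hex-escaped string (obfuscation)"),
]

def scan_manifest(package_json: str) -> list[str]:
    """Flag lifecycle scripts that run arbitrary code at install time."""
    manifest = json.loads(package_json)
    scripts = manifest.get("scripts", {})
    return [f"{name}: {cmd}" for name, cmd in scripts.items()
            if name in LIFECYCLE_SCRIPTS]

def scan_source(source: str) -> list[str]:
    """Return labels for suspicious API usage or obfuscation markers."""
    return [label for pattern, label in SUSPICIOUS_CODE if pattern.search(source)]
```

For example, `scan_manifest('{"scripts": {"postinstall": "node setup.js"}}')` returns `['postinstall: node setup.js']`, and `scan_source("eval(payload)")` returns `['eval call']`. Dynamic checks (actually installing the package in a sandbox and watching network/filesystem activity) catch what these pattern matches miss.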
AI explainer: An LLM summarizes the detected behaviors and risk patterns in plain English (why it’s risky, what to verify, how to mitigate) and produces a shareable report.