How do I know if the command is safe to execute when I can't trust LLM output?
Also, for "safe" commands, how do I know they do what I asked (without reading man pages) when I can't trust LLM output?
Anything your script returns is "untrusted input" to me, which requires careful scrutiny. That means this adds work rather than reducing it, while also making running commands cost real money!
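To make that concrete, this is roughly the kind of gate I'd want in front of anything it suggests. It's a minimal Python sketch of my own, not Shelly's actual code; the allowlist, the confirm-by-retyping step, and the helper name are all made up for illustration:

```python
#!/usr/bin/env python3
"""Sketch: treat an LLM-suggested command as untrusted input.
The names here (READ_ONLY_PREFIXES, review_and_run) are illustrative,
not part of Shelly."""
import shlex
import subprocess

# Commands I'd consider "probably harmless" to run without a deep review.
# Crude prefix matching, good enough for a sketch.
READ_ONLY_PREFIXES = ("ls", "cat", "grep", "git status", "git log", "df", "du")

def review_and_run(suggested: str) -> None:
    """Show the suggested command and only run it after explicit confirmation."""
    print(f"LLM suggests:\n  {suggested}")
    if not suggested.startswith(READ_ONLY_PREFIXES):
        answer = input("Not on the read-only list. Retype the command to confirm: ")
        if answer.strip() != suggested.strip():
            print("Skipped.")
            return
    # Run without a shell so the string isn't re-interpreted by /bin/sh.
    # Anything needing pipes or redirects would need the shell, and therefore
    # even more scrutiny.
    subprocess.run(shlex.split(suggested), check=False)

if __name__ == "__main__":
    review_and_run("ls -la /tmp")
```

Even with something like that, I still have to read every command that isn't trivially read-only, which is exactly the work the tool was supposed to save me.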
Money-wise, my full usage so far (including purposely large inputs and outputs to stress test it) has cost me... 19 cents. And I'm not even using the cheapest model available. But you could also run it with a local model.
iamdamian•7mo ago
> Privacy note: Shelly sends your requests, including your recent shell history, to Anthropic's API.
nestorD•7mo ago