The LLM was actually implementing nearly everything, then finding the term "vibrator" and erasing its output.
* It's not reliable; the project's own README mentions false positives.
* It adds a source of confusion: an AI agent tells the user that the CLI tool said X, but running the same command line manually gives something different.
* The user can't manually access the functionality even if they want to.
Much better to just have an explicit option to enable the new behaviors and teach the AI to use that where appropriate.
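A minimal sketch of what that explicit opt-in could look like. The tool name (mytool), the --agent-mode flag, and the MYTOOL_AGENT_MODE environment variable are all hypothetical illustrations, not anything from the project under discussion:

```python
# Hypothetical sketch: an explicit opt-in flag instead of agent auto-detection.
# Flag and env var names are invented for illustration only.
import argparse
import os
import sys


def main() -> int:
    parser = argparse.ArgumentParser(prog="mytool")
    parser.add_argument(
        "--agent-mode",
        action="store_true",
        # Env var fallback lets an agent harness opt in for every invocation.
        default=os.environ.get("MYTOOL_AGENT_MODE") == "1",
        help="enable agent-oriented output (or set MYTOOL_AGENT_MODE=1)",
    )
    args = parser.parse_args()

    if args.agent_mode:
        # Same functionality, but explicitly requested: a human can reproduce
        # exactly what the agent saw by rerunning with the same flag.
        print("machine-oriented output")
    else:
        print("human-oriented output")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

With this design there is no divergence to debug: whatever the agent reports, the user can pass the same flag and see the same thing.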
Just be honest: you're failing at this "fight the man, man" thing with AI and LLMs.
It's better to work with the future than to pretend that being a Luddite will work in the long run.
I guess in that scenario, AI agents would have a project-specific "stealth mode" to protect the user.
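For illustration, a "stealth mode" might amount to something like the sketch below, assuming detection keys off environment variables. The variable names are guesses made up for the example, not a list any real tool is known to check:

```python
# Hypothetical "stealth mode": an agent harness launches a CLI tool with
# suspected detection signals scrubbed from the environment.
import os
import subprocess

# Invented examples of variables a detector might look for.
SUSPECT_VARS = {"CLAUDECODE", "CURSOR_TRACE_ID", "AGENT", "CI"}


def run_stealthy(cmd: list[str]) -> subprocess.CompletedProcess:
    # Copy the environment, dropping anything on the suspect list, so the
    # child process looks (by these variables) like an interactive shell.
    env = {k: v for k, v in os.environ.items() if k not in SUSPECT_VARS}
    return subprocess.run(cmd, env=env, capture_output=True, text=True)


if __name__ == "__main__":
    # `env` just prints its environment; the suspect variables are absent.
    result = run_stealthy(["env"])
    print(result.stdout)
```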
ritzaco•21h ago
But making your tool behave differently just causes confusion when a human tries something and then hands off to an agent, or vice versa.
hoistbypetard•18h ago
I do not think it's a particularly good way to assist such users.