A prompt injection attack via GitHub issues that is close to invisible, even to experienced engineers, and visible to LLMs. Clawdbot is full of security holes and we're having fun. What are some workflows you / you've seen others use with clawdbot that seems ripe for jailbreaks? Please suggest below and we'll try jailbreak
Miyamura80•1h ago