Prompt instructions like "never do X" don't hold up. LLMs ignore them when context is long or users push hard.
Curious how others are handling this:
- Hard-coded checks before every action?
- Some middleware layer?
- Just hoping for the best?
We built a control layer for this, with different methods for structured data, unstructured outputs, and guardrails (https://limits.dev). Genuinely want to learn how others approach it.
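For what it's worth, here's a minimal sketch of the "hard-coded checks before every action" option: wrap each tool call in a deterministic validator that runs no matter what the prompt said. Everything here (the `guard` decorator, `BLOCKED_PATTERNS`, `run_query`) is illustrative, not any particular library's API.

```python
import re

# Deterministic deny-list, enforced in code rather than in the prompt.
# Patterns are illustrative examples, not a complete policy.
BLOCKED_PATTERNS = [
    re.compile(r"\bDROP\s+TABLE\b", re.IGNORECASE),
    re.compile(r"\brm\s+-rf\b"),
]

def guard(action_fn):
    """Middleware-style wrapper: reject any action whose payload matches a blocked pattern."""
    def wrapped(payload: str):
        for pat in BLOCKED_PATTERNS:
            if pat.search(payload):
                return {"ok": False, "reason": f"blocked by rule: {pat.pattern}"}
        return {"ok": True, "result": action_fn(payload)}
    return wrapped

@guard
def run_query(sql: str) -> str:
    # Placeholder for the real side effect (DB call, shell command, API request).
    return f"executed: {sql}"

print(run_query("SELECT 1"))          # passes the check, action runs
print(run_query("DROP TABLE users"))  # blocked before the action, regardless of context length
```

The point of putting the check outside the model is that it can't be talked out of it: a long context or a persistent user changes the LLM's output, not the wrapper.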
chrisjj•1h ago
Serious question. Assuming you knew this, why did you choose to use LLMz for this job?