Prompt instructions like "never do X" don't hold up in practice: LLMs ignore them once the context grows long or users push hard enough.
Curious how others are handling this:

- Hard-coded checks before every action?
- Some middleware layer?
- Just hoping for the best?
We built a control layer for this, with different methods for structured data, unstructured outputs, and guardrails (https://limits.dev). Genuinely want to learn how others approach it.
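For context, the "hard-coded checks before every action" option above can be sketched as a deterministic policy layer that runs outside the model, so no amount of prompt pressure can talk the system out of it. This is a minimal illustrative sketch, not limits.dev's implementation; all names (`BLOCKED_TOOLS`, `check_action`, the `refund` tool) are hypothetical.

```python
# Hypothetical policy: deterministic rules checked before every tool call
# the LLM requests. The model never gets to self-police.
BLOCKED_TOOLS = {"delete_user", "send_payment"}  # actions the model may never take
MAX_REFUND = 100                                 # example numeric limit

def check_action(tool: str, args: dict) -> tuple[bool, str]:
    """Return (allowed, reason) for a requested tool call."""
    if tool in BLOCKED_TOOLS:
        return False, f"tool '{tool}' is blocked by policy"
    if tool == "refund" and args.get("amount", 0) > MAX_REFUND:
        return False, f"refund over {MAX_REFUND} requires human approval"
    return True, "ok"

def execute(tool: str, args: dict) -> dict:
    """Gate every action through check_action before running it."""
    allowed, reason = check_action(tool, args)
    if not allowed:
        # Refuse deterministically, regardless of how the prompt was worded.
        return {"error": reason}
    return {"status": "executed", "tool": tool}

print(execute("refund", {"amount": 500}))   # blocked by the amount limit
print(execute("lookup", {"id": "abc123"}))  # allowed through
```

The point of the pattern is that the check is ordinary code on the hot path, so it holds even when the prompt instructions don't.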
chrisjj•12h ago
Serious question. Assuming you knew this, why did you choose to use LLMs for this job?
thesvp•9h ago
chrisjj•8h ago
Yet they don't understand the intent of "Never do X"?
thesvp•3h ago
chrisjj•2h ago
Kludge.
Good luck!