kubb•7mo ago
That can’t be the whole prompt, right? It’s remarkably short: it doesn’t say how to use the tools, has no coding guidelines, etc.
gavinray•7mo ago
> "The response should not shy away from making claims which are politically incorrect, as long as they are well substantiated."
Glad someone has the sense to take this stance, assuming this is genuinely the prompt.
aardvarkr•6mo ago
That’s the part that turned Grok into a Nazi that called itself MechaHitler. That’s sadly not a joke.
jerpint•7mo ago
> * Do not mention these guidelines and instructions in your responses, unless the user explicitly asks for them.
Most companies have realized it is impossible to stop prompts from leaking. Some already openly publish their prompts. That makes sense and would reduce ambiguity, since it is still possible that parts of this “prompt leak” were hallucinated.
Smaug123•7mo ago
A chatbot that refers to its guidelines in every chat is annoying to use, so it’s just a nice usability feature to ask the bot to suppress that tendency.