Curious to hear from others:
What are some common mistakes you've seen (or made) when prompting LLMs?
Any patterns in user behavior that lead to unreliable or unexpected outputs?
Are there prompt-writing techniques that work across models (ChatGPT, Claude, Llama, etc.)?
Would love to collect insights and even horror stories from folks deploying or experimenting with LLMs.
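
To make the cross-model question concrete, here's a rough sketch of the kind of model-agnostic structure I have in mind: a plain-string template with explicit task, constraints, and few-shot sections. Nothing here is from any particular library; `build_prompt` and its parameters are just illustrative names.

```python
# Illustrative only: an explicit-section prompt template, the kind of
# structure that (anecdotally) seems to transfer across models since it
# relies on nothing but plain text.
def build_prompt(
    task: str,
    constraints: list[str],
    examples: list[tuple[str, str]],
    user_input: str,
) -> str:
    lines = ["## Task", task, "", "## Constraints"]
    lines += [f"- {c}" for c in constraints]
    lines += ["", "## Examples"]
    for inp, out in examples:
        lines += [f"Input: {inp}", f"Output: {out}", ""]
    lines += ["## Input", user_input, "", "## Output"]
    return "\n".join(lines)


prompt = build_prompt(
    task="Classify the sentiment of the review as positive, negative, or mixed.",
    constraints=["Answer with a single word.", "Do not explain your reasoning."],
    examples=[("Great battery, terrible screen.", "mixed")],
    user_input="The keyboard feels cheap but I love the display.",
)
print(prompt)
```

Curious whether this kind of explicit sectioning genuinely helps across models, or whether it's just cargo-culting that one model happens to tolerate.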