I've been building an ideation platform, and I deliberately prompt AI to "hallucinate" during brainstorming.
The problem is that humans get stuck in design fixation. We recycle the same ideas. AI has the opposite problem—it hallucinates nonsense when you push it toward novelty. But for creative divergence, that's exactly what you want.
The trick is separating phases: let AI go wild during ideation, then apply human judgment to filter and refine.
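To make the phase split concrete, here's a minimal sketch of the loop. The llm_generate helper, prompts, and temperature values are placeholders I'm assuming for illustration, not the platform's actual implementation or any specific model API:

    def llm_generate(prompt, temperature):
        # Placeholder: swap in whatever model client you actually use
        # (OpenAI, Anthropic, a local model, etc.). Hypothetical, not a real API.
        raise NotImplementedError

    # Phase 1: divergence. Push the model toward novelty and accept that
    # some of what comes back will be invented.
    divergent_prompt = (
        "Brainstorm 20 product concepts for reducing food waste. "
        "Prioritize surprising combinations over feasibility; "
        "do not self-censor implausible ideas."
    )
    raw_ideas = llm_generate(divergent_prompt, temperature=1.2)

    # Phase 2: convergence. Lower the temperature and ask for critique,
    # so hallucinated claims get flagged rather than passed along as facts.
    filter_prompt = (
        "For each idea below, flag every assumption that would need to be "
        "verified before building anything:\n" + raw_ideas
    )
    flagged_ideas = llm_generate(filter_prompt, temperature=0.2)

    print(flagged_ideas)  # a human still makes the final call

The point of the sketch is just the separation: high temperature and permissive prompting in the divergent pass, then a low-temperature critique pass plus human review before anything is treated as true.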
The full post covers the practical techniques and guardrails I use so creative outputs don't get mistaken for facts.
Anyone else deliberately prompting AI to be "wrong" in controlled contexts? What's worked for you?
L1nefeed • 1h ago