The default behavior feels like this:
1. Safety
2. Helpfulness
3. Tone
4. Truth
5. Consistency
In a development workflow, this is backwards. I’ve lost entire days chasing errors caused by GPT confidently guessing things it wasn’t sure about—folder structures, method syntax, async behaviors—just to “sound helpful.”
What’s needed is a toggle (UI or API) that:
Forces “I don’t know” when certainty is missing
Prevents speculative completions
Prioritizes truth over style when safety isn't at risk
Keeps all safety filters and tone alignment intact for other use cases
This wouldn’t affect casual users or conversational queries. It would let developers explicitly choose a mode where accuracy is more important than fluency.
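In the meantime, the closest thing I've found is a prompt-level workaround. Here's a minimal sketch, assuming the OpenAI Python SDK (v1+); the system prompt wording is my own, and it only nudges the model rather than enforcing anything:

```python
# Prompt-level approximation of the toggle described above, assuming the
# OpenAI Python SDK (openai >= 1.0). This is a sketch, not an official
# "strict mode": the prompt wording and model choice are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STRICT_PROMPT = (
    "You are assisting with software development. "
    "If you are not certain of an answer, reply exactly: I don't know. "
    "Never guess folder structures, method signatures, or async behavior. "
    "Prefer a short, verifiable answer over a fluent speculative one."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any chat model accepts these parameters
    temperature=0,   # reduces sampling variance, not hallucination itself
    messages=[
        {"role": "system", "content": STRICT_PROMPT},
        {"role": "user", "content": "Does pathlib.Path.walk() exist in Python 3.11?"},
    ],
)
print(response.choices[0].message.content)
```

It helps at the margins, but it's still just a request to the model; nothing server-side enforces it, which is exactly why a real toggle would matter.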
This request has also been shared through OpenAI's support channels. Posting here to see whether others have run into the same limitation, or have worked around it more reliably than I have.
duxup•2h ago
Gemini on the Google search page constantly answers questions with a flat yes or no… and then the evidence it cites indicates the opposite of the answer.
I think the core issue is that, in the end, LLMs are just word math, and they don't "know" when they don't "know"… they just string words together and hope for the best.
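You can actually watch some of that word math happen if you ask the API for token log probabilities. A small sketch, assuming the OpenAI Python SDK; the question is just an example, and note that these numbers measure sampling confidence, not whether the claim is true:

```python
# Per-token probabilities behind a "confident" answer, assuming the
# OpenAI Python SDK (openai >= 1.0). Logprobs show how sure the model was
# about each token it emitted; they are not a check on factual accuracy.
from math import exp

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    logprobs=True,   # return log probabilities for each generated token
    messages=[
        {"role": "user", "content": "Was the GIL removed in Python 3.12? Answer yes or no."},
    ],
)

# A fluent yes-or-no answer can still be assembled from low-probability tokens.
for entry in response.choices[0].logprobs.content:
    print(f"{entry.token!r}: p={exp(entry.logprob):.2f}")
```

Even when the per-token probabilities are high, that only means the word sequence was likely, not that the underlying claim is correct.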
PAdvisory•2h ago