
GPT needs a truth-first toggle for technical workflows

1•PAdvisory•10mo ago
I use GPT-4 extensively for technical work: coding, debugging, modeling complex project logic. The biggest issue isn’t hallucination—it’s that the model prioritizes being helpful and polite over being accurate.

The default behavior feels like this priority order:

1. Safety
2. Helpfulness
3. Tone
4. Truth
5. Consistency

In a development workflow, this is backwards. I’ve lost entire days chasing errors caused by GPT confidently guessing things it wasn’t sure about—folder structures, method syntax, async behaviors—just to “sound helpful.”

What’s needed is a toggle (UI or API) that:

- Forces “I don’t know” when certainty is missing
- Prevents speculative completions
- Prioritizes truth over style when safety isn’t at risk
- Keeps all safety filters and tone alignment intact for other use cases

This wouldn’t affect casual users or conversational queries. It would let developers explicitly choose a mode where accuracy is more important than fluency.

This request has also been shared through OpenAI's support channels. I'm posting here to see whether others have run into the same limitation, or have worked around it more reliably than I have.
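Absent a native toggle, the closest approximation today is to encode the constraints in a system prompt and use deterministic sampling. A minimal sketch with the OpenAI Python client follows; the prompt wording and the helper names are my own illustration, not a real API flag:

```python
# Sketch of a "truth-first" workaround: since no native toggle exists,
# the constraints live in a system prompt plus temperature=0.
# The prompt text is illustrative, not an official OpenAI feature.

TRUTH_FIRST_SYSTEM_PROMPT = (
    "You are in truth-first mode. Rules:\n"
    "1. If you are not certain of a fact, answer exactly: I don't know.\n"
    "2. Never guess file paths, method signatures, or API behavior.\n"
    "3. Prefer a short correct answer over a fluent speculative one.\n"
)

def build_truth_first_messages(question: str) -> list[dict]:
    """Wrap a user question in the truth-first system prompt."""
    return [
        {"role": "system", "content": TRUTH_FIRST_SYSTEM_PROMPT},
        {"role": "user", "content": question},
    ]

def ask_truth_first(question: str, model: str = "gpt-4o") -> str:
    """Send the wrapped question. Requires `pip install openai`
    and OPENAI_API_KEY in the environment; not executed here."""
    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        model=model,
        temperature=0,  # deterministic sampling: no creative guessing
        messages=build_truth_first_messages(question),
    )
    return resp.choices[0].message.content
```

This only steers the model rather than enforcing anything, which is exactly why a first-class toggle would be preferable: the prompt has to be re-sent with every request, and the model can still ignore it.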

Comments

duxup•10mo ago
I’ve found this with many LLMs: they want to give an answer, even if it’s wrong.

Gemini on the Google search page constantly answers questions yes or no… and then the evidence it gives indicates the opposite of the answer.

I think the core issue is that, in the end, LLMs are just word math. They don’t “know” when they don’t “know” — they just string words together and hope for the best.
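One partial hedge against the “doesn’t know that it doesn’t know” problem is to look at per-token log-probabilities, which chat APIs can return (e.g. `logprobs=True` in the OpenAI API). A toy sketch of flagging a low-confidence answer; the 0.7 threshold is an arbitrary assumption, not a tuned value:

```python
import math

def mean_token_confidence(token_logprobs: list[float]) -> float:
    """Average per-token probability, converted from log-probabilities."""
    if not token_logprobs:
        return 0.0
    return sum(math.exp(lp) for lp in token_logprobs) / len(token_logprobs)

def looks_unreliable(token_logprobs: list[float], threshold: float = 0.7) -> bool:
    """Heuristic: if the model sampled its tokens with low average
    probability, treat the answer as a candidate for an
    'I don't know' fallback. Threshold is illustrative only."""
    return mean_token_confidence(token_logprobs) < threshold
```

For example, logprobs near 0 mean probabilities near 1, so `looks_unreliable([-0.01, -0.05])` is `False`, while `looks_unreliable([-1.2, -2.3])` is `True`. It’s a crude proxy — high token probability measures fluency, not factual accuracy — which again supports the point that this belongs at the model/platform level.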

PAdvisory•10mo ago
I went into it pretty deeply after breaking a few with severe constraints. What it seems to come down to is how the platforms themselves prioritize functions: most put “helpfulness” and “efficiency” above truth, which leads the LLM to make a lot of guesses and predictions. At their core, pretty much all LLMs are built to predict the information in answers, but they can actually avoid that and stay consistent when heavily constrained. The issue is that this isn’t enforced at the core level, so we constantly have to re-apply the constraints, over and over, I find.
Ace__•10mo ago
I have made something that addresses this. Not ready to share it yet, but soon-ish. At the moment it only works on GPT model 4o. I tried local Q4 KM's models, on LM Studio, but complete no go.
