
GPT needs a truth-first toggle for technical workflows

1•PAdvisory•8mo ago
I use GPT-4 extensively for technical work: coding, debugging, modeling complex project logic. The biggest issue isn’t hallucination—it’s that the model prioritizes being helpful and polite over being accurate.

The default behavior feels like this:

1. Safety
2. Helpfulness
3. Tone
4. Truth
5. Consistency

In a development workflow, this is backwards. I’ve lost entire days chasing errors caused by GPT confidently guessing things it wasn’t sure about—folder structures, method syntax, async behaviors—just to “sound helpful.”

What’s needed is a toggle (UI or API) that:

- Forces “I don’t know” when certainty is missing
- Prevents speculative completions
- Prioritizes truth over style, when safety isn’t at risk
- Keeps all safety filters and tone alignment intact for other use cases

This wouldn’t affect casual users or conversational queries. It would let developers explicitly choose a mode where accuracy is more important than fluency.
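
No such toggle exists today; the closest I have gotten is a prompt-level approximation. Below is a minimal sketch using the openai Python client. The TRUTH_FIRST wording, the model name, and the temperature choice are illustrative assumptions, not an official feature, and the model remains free to ignore the instruction:

    # Prompt-level approximation of a "truth-first" mode. This is not a real
    # API toggle; the system message only asks for the behavior, so it can
    # reduce confident guessing but cannot guarantee accuracy.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    TRUTH_FIRST = (
        "Accuracy outranks helpfulness, tone, and fluency. "
        "If you are not certain of a fact (an API name, a path, a flag), "
        "answer 'I don't know' instead of guessing. "
        "Never invent method signatures, folder structures, or async behavior."
    )

    def ask(question: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o",  # any chat model; named here only as an example
            temperature=0,   # reduces sampling variance; not a truth guarantee
            messages=[
                {"role": "system", "content": TRUTH_FIRST},
                {"role": "user", "content": question},
            ],
        )
        return resp.choices[0].message.content

In my experience this cuts down on speculative completions but does not eliminate them, which is exactly why a model-level toggle would be better.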

This request has also been shared through OpenAI’s support channels. Posting here to see if others have run into the same limitation, or have worked around it more reliably than I have.

Comments

duxup•8mo ago
I’ve found this with many LLMs: they want to give an answer, even if it’s wrong.

Gemini on the Google search page constantly answers questions yes or no… and then the evidence it gives indicates the opposite of the answer.

I think the core issue is that, in the end, LLMs are just word math and they don’t “know” when they don’t “know”. They just string words together and hope for the best.
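
One rough proxy for that missing sense of “know”: the API does expose per-token log probabilities, which can at least flag where the model was sampling under uncertainty. A sketch with the openai Python client; the question, model name, and 0.8 cutoff are arbitrary illustrations, not a calibrated confidence measure:

    import math
    from openai import OpenAI

    client = OpenAI()

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Which Python version added asyncio.TaskGroup?"}],
        logprobs=True,  # return the log probability of each sampled token
    )

    # A low sampled-token probability is a rough "the model is guessing here"
    # signal, not a truth signal: a confidently wrong answer still scores high.
    for tok in resp.choices[0].logprobs.content:
        p = math.exp(tok.logprob)
        if p < 0.8:  # arbitrary threshold for illustration
            print(f"low confidence: {tok.token!r} (p={p:.2f})")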

PAdvisory•8mo ago
I went into it pretty deeply after breaking a few models with severe constraints. What it seems to come down to is how the platforms themselves prioritize behaviors: most put "helpfulness" and "efficiency" above truth, which leads the LLM to make a lot of "guesses" and "predictions". At their core, pretty much all LLMs are built to predict the information in their answers, but they can avoid that and remain consistent when heavily constrained. The issue is that the constraint isn't applied at the core level, so I find we have to "retrain" it constantly, over and over.
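
Concretely, that "retraining" amounts to re-sending the constraint with every request, since chat completions are stateless: nothing persists between API calls except what you resend. A sketch reusing the hypothetical TRUTH_FIRST prompt and client from the post above:

    # Chat completions are stateless, so the constraint must ride along with
    # the full history on every turn, or the model reverts to its defaults.
    history = [{"role": "system", "content": TRUTH_FIRST}]

    def turn(user_msg: str) -> str:
        history.append({"role": "user", "content": user_msg})
        resp = client.chat.completions.create(model="gpt-4o", messages=history)
        reply = resp.choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        return reply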
Ace__•8mo ago
I have made something that addresses this. Not ready to share it yet, but soon-ish. At the moment it only works on GPT model 4o. I tried local Q4 KM's models, on LM Studio, but complete no go.

Ask HN: Is Prettier extension working for you in Cursor?

1•vldszn•54s ago•0 comments

Show HN: A new and comprehensive vibe coding web platform is here

https://hypeframe.ai
1•theonlyvasudev•1m ago•0 comments

Memgraph 3.8: Atomic GraphRAG and Vector Single Store with Performance Upgrades

https://memgraph.com/blog/memgraph-3-8-release-atomic-graphrag-vector-single-store-parallel-runtime
1•taubek•1m ago•0 comments

BleuNova – Ethical self-hosted AI agent (privacy-first)

https://github.com/BleuRadience/BleuNova-AI-Agent
1•bleuradience•1m ago•1 comments

Show HN: ConsentScope – detect cookies loaded before user consent

https://www.consentscope.pro/
1•murzynalbinos•2m ago•0 comments

Weight-loss revolution does not (much) show up in the data

https://www.ft.com/content/0de44a07-528d-4515-9fb4-f6636d9c4230
1•marojejian•3m ago•1 comments

U.S. Smuggled Thousands of Starlink Terminals into Iran After Protest Crackdown

https://www.wsj.com/world/middle-east/u-s-smuggled-thousands-of-starlink-terminals-into-iran-afte...
1•fortran77•3m ago•1 comments

Origin of "To Err Is Human; to Foul Things Up Requires a Computer" (2010)

https://quoteinvestigator.com/2010/12/07/foul-computer/
1•shagie•3m ago•0 comments

Show HN: Simple tool to explore UK companies with goods trading data

https://corpsignals.com/
1•rzykov•4m ago•0 comments

Show HN: BotMode checks if your site renders correctly for Googlebot

https://pagegym.com/botmode
1•razcoj•4m ago•0 comments

1,300-year-old world chronicle unearthed in Sinai

https://www.heritagedaily.com/2026/02/1300-year-old-world-chronicle-unearthed-in-sinai/156948
1•Anon84•5m ago•0 comments

Unorthodox Analytical Engine Utilizing Tinygrad

https://github.com/ronfriedhaber/autark
1•ronfriedhaber•8m ago•0 comments

Can medical "AI" lie? Large study maps how LLMs handle health misinformation

https://medicalxpress.com/news/2026-02-medical-ai-large-llms-health.html
1•ck2•8m ago•1 comments

Why are people disconnecting or destroying their Ring cameras?

https://www.usatoday.com/story/news/nation/2026/02/10/ring-super-bowl-ad-dog-camera-privacy/88606...
2•toofy•9m ago•0 comments

Show HN: Been using this for my setup. Now opening it. AI hedge fund

https://github.com/DanisHack/ai-hedge-fund
1•danishhm•9m ago•0 comments

Show HN: Custom Pricing Units in Flexprice (price in credits, bill in USD)

https://flexprice.io/
4•ShreyaChaurasia•9m ago•0 comments

Formalization and Inevitability of the Pareto Principle

https://arxiv.org/abs/2602.11131
2•bikenaga•11m ago•1 comments

In Defense of SaaS

https://twitter.com/finbarr/status/2021999185172775288
1•Finbarr•12m ago•0 comments

Zero State Architecture deep dive

1•buttersmoothAI•14m ago•0 comments

Agent Death by a Thousand Cuts: UX Anti-Pattern Skill

https://github.com/cassiozen/UX-antipatterns
1•cacozen•15m ago•0 comments

Show HN: Myrlin – Open-Source Workspace Manager for Claude Code

https://github.com/therealarthur/myrlin-workbook
1•therealarthur•16m ago•2 comments

Michael Burry's Manifesto: Why I'm Short Palantir

https://michaeljburry.substack.com/p/palantirs-new-clothes-foundry-aip
1•robertkoss•16m ago•1 comments

Denver schools blocking ChatGPT over group chats, adult content

https://www.chalkbeat.org/colorado/2026/01/13/denver-schools-blocking-chatgpt-over-concerns-about...
1•Balgair•16m ago•0 comments

Custom machine kept man alive without lungs for 48 hours

https://arstechnica.com/health/2026/01/custom-machine-kept-man-alive-without-lungs-for-48-hours/
2•PaulHoule•16m ago•0 comments

RL on GPT-5 to write better kernels

https://arxiv.org/abs/2602.11000
2•atallahw•19m ago•1 comments

How much of AI labs' research is "safety"?

https://fi-le.net/safety-blogs/
1•gk1•19m ago•0 comments

'Something Will Go Wrong': Anthropic's Chief on the Coming A.I. Disruption

https://www.nytimes.com/2026/02/12/opinion/artificial-intelligence-anthropic-amodei.html
3•quadtree•19m ago•0 comments

X has under 30 engineers

https://twitter.com/nikitabier/status/2021992577642508562
3•olalonde•19m ago•2 comments

BashoBot – A Personal AI Assistant Built with Bash

https://github.com/uraimo/bashobot
4•drtse4•19m ago•0 comments

Don't build agents, build context enrichment

https://trunk.io/blog/don-t-build-agents-build-context-enrichment
2•elischleifer•19m ago•1 comments