
GPT needs a truth-first toggle for technical workflows

1•PAdvisory•11mo ago
I use GPT-4 extensively for technical work: coding, debugging, modeling complex project logic. The biggest issue isn’t hallucination—it’s that the model prioritizes being helpful and polite over being accurate.

The default behavior feels like this:

1. Safety
2. Helpfulness
3. Tone
4. Truth
5. Consistency

In a development workflow, this is backwards. I’ve lost entire days chasing errors caused by GPT confidently guessing things it wasn’t sure about—folder structures, method syntax, async behaviors—just to “sound helpful.”

What’s needed is a toggle (UI or API) that:

- Forces “I don’t know” when certainty is missing
- Prevents speculative completions
- Prioritizes truth over style when safety isn’t at risk
- Keeps all safety filters and tone alignment intact for other use cases

This wouldn’t affect casual users or conversational queries. It would let developers explicitly choose a mode where accuracy is more important than fluency.
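No such switch exists in the API today. As a stopgap, the closest approximation I know of is a strict system prompt plus deterministic sampling. A minimal sketch, assuming the standard chat-completions payload schema — the helper name and the prompt wording are my own, and nothing here is sent over the network:

```python
def build_truth_first_request(user_prompt: str) -> dict:
    """Assemble a chat-completions payload that asks the model to prefer
    "I don't know" over speculation. A workaround sketch, not a real
    truth-mode toggle."""
    system = (
        "You are assisting with technical work. If you are not certain of a "
        "fact (an API signature, a file path, a config key), reply "
        "\"I don't know\" instead of guessing. Never invent names. "
        "Accuracy matters more than fluency, politeness, or completeness."
    )
    return {
        "model": "gpt-4",
        "temperature": 0,  # deterministic sampling; no creative guessing
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user_prompt},
        ],
    }

payload = build_truth_first_request("What does asyncio.TaskGroup return?")
```

In practice this only nudges the model; as noted below in the thread, the constraint isn't enforced at the core level, so it drifts over long sessions.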

This request has also been shared through OpenAI's support channels. Posting here to see if others have run into the same limitation, or have worked around it more reliably than I have.

Comments

duxup•11mo ago
I’ve found this with many LLMs: they want to give an answer, even if it’s wrong.

Gemini on the Google search page constantly answers questions yes or no… and then the evidence it gives indicates the opposite of the answer.

I think the core issue is that, in the end, LLMs are just word math. They don’t “know” when they don’t “know”; they just string words together and hope for the best.

PAdvisory•11mo ago
I dug into this pretty deeply after breaking a few models with severe constraints. What it seems to come down to is how the platforms themselves prioritize functions: MOST put "helpfulness" and "efficiency" ABOVE truth, which leads the LLM to make a lot of "guesses" and "predictions". At their core, pretty much ALL LLMs are built to "predict" the information in answers, but they CAN avoid that and remain consistent when heavily constrained. The issue is that the constraint isn't at the core level, so I find we have to CONSTANTLY re-steer it over and over.
Ace__•11mo ago
I have made something that addresses this. Not ready to share it yet, but soon-ish. At the moment it only works on the GPT-4o model. I tried local Q4_K_M quantized models in LM Studio, but they were a complete no-go.

Serendipity Machines

https://www.shishyko.com/essays/serendipity-machines.html
1•shishy•3m ago•0 comments

Project Deal: Claude-run marketplace experiment

https://www.anthropic.com/features/project-deal
1•EFLKumo•3m ago•0 comments

Show HN: Lazytilt TUI for Tilt.dev

https://github.com/tdi/lazytilt
1•tdi•4m ago•0 comments

Creastor beats stan and all others on fees alone

https://creastor.com/
1•TheFireTiger•6m ago•1 comment

Clawcenter – Minimal Mission Control

1•borjasolerme•6m ago•0 comments

Ask HN: What's a mind-blowing fact you know?

1•chistev•6m ago•0 comments

42 lost pages of the New Testament manuscript discovered

https://phys.org/news/2026-04-lost-pages-testament-manuscript.html
2•pseudolus•6m ago•0 comments

Claude Opus 4.7 has turned into an overzealous query cop, devs complain

https://www.theregister.com/2026/04/23/claude_opus_47_auc_overzealous/
1•freedomben•8m ago•0 comments

You probably wouldn't notice if an AI chatbot slipped ads into its responses

https://theconversation.com/you-probably-wouldnt-notice-if-an-ai-chatbot-slipped-ads-into-its-res...
2•geox•11m ago•0 comments

Possibility of modifying an image to see without glasses? (2010)

https://stackoverflow.com/questions/2563471/is-it-possible-to-modify-an-image-so-someone-with-myo...
1•zeristor•12m ago•1 comment

Meta signs agreement with AWS to power agentic AI on Amazon's Graviton chips

https://www.aboutamazon.com/news/aws/meta-aws-graviton-ai-partnership
1•ksec•15m ago•1 comment

Why LLMs Can't Replace Strategic Insight

https://hbr.org/2026/03/researchers-asked-llms-for-strategic-advice-they-got-trendslop-in-return
1•Antibabelic•18m ago•0 comments

The art of splitting without splitting

https://www.youtube.com/watch?v=jr8KxZvosYI
1•RebootStr•18m ago•0 comments

Rust open-source headless browser for AI agents and web scraping

https://github.com/h4ckf0r0day/obscura
2•guerby•26m ago•0 comments

Gleam gets source maps, 1.16.0

https://gleam.run/news/javascript-source-maps/
1•birdculture•34m ago•0 comments

A fun 5 minute take on AI in business

https://www.youtube.com/watch?v=nDL3Ch7Nz8c
1•lifeisstillgood•34m ago•0 comments

DeFi United calls on the world for $292M rsETH relief

https://defiunited.world/
2•kindkang2024•39m ago•0 comments

I wrote an async LSM storage engine in Rust

https://github.com/mehrdad3301/tiny-lsm
2•mehrdad__3301•42m ago•1 comment

Code Is Free Now. What's Left Is Us

https://p.ocmatos.com/blog/code-is-free-now-whats-left-is-us.html
1•pmatos•43m ago•0 comments

Agentic AI for Hormuz Shock Modelling

https://avkcode.github.io/blog/hormuz-shock.html
1•KyleVlaros•45m ago•0 comments

Elon Musk's near-daily online posts about race are turning off some fans

https://www.washingtonpost.com/technology/2026/04/24/musk-online-posts-race-whiteness/
5•vrganj•47m ago•0 comments

You don't have to be filthy rich to enjoy an airport shower

https://www.nytimes.com/2026/04/24/travel/airport-lounges-showers-beds.html
1•strogonoff•49m ago•0 comments

Markdown (Aaron Swartz: The Weblog)

http://www.aaronsw.com/weblog/001189
1•tahazsh•50m ago•0 comments

Vanishing Culture: A Report on Our Fragile Cultural Record

https://archive.org/details/vanishing-culture-2026
2•stared•50m ago•0 comments

SiGit Code: local-first coding agent

https://github.com/getsigit/sigit
1•kampak212•53m ago•0 comments

Show HN: StudyHall – A Virtual Workspace

https://studyhall.app
3•kornatzky•58m ago•0 comments

A Dose of Wisdom from Silicon Valley's Favorite Prophet

https://www.nytimes.com/2026/04/24/opinion/ezra-klein-podcast-stewart-brand.html
2•throwrg25•1h ago•0 comments

The Wind in the Willows and reading out loud

https://interconnected.org/home/2026/04/24/willows
1•Tomte•1h ago•0 comments

Why 'Atomic Habits' may not be working for you (2023)

https://www.krishnabharadwaj.info/why-atomic-habits-may-not-be-working-for-you/
1•n_e•1h ago•0 comments

What Is the Next Moat?

https://substack.com/profile/73011963-ming/note/c-248876715
1•dooku0721•1h ago•0 comments