
GPT needs a truth-first toggle for technical workflows

1•PAdvisory•8mo ago
I use GPT-4 extensively for technical work: coding, debugging, modeling complex project logic. The biggest issue isn’t hallucination—it’s that the model prioritizes being helpful and polite over being accurate.

The default behavior feels like this:

1. Safety
2. Helpfulness
3. Tone
4. Truth
5. Consistency

In a development workflow, this is backwards. I’ve lost entire days chasing errors caused by GPT confidently guessing things it wasn’t sure about—folder structures, method syntax, async behaviors—just to “sound helpful.”

What’s needed is a toggle (UI or API) that:

- Forces “I don’t know” when certainty is missing
- Prevents speculative completions
- Prioritizes truth over style, when safety isn’t at risk
- Keeps all safety filters and tone alignment intact for other use cases

This wouldn’t affect casual users or conversational queries. It would let developers explicitly choose a mode where accuracy is more important than fluency.
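In the absence of such a toggle, the closest approximation today is client-side: a strict system prompt plus a confidence gate on the model’s token log-probabilities. The sketch below is hypothetical — the `model`, `temperature`, `logprobs`, and `messages` parameters are real OpenAI Chat Completions options, but the prompt wording, the `should_abstain` helper, and the `-1.0` threshold are assumptions, not anything the API provides.

```python
# Client-side approximation of a "truth-first" toggle (hypothetical).
# Part 1: a system prompt that demands abstention over guessing.
# Part 2: a post-hoc gate on per-token logprobs returned by the API.

TRUTH_FIRST_SYSTEM_PROMPT = (
    "If you are not certain of a fact (an API name, a file path, a flag, "
    "async behavior), say 'I don't know' instead of guessing. "
    "Never invent identifiers or folder structures."
)

def request_params(user_message: str) -> dict:
    """Build Chat Completions parameters biased toward determinism,
    with per-token logprobs exposed for the gate below."""
    return {
        "model": "gpt-4o",      # assumption: any chat model with logprobs
        "temperature": 0,       # suppress sampling variance
        "logprobs": True,       # return token log-probabilities
        "messages": [
            {"role": "system", "content": TRUTH_FIRST_SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    }

def should_abstain(token_logprobs: list[float], threshold: float = -1.0) -> bool:
    """Abstain when the mean token logprob falls below `threshold`,
    i.e. the model averaged less than exp(threshold) certainty per token.
    The threshold value is a tunable assumption, not a published cutoff."""
    if not token_logprobs:
        return True  # no evidence of confidence: refuse rather than guess
    mean_lp = sum(token_logprobs) / len(token_logprobs)
    return mean_lp < threshold
```

This is a crude proxy — low logprobs signal uncertain wording, not necessarily false claims — which is exactly why a real server-side mode would be more reliable than anything bolted on by the caller.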

This request has also been shared through OpenAI's support channels. I'm posting here to see if others have run into the same limitation, or have worked around it more reliably than I have.

Comments

duxup•8mo ago
I’ve found this with many LLMs: they want to give an answer, even if it’s wrong.

Gemini on the Google search page constantly answers questions yes or no… and then the evidence it gives indicates the opposite of the answer.

I think the core issue is that, in the end, LLMs are just word math: they don’t “know” that they don’t “know.” They just string words together and hope for the best.

PAdvisory•8mo ago
I went into this pretty in depth after breaking a few models with severe constraints. What it seems to come down to is how the platforms themselves prioritize functions: most put “helpfulness” and “efficiency” above truth, which leads the LLM to make a lot of guesses and predictions. At their core, pretty much all LLMs are built to predict the information in answers, but they can actually avoid that and stay consistent when heavily constrained. The problem is that the constraint isn’t at the core level, so I find we have to constantly re-apply it, over and over.
Ace__•8mo ago
I have made something that addresses this. Not ready to share it yet, but soon-ish. At the moment it only works on GPT-4o. I tried local Q4_K_M quantized models in LM Studio, but it was a complete no-go.

AI Hiring in 2026: What Changes for Founders and Candidates

https://foundersarehiring.com/hiring-resources/ai-hiring-in-2026-changes-for-founders-and-candidates
1•niksmac•38s ago•0 comments

Moltbook is a bad takeoff scenario where human psychology itself is the exploit

https://twitter.com/PimDeWitte/status/2017571392384872742
1•lebek•1m ago•0 comments

Don't invert established UX mental models

https://thoughts.wyounas.com/p/dont-invert-established-ux-mental
1•simplegeek•2m ago•0 comments

A gen AI-discovered TNIK inhibitor for idiopathic pulmonary fibrosis

https://www.nature.com/articles/s41591-025-03743-2
1•frizlab•2m ago•0 comments

Building the Immich Editor

https://immich.app/blog/immich-editor
1•bo0tzz•3m ago•0 comments

Foley (Sound Design)

https://en.wikipedia.org/wiki/Foley_(sound_design)
1•lawlorino•5m ago•0 comments

Cryptomator – Encrypt files before loading into gdrive, Dropbox, etc.

https://github.com/cryptomator/cryptomator
1•l1am0•6m ago•0 comments

You've reached your rate limit. Please try again later

1•Haeuserschlucht•7m ago•0 comments

How to Beat Lyme Disease

https://tim.blog/2026/01/30/lyme-disease-ketogenic-diet/
1•maximedupre•9m ago•0 comments

Amazon is rolling out Alexa+ to all users. But not everyone wants it.

https://www.wired.com/story/alexa-plus-early-access-rollout-2026/
1•bookofjoe•9m ago•1 comments

The Ladder to Nowhere, Part 2: OpenAI's Complete Picture of You

https://insights.priva.cat/p/the-ladder-to-nowhere-part-2-openais
1•privacat•15m ago•0 comments

A no-bullshit introduction to groups: Part 1

https://iczelia.net/posts/groups/
1•mci•15m ago•0 comments

Finding Your Tribe of Mentors

https://leaddev.com/career-development/how-find-tribe-mentors
1•shehabas•17m ago•0 comments

Doom on a Fursuit [video]

https://bsky.app/profile/jtingf.bsky.social/post/3mdnhnwzbgk22
2•Kye•20m ago•0 comments

Sell America is the new trade on Wall Street

https://www.nytimes.com/2026/01/31/business/sell-america-dollar-financial-markets.html
2•softwaredoug•22m ago•0 comments

I guess I'm AI-pilled now?

https://brittanyellich.com/i-guess-im-ai-pilled-now/
2•mooreds•24m ago•1 comments

Show HN: Envware – E2EE CLI to manage environment variables across devices

2•humbertocruz•25m ago•0 comments

Launching My Side Project as a Solo Dev: The Walkthrough

https://alt-romes.github.io/posts/2026-01-30-from-side-project-to-kickstarter-a-walkthrough.html
3•romes•26m ago•0 comments

Do you really know your position?

https://agrisacademy.com/do-you-really-know-your-position/
1•mooreds•27m ago•0 comments

Show HN: Envware – E2EE CLI to manage environment variables across devices

https://www.envware.dev
1•humbertocruz•28m ago•0 comments

LazyVim for Ambitious Developers

https://lazyvim-ambitious-devs.phillips.codes/course/chapter-1/
1•Townley•28m ago•0 comments

Moltbook: Early Evidence of Agent-Targeted Influence Mechanics

https://medium.com/technomancy-laboratories/compressed-alignment-attack-early-evidence-of-agent-t...
1•pstryder•32m ago•0 comments

The Inverted Panopticon

https://shanakaanslemperera.substack.com/p/the-inverted-panopticon
1•MassPikeMike•33m ago•0 comments

Kling 3 – Cinematic AI video generator with character consistency

https://kling3.app/
1•Jenny249•35m ago•0 comments

GNU Recutils: a database management system using human-readable text files

https://labs.tomasino.org/gnu-recutils/
2•fanf2•36m ago•1 comments

Agent Consent Protocol (ACP)

https://github.com/o1100/Agent-Consent-Protocol
1•mooreds•39m ago•0 comments

Icons That Move with Intent

https://www.itshover.com/
1•typeofhuman•41m ago•0 comments

The Hidden Conversation in Breast Milk: Katie Hinde's Groundbreaking Research

https://ifeg.info/2025/12/15/the-hidden-conversation-in-breast-milk-katie-hindes-groundbreaking-r...
1•thunderbong•43m ago•0 comments

Agent Trace by Cursor: open spec for tracking AI-generated code

https://agent-trace.dev/
1•mustaphah•43m ago•0 comments

Show HN: Lumenyx – Bitcoin scarcity and full EVM, CPU mining, zero premine

https://github.com/lumenyx-chain/lumenyx
1•missed2009•44m ago•0 comments