
GPT needs a truth-first toggle for technical workflows

1•PAdvisory•6mo ago
I use GPT-4 extensively for technical work: coding, debugging, modeling complex project logic. The biggest issue isn’t hallucination—it’s that the model prioritizes being helpful and polite over being accurate.

The default behavior feels like this priority order:

1. Safety
2. Helpfulness
3. Tone
4. Truth
5. Consistency

In a development workflow, this is backwards. I’ve lost entire days chasing errors caused by GPT confidently guessing things it wasn’t sure about—folder structures, method syntax, async behaviors—just to “sound helpful.”

What’s needed is a toggle (UI or API) that:

Forces “I don’t know” when certainty is missing

Prevents speculative completions

Prioritizes truth over style, when safety isn’t at risk

Keeps all safety filters and tone alignment intact for other use cases

This wouldn’t affect casual users or conversational queries. It would let developers explicitly choose a mode where accuracy is more important than fluency.
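In the absence of such a toggle, the closest approximation today is to pin the constraints into a system prompt and use deterministic sampling. The sketch below is a hypothetical workaround, not an official OpenAI feature: the function name, the prompt wording, and the payload shape are illustrative (the payload mirrors the Chat Completions request format, but nothing here actually calls the API).

```python
# Hypothetical "truth-first" workaround: encode the requested behavior as a
# strict system prompt plus temperature=0, rather than a real API toggle.

TRUTH_FIRST_SYSTEM = (
    "You are assisting with technical work. "
    "If you are not certain of a fact (an API signature, a file path, "
    "async semantics), reply exactly: I don't know. "
    "Never guess in order to sound helpful. Prefer a short correct answer "
    "over a fluent speculative one."
)

def truth_first_request(user_prompt: str, model: str = "gpt-4") -> dict:
    """Build a chat-completion-style payload approximating a truth-first mode."""
    return {
        "model": model,
        "temperature": 0,  # deterministic sampling: less creative guessing
        "messages": [
            {"role": "system", "content": TRUTH_FIRST_SYSTEM},
            {"role": "user", "content": user_prompt},
        ],
    }

payload = truth_first_request(
    "What is the exact signature of asyncio.wait_for?"
)
```

This only approximates the requested behavior: a system prompt is a suggestion the model can still override, which is exactly why a real, enforced mode switch would be stronger.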

This request has also been shared through OpenAI's support channels. I'm posting here to see if others have run into the same limitation, or have worked around it more reliably than I have.

Comments

duxup•6mo ago
I’ve found this with many LLMs: they want to give an answer, even if it's wrong.

Gemini on the Google search page constantly answers questions yes or no… and then the evidence it gives indicates the opposite of the answer.

I think the core issue is that, in the end, LLMs are just word math, and they don’t “know” when they don’t “know”. They just string words together and hope for the best.
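One partial workaround for the "doesn't know that it doesn't know" problem is to inspect the token log-probabilities that some APIs return and flag low-confidence completions. The sketch below assumes you already have a list of per-token logprobs (e.g. from a `logprobs`-style response field); the threshold is arbitrary, and this measures sampling uncertainty, not factual correctness, which is a known limitation of the approach.

```python
import math

def mean_token_confidence(logprobs: list[float]) -> float:
    """Average per-token probability, recovered from log-probabilities."""
    return sum(math.exp(lp) for lp in logprobs) / len(logprobs)

def flag_low_confidence(logprobs: list[float], threshold: float = 0.7) -> bool:
    """Heuristic: treat an answer as suspect if average token probability
    falls below the (arbitrary) threshold."""
    return mean_token_confidence(logprobs) < threshold

# Spread-out logprobs (the model was sampling among many options) get flagged;
# sharply peaked ones do not.
assert flag_low_confidence([-1.2, -0.9, -1.5]) is True
assert flag_low_confidence([-0.05, -0.01, -0.02]) is False
```

A flagged answer could then be re-asked, cross-checked, or surfaced to the user with a warning rather than stated as fact.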

PAdvisory•6mo ago
I went into this pretty in depth after breaking a few of them with severe constraints. What it seems to come down to is how the platforms themselves prioritize functions: most put "helpfulness" and "efficiency" above truth, which leads the LLM to make a lot of "guesses" and "predictions". At their core, pretty much all LLMs are built to predict the content of answers, but they can actually avoid that and remain consistent when heavily constrained. The issue is that this isn't enforced at the core level, so I find we have to constantly re-apply the constraints over and over.
Ace__•6mo ago
I have made something that addresses this. Not ready to share it yet, but soon-ish. At the moment it only works with GPT-4o. I tried local Q4_K_M quantized models in LM Studio, but they were a complete no-go.