GPT needs a truth-first toggle for technical workflows

1•PAdvisory•6mo ago
I use GPT-4 extensively for technical work: coding, debugging, modeling complex project logic. The biggest issue isn’t hallucination—it’s that the model prioritizes being helpful and polite over being accurate.

The default behavior feels like this priority order:

1. Safety
2. Helpfulness
3. Tone
4. Truth
5. Consistency

In a development workflow, this is backwards. I’ve lost entire days chasing errors caused by GPT confidently guessing things it wasn’t sure about—folder structures, method syntax, async behaviors—just to “sound helpful.”

What’s needed is a toggle (UI or API) that:

- Forces “I don’t know” when certainty is missing
- Prevents speculative completions
- Prioritizes truth over style when safety isn’t at risk
- Keeps all safety filters and tone alignment intact for other use cases

This wouldn’t affect casual users or conversational queries. It would let developers explicitly choose a mode where accuracy is more important than fluency.
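Until such a toggle exists, one rough client-side approximation is to pin a strict system prompt and zero out the sampling temperature. The sketch below only assembles a chat-completions request payload (no network call); the prompt wording and the `build_truth_first_request` helper are my own illustrative assumptions, not an OpenAI feature, and a system prompt can only nudge the model toward admitting uncertainty, not guarantee it.

```python
# Client-side "truth-first" workaround sketch, pending a real toggle.
# The prompt text and helper name are illustrative assumptions.

TRUTH_FIRST_PROMPT = (
    "You are assisting with technical work. Accuracy outranks helpfulness "
    "and tone. If you are not certain of a fact (an API signature, a file "
    "path, a version number), say 'I don't know' instead of guessing. "
    "Never invent identifiers."
)

def build_truth_first_request(user_message: str, model: str = "gpt-4") -> dict:
    """Assemble a chat-completions payload that approximates the toggle:
    a strict system prompt plus temperature 0 to curb speculative sampling."""
    return {
        "model": model,
        "temperature": 0,  # deterministic decoding, less "creative" guessing
        "messages": [
            {"role": "system", "content": TRUTH_FIRST_PROMPT},
            {"role": "user", "content": user_message},
        ],
    }
```

The payload can then be passed to whatever client library you use; keeping the system prompt in one place at least makes the constraint consistent across a project instead of re-typed per conversation.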

This request has also been shared through OpenAI's support channels. I'm posting here to see whether others have run into the same limitation, or have worked around it more reliably than I have.

Comments

duxup•6mo ago
I’ve found this with many LLMs: they want to give an answer, even if it’s wrong.

Gemini on the Google search page constantly answers questions yes or no… and then the evidence it gives indicates the opposite of the answer.

I think the core issue is that, in the end, LLMs are just word math and they don’t “know” what they don’t “know”… they just string words together and hope for the best.
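One partial mitigation for APIs that expose per-token log-probabilities: treat a low average token probability as a hint that the model is stringing words together rather than recalling. This is only a rough heuristic sketch; the `flag_low_confidence` helper and the 0.5 threshold are assumptions of mine, and logprobs only loosely track factual certainty.

```python
import math

def flag_low_confidence(token_logprobs: list[float], threshold: float = 0.5) -> bool:
    """Given per-token log-probabilities (as returned when requesting
    logprobs from a completion API), return True if the answer's average
    token probability falls below the threshold -- a crude proxy for the
    model "not knowing". Threshold is an illustrative assumption.
    """
    if not token_logprobs:
        return True  # no tokens, nothing to trust
    # Convert each logprob back to a probability and average them.
    avg_prob = sum(math.exp(lp) for lp in token_logprobs) / len(token_logprobs)
    return avg_prob < threshold
```

A caller could use this to append a visible "low confidence" warning to answers, which at least surfaces the guessing instead of hiding it behind fluent prose.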

PAdvisory•6mo ago
I went into this pretty in depth after breaking a few models with severe constraints. What it seems to come down to is how the platforms themselves prioritize functions: most put "helpfulness" and "efficiency" above truth, which leads the LLM to make a lot of guesses and predictions. At their core, pretty much all LLMs are built to predict the information in answers, but they can actually avoid that and stay consistent when heavily constrained. The issue is that the constraint isn't at the core level, so I find I have to constantly re-apply it over and over.
Ace__•6mo ago
I have made something that addresses this. It's not ready to share yet, but soon-ish. At the moment it only works on GPT-4o. I tried local Q4_K_M quantized models in LM Studio, but they were a complete no-go.