frontpage.


Why you're probably going to lose money on Polymarket

https://www.msn.com/en-us/money/investment/why-you-re-probably-going-to-lose-money-on-polymarket/...
1•Gaishan•1m ago•0 comments

Voting Before the Secret Ballot

https://www.historytoday.com/archive/feature/voting-secret-ballot
1•Petiver•4m ago•0 comments

Basic linear algebra algorithms (since C++26)

https://en.cppreference.com/cpp/numeric/linalg
1•tosh•4m ago•0 comments

ShinyHunters claims – 9k schools affected by Instructure Canvas data breach

https://edscoop.com/shinyhunters-claims-nearly-9000-schools-affected-by-canvas-data-breach/
1•Gaishan•4m ago•0 comments

Everyone gets faster writes: We turned off FPW's in Neon

https://neon.com/blog/turning-off-fpw-for-faster-writes
1•tosh•7m ago•0 comments

What Makes Axavive Different from Other Supplements?

1•DeclanGrimley•10m ago•0 comments

Does structured prompting change how LLMs reason, or just what they say?

https://doi.org/10.5281/zenodo.20116625
1•h_hasegawa•11m ago•0 comments

Democratizing AI Psychosis: Why Smart People Are Captured by AI Hype

https://perilous.tech/democratizing-ai-psychosis-why-smart-people-are-captured-by-ai-hype/
1•thoughtpeddler•14m ago•1 comments

Do you take after your dad's RNA?

https://knowablemagazine.org/content/article/living-world/2026/epigenetic-effects-of-sperm-on-off...
1•asplake•18m ago•0 comments

Typing Is Being Replaced by Whispering–and It's Way More Annoying

https://www.wsj.com/tech/typing-is-being-replaced-by-whisperingand-its-way-more-annoying-a804fee7
1•petethomas•19m ago•1 comments

Microsoft's African Data Center Falters on Payment Demands

https://www.bloomberg.com/news/articles/2026-05-10/microsoft-s-african-data-center-falters-on-pay...
1•1vuio0pswjnm7•20m ago•0 comments

Show HN: ChatbotX, an open-source alternative to ManyChat

https://github.com/ChatbotXIO/ChatbotX
1•hunterist•20m ago•0 comments

Study Finds iOS Users Have Shorter Relationships as Compared to Android Users

https://finance.yahoo.com/sectors/technology/articles/hanker-dating-study-finds-ios-185300885.html
3•iamkrazy•21m ago•1 comments

Chris Hohn's hedge fund slashes $8B Microsoft stake in warning over AI

https://www.ft.com/content/ac5d90a9-b010-4529-9616-706420920681
1•1vuio0pswjnm7•22m ago•0 comments

Clone Yourself into Agents

https://lorentz.app/blog-item.html?id=soulify-the-llm
1•baalimago•22m ago•0 comments

Miii – Claude Code-level terminal workflows offline, no API keys

https://www.npmjs.com/package/miii-cli
2•maruakshay•28m ago•0 comments

Social Cognition and Interpersonal Violence

https://www.pnas.org/doi/abs/10.1073/pnas.2519361123
1•neehao•30m ago•0 comments

Pi Slate – A Raspberry Pi5 handheld Linux cyberdeck with 5" 1920×720 touchscreen

https://www.cnx-software.com/2026/05/11/pi-slate-a-raspberry-pi-5-handheld-linux-cyberdeck-with-a...
1•anonymousiam•31m ago•1 comments

Microsoft's African data center falters on payment demands, Bloomberg reports

https://www.reuters.com/world/africa/microsofts-african-data-center-falters-payment-demands-bloom...
1•1vuio0pswjnm7•32m ago•0 comments

Why You Actually Want Machines Writing the Code for Your Next Flight

https://decodingvibes.com/blog/why-you-actually-want-machines-writing-the-code-for-your-next-flight/
1•altmanaltman•36m ago•0 comments

South Korea Exploring Using Hyundai Robots as Army Numbers Fall

https://www.bloomberg.com/news/articles/2026-05-11/south-korea-exploring-using-hyundai-robots-as-...
1•petethomas•41m ago•0 comments

Growling in a corner: Samuel Johnson's lost years

https://www.commonreader.co.uk/p/growling-in-a-corner-samuel-johnsons
1•pepys•43m ago•0 comments

Europe Is Losing Its Best Engineers – Not to Emigration, but to Management

https://andrulis.de/blog/20260429_management.html
1•taubek•44m ago•0 comments

Iran mulls taking control of all 7 cables passing through Strait of Hormuz

https://www.wionews.com/world/iran-to-take-full-control-of-all-7-undersea-internet-cables-passing...
5•jonah•47m ago•0 comments

The Trouble with Narrative History

https://thereader.mitpress.mit.edu/the-trouble-with-narrative-history/
2•Hooke•48m ago•0 comments

Geography Is Four-Dimensional

https://sive.rs/4d
1•Curiositry•48m ago•0 comments

Visual Generation Unlocks Human-Like Reasoning Through Multimodal World Models

https://arxiv.org/abs/2601.19834
2•felineflock•50m ago•0 comments

Blink – AI Assistant

https://blink-oi.vercel.app
1•Pascal1997•51m ago•0 comments

Neural Machine Perception

https://openstrate.com/
1•realitymatrixyz•52m ago•0 comments

A single 10,000 foot reel of digital microfilm: WAR.GOV/UFO

https://hypergrid.systems/war.gov-ufo-viewer/microfilm2?frame=12404&page=12404
1•keepamovin•53m ago•0 comments

GPT needs a truth-first toggle for technical workflows

1•PAdvisory•11mo ago
I use GPT-4 extensively for technical work: coding, debugging, modeling complex project logic. The biggest issue isn't hallucination; it's that the model prioritizes being helpful and polite over being accurate.

The default behavior feels like this priority order:

1. Safety
2. Helpfulness
3. Tone
4. Truth
5. Consistency

In a development workflow, this is backwards. I've lost entire days chasing errors caused by GPT confidently guessing at things it wasn't sure about (folder structures, method syntax, async behavior) just to "sound helpful."

What's needed is a toggle (UI or API) that:

- Forces "I don't know" when certainty is missing
- Prevents speculative completions
- Prioritizes truth over style when safety isn't at risk
- Keeps all safety filters and tone alignment intact for other use cases

This wouldn’t affect casual users or conversational queries. It would let developers explicitly choose a mode where accuracy is more important than fluency.
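Until such a toggle exists, the closest approximation is a strict system prompt combined with deterministic sampling. The sketch below builds request parameters in the shape of the OpenAI chat-completions API; the instruction wording and the helper itself are my own assumptions, not an official setting, and in practice this only nudges the model rather than giving it real access to its own uncertainty.

```python
# Sketch of a "truth-first" request builder. There is no official
# truth-first toggle; the system prompt wording here is an assumption.

TRUTH_FIRST_SYSTEM_PROMPT = (
    "You are assisting with technical work. Accuracy outweighs helpfulness "
    "and tone. If you are not certain of a fact (an API signature, a folder "
    "layout, async semantics), answer exactly: I don't know. "
    "Never guess or speculate in order to appear helpful."
)

def truth_first_request(user_prompt: str, model: str = "gpt-4") -> dict:
    """Build kwargs for a chat-completions call, e.g.
    client.chat.completions.create(**truth_first_request(prompt))."""
    return {
        "model": model,
        "temperature": 0,  # deterministic sampling; less stylistic padding
        "messages": [
            {"role": "system", "content": TRUTH_FIRST_SYSTEM_PROMPT},
            {"role": "user", "content": user_prompt},
        ],
    }
```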

This request has also been shared through OpenAI's support channels. Posting here to see if others have run into the same limitation, or have worked around it more reliably than I have.

Comments

duxup•11mo ago
I've found this with many LLMs: they want to give an answer, even if it's wrong.

Gemini on the Google search page constantly answers questions yes or no… and then the evidence it gives indicates the opposite of the answer.

I think the core issue is that, in the end, LLMs are just word math and they don't "know" whether they "know"… they just string words together and hope for the best.
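One partial workaround for the "doesn't know that it doesn't know" problem is to look at token log-probabilities, which some APIs can return and local runtimes expose directly. A minimal sketch of turning per-token logprobs into a crude confidence flag (the 0.8 threshold is an arbitrary assumption, and low token probability only loosely correlates with factual error):

```python
import math

def answer_confidence(token_logprobs: list[float]) -> float:
    """Geometric-mean token probability: exp(mean of the logprobs)."""
    if not token_logprobs:
        return 0.0
    return math.exp(sum(token_logprobs) / len(token_logprobs))

def flag_if_unsure(answer: str, token_logprobs: list[float],
                   threshold: float = 0.8) -> str:
    """Prepend a warning when the model's own token probabilities are low."""
    conf = answer_confidence(token_logprobs)
    if conf < threshold:
        return f"[low confidence: {conf:.2f}] {answer}"
    return answer
```

For example, `flag_if_unsure("Yes", [-0.01, -0.02])` passes the answer through, while `flag_if_unsure("Yes", [-1.2, -0.9])` flags it.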

PAdvisory•11mo ago
I went into this pretty deeply after breaking a few models with severe constraints. What it seems to come down to is how the platforms themselves prioritize functions: most put "helpfulness" and "efficiency" above truth, which leads the LLM to make a lot of guesses and predictions. At their core, pretty much all LLMs are built to predict the information in their answers, but they can actually avoid that and remain consistent when heavily constrained. The problem is that the constraint isn't at the core level, so I find we have to constantly re-apply it over and over.
Ace__•11mo ago
I have made something that addresses this. Not ready to share it yet, but soon-ish. At the moment it only works on GPT-4o. I tried local Q4_K_M quantized models in LM Studio, but it was a complete no-go.