
GPT needs a truth-first toggle for technical workflows

1•PAdvisory•11mo ago
I use GPT-4 extensively for technical work: coding, debugging, modeling complex project logic. The biggest issue isn’t hallucination—it’s that the model prioritizes being helpful and polite over being accurate.

The default behavior feels like this priority order:

1. Safety
2. Helpfulness
3. Tone
4. Truth
5. Consistency

In a development workflow, this is backwards. I’ve lost entire days chasing errors caused by GPT confidently guessing things it wasn’t sure about—folder structures, method syntax, async behaviors—just to “sound helpful.”

What’s needed is a toggle (UI or API) that:

Forces “I don’t know” when certainty is missing

Prevents speculative completions

Prioritizes truth over style, when safety isn’t at risk

Keeps all safety filters and tone alignment intact for other use cases

This wouldn’t affect casual users or conversational queries. It would let developers explicitly choose a mode where accuracy is more important than fluency.
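Until such a toggle exists, one partial workaround is possible client-side: some chat APIs can return per-token log-probabilities, and the caller can refuse to surface an answer whose average token confidence is low. This is only a sketch of the idea, not an official feature — the threshold and the abstention string are arbitrary choices here, and token probability is a rough proxy for factual certainty, not a guarantee of it:

```python
import math

def gate_answer(answer: str, token_logprobs: list[float],
                min_avg_prob: float = 0.9) -> str:
    """Surface the model's answer only if its average per-token
    probability clears a threshold; otherwise abstain.

    token_logprobs: natural-log probabilities of the generated tokens,
    as returned by APIs that expose a logprobs option.
    """
    if not token_logprobs:
        # No confidence signal at all -> abstain.
        return "I don't know."
    avg_prob = math.exp(sum(token_logprobs) / len(token_logprobs))
    return answer if avg_prob >= min_avg_prob else "I don't know."

# Example: a confidently generated answer passes the gate,
# a low-confidence one is replaced with an explicit abstention.
confident = gate_answer("use asyncio.gather", [math.log(0.95)] * 4)
shaky = gate_answer("probably foo.bar()", [math.log(0.5)] * 4)
```

The design choice is that abstention is forced in the caller's code rather than requested in the prompt, so it cannot be overridden by the model's tendency to sound helpful.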

This request has also been shared through OpenAI's support channels. Posting here to see if others have run into the same limitation, or have found a more reliable workaround than I have.

Comments

duxup•11mo ago
I’ve found this with many LLMs: they want to give an answer, even if it's wrong.

Gemini on the Google search page constantly answers questions yes or no… and then the evidence it gives indicates the opposite of the answer.

I think the core issue is that, in the end, LLMs are just word math: they don’t “know” what they don’t “know”. They just string words together and hope for the best.

PAdvisory•11mo ago
I went into this pretty deeply after breaking a few models with severe constraints. What it seems to come down to is how the platforms themselves prioritize functions: most put "helpfulness" and "efficiency" above truth, which leads the LLM to make a lot of guesses and predictions. At their core, pretty much all LLMs are built to predict the information in answers, but they can actually avoid that and stay consistent when heavily constrained. The problem is that the constraint isn't at the core level, so I find we have to constantly re-state it, over and over.
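The "constantly re-state it" problem can be reduced on the client side by pinning the constraint as a system message at the start of every request, so it never drifts out of context. A minimal sketch, assuming a chat API that takes a list of role/content messages (the wording of the instruction is just an illustration):

```python
# Hypothetical truth-first instruction; the exact wording is an
# assumption, not an official OpenAI feature.
TRUTH_FIRST = (
    "If you are not certain of an answer, say 'I don't know'. "
    "Never guess file paths, method names, or API signatures."
)

def with_constraint(messages: list[dict]) -> list[dict]:
    """Prepend the truth-first instruction as a system message to
    every request, so the constraint is re-applied automatically
    instead of being re-typed by the user."""
    return [{"role": "system", "content": TRUTH_FIRST}] + list(messages)

# Every call goes through the wrapper, so the constraint is
# always the first message the model sees.
request = with_constraint(
    [{"role": "user", "content": "What does os.scandir return?"}]
)
```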
Ace__•11mo ago
I have made something that addresses this. Not ready to share it yet, but soon-ish. At the moment it only works on GPT-4o. I tried local Q4_K_M quantized models in LM Studio, but it was a complete no-go.