frontpage.


HN: AI File Sorter 1.3 – Add your own Local LLM for offline file organization

https://github.com/hyperfield/ai-file-sorter/releases/tag/v1.3.0
1•hyperfield•57s ago•1 comment

Data General – The Fair Bastards

http://www.teamfoster.com/billteamfostercom
1•rbanffy•1m ago•0 comments

Sum Types in Julia and Rust

https://andreaskroepelin.de/blog/sum_types/
1•fanf2•3m ago•0 comments

Why Building Agents Is Hard

https://artificialinvestment.substack.com/p/interview-sanjin-bicanic-on-why-building
1•fovc•3m ago•1 comment

Archaeologists think they've solved Peru's 'band of holes' mystery

https://www.cnn.com/2025/11/21/science/peru-band-of-holes-mystery
1•giuliomagnifico•5m ago•0 comments

Linux Patches Improve Intel Nested VM Memory Performance Up to ~2353x in Test

https://www.phoronix.com/news/Intel-Nested-VM-Faster-Memory
1•rbanffy•6m ago•0 comments

SC25: HACCing over 500 Petaflops on Frontier

https://chipsandcheese.com/p/sc25-haccing-over-500-petaflops-on
1•rbanffy•8m ago•0 comments

Safe haven to sanctions: how Jersey sheltered Roman Abramovich

https://www.theguardian.com/world/2025/nov/23/jersey-london-safe-haven-sanctions-roman-abramovich...
1•zeristor•11m ago•0 comments

Call Center Batik Air

1•Batikolangan•12m ago•7 comments

Ask HN: Vim, Emacs and the time spent configuring

1•not-so-darkstar•12m ago•0 comments

We're entering 'stage two of AI' where bottlenecks are physical constraints

https://fortune.com/2025/11/23/google-ai-data-centers-serving-capacity-contraints-gemini-google-c...
2•pretext•15m ago•0 comments

AI Ignites the Return of Bezos the Inventor

https://www.wsj.com/tech/ai/ai-ignites-the-return-of-bezos-the-inventor-c42d0075
1•pretext•15m ago•0 comments

Mind-reading devices can now predict preconscious thoughts: is it time to worry?

https://www.nature.com/articles/d41586-025-03714-0
1•bookofjoe•17m ago•1 comment

Shaders: How to draw high fidelity graphics with just x and y coordinates

https://www.makingsoftware.com/chapters/shaders
1•Garbage•18m ago•0 comments

Show HN: Sidemail – Email platform for SaaS (email API, newsletters, automation)

https://sidemail.io/
1•slonik•20m ago•0 comments

Show HN: Qeltrix: PoC for a content-derived, parallel, streaming encryption container

1•hejhdiss•20m ago•0 comments

What Does the AGPL Require?

https://runxiyu.org/blog/agpl/
1•todsacerdoti•23m ago•0 comments

IKEA Mastered Furniture [video]

https://www.youtube.com/watch?v=0h8vAGCiRX0
1•vinhnx•24m ago•0 comments

Show HN: Page Cast – A single-file HTML host that hosts itself

https://gist.githack.com/Romelium/7513fc74536c3be15edf769cc416d10f/raw/4cebb56717ae4637de40403991...
1•Romelium•25m ago•0 comments

Titanic passenger's pocket watch sold for record £1.78M at auction

https://www.theguardian.com/uk-news/2025/nov/23/titanic-passenger-pocket-watch-sold-record-auction
1•stevekemp•27m ago•0 comments

Show HN: The AI homepage – A news homepage for AI related articles

https://www.theaihomepage.com/
1•maverick98•31m ago•0 comments

Game Theory Explains How Algorithms Can Drive Up Prices

https://www.wired.com/story/game-theory-explains-how-algorithms-can-drive-up-prices/
4•quapster•43m ago•0 comments

Microsoft says it will run Windows 11 File Explorer in background to load faster

https://www.windowslatest.com/2025/11/22/microsoft-says-it-will-always-run-windows-11-file-explor...
2•tosh•44m ago•2 comments

US Department of Transportation unveils first female-modeled crash test dummy

https://www.theguardian.com/world/2025/nov/21/transportation-department-first-female-crash-dummy
1•binning•44m ago•0 comments

The Many – and Contradictory – Histories of Mt. Rushmore

https://lithub.com/on-the-many-and-contradictory-histories-of-mt-rushmore/
1•bryanrasmussen•44m ago•0 comments

The battle between science and postmodernism: from Boyle's air pump to Dawkins

https://susanpickard.substack.com/p/the-battle-between-science-and-postmodernism
1•binning•48m ago•0 comments

Like the New Yorker but Better

https://thelambsconduitreview.neocities.org
1•rishirulzeworld•49m ago•0 comments

South Africa declares gender-based violence and femicide a national disaster

https://www.theguardian.com/society/2025/nov/22/south-africa-g20-protests-gender-based-violence-n...
2•binning•50m ago•0 comments

Show HN: I made it fast and easy to launch your own RAG-powered AI chatbots

https://www.chatrag.ai
1•carlos_marcial•50m ago•1 comment

Ask HN: Why GenAI is immoral but vibe coding is ok?

1•jb_briant•55m ago•4 comments

GPT needs a truth-first toggle for technical workflows

1•PAdvisory•6mo ago
I use GPT-4 extensively for technical work: coding, debugging, modeling complex project logic. The biggest issue isn’t hallucination—it’s that the model prioritizes being helpful and polite over being accurate.

The default priority order feels like this:

1. Safety
2. Helpfulness
3. Tone
4. Truth
5. Consistency

In a development workflow, this is backwards. I’ve lost entire days chasing errors caused by GPT confidently guessing things it wasn’t sure about—folder structures, method syntax, async behaviors—just to “sound helpful.”

What’s needed is a toggle (UI or API) that:

Forces “I don’t know” when certainty is missing

Prevents speculative completions

Prioritizes truth over style, when safety isn’t at risk

Keeps all safety filters and tone alignment intact for other use cases

This wouldn’t affect casual users or conversational queries. It would let developers explicitly choose a mode where accuracy is more important than fluency.

This request has also been shared through OpenAI's support channels. I'm posting here to see if others have run into the same limitation, or have worked around it more reliably than I have.

Comments

duxup•6mo ago
I’ve found this with many LLMs: they want to give an answer, even if it’s wrong.

Gemini on the Google search page constantly answers questions with a yes or no… and then the evidence it gives indicates the opposite of the answer.

I think the core issue is that, in the end, LLMs are just word math and they don’t “know” when they don’t “know”… they just string words together and hope for the best.

PAdvisory•6mo ago
I went into it pretty in depth after breaking a few of them with severe constraints. What it seems to come down to is how the platforms themselves prioritize functions: most put "helpfulness" and "efficiency" above truth, which leads the LLM to make a lot of "guesses" and "predictions". At their core, pretty much all LLMs are built to "predict" the information in their answers, but they can avoid that and remain consistent when heavily constrained. The issue is that this isn't enforced at the core level, so I find we have to constantly re-apply the constraints over and over.
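
In practice that mostly means re-sending the constraint with every request instead of assuming it persists across a long conversation. A minimal sketch (helper name and prompt wording are illustrative, not from any particular platform):

  # Hypothetical helper: keep the "truth-first" constraint pinned to every request
  # instead of relying on it surviving a long conversation.
  TRUTH_FIRST = "If you are not certain, say 'I don't know' instead of guessing."

  def constrained_messages(history, user_msg, constraint=TRUTH_FIRST):
      """Return a chat-format message list with the constraint re-applied first."""
      return ([{"role": "system", "content": constraint}]
              + list(history)
              + [{"role": "user", "content": user_msg}])

  # Rebuild the message list this way on every turn before calling the API.
  history = []  # prior turns as {"role": ..., "content": ...} dicts
  msgs = constrained_messages(history, "Does pathlib.Path.walk exist in Python 3.11?")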
Ace__•6mo ago
I have made something that addresses this. It's not ready to share yet, but soon-ish. At the moment it only works on GPT-4o. I tried local Q4_K_M models in LM Studio, but it was a complete no-go.