frontpage.

Sites with a /Now Page

https://nownownow.com
1•zdw•2m ago•0 comments

Happy Map

https://pudding.cool/2026/02/happy-map/
1•latexr•4m ago•0 comments

Just two days of oatmeal cut bad cholesterol by 10%

https://www.sciencedaily.com/releases/2026/02/260225081217.htm
1•gradus_ad•10m ago•1 comment

Microgpt

http://karpathy.github.io/2026/02/12/microgpt/
1•tambourine_man•15m ago•0 comments

Blender iPad App Development Halted as Android Tablets Get Priority

https://www.macrumors.com/2026/02/27/blender-ipad-pro-app-development-halted/
2•mrkpdl•19m ago•0 comments

Reconstructing OPL: Joseph Weizenbaum's Online Programming Language

https://timereshared.com/reconstructing-joseph-weizenbaums-opl/
1•abrax3141•23m ago•0 comments

Running a One Trillion-Parameter LLM Locally on AMD Ryzen AI Max+ Cluster

https://www.amd.com/en/developer/resources/technical-articles/2026/how-to-run-a-one-trillion-para...
4•mindcrime•29m ago•0 comments

Banning children from VPNs and social media will erode adults' privacy

https://www.newscientist.com/article/2516996-banning-children-from-vpns-and-social-media-will-ero...
4•hn_acker•30m ago•1 comment

Agentation: Structured UI feedback for coding agents

https://agentation.dev/
1•firloop•31m ago•0 comments

AMA about our work with the Dow and our thinking over the past few days

https://twitter.com/sama/status/2027900042720498089
1•caaqil•34m ago•0 comments

Show HN: Cognitive architecture that hit #1 on LiveBench (68.5%)

https://truthagi.ai
1•felipemayamuniz•36m ago•1 comment

Show HN: Quizz MCP – Turn Claude Code Conversations into Quizzes

https://github.com/ThoBustos/quizz-mcp
1•ThoBustos•38m ago•0 comments

AI What Do: A framework for thinking about AI power and human agency

https://osh.works/posts/ai-what-do/
1•oshoma•41m ago•0 comments

Daily Tetonor - the Daily Math Logic Puzzle

https://dailytetonor.com/
1•H3d3s•42m ago•0 comments

How Awesome? annotates GitHub awesome lists with repo stats, stars, last commit

https://how-awesome.libklein.com/
1•zdw•42m ago•0 comments

Show HN: Integrate governance before your AI stack executes – COMMAND console

https://www.mos2es.io
1•Burnmydays•43m ago•0 comments

deleted

1•folkstack•43m ago•0 comments

Ubuntu 26.04 ends a 40-year old sudo tradition

https://www.omgubuntu.co.uk/2026/02/ubuntu-26-04-sudo-password-asterisks
2•campuscodi•43m ago•0 comments

Napkin Math Flashcards

https://chughes87.github.io/napkin-math-flashcards.html
1•archarios•44m ago•1 comment

Fast Autoscheduling for Sparse ML Frameworks

http://fredrikbk.com/cgo26scorch.html
1•matt_d•45m ago•0 comments

Sam Altman AMA about DoD deal

https://xcancel.com/i/status/2027900042720498089
8•marcuschong•45m ago•1 comment

TENSURE: Fuzzing Sparse Tensor Compilers (Registered Report)

https://www.ndss-symposium.org/ndss-paper/auto-draft-689/
1•matt_d•49m ago•0 comments

OpenAI has released Dow contract language, and it's as Anthropic claimed

https://twitter.com/justanotherlaw/status/2027855993921802484
1•erwald•49m ago•0 comments

A Day in the Life of an Enshittificator [video]

https://www.youtube.com/watch?v=T4Upf_B9RLQ
3•zahlman•50m ago•1 comment

Claude making me more productive every day usecases

1•joel_hainzl•53m ago•0 comments

DeepExplain: Interactive Guide to Dirac Notation and Quantum Mechanics

https://deepexplain.dev/dirac-notation/
2•crawde•54m ago•0 comments

Show HN: A live playground for Beautiful Mermaid

https://play.beautiful-mermaid.dev/
1•Justineo•54m ago•0 comments

Show HN: Atom – open-source AI agent with "visual" episodic memory

https://github.com/rush86999/atom
1•rush86999•55m ago•0 comments

A Reinforcement Learning Environment for Automatic Code Optimization in MLIR

https://arxiv.org/abs/2409.11068
1•matt_d•56m ago•0 comments

"Half the dads at this 7am swim practice have Codex or Claude Code fired up."

https://twitter.com/mattyglesias/status/2027724808406831604
6•jmeister•56m ago•0 comments

GPT needs a truth-first toggle for technical workflows

1•PAdvisory•9mo ago
I use GPT-4 extensively for technical work: coding, debugging, modeling complex project logic. The biggest issue isn't hallucination; it's that the model prioritizes being helpful and polite over being accurate.

The default behavior feels like this priority order:

1. Safety
2. Helpfulness
3. Tone
4. Truth
5. Consistency

In a development workflow, this is backwards. I've lost entire days chasing errors caused by GPT confidently guessing at things it wasn't sure about (folder structures, method syntax, async behaviors) just to "sound helpful."

What's needed is a toggle (UI or API) that:

- Forces "I don't know" when certainty is missing
- Prevents speculative completions
- Prioritizes truth over style when safety isn't at risk
- Keeps all safety filters and tone alignment intact for other use cases

This wouldn’t affect casual users or conversational queries. It would let developers explicitly choose a mode where accuracy is more important than fluency.
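
No such toggle exists in the API today, but some of the requested behavior can be approximated client-side with a system prompt and deterministic sampling. The sketch below builds a chat-completions request payload that front-loads truth-first rules; the prompt wording and the `build_truth_first_request` helper are illustrative workarounds, not an OpenAI feature.

```python
# Sketch: approximating a "truth-first" mode with a system prompt.
# The prompt text and helper here are hypothetical; nothing in this
# block is an official OpenAI toggle.

TRUTH_FIRST_PROMPT = (
    "Accuracy outranks helpfulness and tone. If you are not certain "
    "of an answer, reply exactly 'I don't know' instead of guessing. "
    "Never invent file paths, method signatures, or API behavior."
)

def build_truth_first_request(user_message: str, model: str = "gpt-4") -> dict:
    """Build a chat-completions payload with the truth-first rules first."""
    return {
        "model": model,
        "temperature": 0,  # deterministic sampling: less creative guessing
        "messages": [
            {"role": "system", "content": TRUTH_FIRST_PROMPT},
            {"role": "user", "content": user_message},
        ],
    }

req = build_truth_first_request("What does this async call return?")
print(req["temperature"])          # 0
print(req["messages"][0]["role"])  # system
```

In practice this only nudges the model; as the comments below note, prompt-level constraints tend to drift over a long session and have to be restated.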

This request has also been shared through OpenAI's support channels. Posting here to see whether others have run into the same limitation, or have found a more reliable workaround than I have.

Comments

duxup•9mo ago
I've found this with many LLMs: they want to give an answer, even if it's wrong.

Gemini on the Google search page constantly answers questions yes or no… and then the evidence it gives indicates the opposite of the answer.

I think the core issue is that in the end LLMs are just word math, and they don't "know" what they don't "know". They just string words together and hope for the best.
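
One crude mitigation along these lines is to gate the answer on the model's own token probabilities: chat-completions requests can return per-token logprobs, which a wrapper could average and refuse on. The helper and the 0.5 threshold below are illustrative assumptions, and sequence-level logprob averaging is a weak proxy for real uncertainty, not a fix.

```python
import math

def confidence_gate(token_logprobs: list[float], threshold: float = 0.5) -> bool:
    """Return True if the mean per-token probability clears the threshold.

    token_logprobs: natural-log probabilities, one per generated token,
    e.g. as returned when a chat-completions request enables logprobs.
    """
    if not token_logprobs:
        return False  # no tokens, nothing to trust
    mean_prob = math.exp(sum(token_logprobs) / len(token_logprobs))
    return mean_prob >= threshold

# A confident answer (per-token prob ~0.9) passes; a shaky one does not.
print(confidence_gate([math.log(0.9)] * 5))  # True
print(confidence_gate([math.log(0.2)] * 5))  # False
```

If the gate fails, the wrapper would discard the completion and surface "I don't know" instead, which is the closest a client can currently get to the toggle the post asks for.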

PAdvisory•9mo ago
I went into this pretty in depth after breaking a few of them with severe constraints. What it seems to come down to is how the platforms themselves prioritize functions: MOST put "helpfulness" and "efficiency" ABOVE truth, which leads the LLM to make a lot of "guesses" and "predictions". At their core, pretty much ALL LLMs are built to "predict" the information in answers, but they CAN avoid that and remain consistent when heavily constrained. The issue is that the constraint isn't at the core level, so I find we have to CONSTANTLY re-apply it over and over.

Ace__•9mo ago
I have made something that addresses this. Not ready to share it yet, but soon-ish. At the moment it only works on the GPT-4o model. I tried local Q4_K_M quantized models in LM Studio, but it was a complete no-go.