
GPT needs a truth-first toggle for technical workflows

1•PAdvisory•10mo ago
I use GPT-4 extensively for technical work: coding, debugging, modeling complex project logic. The biggest issue isn’t hallucination—it’s that the model prioritizes being helpful and polite over being accurate.

The default behavior feels like it follows this priority order:

1. Safety
2. Helpfulness
3. Tone
4. Truth
5. Consistency

In a development workflow, this is backwards. I’ve lost entire days chasing errors caused by GPT confidently guessing things it wasn’t sure about—folder structures, method syntax, async behaviors—just to “sound helpful.”

What’s needed is a toggle (UI or API) that:

- Forces “I don’t know” when certainty is missing
- Prevents speculative completions
- Prioritizes truth over style when safety isn’t at risk
- Keeps all safety filters and tone alignment intact for other use cases
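Since no such toggle exists today, the closest workaround is to approximate it client-side. A minimal sketch of that idea: the system prompt and helper function below are my own illustrative assumptions, not an OpenAI feature, and the resulting payload would be passed to the usual chat-completions call of the OpenAI Python SDK.

```python
# Sketch of a client-side "truth-first" wrapper. The prompt text and the
# helper function are hypothetical, not an official OpenAI toggle.

TRUTH_FIRST_SYSTEM = (
    "You are assisting with technical work. If you are not certain of an "
    "answer, reply exactly 'I don't know.' Never guess file paths, method "
    "signatures, or API behavior. Prefer a correct refusal over a fluent guess."
)

def truth_first_request(user_prompt: str, model: str = "gpt-4") -> dict:
    """Build a chat-completion payload biased toward accuracy:
    temperature 0 to reduce speculative sampling, plus the strict
    system instruction above."""
    return {
        "model": model,
        "temperature": 0,
        "messages": [
            {"role": "system", "content": TRUTH_FIRST_SYSTEM},
            {"role": "user", "content": user_prompt},
        ],
    }

# The dict can then be passed to client.chat.completions.create(**payload).
payload = truth_first_request("What does pathlib.Path.walk return?")
```

This only nudges the model toward refusal; it does not guarantee calibrated “I don’t know” answers, which is exactly why a first-class toggle is being requested.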

This wouldn’t affect casual users or conversational queries. It would let developers explicitly choose a mode where accuracy is more important than fluency.

This request has also been shared through OpenAI’s support channels. I’m posting here to see whether others have run into the same limitation, or have worked around it more reliably than I have.

Comments

duxup•10mo ago
I’ve found this with many LLMs: they want to give an answer, even if it’s wrong.

Gemini on the Google search page constantly answers questions yes or no… and then the evidence it gives indicates the opposite of the answer.

I think the core issue is that, in the end, LLMs are just word math. They don’t “know” when they don’t “know”; they just string words together and hope for the best.
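One partial counterpoint to "they don't know when they don't know": sampling APIs do expose per-token log-probabilities, and the entropy of the next-token distribution is a rough uncertainty signal. A toy illustration with made-up distributions (not real model outputs), using only the standard library:

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a next-token probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical next-token distributions: a confident one and an unsure one.
confident = [0.97, 0.01, 0.01, 0.01]   # model strongly prefers one token
unsure    = [0.25, 0.25, 0.25, 0.25]   # model has no idea

print(entropy(confident))  # low entropy, roughly 0.24 bits
print(entropy(unsure))     # 2.0 bits, the maximum for four options
```

In practice calibration is poor (a model can be confidently wrong), so this is a heuristic at best, not the truth-first toggle being requested.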

PAdvisory•10mo ago
I went into this pretty deeply after breaking a few of them with severe constraints. What it seems to come down to is how the platforms themselves prioritize functions: most put “helpfulness” and “efficiency” above truth, which leads the LLM to make a lot of guesses and predictions. At their core, pretty much all LLMs are built to “predict” the information in their answers, but they can actually avoid that and remain consistent when heavily constrained. The issue is that this isn’t enforced at the core level, so I find we have to constantly re-impose the constraints over and over.
Ace__•10mo ago
I have made something that addresses this. Not ready to share it yet, but soon-ish. At the moment it only works on GPT-4o. I tried local Q4_K_M quantized models in LM Studio, but it was a complete no-go.