
Biases in the Blind Spot: Detecting What LLMs Fail to Mention

https://arxiv.org/abs/2602.10117
1•mpweiher•1m ago•0 comments

Free SERP Content Analyzer

https://kitful.ai/write-tools/serp-content-analyzer
1•eashish93•1m ago•0 comments

Why I'm Not Worried About My AI Dependency

https://boagworld.com/emails/ai-dependency/
1•cdrnsf•4m ago•0 comments

AI Agent Lands PRs in Major OSS Projects, Targets Maintainers via Cold Outreach

https://socket.dev/blog/ai-agent-lands-prs-in-major-oss-projects-targets-maintainers-via-cold-out...
1•cdrnsf•5m ago•0 comments

Internet Increasingly Becoming Unarchivable

https://www.niemanlab.org/2026/01/news-publishers-limit-internet-archive-access-due-to-ai-scrapin...
5•ninjagoo•7m ago•1 comment

Intent to Experiment: Ship Rust XML Parser to 1% stable for non-XSLT scenarios

https://groups.google.com/a/chromium.org/g/blink-dev/c/D7BE4QPw0S4
1•justin-reeves•9m ago•0 comments

Google Search Isn't a Common Carrier – Richards v. Google

https://blog.ericgoldman.org/archives/2026/02/google-search-isnt-a-common-carrier-richards-v-goog...
1•hn_acker•11m ago•0 comments

Rendering attractors at 200 megapixels on A100s

https://axisophy.com/collections/mersenne
2•scylx•11m ago•1 comment

First Ariane 6 with four boosters lifts off

https://www.esa.int/Enabling_Support/Space_Transportation/Ariane/More_power_first_Ariane_6_with_f...
3•belter•12m ago•0 comments

What If AI Isn't the Goal? – Living in a Post-AI Society

https://zias.be/blog/living-in-a-post-ai-society
1•ziasvannes•16m ago•2 comments

Putting economic theory to the test: Cutting local taxes cuts household income

https://phys.org/news/2026-02-economic-theory-local-taxes-household.html
2•bikenaga•16m ago•1 comment

How AI slop is causing a crisis in computer science

https://www.nature.com/articles/d41586-025-03967-9
3•gnabgib•20m ago•0 comments

Show HN: AuraSpend – Voice-first expense tracker using Gemini for NLU

https://play.google.com/store/apps/details?id=com.intrepid.auraspend&hl=en_US
1•subhanzg•23m ago•0 comments

Every App Needs Auth / Ory Helps / This Template Fixes It

https://github.com/Samuelk0nrad/docker-ory
1•samuel_kx0•24m ago•0 comments

Show HN: DryCast – Never run outside to save your laundry from rain again

https://drycast.app/
1•AwkwardPanda•24m ago•0 comments

Manage, freeze and restore GPU processes quickly

https://github.com/shayonj/gpusched
2•shayonj•24m ago•0 comments

Show HN: Tilth v0.3 – 17% cheaper AI code navigation (279 runs, 3 Claude models)

1•jahala•26m ago•0 comments

Tech leaders pour $50M into super PAC to elect AI-friendly candidates

https://www.latimes.com/business/story/2026-02-13/tech-titans-pour-50-million-into-super-pac-to-e...
3•geox•27m ago•0 comments

How HEAD Works in Git

https://jvns.ca/blog/2024/03/08/how-head-works-in-git/
3•b-man•27m ago•0 comments

I Visited the Future of AI Engineering – and Returned with a Warning

https://igor718185.substack.com/p/i-visited-the-future-of-ai-engineering
2•iggori•29m ago•2 comments

Dr. Oz pushes AI avatars as a fix for rural health care

https://www.npr.org/2026/02/14/nx-s1-5704189/dr-oz-ai-avatars-replace-rural-health-workers
4•toomuchtodo•30m ago•3 comments

TikTok

https://www.tiktok.com/explore
1•Hackersing•31m ago•0 comments

Bloom-Filter Art: Encode words in a heart; Send it to someone special

https://improbable-heart.com/
1•nait•31m ago•1 comment

Show HN: Clawsec - Open-source plugin for OpenClaw that blocks dangerous actions

https://www.clawsec.bot
1•subho007•35m ago•1 comment

The crucial first step for designing a successful enterprise AI system

https://www.technologyreview.com/2026/02/02/1131822/the-crucial-first-step-for-designing-a-succes...
1•gnabgib•35m ago•0 comments

Stop Drowning in Your Thoughts

https://curiositysink.substack.com/p/stop-drowning-in-your-thoughts
2•raptisj•35m ago•1 comment

Show HN: Solscan-CLI – Scan Solana wallets and audit DeFi from terminal

https://github.com/contactn8n410-del/solscan-cli
1•solscan_dev•36m ago•0 comments

Mozilla Readability

https://github.com/mozilla/readability
1•blenderob•36m ago•0 comments

ParipueiraBeberibeceara

http://ParipueiraBeberibeceara.com
1•Hackersing•36m ago•0 comments

Musk fires up SpaceX, Bezos pushes Blue Origin as billionaires race China to moon

https://www.reuters.com/business/aerospace-defense/musk-fires-up-spacex-bezos-pushes-blue-origin-...
2•tartoran•36m ago•0 comments

GPT needs a truth-first toggle for technical workflows

1•PAdvisory•8mo ago
I use GPT-4 extensively for technical work: coding, debugging, modeling complex project logic. The biggest issue isn’t hallucination—it’s that the model prioritizes being helpful and polite over being accurate.

The default behavior feels like it ranks priorities in this order:

1. Safety
2. Helpfulness
3. Tone
4. Truth
5. Consistency

In a development workflow, this is backwards. I’ve lost entire days chasing errors caused by GPT confidently guessing things it wasn’t sure about—folder structures, method syntax, async behaviors—just to “sound helpful.”

What’s needed is a toggle (UI or API) that does the following (a rough sketch of an approximation follows the list):

- Forces "I don't know" when certainty is missing
- Prevents speculative completions
- Prioritizes truth over style, when safety isn't at risk
- Keeps all safety filters and tone alignment intact for other use cases
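No such setting exists today; the closest workaround I have is pinning the rules in a system message on every request. A minimal sketch with the OpenAI Python client (the TRUTH_FIRST wording and the example question are just illustrations; a prompt can only nudge the model, it can't enforce anything):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical "truth-first" instructions; there is no real API flag
    # for this, so a system prompt is the best available approximation.
    TRUTH_FIRST = (
        "Accuracy outranks helpfulness and tone. If you are not certain "
        "of a fact (an API signature, a file path, a config key), say "
        "'I don't know' instead of guessing. Never invent names or syntax."
    )

    resp = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # cuts sampling variance; does not add calibration
        messages=[
            {"role": "system", "content": TRUTH_FIRST},
            {"role": "user", "content": "Does pathlib.Path have a walk() method?"},
        ],
    )
    print(resp.choices[0].message.content)

Setting temperature=0 makes the output more repeatable, but repeatable is not the same as true, which is exactly why this needs to be a first-class mode rather than a prompt hack.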

This wouldn’t affect casual users or conversational queries. It would let developers explicitly choose a mode where accuracy is more important than fluency.

This request has also been shared through OpenAI's support channels. I'm posting here to see if others have run into the same limitation, or have worked around it more reliably than I have.

Comments

duxup•8mo ago
I’ve found this with many LLMs: they want to give an answer, even if it's wrong.

Gemini on the Google search page constantly answers questions yes or no… and then the evidence it gives indicates the opposite of the answer.

I think the core issue is that, in the end, LLMs are just word math: they don't “know” what they don't “know”. They just string words together and hope for the best.

PAdvisory•8mo ago
I went into this pretty in depth after breaking a few with severe constraints. What it seems to come down to is how the platforms themselves prioritize functions: most put "helpfulness" and "efficiency" above truth, which then leads the LLM to make a lot of "guesses" and "predictions". At their core, pretty much all LLMs are built to "predict" the information in answers, but they can actually avoid that and stay consistent when heavily constrained. The issue is that the constraint isn't at the core level, so I find we have to constantly re-apply it, over and over.
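Concretely, re-applying it looks something like this (a rough sketch; the wrapper and wording are made up for illustration). Nothing persists in the model between calls, so the constraint has to ride along with every single request:

    from openai import OpenAI

    client = OpenAI()

    CONSTRAINT = "Do not guess. If you are not certain, reply exactly: I don't know."
    history = []  # running user/assistant turns

    def ask(question: str) -> str:
        history.append({"role": "user", "content": question})
        resp = client.chat.completions.create(
            model="gpt-4",
            temperature=0,
            # The constraint is prepended on EVERY call, not just the first:
            # it is part of each request, not persistent model state.
            messages=[{"role": "system", "content": CONSTRAINT}] + history,
        )
        answer = resp.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        return answer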
Ace__•8mo ago
I have made something that addresses this. Not ready to share it yet, but soon-ish. At the moment it only works on GPT-4o. I tried local Q4_K_M quantized models in LM Studio, but it was a complete no-go.