frontpage.

Partitioning a 17TB Table in PostgreSQL

https://www.tines.com/blog/futureproofing-tines-partitioning-a-17tb-table-in-postgresql/
1•shayonj•2m ago•0 comments

VS Code: Broken rendering on macOS after app resumed from idle state

https://github.com/microsoft/vscode/issues/284162
1•tosh•2m ago•0 comments

OpenAI Wants a Cut of Your Profits: Inside Its New Royalty-Based Plan

https://www.gizmochina.com/2026/01/21/openai-wants-a-cut-of-your-profits-inside-its-new-royalty-b...
1•thenaturalist•2m ago•0 comments

Shenzhou-20 Returns Safely After Historic In-Flight Debris Repairs

https://www.apollothirteen.com/article/orbital-resilience-shenzhou-20-returns-safely-following-hi...
1•darkmatternews•4m ago•0 comments

Alternatives to MinIO for single-node local S3

https://rmoff.net/2026/01/14/alternatives-to-minio-for-single-node-local-s3/
1•rymurr•4m ago•0 comments

Show HN: A verified foundation of mathematics in Coq (Theory of Systems)

1•Horsocrates•7m ago•0 comments

Heathrow's new scanners end dreaded rummage for liquids and laptops

https://www.reuters.com/world/heathrows-new-scanners-end-dreaded-rummage-liquids-laptops-2026-01-23/
1•comebhack•9m ago•0 comments

Can the prescription drug leucovorin treat autism? History says, probably not

https://www.npr.org/sections/shots-health-news/2026/01/22/nx-s1-5684294/leucovorin-autism-folic-f...
1•pseudolus•16m ago•0 comments

Davos Stops Pretending

https://messaging-custom-newsletters.nytimes.com/dynamic/render
1•doener•17m ago•0 comments

For the Children: A short story about the endgame of EU Chat Control

https://gigaprojects.online/post/1
1•giga_private•18m ago•1 comments

An Adversarial Coding Test

https://runjak.codes/posts/2026-01-21-adversarial-coding-test/
1•birdculture•20m ago•0 comments

Go Developer Survey 2025: How Gophers Use AI Tools, Editors, and Cloud Platforms

https://go.dev/blog/survey2025
1•Lwrless•20m ago•0 comments

Ask HN: What's the current best local/open speech-to-speech setup?

1•dsrtslnd23•22m ago•0 comments

A Multi-Entry Control Flow Graph Design Conundrum

https://bernsteinbear.com/blog/multiple-entry/
2•chunkles•25m ago•0 comments

Bernstein vs. United States

https://en.wikipedia.org/wiki/Bernstein_v._United_States
1•u1hcw9nx•27m ago•0 comments

Show HN: Workmux – Parallel development in tmux with Git worktrees

https://workmux.raine.dev/
1•rane•27m ago•0 comments

Show HN: 9 years building an open-source financial platform

https://github.com/finmars-platform/finmars-core
3•ogreshnev•28m ago•0 comments

Ask HN: What 'AI feature' created negative ROI in production?

1•kajolshah_bt•29m ago•0 comments

TigerBeetle's Stablecoin Mistake

https://www.news.alvaroduran.com/tigerbeetle-stablecoin-mistake/
2•ohduran•29m ago•0 comments

What Will You Do When AI Runs Out of Money and Disappears?

https://louwrentius.com/what-will-you-do-when-ai-will-run-out-of-money-and-disappear.html
1•louwrentius•31m ago•0 comments

Why is software still built like billions don't exist in 2026?

5•yerushalayim•33m ago•2 comments

Is Polish Scrabble the most difficult in the world? [video]

https://www.youtube.com/watch?v=aTIOHwT0FnY
1•nathell•33m ago•0 comments

Post-Agentic Code Forges

https://sluongng.substack.com/p/post-agentic-code-forges
1•todsacerdoti•34m ago•0 comments

In-memory analog computing for non-negative matrix factorization

https://www.nature.com/articles/s41467-026-68609-8
1•martinlaz•39m ago•0 comments

RT Superconductivity at 298K in Ternary LaScH System at High-Pressure Conditions

https://arxiv.org/abs/2510.01273
1•fluffybuns•41m ago•0 comments

Show HN: Waifu2x.live – Free AI image upscaler (2x/4x) & video generation

1•Nancy1230•41m ago•1 comments

Campaigner launches £1.5B legal action in UK against Apple over wallet's ...

https://www.theguardian.com/technology/2026/jan/23/campaigner-launches-legal-action-against-apple...
1•chrisjj•43m ago•1 comments

Anthropic: AI Is Transforming Jobs, Not Replacing Them

https://www.forbes.com/sites/anishasircar/2026/01/23/ai-is-transforming-jobs-not-replacing-them-a...
1•hochmartinez•44m ago•1 comments

AI Boosts Research Careers but Flattens Scientific Discovery

https://spectrum.ieee.org/ai-science-research-flattens-discovery
1•pseudolus•44m ago•0 comments

Google must face consumer antitrust lawsuit over search dominance, US judge rules

https://www.reuters.com/legal/government/google-must-face-consumer-antitrust-lawsuit-over-search-...
2•pseudolus•45m ago•0 comments

GPT needs a truth-first toggle for technical workflows

1•PAdvisory•8mo ago
I use GPT-4 extensively for technical work: coding, debugging, modeling complex project logic. The biggest issue isn’t hallucination—it’s that the model prioritizes being helpful and polite over being accurate.

The default behavior feels like this:

1. Safety
2. Helpfulness
3. Tone
4. Truth
5. Consistency

In a development workflow, this is backwards. I’ve lost entire days chasing errors caused by GPT confidently guessing things it wasn’t sure about—folder structures, method syntax, async behaviors—just to “sound helpful.”

What’s needed is a toggle (UI or API) that:

Forces “I don’t know” when certainty is missing

Prevents speculative completions

Prioritizes truth over style, when safety isn’t at risk

Keeps all safety filters and tone alignment intact for other use cases

This wouldn’t affect casual users or conversational queries. It would let developers explicitly choose a mode where accuracy is more important than fluency.

This request has also been shared through OpenAI's support channels. I'm posting here to see whether others have run into the same limitation, or have found a more reliable workaround than I have.
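
For illustration, one rough approximation available today is prompt-level only: pin the sampling temperature and spell the rule out in the system prompt via the API. A minimal sketch follows; the prompt wording and the ask_truth_first helper are made up for this example, and only the standard OpenAI Python client call is assumed. It is not the model-level toggle being asked for.

    # Sketch of a "truth-first" wrapper around the chat completions API.
    # The system prompt wording and the ask_truth_first() helper are
    # hypothetical; only the standard OpenAI Python client is assumed.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    TRUTH_FIRST = (
        "You are assisting with technical work. If you are not certain of a "
        "fact (an API signature, a file path, a flag, async behavior), answer "
        "'I don't know' instead of guessing. Do not invent names or structures. "
        "Prefer short, verifiable answers over fluent but speculative ones."
    )

    def ask_truth_first(question: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4",
            temperature=0,  # lowers variance; does not by itself stop guessing
            messages=[
                {"role": "system", "content": TRUTH_FIRST},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    print(ask_truth_first("Which asyncio API cancels a TaskGroup on Python 3.9?"))

In practice this only nudges the output distribution; the model can still guess confidently, which is exactly why a real mode would matter.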

Comments

duxup•8mo ago
I’ve found this with many LLMs: they want to give an answer, even if it’s wrong.

Gemini on the Google search page constantly answers questions with a yes or no… and then the evidence it gives indicates the opposite of the answer.

I think the core issue is that, in the end, LLMs are just word math and they don’t “know” when they don’t “know”. They just string words together and hope for the best.
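
One partial signal the API already exposes is token log-probabilities. A rough sketch, assuming the standard OpenAI Python client; the 0.5 threshold and the example question are arbitrary, and a low token probability is at best a crude proxy for uncertainty, not the model actually “knowing” it doesn’t know.

    # Flag answer tokens the model itself assigned low probability to.
    # The 0.5 threshold and the example question are arbitrary illustrations.
    import math
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o",
        logprobs=True,
        messages=[{"role": "user", "content": "Does Gemini answer yes/no questions on the Google search page?"}],
    )

    for tok in response.choices[0].logprobs.content:
        p = math.exp(tok.logprob)
        if p < 0.5:
            print(f"low-confidence token: {tok.token!r} (p={p:.2f})")

High-probability tokens still show up inside confidently wrong answers, which is roughly the “word math” problem described above.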

PAdvisory•8mo ago
I went into this pretty deeply after breaking a few models with severe constraints. What it seems to come down to is how the platforms themselves prioritize functions: most put "helpfulness" and "efficiency" above truth, which leads the LLM to make a lot of "guesses" and "predictions". At their core, pretty much all LLMs are built to "predict" the information in answers, but they can actually avoid that and remain consistent when heavily constrained. The issue is that the constraint isn't at the core level, so I find we have to constantly retrain it over and over.
Ace__•8mo ago
I have made something that addresses this. It's not ready to share yet, but soon-ish. At the moment it only works on GPT-4o. I tried local Q4_K_M quantized models in LM Studio, but it was a complete no-go.