frontpage.

Banning social media is the wrong conversation

https://substack.com/inbox/post/182273404
1•_phnd_•1m ago•1 comments

The Deadweight Loss of Entertainment

https://moultano.wordpress.com/2025/12/09/the-dead-weight-loss-of-entertainment/
1•Ariarule•3m ago•0 comments

A Room of One's Own: The Studiolo

https://www.italianrenaissanceresources.com/units/unit-4/essays/a-room-of-ones-own-the-studiolo/
1•foster_nyman•4m ago•0 comments

The Private-Credit Party Turns Ugly for Individual Investors

https://www.wsj.com/finance/investing/the-private-credit-party-turns-ugly-for-individual-investor...
1•zerosizedweasle•7m ago•0 comments

ONNX Runtime and CoreML May Silently Convert Your Model to FP16

https://ym2132.github.io/ONNX_MLProgram_NN_exploration
2•Two_hands•11m ago•0 comments

400-Mile-Long Layer of Fog Has Been Draped over California for 3 Weeks

https://weather.com/news/weather/news/2025-12-16-tule-fog-central-california-valley-november-dece...
2•geox•13m ago•0 comments

Germany's Christmas Markets Are Now Ringed with Security Barriers

https://www.nytimes.com/2025/12/19/world/europe/germany-christmas-market-security-bollard-attacks...
1•bookofjoe•14m ago•2 comments

Show HN: I automated forensic accounting for divorce cases (3 min vs. 4 weeks)

1•cd_mkdir•15m ago•0 comments

Foundations of LVM for mere mortals (2015)

https://storageapis.wordpress.com/2015/12/04/foundations-of-lvm-for-mere-mortals/
2•indigodaddy•17m ago•0 comments

What New Developers Need to Know About Working with AI

https://www.mooreds.com/wordpress/archives/3722
2•mooreds•19m ago•0 comments

All Things Wrapped (2025)

https://mtajchert.com/all-things-wrapped
1•tajchert•19m ago•1 comments

Show HN: Type-safe JSON-LD schema builder for Next.js

https://github.com/Aghefendi/nextjs-jsonld-schema
1•adas014•20m ago•0 comments

Remote: Terms of Distributed Collaboration

https://www.nakedinstinct.xyz/remote-work-classification/
1•mooreds•23m ago•0 comments

Text Rendering Hates You

https://faultlore.com/blah/text-hates-you/
1•andsoitis•23m ago•1 comments

Walmart and other US companies want to build a pipeline of skilled tradespeople

https://apnews.com/article/skilled-trades-labor-shortage-walmart-maintenance-5ab4bf643840a6a49660...
1•petethomas•31m ago•0 comments

Laid Off After 25 Years in Tech: Anxiety, Sacrifice, Reality No One Talks About [video]

https://www.youtube.com/watch?v=VeMA9WGKxOg
2•m348e912•31m ago•0 comments

Just click and see what happens

https://iamdinakar.github.io/simplest-project-ever/
1•DinakarS•35m ago•1 comments

A Case for Self-Hosted P2P Storage

https://carlosfelic.io/misc/self-hosted-p2p-storage-ledgerless/
2•cfelicio•35m ago•1 comments

Something Little on Group Testing

https://www.hermandaniel.com/blog/20251113-group-testing/
2•kekqqq•39m ago•0 comments

Holes in the Web - Generative AI has access to a small slice of human knowledge

https://aeon.co/essays/generative-ai-has-access-to-a-small-slice-of-human-knowledge
2•tartoran•43m ago•0 comments

I thought passkeys were confusing until I switched to this password manager

https://www.makeuseof.com/thought-passkeys-were-confusing-until-switched-to-password-manager/
2•RyeCombinator•43m ago•0 comments

Show HN: Agent Skill turns existing filesystem into Claude's memory

https://github.com/backnotprop/rg_history
2•ramoz•45m ago•0 comments

Primary time scale failure at NIST Boulder campus; impact on NTP services

https://groups.google.com/a/list.nist.gov/g/internet-time-service/c/o0dDDcr1a8I?pli=1
1•airhangerf15•51m ago•0 comments

Belated Liquid Glass on iPhone first impressions

https://lapcatsoftware.com/articles/2025/12/4.html
4•robenkleene•53m ago•0 comments

I built a tool that turns prompt into animation in 10 seconds

https://videoeffectvibe.com
1•bruuuuuuuuh•53m ago•2 comments

How Israel targeted Iran's nuclear scientists

https://www.washingtonpost.com/national-security/2025/12/17/iran-israel-war-nuclear-scientists-fr...
2•markus_zhang•55m ago•0 comments

I wish people were more public

https://borretti.me/article/i-wish-people-were-more-public
2•swah•56m ago•1 comments

Year Prediction Bingo Card

https://docs.google.com/spreadsheets/d/1XM5zEWHeK2EPZcq1fUsuVIN1aceAY8GUy1avWYOZve0/edit?gid=0#gid=0
1•mooreds•58m ago•0 comments

Show HN: Create Scrapers for Any Site with AI

https://chromewebstore.google.com/detail/lection/ddlpcandmdagknjmlmokglimgepcgpjo
1•jlauf•59m ago•0 comments

Power outage in Boulder area affects atomic clock

https://www.cbsnews.com/colorado/news/power-outage-boulder-atomic-clock-nist/
3•jonbaer•1h ago•1 comments

GPT needs a truth-first toggle for technical workflows

1•PAdvisory•7mo ago
I use GPT-4 extensively for technical work: coding, debugging, modeling complex project logic. The biggest issue isn’t hallucination—it’s that the model prioritizes being helpful and polite over being accurate.

The default behavior feels like this priority order, highest first:

1. Safety
2. Helpfulness
3. Tone
4. Truth
5. Consistency

In a development workflow, this is backwards. I’ve lost entire days chasing errors caused by GPT confidently guessing things it wasn’t sure about—folder structures, method syntax, async behaviors—just to “sound helpful.”

What’s needed is a toggle (UI or API) that:

Forces “I don’t know” when certainty is missing

Prevents speculative completions

Prioritizes truth over style, when safety isn’t at risk

Keeps all safety filters and tone alignment intact for other use cases

This wouldn’t affect casual users or conversational queries. It would let developers explicitly choose a mode where accuracy is more important than fluency.
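
Until something like this exists, the closest I can get is pushing the constraint in from the client side. Below is a rough sketch of that kind of workaround using the standard OpenAI Python SDK; the system prompt wording, the truth_first_ask helper, and the model name are illustrative assumptions on my part, not an actual OpenAI feature or toggle.

# Rough client-side approximation of a "truth-first" mode (not a real toggle).
# Assumes the OpenAI Python SDK (openai>=1.0); prompt text and model name are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TRUTH_FIRST_SYSTEM_PROMPT = """
You are assisting with technical work (code, debugging, project architecture).
Rules, in priority order:
1. If you are not certain of a fact (API signature, folder structure, async behavior),
   say "I don't know" instead of guessing.
2. Never invent function names, flags, paths, or version-specific behavior.
3. Prefer short, verifiable answers over fluent but speculative ones.
4. Normal safety policies still apply.
""".strip()

def truth_first_ask(question: str) -> str:
    # temperature=0 keeps sampling deterministic so the model is less inclined
    # to fill in plausible-sounding details; it does not guarantee accuracy.
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; substitute whatever model you use
        temperature=0,
        messages=[
            {"role": "system", "content": TRUTH_FIRST_SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(truth_first_ask("Does asyncio.TaskGroup exist in Python 3.10?"))

The obvious limitation, and the reason I'm asking for a real toggle, is that this is just another instruction competing with the model's built-in priorities; it reduces confident guessing in my experience, but does not eliminate it.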

This request has also been shared through OpenAI's support channels. Posting here to see if others have run into the same limitation, or have worked around it more reliably than I have.

Comments

duxup•7mo ago
I’ve found this with many LLMs: they want to give an answer, even if it's wrong.

Gemini on the Google search page constantly answers questions yes or no… and then the evidence it gives indicates the opposite of the answer.

I think the core issue is that, in the end, LLMs are just word math and they don't "know" when they don't "know". They just string words together and hope for the best.

PAdvisory•7mo ago
I went into this pretty in depth after breaking a few models with severe constraints. What it seems to come down to is how the platforms themselves prioritize functions: most put "helpfulness" and "efficiency" above truth, which then leads the LLM to make a lot of guesses and predictions. At their core, pretty much all LLMs are built to "predict" the information in answers, but they can actually avoid that and remain consistent when heavily constrained. The issue is that the constraint isn't at the core level, so I find we have to constantly re-apply it, over and over.
Ace__•7mo ago
I have made something that addresses this. It's not ready to share yet, but soon-ish. At the moment it only works with GPT-4o. I tried local Q4_K_M quantized models in LM Studio, but it was a complete no-go.