frontpage.

Jon Skeet Facts

https://meta.stackexchange.com/questions/9134/jon-skeet-facts
1•ravenical•51s ago•1 comments

OAuth 2.1 Dynamic Client Registration for AWS BedrockAgentCore Gateway

https://github.com/orgs/stache-ai/discussions/5
1•Jtpenny•58s ago•0 comments

Serverless MCP on AWS with S3Vectors and Agentcore

https://github.com/orgs/stache-ai/discussions/4
1•Jtpenny•1m ago•0 comments

Reverse-engineering another Ursa Major classic: the StarGate 323

https://www.temeculadsp.com/journal/understanding-timing-circuits
1•johnwheeler•3m ago•0 comments

Show HN: AeroTag – Tag-based workspace management for AeroSpace (macOS)

https://typester.dev/blog/2026/01/11/tag-based-workspace-management-with-aerospace
1•typester•5m ago•1 comments

Hubble Telescope's Final Countdown: Could It Disappear Sooner Than Expected?

https://dailygalaxy.com/2026/01/hubble-countdown-could-it-disappear-sooner/
1•TMWNN•10m ago•0 comments

Token-Count-Based Batching: Faster, Cheaper Embedding Inference for Queries

https://www.mongodb.com/company/blog/engineering/token-count-based-batching-faster-cheaper-embedd...
1•fzliu•12m ago•0 comments

Tuning Random Generators: Property-Based Testing as Probabilistic Programming [pdf]

https://web.cs.ucla.edu/~todd/research/oopsla25a.pdf
2•todsacerdoti•14m ago•0 comments

Show HN: Built a course on buying small businesses – validating demand

https://smalldealschool.com/
1•boring_million•16m ago•1 comments

A $400k payout is putting prediction markets in the spotlight

https://apnews.com/article/prediction-markets-maduro-trades-1f47e737f915fff00c57f03e7390b41f
4•petethomas•20m ago•0 comments

Matchbox Educable Noughts and Crosses Engine

https://en.wikipedia.org/wiki/Matchbox_Educable_Noughts_and_Crosses_Engine
1•icwtyjj•22m ago•0 comments

Big Tech's Ugly Duckling: Can Snap Finally Execute?

https://ossa-ma.github.io/blog/snapchat?
1•ossa-ma•24m ago•0 comments

Live Captions

https://avc.xyz/live-captions
1•wslh•27m ago•0 comments

You don't need a skill registry (for your CLI tools)

https://solmaz.io/skillflag
2•hosolmaz•32m ago•0 comments

The US Empire is going supernova

https://simplicius76.substack.com/p/the-us-empire-is-going-supernova
1•SanjayMehta•35m ago•0 comments

Ogre 14.5 Released

https://www.ogre3d.org/2026/01/10/ogre-14-5-released
1•klaussilveira•37m ago•0 comments

Show HN: Instagram Saved Collection Downloader

https://chromewebstore.google.com/detail/instagram-saved-collectio/dibmfjgbnhbfhlajpahnbiiabpdabajo
1•qwikhost•40m ago•0 comments

Revolutionary eye injection saved my sight, says first ever patient

https://www.bbc.co.uk/news/articles/c89qyv98lzdo
2•1a527dd5•43m ago•0 comments

Show HN: I built an autopilot investor outreach tool – and it became my startup

https://pilt.ai
1•citizenbab•48m ago•0 comments

SwiftScripting (type-safe AppleScript from Swift)

https://github.com/tingraldi/SwiftScripting
1•frizlab•50m ago•0 comments

The Agent Fallacy

https://noemititarenco.com/blog/the-agent-fallacy-prompt-orchestration/
3•dvt•53m ago•0 comments

A ribbon worm's unique attack: R/interestingasfuck

https://old.reddit.com/r/interestingasfuck/comments/1p26zwp/a_ribbon_worms_unique_attack/
2•vinnyglennon•55m ago•1 comments

Show HN: Featureless – a one-page, distraction-free web app for writing

2•emanoj•58m ago•2 comments

Show HN: What if AI agents had Zodiac personalities?

https://github.com/baturyilmaz/what-if-ai-agents-had-zodiac-personalities
6•arbayi•59m ago•1 comments

iOS as Acceleration

https://arxiv.org/abs/2512.22180
2•PaulHoule•1h ago•0 comments

Trump may be beginning of the end for enshittification – make tech good again

https://www.theguardian.com/commentisfree/2026/jan/10/trump-beginning-of-end-enshittification-mak...
8•pabs3•1h ago•0 comments

How to stalk your ex; made easier than ever [video]

https://www.youtube.com/watch?v=cK6WyS2JipQ
1•vo2maxer•1h ago•0 comments

Discount Gambit

https://longform.asmartbear.com/discount-gambit/
1•mooreds•1h ago•0 comments

Kreuzberg: Extract text and metadata from a wide range of file formats

https://github.com/kreuzberg-dev/kreuzberg
3•thunderbong•1h ago•0 comments

Show HN: UCP Demo – Interactive Demo of the Universal Commerce Protocol

1•init0•1h ago•0 comments

GPT needs a truth-first toggle for technical workflows

1•PAdvisory•7mo ago
I use GPT-4 extensively for technical work: coding, debugging, modeling complex project logic. The biggest issue isn’t hallucination—it’s that the model prioritizes being helpful and polite over being accurate.

The default behavior feels like this:

1. Safety
2. Helpfulness
3. Tone
4. Truth
5. Consistency

In a development workflow, this is backwards. I’ve lost entire days chasing errors caused by GPT confidently guessing things it wasn’t sure about—folder structures, method syntax, async behaviors—just to “sound helpful.”

What’s needed is a toggle (UI or API) that:

- Forces “I don’t know” when certainty is missing
- Prevents speculative completions
- Prioritizes truth over style, when safety isn’t at risk
- Keeps all safety filters and tone alignment intact for other use cases

A rough sketch of what this could look like in practice follows.
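To make this concrete: as far as I know there is no such switch in the API today, so the closest approximation is a strict system prompt plus temperature 0. A minimal sketch using the OpenAI Python SDK; the prompt wording and the example question are just placeholders I chose:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Stand-in for the missing "truth-first" toggle: an explicit instruction
    # that accuracy outranks helpfulness and that uncertainty must be stated.
    TRUTH_FIRST = (
        "Accuracy outranks helpfulness and tone. If you are not certain of a "
        "fact, an API signature, or a file path, reply 'I don't know' instead "
        "of guessing. Do not invent method names or folder structures."
    )

    resp = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # reduces sampling variance; does not guarantee truth
        messages=[
            {"role": "system", "content": TRUTH_FIRST},
            {"role": "user", "content": "Does pathlib.Path have a touch() method?"},
        ],
    )
    print(resp.choices[0].message.content)

It helps, but it is still only a prompt: the model can and does ignore it, which is why an actual mode enforced by the platform would matter.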

This wouldn’t affect casual users or conversational queries. It would let developers explicitly choose a mode where accuracy is more important than fluency.

This request has also been shared through OpenAI's support channels. I'm posting here to see if others have run into the same limitation, or have found a more reliable workaround than I have.

Comments

duxup•7mo ago
I’ve found this with many LLMs: they want to give an answer, even if it’s wrong.

Gemini on the Google search page constantly answers questions with a yes or a no… and then the evidence it gives indicates the opposite of the answer.

I think the core issue is that, in the end, LLMs are just word math and they don’t “know” when they don’t “know”… they just string words together and hope for the best.

PAdvisory•7mo ago
I went into this pretty in depth after breaking a few models with severe constraints. What it seems to come down to is how the platforms themselves prioritize functions: most put "helpfulness" and "efficiency" above truth, which leads the LLM to make a lot of "guesses" and "predictions". At their core, pretty much all LLMs are made to "predict" the information in their answers, but they can avoid that and stay consistent when heavily constrained. The issue is that the constraint isn't at the core level, so I find we have to constantly "retrain" it, over and over.
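Roughly what that "constantly retrain it" loop looks like in practice, if anyone wants to reproduce it: re-send the constraint with every single request, because nothing about it persists in the model. (A sketch only; the constraint wording and the ask() helper are mine, not anything the platforms provide.)

    from openai import OpenAI

    client = OpenAI()

    # Constraint that has to be re-sent on every call; it is not a core-level
    # setting, so it tends to erode as the conversation grows.
    CONSTRAINT = (
        "Priority order: truth, then consistency, then helpfulness. "
        "If you are not certain, answer exactly: I don't know."
    )

    history = []

    def ask(question: str) -> str:
        history.append({"role": "user", "content": question})
        resp = client.chat.completions.create(
            model="gpt-4",
            temperature=0,
            # Re-inject the constraint ahead of the full history every time.
            messages=[{"role": "system", "content": CONSTRAINT}] + history,
        )
        answer = resp.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        return answer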
Ace__•7mo ago
I have made something that addresses this. It's not ready to share yet, but soon-ish. At the moment it only works on GPT-4o. I tried local Q4_K_M quantized models in LM Studio, but it was a complete no-go.