GPT needs a truth-first toggle for technical workflows

1•PAdvisory•6mo ago
I use GPT-4 extensively for technical work: coding, debugging, modeling complex project logic. The biggest issue isn't hallucination; it's that the model prioritizes being helpful and polite over being accurate.

The default behavior feels like this priority order:

1. Safety
2. Helpfulness
3. Tone
4. Truth
5. Consistency

In a development workflow, this is backwards. I've lost entire days chasing errors caused by GPT confidently guessing at things it wasn't sure about (folder structures, method syntax, async behaviors) just to "sound helpful."
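One cheap defense against guessed method names is to mechanically verify every model-suggested dotted path against the live environment before trusting it. A minimal sketch (the helper name `verify_api_path` is my own, not from the post):

```python
import importlib

def verify_api_path(path: str) -> bool:
    """Check whether a dotted path like 'os.path.join' resolves to a real
    attribute in the current environment, so a model-suggested call can be
    verified before it is trusted."""
    module_name, _, rest = path.partition(".")
    try:
        obj = importlib.import_module(module_name)
    except ImportError:
        return False
    for attr in rest.split(".") if rest else []:
        try:
            obj = getattr(obj, attr)
        except AttributeError:
            return False
    return True

# A real call resolves; a hallucinated one does not.
print(verify_api_path("os.path.join"))      # True
print(verify_api_path("os.path.fastjoin"))  # False
```

This only confirms that a name exists, not that its semantics match what the model claimed, but it catches the "invented method" class of error in seconds rather than hours.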

What's needed is a toggle (UI or API) that:

- Forces "I don't know" when certainty is missing
- Prevents speculative completions
- Prioritizes truth over style when safety isn't at risk
- Keeps all safety filters and tone alignment intact for other use cases

This wouldn’t affect casual users or conversational queries. It would let developers explicitly choose a mode where accuracy is more important than fluency.
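No such toggle exists today, so the closest workaround is to encode those four constraints as a system instruction prepended to every request. A minimal sketch, assuming a chat-completions-style `messages` format; the wrapper name and prompt wording are mine, and how faithfully a model obeys them varies:

```python
TRUTH_FIRST_SYSTEM = (
    "You are assisting with technical work. Accuracy outranks helpfulness "
    "and tone. If you are not certain of a fact, API signature, or file "
    "layout, answer 'I don't know' instead of guessing. Do not produce "
    "speculative completions. Safety policies remain in force."
)

def truth_first_messages(user_prompt: str) -> list[dict]:
    """Wrap a user prompt with the truth-first system instruction, ready to
    pass as the `messages` argument of a chat-completions-style API call."""
    return [
        {"role": "system", "content": TRUTH_FIRST_SYSTEM},
        {"role": "user", "content": user_prompt},
    ]

msgs = truth_first_messages("What does asyncio.TaskGroup.cancel() do?")
print(msgs[0]["role"])  # system
```

This is an approximation, not a real toggle: a prompt-level instruction competes with the model's trained-in preference for helpfulness, which is exactly the problem the post argues should be fixed at the platform level.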

This request has also been shared through OpenAI's support channels. I'm posting here to see whether others have run into the same limitation or worked around it more reliably than I have.

Comments

duxup•6mo ago
I've found this with many LLMs: they want to give an answer, even if it's wrong.

Gemini on the Google search page constantly answers questions yes or no… and then the evidence it gives indicates the opposite of the answer.

I think the core issue is that, in the end, LLMs are just word math and they don't "know" when they don't "know". They just string words together and hope for the best.

PAdvisory•6mo ago
I went into this pretty in depth after breaking a few with severe constraints. What it seems to come down to is how the platforms themselves prioritize functions: most put "helpfulness" and "efficiency" above truth, which leads the LLM to make a lot of "guesses" and "predictions". At their core, pretty much all LLMs are built to "predict" the information in answers, but they can actually avoid that and remain consistent when heavily constrained. The issue is that the constraint isn't at the core level, so I find we have to constantly re-apply it over and over.
Ace__•6mo ago
I have made something that addresses this. It's not ready to share yet, but soon-ish. At the moment it only works on GPT-4o. I tried local Q4_K_M models in LM Studio, but it was a complete no-go.