GPT needs a truth-first toggle for technical workflows

1•PAdvisory•6mo ago
I use GPT-4 extensively for technical work: coding, debugging, modeling complex project logic. The biggest issue isn’t hallucination—it’s that the model prioritizes being helpful and polite over being accurate.

The default behavior feels like it follows this priority order:

1. Safety
2. Helpfulness
3. Tone
4. Truth
5. Consistency

In a development workflow, this is backwards. I’ve lost entire days chasing errors caused by GPT confidently guessing things it wasn’t sure about—folder structures, method syntax, async behaviors—just to “sound helpful.”

What’s needed is a toggle (UI or API) that:

Forces “I don’t know” when certainty is missing

Prevents speculative completions

Prioritizes truth over style, when safety isn’t at risk

Keeps all safety filters and tone alignment intact for other use cases

This wouldn’t affect casual users or conversational queries. It would let developers explicitly choose a mode where accuracy is more important than fluency.
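No such toggle exists in the API today, but the closest client-side approximation is a strict system prompt plus deterministic sampling. The sketch below just builds the keyword arguments for a standard chat-completions call; the prompt wording and the `build_truth_first_request` name are mine, not anything official:

```python
# Client-side "truth-first" workaround sketch. There is no real toggle;
# this only assembles a request payload with a strict system prompt and
# temperature 0. Prompt text and function name are illustrative.

TRUTH_FIRST_SYSTEM_PROMPT = (
    "You are assisting with technical work. Accuracy outranks helpfulness "
    "and tone. If you are not certain of a fact, API signature, file path, "
    "or behavior, say 'I don't know' instead of guessing. Never invent "
    "folder structures, method names, or async semantics."
)

def build_truth_first_request(user_message: str, model: str = "gpt-4o") -> dict:
    """Return kwargs for client.chat.completions.create(**kwargs)."""
    return {
        "model": model,
        "temperature": 0,   # deterministic sampling; fewer creative guesses
        "logprobs": True,   # expose token probabilities for confidence checks
        "messages": [
            {"role": "system", "content": TRUTH_FIRST_SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    }
```

You would pass the result to an OpenAI client (`client.chat.completions.create(**build_truth_first_request(...))`). It reduces speculative completions but cannot force honest "I don't know" answers, which is exactly why a model-level toggle would matter.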

This request has also been shared through OpenAI's support channels. I'm posting here to see whether others have run into the same limitation, or have worked around it more reliably than I have.

Comments

duxup•6mo ago
I’ve found this with many LLMs: they want to give an answer, even if it's wrong.

Gemini on the Google search page constantly answers questions yes or no… and then the evidence it gives indicates the opposite of the answer.

I think the core issue is that, in the end, LLMs are just word math, and they don’t “know” when they don’t “know”: they just string words together and hope for the best.
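The model has no built-in "I don't know" signal, but the per-token log-probabilities the chat API can return give a rough proxy: consistently low token probabilities often accompany guessed answers. A minimal sketch, assuming you have already extracted the logprob values from a response; the 0.8 threshold and both function names are arbitrary choices of mine:

```python
import math

def mean_token_probability(token_logprobs: list[float]) -> float:
    """Geometric-mean per-token probability from a list of token logprobs."""
    if not token_logprobs:
        return 0.0
    return math.exp(sum(token_logprobs) / len(token_logprobs))

def looks_uncertain(token_logprobs: list[float], threshold: float = 0.8) -> bool:
    """Heuristic: flag an answer whose average token confidence is low."""
    return mean_token_probability(token_logprobs) < threshold
```

This is a blunt instrument (fluent hallucinations can still score high), but it at least surfaces answers worth double-checking instead of trusting the confident tone.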

PAdvisory•6mo ago
I dug into this pretty deeply after breaking a few models with severe constraints. It seems to come down to how the platforms themselves prioritize behaviors: most put "helpfulness" and "efficiency" above truth, which leads the LLM to make a lot of guesses and predictions. At their core, pretty much all LLMs are built to predict, but they can avoid that and stay consistent when heavily constrained. The problem is that the constraint isn't at the core level, so I find we have to constantly re-impose it, over and over.
Ace__•6mo ago
I have made something that addresses this. It's not ready to share yet, but soon-ish. At the moment it only works on GPT-4o. I tried local Q4_K_M quantized models in LM Studio, but that was a complete no-go.