frontpage.


Epupp – Browser Extension to Tamper with Web Pages, Live and with Userscripts

https://github.com/PEZ/epupp
1•TheWiggles•1m ago•0 comments

History and Science of the Hanta Virus

https://distressedscientists.substack.com/p/hantan-hondius
1•helsinkiandrew•3m ago•0 comments

Fusion's cost floor: what if the core were free?

https://1cfe.substack.com/p/fusions-cost-floor-what-if-the-core
1•helsinkiandrew•5m ago•0 comments

Multiple universities forced to reschedule final exams after Canvas incident

https://therecord.media/universities-forced-to-reschedule-exams-canvas-incident
1•jruohonen•8m ago•0 comments

Plants can 'hear' rain coming, spurring them into action

https://www.scientificamerican.com/article/plants-can-hear-rain-coming-spurring-them-into-action/
1•the-mitr•9m ago•0 comments

Tracing tokens through Llama 3.1 8B inference on H100s

https://krithik.xyz/what-is-inference-actually
1•krithik_7•10m ago•0 comments

Show HN: I audited my own back ends on 5 BaaS – leak in every one

https://github.com/Perufitlife/supabase-security-skill
1•renzom13•11m ago•0 comments

Notes on using GNU Emacs' Tramp system in an unusual shell environment

https://utcc.utoronto.ca/~cks/space/blog/programming/EmacsTrampNotes
1•susam•12m ago•0 comments

Best AI coding plan alternative to Claude and ChatGPT

2•Jsttan•17m ago•1 comment

Debian must ship reproducible packages

https://lists.debian.org/debian-devel-announce/2026/05/msg00001.html
3•robalni•22m ago•0 comments

Agent Harness Engineering

https://twitter.com/addyosmani/status/2053231239721885918
3•pretext•28m ago•0 comments

Late-interaction rerank made our F1 worse, not better – a negative result

https://sverklo.com/blog/late-interaction-rerank-made-our-f1-worse/
1•nike-17•32m ago•0 comments

A Field Study of Institutional Control in an AI-Staffed Prediction-Market Desk

https://github.com/wes-zheng/ai_institutions/blob/main/technical_report/paper.md
3•bbcf•43m ago•0 comments

When life gives you lemons, write better error messages

https://wix-ux.com/when-life-gives-you-lemons-write-better-error-messages-46c5223e1a2f
2•dnw•51m ago•1 comment

Zeta2.1: 3x Fewer Tokens, 50ms Faster

https://zed.dev/blog/zeta2-1
2•ms7892•1h ago•0 comments

Scouting's Real Crisis Is Not Marketing. It Is Decades of Neglect.

https://www.untendedfire.org/2026/05/09/scoutings-real-crisis-is-not-marketing-it-is-decades-of-n...
2•AuthorizedCust•1h ago•0 comments

Giant Virginia Data Center Project Upended by Clerical Error

https://www.bloomberg.com/news/articles/2026-05-08/giant-data-center-project-in-virginia-upended-...
1•1vuio0pswjnm7•1h ago•0 comments

NYC School District Hit by Malware Attack as Well as Canvas Hack

https://www.bloomberg.com/news/articles/2026-05-08/canvas-hack-on-nyc-schools-comes-amid-separate...
2•1vuio0pswjnm7•1h ago•0 comments

Student hackers get revenge on final exams as 'ShinyHunters' attacks nearly 9k schools

https://fortune.com/2026/05/08/student-hackers-get-revenge-on-final-exams-as-shinyhunters-takes-d...
1•1vuio0pswjnm7•1h ago•1 comment

Aurora: A Leverage-Aware Optimizer for Rectangular Matrices

https://blog.tilderesearch.com/blog/aurora
1•matt_d•1h ago•0 comments

What is Elon Musk's formula?

https://www.economist.com/culture/2026/05/07/what-is-elon-musks-formula
1•andsoitis•1h ago•0 comments

Hantavirus tracker with Pandemic 2's UI

https://hantavirus.xetera.dev/
1•xetera•1h ago•1 comment

Atlas Mehs

https://darthcoder.github.io/2026/05/10/atlas-mehs/
1•basyt•1h ago•1 comment

The Title on Your Badge Is Becoming a Guess

https://priorcontext.substack.com/p/the-title-on-your-badge-is-becoming
2•contextwindow•1h ago•1 comment

Arrow Flight vs. JSON in Next.js: Benchmarking Python and Go

https://kayhan.dev/posts/012-arrow-flight-vs-json-nextjs-snowflake-benchmark/
1•keynha•1h ago•0 comments

The Hidden Reason Screwdriver Handles Look Like This [video]

https://www.youtube.com/watch?v=sGiRSA_GWK8
2•CharlesW•1h ago•0 comments

Global AI Diffusion Q1 2026 Trends and Insights

https://www.microsoft.com/en-us/corporate-responsibility/dmc/topics/ai-economy-institute/reports/...
2•igor_mart•1h ago•2 comments

Grinder12: 0.96-Bit Lossless Streaming KV-Cache (16.55x VRAM Savings)

https://github.com/ggml-org/llama.cpp/discussions/22891
3•AMICLLC•2h ago•0 comments

Xs of Y – roguelike that names itself every run. Written in 4kLoC

https://github.com/nooga/xsofy
4•andsoitis•2h ago•0 comments

Gemini API File Search is now multimodal

https://blog.google/innovation-and-ai/technology/developers-tools/expanded-gemini-api-file-search...
42•gmays•2h ago•2 comments

GPT needs a truth-first toggle for technical workflows

1•PAdvisory•11mo ago
I use GPT-4 extensively for technical work: coding, debugging, modeling complex project logic. The biggest issue isn't hallucination; it's that the model prioritizes being helpful and polite over being accurate.

The default priority order feels like this:

1. Safety
2. Helpfulness
3. Tone
4. Truth
5. Consistency

In a development workflow, this is backwards. I've lost entire days chasing errors caused by GPT confidently guessing at things it wasn't sure about (folder structures, method syntax, async behavior) just to "sound helpful."

What's needed is a toggle (UI or API) that:

- Forces "I don't know" when certainty is missing
- Prevents speculative completions
- Prioritizes truth over style when safety isn't at risk
- Keeps all safety filters and tone alignment intact for other use cases

This wouldn’t affect casual users or conversational queries. It would let developers explicitly choose a mode where accuracy is more important than fluency.
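Since no such toggle exists in the API today, the closest workaround is approximating one client-side. Here is a hedged sketch of that idea; the prompt text, marker list, and function names are all invented for illustration, not an OpenAI feature:

```python
# Hypothetical client-side "truth-first mode": a strict system prompt plus a
# crude post-hoc check that flags answers which hedge instead of refusing.
# Everything named here is an assumption, not part of any official API.

TRUTH_FIRST_SYSTEM_PROMPT = (
    "You are in truth-first mode. Rules, in priority order:\n"
    "1. If you are not certain, reply exactly: I don't know.\n"
    "2. Never guess file paths, method signatures, or API behavior.\n"
    "3. Prefer a short correct answer over a fluent speculative one.\n"
)

# Phrases that suggest the model is speculating rather than admitting uncertainty.
SPECULATION_MARKERS = ("probably", "likely", "should be", "i believe", "typically")

def build_truth_first_messages(user_prompt: str) -> list[dict]:
    """Wrap a user prompt with the strict system instruction."""
    return [
        {"role": "system", "content": TRUTH_FIRST_SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]

def looks_speculative(answer: str) -> bool:
    """Flag answers that hedge; a caller could retry or warn on these."""
    lowered = answer.lower()
    return any(marker in lowered for marker in SPECULATION_MARKERS)

messages = build_truth_first_messages("What does this method return on timeout?")
print(messages[0]["role"])                            # system
print(looks_speculative("It should probably work."))  # True
print(looks_speculative("I don't know."))             # False
```

The messages list would be passed to a chat completion call as-is; the marker check is obviously a blunt instrument, which is exactly why a real, model-level toggle would be preferable.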

This request has also been shared through OpenAI's support channels. I'm posting here to see whether others have run into the same limitation, or have worked around it more reliably than I have.

Comments

duxup•11mo ago
I've found this with many LLMs: they want to give an answer, even if it's wrong.

Gemini on the Google search page constantly answers questions yes or no… and then the evidence it gives indicates the opposite of the answer.

I think the core issue is that, in the end, LLMs are just word math and they don't "know" when they don't "know"… they just string words together and hope for the best.

PAdvisory•11mo ago
I went into this pretty in depth after breaking a few models with severe constraints. What it seems to come down to is how the platforms themselves prioritize functions: most put "helpfulness" and "efficiency" above truth, which leads the LLM to make a lot of "guesses" and "predictions". At their core, pretty much all LLMs are built to "predict" the information in answers, but they can actually avoid that and remain consistent when heavily constrained. The problem is that the constraint isn't at the core level, so I find I have to constantly re-apply it, over and over.
Ace__•11mo ago
I have made something that addresses this. It's not ready to share yet, but soon-ish. At the moment it only works on GPT-4o. I tried local Q4_K_M quantized models in LM Studio, but that was a complete no-go.