Skyrocketing HBM Will Push Micron Through $45B and Beyond

https://www.nextplatform.com/2025/06/30/skyrocketing-hbm-will-push-micron-through-10-billion-and-beyond/
1•rbanffy•46s ago•0 comments

Connected Gmail mcp to AI voice

https://harmony.com.ai
1•bigonion•1m ago•1 comments

Boeing to Replace CFO Brian West with Former Lockheed Finance Chief

https://www.bloomberg.com/news/articles/2025-06-30/boeing-to-replace-cfo-west-with-former-lockheed-finance-chief
1•Bluestein•1m ago•0 comments

Simple low-dimensional computations explain variability in neuronal activity

https://arxiv.org/abs/2504.08637
1•iNic•4m ago•0 comments

From Pokémon Red to Standardized Game-as-an-Eval

https://lmgame.org
2•Yuxuan_Zhang13•7m ago•1 comments

The Whole Country Is Starting to Look Like California

https://www.theatlantic.com/economy/archive/2025/06/zoning-sun-belt-housing-shortage/683352/
2•ryan_j_naughton•7m ago•0 comments

Eigenvalues of Generative Media

https://stackdiver.com/posts/eigenvalues-of-generative-media/
2•d0tn3t•7m ago•1 comments

Brazil's Supreme Court clears way to hold social media liable for user content

https://apnews.com/article/brazil-supreme-court-social-media-ruling-324b9d79caa9f9e063da8a4993e382e1
2•rbanffy•10m ago•0 comments

The New Skill in AI Is Not Prompting, It's Context Engineering

https://www.philschmid.de/context-engineering
2•robotswantdata•12m ago•0 comments

Ask HN: When will YC do a batch in Europe and/or Asia?

2•HSO•12m ago•2 comments

Repurposed Materials

https://www.repurposedmaterialsinc.com/view-all-products/
1•bookofjoe•13m ago•0 comments

Liberals, you must reclaim Adam Smith

https://davidbrin.blogspot.com/2013/11/liberals-you-must-reclaim-adam-smith.html
2•matthest•14m ago•1 comments

Symbients on Stage Coming Soon: Autonomous AI Entrepreneurs

https://www.forbes.com/sites/robertwolcott/2025/06/30/symbients-on-stage-coming-soon-autonomous-ai-entrepreneurs/
1•Bluestein•15m ago•0 comments

Can Large Language Models Help Students Prove Software Correctness?

https://arxiv.org/abs/2506.22370
1•elashri•18m ago•0 comments

Developing with GitHub Copilot Agent Mode and MCP

https://austen.info/blog/github-copilot-agent-mcp/
1•miltonlaxer•19m ago•0 comments

I got removed from GitHub for making open source stuff

2•Hasturdev•20m ago•2 comments

NASA plans to stream rocket launches on Netflix starting this summer

https://www.cnbc.com/2025/06/30/nasa-rocket-launches-netflix.html
2•rustoo•21m ago•1 comments

Large Language Model-Powered Agent for C to Rust Code Translation

https://arxiv.org/abs/2505.15858
2•elashri•23m ago•0 comments

Let's create a Tree-sitter grammar

https://www.jonashietala.se/blog/2024/03/19/lets_create_a_tree-sitter_grammar/
2•fanf2•24m ago•0 comments

Musk said to bet on Tesla delivering Robotaxi in June, those who did lost big

https://electrek.co/2025/06/30/elon-musk-bet-tesla-delivering-robotaxi-june-lost-big/
2•reaperducer•24m ago•1 comments

The story how I acquired the domain name Onions.com

https://twitter.com/searchbound/status/1939658564420641064
1•eightturn•25m ago•1 comments

Offline-First AI Platform for Resilient Edge and IoT Applications

https://github.com/GlobalSushrut/mcp-zero
1•Global_Sushrut•27m ago•0 comments

Three-Dimensional Time: A Mathematical Framework for Fundamental Physics

https://www.worldscientific.com/doi/10.1142/S2424942425500045
1•haunter•28m ago•0 comments

Young job applicants fight fire (ATS systems) with fire (AI) – Global trends

https://www.coversentry.com/ai-job-search-statistics
2•coversentry•29m ago•0 comments

Google to buy fusion startup Commonwealth's power- if they can ever make it work

https://www.theregister.com/2025/06/30/google_fusion_commonwealth/
1•rntn•30m ago•0 comments

A Haaretz article on dispersing crowds became a story on the IDF shooting people

https://twitter.com/AdamRFisher/status/1938959933803728997
3•nailer•30m ago•4 comments

Apple Execs on what went wrong with Siri, iOS 26 and more [video]

https://www.youtube.com/watch?v=wCEkK1YzqBo
1•amai•31m ago•0 comments

Adding Text-to-Speech to Your Blog with OpenAI's TTS API

https://econoben.dev/posts/adding-text-to-speech-to-your-blog-openai-tts-pipeline
1•EconoBen•36m ago•1 comments

Do Car Buyers Care Which Engine Is Under the Hood? A Ford Exec Doesn't Think So

https://www.thedrive.com/news/do-car-buyers-care-which-engine-is-under-the-hood-a-ford-exec-doesnt-think-so
3•PaulHoule•40m ago•1 comments

CertMate – SSL Certificate Management System

https://github.com/fabriziosalmi/certmate
2•indigodaddy•41m ago•0 comments

GPT needs a truth-first toggle for technical workflows

1•PAdvisory•1mo ago
I use GPT-4 extensively for technical work: coding, debugging, modeling complex project logic. The biggest issue isn’t hallucination—it’s that the model prioritizes being helpful and polite over being accurate.

The default behavior feels like this priority ordering:

1. Safety
2. Helpfulness
3. Tone
4. Truth
5. Consistency

In a development workflow, this is backwards. I’ve lost entire days chasing errors caused by GPT confidently guessing things it wasn’t sure about—folder structures, method syntax, async behaviors—just to “sound helpful.”

What’s needed is a toggle (UI or API) that:

Forces “I don’t know” when certainty is missing

Prevents speculative completions

Prioritizes truth over style when safety isn’t at risk

Keeps all safety filters and tone alignment intact for other use cases

This wouldn’t affect casual users or conversational queries. It would let developers explicitly choose a mode where accuracy is more important than fluency.

This request has also been shared through OpenAI's support channels. I'm posting here to see whether others have run into the same limitation, or have worked around it more reliably than the stopgap sketched below.
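The most reliable stopgap I’ve found so far is to pin these rules into a system prompt and sample at temperature 0. Below is a minimal sketch using the OpenAI Python SDK; the model name, the rule wording, and the test question are placeholders, and this only nudges the model rather than enforcing anything:

# Workaround sketch: approximate a "truth-first" mode with a strict system
# prompt and deterministic sampling. This biases the model toward admitting
# uncertainty; it does not guarantee accuracy.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TRUTH_FIRST_RULES = (
    "You are assisting with technical work. "
    "If you are not certain of a fact (API names, file layouts, method "
    "signatures, async behavior), say 'I don't know' instead of guessing. "
    "Never invent folder structures or function signatures. "
    "Prefer short, verifiable answers over fluent speculation."
)

def ask(question: str, model: str = "gpt-4o") -> str:
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # reduces stylistic padding and variance between runs
        messages=[
            {"role": "system", "content": TRUTH_FIRST_RULES},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Does Python 3.9 have asyncio.TaskGroup?"))

This is only a prompt-level nudge, though; because nothing enforces it at the platform level, a built-in toggle would still be the more reliable fix.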

Comments

duxup•1mo ago
I’ve found this with many LLMs: they want to give an answer, even if it’s wrong.

Gemini on the Google search page constantly answers questions yes or no… and then the evidence it gives indicates the opposite of the answer.

I think the core issue is that, in the end, LLMs are just word math and they don’t “know” when they don’t “know”… they just string words together and hope for the best.
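The closest thing the OpenAI chat completions API exposes today is per-token log-probabilities, which at least show where the sampler was hedging. A rough sketch (model name is just an example; a low-probability token is a hint that the model was unsure, not proof that a claim is wrong):

# Rough uncertainty proxy: request per-token log-probabilities and flag the
# tokens the model was least sure about. Treat this as a hint, not a truth
# signal: confident tokens can still be factually wrong.
import math
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    logprobs=True,
    messages=[{"role": "user", "content": "Which Python version added the match statement?"}],
)

choice = response.choices[0]
print(choice.message.content)
for token_info in choice.logprobs.content:
    p = math.exp(token_info.logprob)
    if p < 0.5:  # arbitrary threshold for "the sampler hesitated here"
        print(f"low-confidence token: {token_info.token!r} (p={p:.2f})")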

PAdvisory•1mo ago
I went into this pretty in depth after breaking a few models with severe constraints. What it seems to come down to is how the platforms themselves prioritize functions: most put "helpfulness" and "efficiency" above truth, which then leads the LLM to make a lot of guesses and predictions. At their core, pretty much all LLMs are made to "predict" the information in their answers, but they can avoid that and remain consistent when heavily constrained. The issue is that the constraint isn't at the core level, so I find we have to reassert it constantly, over and over.

Ace__•1mo ago
I have made something that addresses this. Not ready to share it yet, but soon-ish. At the moment it only works on GPT-4o. I tried local Q4_K_M quantized models in LM Studio, but that was a complete no-go.