
GPT needs a truth-first toggle for technical workflows

1•PAdvisory•8mo ago
I use GPT-4 extensively for technical work: coding, debugging, modeling complex project logic. The biggest issue isn’t hallucination—it’s that the model prioritizes being helpful and polite over being accurate.

The default behavior feels like this priority order:

1. Safety
2. Helpfulness
3. Tone
4. Truth
5. Consistency

In a development workflow, this is backwards. I’ve lost entire days chasing errors caused by GPT confidently guessing things it wasn’t sure about—folder structures, method syntax, async behaviors—just to “sound helpful.”

What’s needed is a toggle (UI or API) that:

- Forces “I don’t know” when certainty is missing
- Prevents speculative completions
- Prioritizes truth over style when safety isn’t at risk
- Keeps all safety filters and tone alignment intact for other use cases

This wouldn’t affect casual users or conversational queries. It would let developers explicitly choose a mode where accuracy is more important than fluency.
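
Until something like this exists server-side, the closest approximation I’ve found is pinning the behavior in a system prompt on every request. A rough sketch with the OpenAI Python SDK (the TRUTH_FIRST wording and the ask helper are placeholders of mine, not an official mode, and they can only ask for this behavior, not enforce it):

    # pip install openai
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Placeholder instructions approximating the proposed toggle.
    TRUTH_FIRST = (
        "You are assisting with technical work. "
        "If you are not certain of an answer, say 'I don't know' "
        "instead of guessing. Never invent file paths, method names, "
        "or API signatures. Prefer a short correct answer over a "
        "fluent speculative one."
    )

    def ask(question: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4",
            temperature=0,  # less sampling variety; adds no actual knowledge
            messages=[
                {"role": "system", "content": TRUTH_FIRST},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

In my experience this reduces, but does not eliminate, the confident guessing, which is exactly why a real server-side toggle would be worth having.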

This request has also been shared through OpenAI's support channels. Posting here to see if others have run into the same limitation, or have worked around it more reliably than I have.

Comments

duxup•8mo ago
I’ve found this with many LLMs: they want to give an answer, even if it’s wrong.

Gemini on the Google search page constantly answers questions with a flat yes or no… and then the evidence it cites indicates the opposite of its answer.

I think the core issue is that, in the end, LLMs are just word math. They don’t “know” what they don’t “know”… they just string words together and hope for the best.

PAdvisory•8mo ago
I dug into this pretty deeply after breaking a few models with severe constraints. What it seems to come down to is how the platforms themselves prioritize functions: most put "helpfulness" and "efficiency" above truth, which leads the LLM to make a lot of guesses and predictions. At their core, pretty much all LLMs are built to predict the information in their answers, but they can avoid that and stay consistent when heavily constrained. The problem is that the constraint isn't at the core level, so I find we have to constantly re-assert it, over and over.
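
To make "constantly re-assert it" concrete: because the constraint lives only in the prompt, it has to be re-sent with every single call. A rough sketch of what that looks like in a chat loop (Python with the OpenAI SDK; the SYSTEM wording is just an example, not anything official):

    from openai import OpenAI

    client = OpenAI()
    SYSTEM = "If you are not certain, say 'I don't know' instead of guessing."

    # The model keeps no server-side memory of the constraint, so the
    # system message rides along with every request in the history.
    history = [{"role": "system", "content": SYSTEM}]

    def chat(user_msg: str) -> str:
        history.append({"role": "user", "content": user_msg})
        reply = client.chat.completions.create(
            model="gpt-4",
            messages=history,
            temperature=0,
        ).choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        return reply
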
Ace__•8mo ago
I have made something that addresses this. Not ready to share it yet, but soon-ish. At the moment it only works on GPT-4o. I tried local Q4_K_M quantized models in LM Studio, but it was a complete no-go.