
CyberAgent moves faster with ChatGPT Enterprise and Codex

https://openai.com/index/cyber-agent
1•surprisetalk•39s ago•0 comments

Camofox-browser: Anti-detection browser for AI agents, powered by Camoufox

https://github.com/jo-inc/camofox-browser
1•luispa•1m ago•0 comments

How a stranger's kind words stayed with a father and daughter

https://text.npr.org/nx-s1-5774524
1•1659447091•6m ago•0 comments

New NextTool Mini Flagship

https://nextoolstore.com/products/mini-flagship-f12-nextool®?srsltid=AfmBOooF5hZ7TFUk05yeE5Po3gmP...
1•nate•7m ago•1 comment

Kagi Product Tips – Customize Your Search Results with URL Redirects

https://blog.kagi.com/tips/redirects
2•treetalker•10m ago•0 comments

goose has a new home – the Agentic AI Foundation (AAIF)

https://goose-docs.ai/blog/2026/04/07/goose-moves-to-aaif/
1•wicket•13m ago•0 comments

Did Airbnb, Medium, Beats, and Flipboard Rip Off Their Logos? (2016)

https://thehustle.co/airbnb-medium-beats-flipboard-logo
1•bookofjoe•15m ago•0 comments

Verification Is the Next Bottleneck in AI-Assisted Development

https://www.opslane.com/blog/verification-bottleneck
1•aray07•16m ago•0 comments

OpenAI looks to take on Anthropic with $100 per month ChatGPT Pro subscriptions

https://www.cnbc.com/2026/04/09/openai-chatgpt-pro-subscription-anthropic-claude-code.html
1•HiroProtagonist•17m ago•0 comments

AI micro-dramas are shaking up Chinese entertainment

https://economist.com/china/2026/04/09/ai-micro-dramas-are-shaking-up-chinese-entertainment
1•andsoitis•19m ago•0 comments

The AI Jobs Blind Spot: Why Job Creation Is the Default

https://substack.norabble.com/p/the-ai-jobs-blind-spot
1•nedruod•21m ago•0 comments

Gitbutler

https://gitbutler.com/
2•handfuloflight•21m ago•0 comments

Perplexity computer is based on the OSS browser use library

https://twitter.com/mamagnus00/status/2042339700082610345
3•whytai•23m ago•0 comments

BYD teams up with KFC to offer 9 minute EV charging

https://electrek.co/2026/04/09/byd-fast-food-giant-offer-9-minute-ev-charging/
2•breve•23m ago•0 comments

Sora Fuel Raised $14.6M to Bottle the Sky

https://www.siliconsnark.com/sora-fuel-raised-14-6-million-to-bottle-the-sky-honestly-respect/
1•SaaSasaurus•24m ago•0 comments

IMDB created my account for merely visiting the site

4•astr0n0m3r•28m ago•2 comments

CNN investigation: Exposing a global 'rape academy'

https://www.cnn.com/interactive/2026/03/world/expose-rape-assault-online-vis-intl/index.html
6•1659447091•31m ago•0 comments

New to Hackerview

1•foxxyyybusiness•33m ago•3 comments

Show HN: Idontuselinkedin.com

https://idontuselinkedin.com
7•jmholla•34m ago•4 comments

InterviewGPT: Stop Guessing. Start Scaling. Land Your Dream FAANG Offer

https://interviewgpt.deepchill.app/
2•tiancaioyzy•34m ago•0 comments

Do Science in Bed

https://monsharen.github.io/Peer/
2•ycombinatornu•35m ago•0 comments

How Microsoft Abuses Its Users

https://lzon.ca/posts/other/microsoft-user-abuse/
9•jpmitchell•35m ago•0 comments

Apple and Lenovo have the least repairable laptops, analysis finds

https://arstechnica.com/gadgets/2026/04/apple-has-the-lowest-grades-in-laptop-phone-repairability...
1•josephcsible•37m ago•0 comments

Show HN: Solving digital piracy with game theory instead of DRM

https://piecely.app/explore
2•johndebord•38m ago•1 comment

Ford patents lip reading and emotion detection inside the car [video]

https://www.youtube.com/watch?v=g5V3cxjDaFU
3•_DeadFred_•40m ago•0 comments

Researchers turn recovered car battery acid, plastic waste into clean hydrogen

https://www.cam.ac.uk/research/news/researchers-turn-recovered-car-battery-acid-and-plastic-waste...
2•gmays•41m ago•0 comments

Open source, agentic knowledge bases for all of humanity's knowledge

https://alpharesearch.nyc/blog/launching-alpha-research/
2•rprend•42m ago•0 comments

Launch of Artemis II: Rocket Camera Views [video]

https://www.youtube.com/watch?v=mn7WMowM1xY
1•Yukonv•45m ago•0 comments

Moving from WordPress to Jekyll (and static site generators in general)

https://www.demandsphere.com/blog/rebuilding-demandsphere-with-jekyll-and-claude-code/
5•rgrieselhuber•45m ago•0 comments

Secure AI Agent Connections to Enterprise Tools

https://www.arcade.dev/blog/connect-ai-agents-enterprise-tools/
2•manveerc•45m ago•0 comments

GPT needs a truth-first toggle for technical workflows

1•PAdvisory•10mo ago
I use GPT-4 extensively for technical work: coding, debugging, modeling complex project logic. The biggest issue isn't hallucination; it's that the model prioritizes being helpful and polite over being accurate.

The default priority ordering feels like this:

1. Safety
2. Helpfulness
3. Tone
4. Truth
5. Consistency

In a development workflow, this is backwards. I've lost entire days chasing errors caused by GPT confidently guessing at things it wasn't sure about (folder structures, method syntax, async behaviors) just to "sound helpful."

What’s needed is a toggle (UI or API) that:

Forces “I don’t know” when certainty is missing

Prevents speculative completions

Prioritizes truth over style when safety isn't at risk

Keeps all safety filters and tone alignment intact for other use cases

This wouldn’t affect casual users or conversational queries. It would let developers explicitly choose a mode where accuracy is more important than fluency.
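Until something like this exists as a first-class setting, the toggle can be roughly approximated client-side. A minimal Python sketch, assuming the OpenAI chat-completions request shape; the model name, prompt wording, and `build_request` helper are all illustrative, not an official "truth-first" feature:

```python
# A client-side approximation of a "truth-first" mode: a strict system
# prompt plus temperature 0. This only nudges the model; it cannot
# guarantee calibrated "I don't know" answers.

TRUTH_FIRST_SYSTEM = (
    "You are assisting with technical work. Accuracy outranks helpfulness "
    "and tone. If you are not certain of a fact (an API signature, a file "
    "path, a version-specific behavior), say 'I don't know' instead of "
    "guessing. Never invent identifiers or offer speculative completions."
)

def build_request(user_prompt: str, truth_first: bool = True) -> dict:
    """Build a chat-completions payload with an optional truth-first mode."""
    messages = []
    if truth_first:
        # System message is prepended so it frames every user turn.
        messages.append({"role": "system", "content": TRUTH_FIRST_SYSTEM})
    messages.append({"role": "user", "content": user_prompt})
    return {
        "model": "gpt-4",     # illustrative model name
        "messages": messages,
        "temperature": 0,     # deterministic, least speculative sampling
    }
```

The payload would then be sent through whatever client library is in use; the point is that the "toggle" reduces to one boolean that swaps the system framing and sampling settings, which is why a UI or API switch would be cheap to offer.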

This request has also been shared through OpenAI's support channels. I'm posting here to see whether others have run into the same limitation, or have worked around it more reliably than I have.

Comments

duxup•10mo ago
I’ve found this with many LLMs: they want to give an answer, even if it's wrong.

Gemini on the Google search page constantly answers questions yes or no… and then the evidence it gives indicates the opposite of the answer.

I think the core issue is that, in the end, LLMs are just word math: they don’t “know” when they don’t “know.” They just string words together and hope for the best.

PAdvisory•10mo ago
I went into this pretty deeply after breaking a few models with severe constraints. What it seems to come down to is how the platforms themselves prioritize functions: most put "helpfulness" and "efficiency" above truth, which leads the LLM to make a lot of guesses and predictions. At their core, pretty much all LLMs are built to predict the information in answers, but they can avoid that and remain consistent when heavily constrained. The issue is that this isn't enforced at the core level, so I find we have to constantly re-apply the constraints.
Ace__•10mo ago
I have made something that addresses this. Not ready to share it yet, but soon-ish. At the moment it only works on GPT-4o. I tried local Q4_K_M quantized models in LM Studio, but they were a complete no-go.