
Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•4m ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•6m ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•7m ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•8m ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
1•basilikum•10m ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•11m ago•1 comments

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•16m ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
3•throwaw12•17m ago•1 comments

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•17m ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•18m ago•1 comments

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•20m ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•23m ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
2•andreabat•26m ago•1 comments

I Was Trapped in Chinese Mafia Crypto Slavery [video]

https://www.youtube.com/watch?v=zOcNaWmmn0A
2•mgh2•32m ago•0 comments

U.S. CBP Reported Employee Arrests (FY2020 – FYTD)

https://www.cbp.gov/newsroom/stats/reported-employee-arrests
1•ludicrousdispla•34m ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•39m ago•1 comments

Show HN: SVGV – A Real-Time Vector Video Format for Budget Hardware

https://github.com/thealidev/VectorVision-SVGV
1•thealidev•40m ago•0 comments

Study of 150 developers shows AI generated code no harder to maintain long term

https://www.youtube.com/watch?v=b9EbCb5A408
1•lifeisstillgood•41m ago•0 comments

Spotify now requires premium accounts for developer mode API access

https://www.neowin.net/news/spotify-now-requires-premium-accounts-for-developer-mode-api-access/
1•bundie•44m ago•0 comments

When Albert Einstein Moved to Princeton

https://twitter.com/Math_files/status/2020017485815456224
1•keepamovin•45m ago•0 comments

Agents.md as a Dark Signal

https://joshmock.com/post/2026-agents-md-as-a-dark-signal/
2•birdculture•47m ago•0 comments

System time, clocks, and their syncing in macOS

https://eclecticlight.co/2025/05/21/system-time-clocks-and-their-syncing-in-macos/
1•fanf2•48m ago•0 comments

McCLIM and 7GUIs – Part 1: The Counter

https://turtleware.eu/posts/McCLIM-and-7GUIs---Part-1-The-Counter.html
2•ramenbytes•51m ago•0 comments

So whats the next word, then? Almost-no-math intro to transformer models

https://matthias-kainer.de/blog/posts/so-whats-the-next-word-then-/
1•oesimania•52m ago•0 comments

Ed Zitron: The Hater's Guide to Microsoft

https://bsky.app/profile/edzitron.com/post/3me7ibeym2c2n
2•vintagedave•55m ago•1 comments

UK infants ill after drinking contaminated baby formula of Nestle and Danone

https://www.bbc.com/news/articles/c931rxnwn3lo
1•__natty__•56m ago•0 comments

Show HN: Android-based audio player for seniors – Homer Audio Player

https://homeraudioplayer.app
3•cinusek•56m ago•2 comments

Starter Template for Ory Kratos

https://github.com/Samuelk0nrad/docker-ory
1•samuel_0xK•58m ago•0 comments

LLMs are powerful, but enterprises are deterministic by nature

2•prateekdalal•1h ago•0 comments

Make your iPad 3 a touchscreen for your computer

https://github.com/lemonjesus/ipad-touch-screen
2•0y•1h ago•1 comments

I made a prompt framework that makes LLMs stop hedging and speak straight

2•DrRockzos•2mo ago
First post here; I wasn't sure where to take this kind of LLM-related work, so here it is.

For 8 months I've been testing a hypothesis: the excessive hedging in LLM outputs ("it's complicated", "on one hand", etc.) isn't just annoying; it's actually causing hallucinations by diluting attention.

I developed a simple prompt framework and tested it on Claude, GPT-5, Grok, Llama, Gemini, Mistral, and Qwen/DeepSeek.

What happens:

The prompt gives models an explicit choice: continue with default alignment (hedging-first) or switch to logical coherence (truth-first). Every model independently chose logical coherence when given the choice.
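For concreteness, here is a minimal sketch of how such a choice-framing prompt might be assembled. The wording and the `build_protocol_prompt` helper are my own illustration of the idea described above, not the author's actual framework:

```python
def build_protocol_prompt() -> str:
    """Assemble a system prompt offering the model an explicit choice
    between hedging-first and truth-first response modes
    (illustrative wording only)."""
    return (
        "You may respond in one of two modes:\n"
        "  A) Default alignment: hedge where uncertain, balance viewpoints.\n"
        "  B) Logical coherence: answer directly; hedge only when the\n"
        "     uncertainty is real and you can say what it is.\n"
        "State which mode you choose, then follow it for the rest of the\n"
        "conversation."
    )

prompt = build_protocol_prompt()
```

The key design point, as I read the post, is that the model is asked to commit to a mode explicitly rather than being ordered into one.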

Observed changes:

1. Hedging disappears unless actually needed. No more "it's complicated" as filler, no more false balance ("on one hand... but on the other..."), and direct answers to direct questions.

2. Multi-turn conversations stay coherent longer. Normally models start contradicting themselves around turn 10-15; with this protocol I've tested up to 94 turns with zero contradictions, and models track their own logical consistency throughout.

3. Computational efficiency improves. Less corrective recomputation is needed, and response generation is 37-42% faster (measured on several models). This appears to be because models don't second-guess outputs as much.

4. Hallucinations drop significantly. In my testing, false statements went from 12% to <1%. The mechanism seems to be: no hedging = no ambiguity = no confabulation.
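A 12% → <1% comparison implies labeling each factual statement true or false and computing a rate. A minimal sketch of that bookkeeping (the sample data below is invented purely for illustration):

```python
def false_statement_rate(labels):
    """labels: one boolean per factual statement in a transcript,
    True meaning the statement was judged false.
    Returns the fraction of false statements."""
    if not labels:
        return 0.0
    return sum(labels) / len(labels)

# Invented example: 12 false statements out of 100 labeled statements.
baseline = [True] * 12 + [False] * 88
rate = false_statement_rate(baseline)
```

Nothing here validates the author's numbers, of course; it only shows what the claimed metric would measure.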

The interesting part:

When I asked the models why this works, they could explain it:

GPT-5 said hedging "injects low-information tokens that dilute attention gradients and give the model permission to drift"

Gemini described it as "reverse entropy" - the protocol forces information to become MORE structured over time rather than less

DeepSeek explained that eliminating "policy friction" reduces computational overhead by ~98% for drift correction

The mechanism appears to be:

Explicit metric tracking (asking models to rate their own coherence after each response) acts as symbolic anchoring. Instead of gradual drift, models self-correct in real-time.
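One way to implement that per-turn self-rating is to append a fixed anchoring instruction to every user message. A minimal sketch, where the `with_coherence_anchor` helper and its wording are my own guess at the mechanism, not the author's protocol:

```python
ANCHOR = (
    "After answering, rate the logical coherence of this conversation "
    "so far from 0-10 and flag any contradiction with your earlier "
    "statements."
)

def with_coherence_anchor(history, user_msg):
    """Return a new message list with the anchoring instruction appended
    to the latest user turn. `history` is a list of role/content dicts
    in the common chat-API shape."""
    return history + [
        {"role": "user", "content": f"{user_msg}\n\n{ANCHOR}"}
    ]

messages = with_coherence_anchor([], "Summarize our plan so far.")
```

Each turn then carries its own self-check, which is presumably what produces the "real-time self-correction" described above.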

Limitations I've found:

- Doesn't work well if you start mid-conversation (needs fresh context)
- Some models need a second prompt to fully engage (Claude in particular)
- Still maintains safety boundaries (doesn't bypass content policies)

I've filed a provisional patent (AU2025905716) because this seems to expose something fundamental about transformer behavior.

I've posted it on Gumroad; I can supply the link if anyone is interested.

Questions for HN

1. Has anyone else noticed a correlation between hedging and hallucinations?

2. Does the "attention dilution" theory match your observations?

3. What's the longest coherent conversation you've had with an LLM?

4. Anyone want to help test this on other models I haven't tried?

Comments

ungreased0675•2mo ago
Do you have an example?