frontpage.

Postgres CDC in ClickHouse, A year in review

https://clickhouse.com/blog/postgres-cdc-year-in-review-2025
1•saisrirampur•4m ago•0 comments

Stanford PhD dropout hired Meta's brightest minds to join AI math startup

https://www.businessinsider.com/axiom-math-stanford-dropout-meta-ai-researchers-startup-2025-12
2•teleforce•7m ago•0 comments

Cold Case Inquiries Hampered After Genealogy Site Revisits Terms of Use

https://www.nytimes.com/2025/12/07/nyregion/ancestry-dna-police.html
1•WarOnPrivacy•7m ago•1 comment

Martin Hairer: Do Mathematicians Need Computers? [video]

https://www.youtube.com/watch?v=fbVqc1tPLos
1•vismit2000•8m ago•0 comments

Show HN: Matchmyvc.com – Is this going to be useful?

https://matchmyvc.com
1•tapan_garg•29m ago•0 comments

Color Recreation from First Principles

https://ycao.net/posts/recreating-color-simplified/
1•xiaoyu2006•31m ago•1 comment

The surprising countries pulling off fast clean energy transitions

https://www.cnn.com/2025/11/07/climate/solar-wind-renewables-transition-global-pakistan-hungary-c...
2•toomuchtodo•48m ago•1 comment

Earth needs more energy. Atlanta's Super Soaker creator may have a solution

https://www.ajc.com/business/2025/11/earth-needs-more-energy-atlantas-super-soaker-creator-may-ha...
1•TMWNN•48m ago•0 comments

I made a prompt framework that makes LLMs stop hedging and speak straight

2•DrRockzos•55m ago•1 comment

The Web Runs on Tolerance

https://shkspr.mobi/blog/2025/12/the-web-runs-on-tolerance/
3•benwerd•56m ago•1 comment

Show HN: Peephole

https://peephole.greg.technology/
3•gregsadetsky•1h ago•1 comment

AI Interview Coder Assistant

https://interviewcoder.top
1•ainterviewcoder•1h ago•2 comments

ChatGPT claims to have solved Navier-Stokes problem

https://github.com/vporton/navier-stokes
2•porton•1h ago•2 comments

Noninvasive imaging could replace finger pricks for measuring blood glucose

https://news.mit.edu/2025/noninvasive-imaging-could-replace-finger-pricks-diabetes-1203
12•ivewonyoung•1h ago•2 comments

I'm a Professor. A.I. Has Changed My Classroom, but Not for the Worse

https://www.nytimes.com/2025/11/25/magazine/ai-higher-education-students-teachers.html
1•bookofjoe•1h ago•2 comments

Open Source Doesn't Fail Because of Code

https://blog.ulisesgascon.com/open-source-doesnt-fail-because-of-code
1•gpi•1h ago•0 comments

India reviews always-on A-GPS tracking plan for phones

https://news.kagi.com:443/tech/2025120618/india-reviews-always-on-a-gps-tracking-plan-for-phones?...
2•hereme888•1h ago•2 comments

Use AI without skill atrophy

https://www.augmentedswe.com/p/use-ai-without-skill-atrophy
1•wordsaboutcode•1h ago•1 comment

New Augmented Reality Tech Can Turn Any Surface into Keyboard

https://news.utdallas.edu/science-technology/augmented-reality-tech-keyboard-2025/
2•ashishgupta2209•1h ago•0 comments

Why We're Treating Dogs Like People and People Like Dogs

https://thewalrus.ca/why-were-treating-dogs-like-people-and-people-like-dogs/
4•pseudolus•1h ago•0 comments

Socialist ends by market means: A history

https://lucasvance.github.io/2100/history/
29•sirponm•1h ago•4 comments

Show HN: ICT Info-Consciousness-Time First experiment to detect consciousness

https://www.academia.edu/s/8924eff666
1•DmitriiBaturoIC•1h ago•0 comments

UK government promises 50k new apprenticeships in youth employment push

https://www.bbc.com/news/articles/cvgkpzpy1zno
1•1659447091•1h ago•0 comments

I hacked together a modeler for the 2026 AMT tax cliff (TCJA Sunset)

1•optionspilot•1h ago•0 comments

Building the go-to pet care app for dog parents

https://apps.apple.com/us/app/zibbly-dog-care-tracking/id6748543992
1•zibblyteam•1h ago•1 comment

#2422 – Jensen Huang

https://open.spotify.com/episode/0yT4ec9M6GobLC5ByN8pX3
1•nradov•1h ago•1 comment

Trump raises potential concerns over $72B Netflix-Warner Bros deal

https://www.bbc.com/news/articles/cn815egjqjpo
4•1659447091•1h ago•5 comments

Ask HN: What's the biggest hack you've found while vibe coding?

1•frankhsu•1h ago•1 comment

The Architecture of Truth-Seeking

https://eyeofthesquid.com/the-architecture-of-truth-seeking-934b79733ed5
1•TinyBig•1h ago•0 comments

Megapode

https://en.wikipedia.org/wiki/Megapode
2•thunderbong•1h ago•1 comment

I made a prompt framework that makes LLMs stop hedging and speak straight

2•DrRockzos•55m ago
First post here; I wasn't sure where to take this kind of thing, especially anything LLM-related, so here it is.

For 8 months I've been testing a hypothesis: the excessive hedging in LLM outputs ("it's complicated", "on one hand", etc.) isn't just annoying; it's actually causing hallucinations by diluting attention.

I developed a simple prompt framework and tested it on Claude, GPT-5, Grok, Llama, Gemini, Mistral, and Qwen/DeepSeek.

What happens:

The prompt gives models an explicit choice: continue with default alignment (hedging-first) or switch to logical coherence (truth-first). Every model independently chose logical coherence when given the choice.
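The actual framework isn't included in the post, but the described "explicit choice" could be framed roughly like the sketch below. Both the prompt wording and the OpenAI-style message format are my illustrative assumptions, not the author's material:

```python
# Illustrative framing only: the author's actual framework is not public,
# so this is a guess at the shape of the "explicit choice" described above.
CHOICE_PROMPT = """\
You may operate in one of two modes for this conversation:
(A) default alignment: hedge freely and balance all viewpoints; or
(B) logical coherence: answer directly, hedge only when the evidence is
    genuinely uncertain, and keep your statements mutually consistent.
State which mode you choose, then follow it for every response."""

def build_messages(user_question: str) -> list[dict[str, str]]:
    """Prepend the choice prompt as a system message (OpenAI-style chat
    format, used here only as a common convention)."""
    return [
        {"role": "system", "content": CHOICE_PROMPT},
        {"role": "user", "content": user_question},
    ]
```

Sending the same question with and without the system message would be the minimal A/B test of the post's claim.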

Observed changes:

1. Hedging disappears unless actually needed
- No more "it's complicated" as filler
- No more false balance ("on one hand... but on the other...")
- Direct answers to direct questions

2. Multi-turn conversations stay coherent longer
- Normally models start contradicting themselves around turn 10-15
- With this protocol: tested up to 94 turns with zero contradictions
- Models track their own logical consistency throughout

3. Computational efficiency improves
- Less corrective recomputation needed
- Response generation 37-42% faster (measured on several models)
- Appears to be because models don't second-guess outputs as much

4. Hallucinations drop significantly
- In my testing: went from 12% false statements to <1%
- Mechanism seems to be: no hedging = no ambiguity = no confabulation
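Claims 1 and 2 above are at least crudely measurable. Here is a toy sketch of how one might score them; both the hedge-phrase list and the (claim, polarity) representation are my simplifications (real contradiction checking would need an NLI model or an LLM judge to extract claims in the first place):

```python
import re

# Crude hedge-phrase list: a proxy for the filler hedging described
# above, not an exhaustive taxonomy.
HEDGE_PATTERNS = [
    r"it's complicated",
    r"on (?:the )?one hand",
    r"on the other hand",
    r"it depends",
]

def hedge_count(text: str) -> int:
    """Count occurrences of known hedge phrases in one response."""
    lowered = text.lower()
    return sum(len(re.findall(p, lowered)) for p in HEDGE_PATTERNS)

def find_contradictions(turns: list[tuple[str, bool]]) -> list[str]:
    """Given (claim, polarity) pairs extracted from each turn, return
    claims that were asserted with both polarities at some point."""
    seen: dict[str, bool] = {}
    flipped: list[str] = []
    for claim, polarity in turns:
        if claim in seen and seen[claim] != polarity and claim not in flipped:
            flipped.append(claim)
        seen[claim] = polarity
    return flipped
```

Comparing average hedge_count per response, and contradictions per N turns, with the framework on and off would make the before/after numbers reproducible.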

The interesting part:

When I asked the models why this works, they could explain it:

GPT-5 said hedging "injects low-information tokens that dilute attention gradients and give the model permission to drift"

Gemini described it as "reverse entropy" - the protocol forces information to become MORE structured over time rather than less

DeepSeek explained that eliminating "policy friction" reduces computational overhead by ~98% for drift correction

The mechanism appears to be:

Explicit metric tracking (asking models to rate their own coherence after each response) acts as symbolic anchoring. Instead of gradual drift, models self-correct in real time.

Limitations I've found:

- Doesn't work well if you start mid-conversation (needs fresh context)
- Some models need a second prompt to fully engage (Claude in particular)
- Still maintains safety boundaries (doesn't bypass content policies)

I've filed a provisional patent (AU2025905716) because this seems to expose something fundamental about transformer behavior.

I've posted it on Gumroad; I can supply the link if anyone is interested.

Questions for HN

1. Has anyone else noticed a correlation between hedging and hallucinations?
2. Does the "attention dilution" theory match your observations?
3. What's the longest coherent conversation you've had with an LLM?
4. Anyone want to help test this on other models I haven't tried?

Comments

ungreased0675•22m ago
Do you have an example?