frontpage.

Inline Tracing in Dyalog [video]

https://www.youtube.com/watch?v=UM-ahvEpLew
1•pillowshift•43s ago•0 comments

The New Kindle Scribes Are Great, but Not Great Enough

https://www.wired.com/review/kindle-scribe-colorsoft-2025/
1•thm•1m ago•0 comments

Beyond the Bus Factor: Managing Tribal Knowledge

https://brihatijain.com/blog/beyond_the_bus_factor
1•brihati•3m ago•1 comment

China launches satellite 'super factory' in bid to rival Elon Musk's Starlink

https://www.scmp.com/economy/china-economy/article/3335926/china-launches-satellite-super-factory...
1•gscott•3m ago•0 comments

Computer Use 2025 Wrapped

https://www.onkernel.com/blog/computer-use-2025
1•masnwilliams•4m ago•0 comments

Chord: Open-Source Prototype for PBR Material Estimation Debuting at Siggraph

https://www.ubisoft.com/en-us/studio/laforge/news/1i3YOvQX2iArLlScBPqBZs/generative-base-material...
1•klaussilveira•6m ago•0 comments

Military's new AI: 'Hypothetical' boat strike scenario 'unambiguously illegal'

https://san.com/cc/the-militarys-new-ai-says-hypothetical-boat-strike-scenario-unambiguously-ille...
2•doener•7m ago•0 comments

LZ dark matter experiment spots neutrinos from the sun's core

https://www.llnl.gov/article/53711/lz-dark-matter-experiment-sets-worlds-best-spots-neutrinos-sun...
1•gmays•7m ago•0 comments

Meta's Pivot from Open Source to Money-Making AI Model

https://www.bloomberg.com/news/articles/2025-12-10/inside-meta-s-pivot-from-open-source-to-money-...
2•peterbonney•8m ago•0 comments

EU-US Data Transfers: Time to prepare for more trouble to come

https://noyb.eu/en/eu-us-data-transfers-time-prepare-more-trouble-come
5•tomwas54•8m ago•0 comments

Heuristics vs. RAG: Shrinkflation as a Policy Driver

https://www.unite.ai/heuristics-vs-rag-shrinkflation-as-a-policy-driver/
1•50kIters•9m ago•0 comments

Ask HN: Is there a "good" (non-privacy horror) aftermarket HUD for your car?

1•xrd•9m ago•0 comments

German unions call for French Dassault's expulsion from EU fighter jet program

https://www.reuters.com/business/aerospace-defense/powerful-german-union-calls-dassaults-expulsio...
2•alephnerd•10m ago•0 comments

Show HN: Wirebrowser – A JavaScript Debugger with Breakpoint-Driven Heap Search

https://github.com/fcavallarin/wirebrowser
2•fcavallarin•10m ago•0 comments

Explaining weird stuff via Python's compilation pipeline – UMich guest lecture [video]

https://www.youtube.com/watch?v=G2yPbg2fgQY
1•vismit2000•11m ago•0 comments

Why Tagged PDF Matters for AI

https://opendataloader.org/docs/tagged-pdf
1•Julia_Katash•12m ago•1 comment

Decide What's Human

https://kupajo.com/decide-whats-human/
1•kolyder•13m ago•0 comments

Preventing Resource Leaks in Go: How GoLand Helps You Write Safer Code

https://blog.jetbrains.com/go/2025/12/09/preventing-resource-leaks-in-go-how-goland-helps-you-wri...
1•Annprots•14m ago•1 comment

Pedantle

https://pedantle.certitudes.org/
1•knuckleheads•14m ago•0 comments

Storing OAuth Tokens

https://fusionauth.io/articles/oauth/oauth-token-storage
1•mooreds•15m ago•0 comments

Pompeii Time Capsule Reveals Secrets to Durable Ancient Roman Cement

https://www.scientificamerican.com/article/pompeii-house-frozen-mid-renovation-reveals-secrets-of...
2•Brajeshwar•15m ago•0 comments

Starlink Became the Internet Alternative

https://restofworld.org/2025/starlink-musk-internet-expansion/
2•Brajeshwar•16m ago•0 comments

James Webb Telescope detects 13B-year-old supernova with gamma-ray burst

https://www.space.com/astronomy/james-webb-space-telescope/the-james-webb-space-telescope-just-fo...
1•Brajeshwar•16m ago•0 comments

Calif. tech's saddest invention has been bleeding cash

https://www.sfgate.com/food/article/california-tech-world-soylent-scrambling-adapt-21219237.php
1•deegles•16m ago•1 comment

201 Stories by Anton Chekhov

https://web.archive.org/web/20070630223838/http://chekhov2.tripod.com/
1•bookofjoe•16m ago•0 comments

Legacy Code, Live Risk: Empirical Evidence of Malware Detection Gaps

https://www.mdpi.com/2076-3417/15/22/11862
1•PaulHoule•16m ago•0 comments

39C3 Fahrplan 2025

https://fahrplan.events.ccc.de/congress/2025/fahrplan/
2•birdculture•16m ago•0 comments

Show HN: Stridewars – A team step competition with Mario Kart-style power-ups

https://www.stridewars.com
1•nugzbunny•16m ago•0 comments

Relational AI vs. Constitutional AI: Are we focusing on the right question?

1•buttersmoothAI•19m ago•0 comments

Former GitLab CEO raises money for Kilo to compete in crowded AI coding market

https://www.cnbc.com/2025/12/10/former-gitlab-ceo-raises-8-million-for-kilo-to-compete-in-vibe-co...
1•lngzl•20m ago•0 comments

The Unreasonable Effectiveness of Reasonless Intermediate Tokens

https://arxiv.org/abs/2505.13775
4•YeGoblynQueenne•6mo ago

Comments

tocs3•6mo ago
I asked ChatGPT to restate this in layman's terms (posted below) and I am not too surprised at the answer.

"Lately, some AI models have shown impressive abilities to solve complex problems, and many people credit this to a method called Chain of Thought (CoT), where the model is trained to think through steps like a human might. In this paper, we take a closer look at that idea to see if it's really what's driving better performance.

We focus on the model’s step-by-step thinking (the words it generates along the way) — often treated like human "thoughts" — and examine whether these actually help the model solve problems more accurately. To test this, we train AI models using clean, correct step-by-step reasoning paths and final answers, all based on a known solving method (A* search). This lets us check both the final answers and the reasoning steps to see how they relate.

Interestingly, we find that even when a model gives the right answer, its reasoning steps can still be wrong or messy. To go further, we even train models using completely random and incorrect reasoning steps, and surprisingly, they still perform about as well as, and sometimes even better than, those trained on correct steps.

This suggests that the step-by-step "thoughts" the model shows aren’t as meaningful or reliable as many assume. In short, just because a model looks like it’s reasoning through a problem doesn’t mean it actually is — and we should be careful not to treat its outputs as if it thinks like a human or follows strict logic."
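The ablation the summary describes can be pictured with a toy sketch (this is not the paper's code; the tags, vocabulary, and example data here are all made up for illustration): build two training sets from the same solved instances, one carrying the solver's real intermediate trace and one carrying a random-token trace of the same length, while keeping the final answer identical in both.

```python
import random

# Hypothetical token vocabulary for the corrupted traces (illustrative only).
VOCAB = ["up", "down", "left", "right", "expand", "push", "pop"]

def make_example(problem, trace, answer, corrupt=False):
    """Format one training example; optionally replace the intermediate
    trace with random tokens (the 'reasonless' condition)."""
    if corrupt:
        trace = [random.choice(VOCAB) for _ in trace]
    return f"{problem} <trace> {' '.join(trace)} </trace> <answer> {answer} </answer>"

# One made-up solved instance: (problem id, solver trace, final answer).
solved = [("maze#1", ["expand", "push", "left"], "path=LLU")]

clean = [make_example(p, t, a) for p, t, a in solved]
noisy = [make_example(p, t, a, corrupt=True) for p, t, a in solved]

# Both datasets share identical problems and answers; only the
# intermediate tokens differ, isolating their contribution.
print(clean[0])
print(noisy[0])
```

The point of the setup is that any accuracy gap between models trained on `clean` versus `noisy` data can be attributed to the content of the intermediate tokens, since everything else is held fixed.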