frontpage.

Benzene at 200

https://www.chemistryworld.com/opinion/benzene-at-200/4021504.article
27•Brajeshwar•49m ago•5 comments

Working on databases from prison

https://turso.tech/blog/working-on-databases-from-prison
343•dvektor•3h ago•218 comments

ZjsComponent: A Pragmatic Approach to Reusable UI Fragments for Web Development

https://arxiv.org/abs/2506.11016
16•lelanthran•58m ago•5 comments

Show HN: Zeekstd – Rust Implementation of the ZSTD Seekable Format

https://github.com/rorosen/zeekstd
110•rorosen•19h ago•17 comments

Show HN: dk – A script runner and cross-compiler, written in OCaml

https://diskuv.com/dk/help/latest/
15•beckford•1h ago•1 comment

Nanonets-OCR-s – OCR model that transforms documents into structured markdown

https://huggingface.co/nanonets/Nanonets-OCR-s
150•PixelPanda•9h ago•38 comments

Salesforce study finds LLM agents flunk CRM and confidentiality tests

https://www.theregister.com/2025/06/16/salesforce_llm_agents_benchmark/
68•rntn•2h ago•28 comments

How the first electric grid was built

https://www.worksinprogress.news/p/how-the-worlds-first-electric-grid
9•bensouthwood•1h ago•0 comments

Show HN: Socket-call – Call socket.io events like normal JavaScript functions

https://github.com/bperel/socket-call
22•bperel•4h ago•5 comments

Mathematical Illustrations: A Manual of Geometry and PostScript

https://personal.math.ubc.ca/~cass/graphics/text/www/
23•Bogdanp•1h ago•6 comments

Start your own Internet Resiliency Club

https://bowshock.nl/irc/
385•todsacerdoti•8h ago•216 comments

Maya Blue: Unlocking the Mysteries of an Ancient Pigment

https://www.mexicolore.co.uk/maya/home/maya-blue-unlocking-the-mysteries-of-an-ancient-pigment
36•DanielKehoe•2d ago•7 comments

Infracost (YC W21) is hiring software engineers (GMT+2 to GMT-6)

https://infracost.io/join-the-team
1•aliscott•4h ago

Is gravity just entropy rising? Long-shot idea gets another look

https://www.quantamagazine.org/is-gravity-just-entropy-rising-long-shot-idea-gets-another-look-20250613/
153•pseudolus•15h ago•156 comments

Jokes and Humour in the Public Android API

https://voxelmanip.se/2025/06/14/jokes-and-humour-in-the-public-android-api/
217•todsacerdoti•15h ago•125 comments

Object personification in autism: This paper will be sad if you don't read

https://pubmed.ncbi.nlm.nih.gov/30101594/
19•oliverkwebb•31m ago•4 comments

A Framework for Characterizing Emergent Conflict Between Non-Coordinating Agents [pdf]

https://paperclipmaximizer.ai/Unaware_Adversaries.pdf
11•ycombiredd•2d ago•2 comments

Why SSL was renamed to TLS in late 90s (2014)

https://tim.dierks.org/2014/05/security-standards-and-name-changes-in.html
419•Bogdanp•1d ago•196 comments

Occurrences of swearing in the Linux kernel source code over time

https://www.vidarholen.net/contents/wordcount/#fuck*,shit*,damn*,idiot*,retard*,crap*
71•microsoftedging•2d ago•123 comments

Quantum mechanics provide truly random numbers on demand

https://phys.org/news/2025-06-quantum-mechanics-random-demand.html
3•bookofjoe•2d ago•0 comments

Mechanisms for Detection and Repair of Puncture Damage in Soft Robotics [pdf]

https://smr.unl.edu/papers/Krings_et_al-2025-ICRA.pdf
9•PaulHoule•2d ago•0 comments

Modifying an HDMI dummy plug's EDID using a Raspberry Pi

https://www.downtowndougbrown.com/2025/06/modifying-an-hdmi-dummy-plugs-edid-using-a-raspberry-pi/
266•zdw•1d ago•72 comments

How the BIC Cristal ballpoint pen became ubiquitous

https://www.openculture.com/2025/06/how-the-bic-cristal-ballpoint-pen-became-the-most-successful-product-in-history.html
44•janandonly•5h ago•84 comments

Real-time CO2 monitoring without batteries or external power

https://news.kaist.ac.kr/newsen/html/news/?mode=V&mng_no=47450
87•gnabgib•17h ago•23 comments

Childhood leukemia: how a deadly cancer became treatable

https://ourworldindata.org/childhood-leukemia-treatment-history
249•surprisetalk•1d ago•70 comments

Solving LinkedIn Queens with APL

https://pitr.ca/2025-06-14-queens
50•pitr•2d ago•16 comments

DARPA program sets distance record for power beaming

https://www.darpa.mil/news/2025/darpa-program-distance-record-power-beaming
121•gnabgib•17h ago•85 comments

Chemical knowledge and reasoning of large language models vs. chemist expertise

https://www.nature.com/articles/s41557-025-01815-x
89•bookofjoe•2d ago•55 comments

Twin – A Textmode WINdow Environment

https://github.com/cosmos72/twin
124•kim_rutherford•19h ago•25 comments

Simplest C++ Callback, from SumatraPDF

https://blog.kowalczyk.info/a-stsj/simplest-c-callback-from-sumatrapdf.html
148•jandeboevrie•22h ago•145 comments

The Illusion of Thinking: A Reality Check on AI Reasoning

https://leotsem.com/blog/the-illusion-of-thinking/
21•leotsem•6h ago

Comments

leotsem•6h ago
Apple’s recent paper on the limits of AI reasoning is an uncomfortable but important read.

Instead of relying on standard benchmarks, the authors designed controlled environments, like Tower of Hanoi and River Crossing puzzles, to test how models handle increasing compositional complexity. The result: performance doesn't taper off, it collapses. And even when the models fail, they continue to produce fluent, structured reasoning traces that sound convincing but fall apart logically.

If you’re building on top of LLMs or reasoning-augmented models, it’s well worth a look.

salviati•6h ago
If you ask me to solve increasingly difficult Tower of Hanoi problems, I don't expect to be good at it. Neither would I expect a fellow human to be. So, based on this, should we question our intelligence?

I heard about that paper through an "AI explained" video [0], so I might be biased, but I agree with that video that the Apple paper is "meh" at best: it points out LLM limitations that are hardly a surprise.

[0] https://www.youtube.com/watch?v=wPBD6wTap7g

vincnetas•5h ago
Probably the difference between you and AI is that you would acknowledge that it's too difficult for you, rather than bullshitting your way through.

saithound•5h ago
That's _exactly_ what the LLM did: the article's authors decided to count that as a failure.

vincnetas•5h ago
Hmm, I was only reading TFA, not the research paper. But TFA mentions this:

  Perhaps the most unsettling finding is what failure looks like. Even when models are completely wrong, they sound persuasive. The reasoning is fluent, the explanations are structured, and the conclusions are confidently delivered. But the logic doesn’t hold.

rcarmo•3h ago
That sounds a lot like a salesperson. And yes, there is a human tendency to twist reasoning to make the written word look polished; I don't think LLM training has fixed that bias.
ForHackernews•6h ago
Curious about the use of the word "uncomfortable" -- for people working on AI who thought that LLM or L"R"Ms were a path to AGI?

To me, that paper was reassuring that I wasn't taking crazy pills. I've worked with these tools to produce code, and they routinely make mistakes that no thinking entity (yes, I've worked with some dimwitted junior devs) ever would. Yes, they are powerful and useful tools, but they're not "thinking" in any meaningful sense (defined here as rigorously determining an algorithm and applying it correctly).

archon1410•6h ago
The blog itself reads as if it was written by an LLM. (e.g. "This isn't about X, it's about Y." "... is timely ..." "X isn't Y".)

Weird.

And it has been discussed to death already:

Beware General Claims about “Generalizable Reasoning Capabilities” (of Modern AI Systems) [https://www.lesswrong.com/posts/5uw26uDdFbFQgKzih/beware-gen...]

Seven replies to the viral Apple reasoning paper and why they fall short [https://news.ycombinator.com/item?id=44278403]

antirez•5h ago
The chain of thought is not where a model's reasoning capabilities happen: models have reasoning capabilities that are part of next-token inference itself. What CoT does is search/sample the model's space of representations and notions in order to "ground" the final reply, putting into the context window, explicitly, all the related knowledge and ideas the model possesses about the question.

It is absolutely obvious that algorithmic problems like the Tower of Hanoi can't benefit from sampling. Also, algorithmic puzzles are a domain the paper's authors find comfortable because it is easily verifiable, but they are very far from what we want models to do and from what models are good at. Models would solve such problems by implementing the algorithm in Python and calling a tool to execute it; that is how they can most easily solve them.

Moreover, in most benchmarks CoT improves LLM performance a lot, because sampling helps immensely in producing a better reply. So this paper's negative result runs against a vast body of experience of CoT being a powerful tool for LLMs, simply because most benchmarks operate on domains where sampling is very useful.

In short, the Apple paper mostly says things that were already obvious: it reads as if the authors set out to reach a negative result. It was already the widespread view that CoT can't perform algorithmic work by concatenating tokens, except in the most trivial ways. Yet it helps a lot when the task is to combine existing (in-model) knowledge/ideas to produce a better reply.

pyman•5h ago
What they're saying is that pattern-matching isn't the path to AGI. Humans and AI can both solve the Tower of Hanoi, but once the number of disks goes up, we both struggle.

Apple's point is that if we want to build something smarter than us, we need to look at intelligence and reasoning from a different angle.

rcarmo•3h ago
Exploring how to consistently arrive at a negative result is still a valid research goal. I don't think we've had enough of that kind of research regarding LLMs; everything is so positive that it defies basic statistics…
jsnell•5h ago
This paper, rebuttals, and rebuttals to rebuttals have been on HN repeatedly over the last couple of weeks (including literally now). At this point a summary of the original paper doesn't seem like it's adding much.

E.g.

https://news.ycombinator.com/item?id=44203562

https://news.ycombinator.com/item?id=44221900

https://news.ycombinator.com/item?id=44234626

https://news.ycombinator.com/item?id=44278403

https://news.ycombinator.com/item?id=44286086

crowie•5h ago
This might be a dumb question, and it will inevitably showcase my ignorance of this field, but I'll risk it: why can't an AI at a certain level execute algorithms whose solutions have been proven to work for a very long time? The solution to the Tower of Hanoi is known, and it doesn't take much computational power to produce. What stops the AIs examined in the paper from executing such algorithms and gathering the solutions, like a human programmer would? Do they get sidetracked in the process by the amount of tokens?
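The known algorithm in question really is tiny; a minimal Python sketch (the function and peg names here are illustrative, not taken from the paper):

```python
def hanoi(n, src="A", aux="B", dst="C", moves=None):
    """Classic recursive Tower of Hanoi: move n disks from src to dst."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, src, dst, aux, moves)  # park n-1 disks on the spare peg
        moves.append((src, dst))            # move the largest disk
        hanoi(n - 1, aux, src, dst, moves)  # stack the n-1 disks on top of it
    return moves

# The optimal solution for n disks is always 2**n - 1 moves,
# so even large instances are trivial for ordinary code.
print(len(hanoi(10)))  # → 1023
```

Executing this takes microseconds, which is why handing the puzzle to a code-execution tool, rather than generating moves token by token, makes it trivial.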
pyman•5h ago
If humanity moves to Mars one day and leaves behind all the AI servers running on solar power, then comes back a billion years later, the AI would still be saying the same things. Why? Because no matter how powerful it is, AI doesn't evolve or grow on its own.

crowie•5h ago
Gotcha, but I didn't mean it that way. Problems like the case-study ones don't need a revolutionary or original answer that would require growth; they can be solved with old solutions, which I assume are embedded in some form in the training datasets of these models. Yes, the scope of the problem is bigger, but the correct answer should in any case come down to a correct implementation of the known algorithm. What I'm asking is: what causes the hindrance that prevents these AIs from performing appropriately on old problems with old solutions?
ryandvm•3h ago
I like your thought experiment and I think you're correct, but that's because we never gave it the physical possibility of a feedback loop (a.k.a. evolution).

I think if you added a step where the LLMs tweak their own build process and redeploy, your experiment would have wildly different results.

Yizahi•2h ago
The so-called "reasoning" of LLM programs is really a sham, and the authors of those programs sometimes expose it themselves. Take, for example, Anthropic's article about Claude's "reasoning": in the section on math, they ask the program to add two numbers and then ask it to describe, step by step, how it did the addition. The LLM generates a human-style procedure, because that's what it copied from the training data, while the way the LLM actually adds numbers is vastly different.

Basically, the so-called "reasoning" is just the generation of additional intermediary output that resembles real reasoning without being it.

https://transformer-circuits.pub/2025/attribution-graphs/bio...