frontpage.

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
2•myk-e•1m ago•1 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•2m ago•1 comments

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
1•1vuio0pswjnm7•4m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
1•1vuio0pswjnm7•6m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•8m ago•0 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•11m ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•16m ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•17m ago•1 comments

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•21m ago•1 comments

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•33m ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•35m ago•0 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•35m ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•48m ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•51m ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
2•helloplanets•54m ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•1h ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•1h ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•1h ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•1h ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
2•basilikum•1h ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•1h ago•1 comments

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•1h ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
3•throwaw12•1h ago•2 comments

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•1h ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•1h ago•1 comments

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•1h ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•1h ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
2•andreabat•1h ago•1 comments

I Was Trapped in Chinese Mafia Crypto Slavery [video]

https://www.youtube.com/watch?v=zOcNaWmmn0A
2•mgh2•1h ago•1 comments

U.S. CBP Reported Employee Arrests (FY2020 – FYTD)

https://www.cbp.gov/newsroom/stats/reported-employee-arrests
1•ludicrousdispla•1h ago•0 comments

The Illusion of Thinking: A Reality Check on AI Reasoning

https://leotsem.com/blog/the-illusion-of-thinking/
21•leotsem•7mo ago

Comments

leotsem•7mo ago
Apple’s recent paper on the limits of AI reasoning is an uncomfortable but important read.

Instead of relying on standard benchmarks, the authors designed controlled environments—like Tower of Hanoi and River Crossing puzzles—to test how models handle increasing compositional complexity. The results: performance doesn’t taper off, it collapses. And even when the models fail, they continue to produce fluent, structured reasoning traces that sound convincing but fall apart logically.

If you’re building on top of LLMs or reasoning-augmented models, it’s well worth a look.
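For a sense of what these controlled environments look like, here is a minimal Tower of Hanoi move checker in Python (my own illustration of the idea, not the paper's actual harness): the model's proposed moves are parsed and verified mechanically, so there is no ambiguity about whether an answer is correct.

  def check_hanoi_moves(n, moves):
      """Verify a proposed (src, dst) move list for n disks on pegs 0, 1, 2."""
      pegs = [list(range(n, 0, -1)), [], []]   # peg 0 starts with disks n..1, largest at bottom
      for src, dst in moves:
          if not pegs[src]:
              return False                     # illegal: moving from an empty peg
          if pegs[dst] and pegs[dst][-1] < pegs[src][-1]:
              return False                     # illegal: larger disk on top of a smaller one
          pegs[dst].append(pegs[src].pop())
      return pegs[2] == list(range(n, 0, -1))  # solved iff all disks end up on peg 2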

salviati•7mo ago
If you ask me to solve increasingly difficult Tower of Hanoi problems, I don't expect to be good at it. Neither would I expect a fellow human to be. So, based on this, should we question our intelligence?

I heard about that paper through an "AI explained" video [0], so I might be biased, but I agree with that video that the Apple paper is "meh" at best: it points out LLM limitations that are hardly a surprise.

[0] https://www.youtube.com/watch?v=wPBD6wTap7g

vincnetas•7mo ago
Probably the difference between you and AI is that you would acknowledge that it's too difficult for you, and not bullshit your way through.
saithound•7mo ago
That's _exactly_ what the LLM did: the article's authors decided to count that as a failure.
vincnetas•7mo ago
Hm, I was only reading TFA, not the research paper. But TFA mentions this:

  Perhaps the most unsettling finding is what failure looks like. Even when models are completely wrong, they sound persuasive. The reasoning is fluent, the explanations are structured, and the conclusions are confidently delivered. But the logic doesn’t hold.
rcarmo•7mo ago
That sounds a lot like a salesperson. And yes, there is a human tendency to twist reasoning to make the written word look polished, and I don’t think LLM training has fixed that bias.
ForHackernews•7mo ago
Curious about the use of the word "uncomfortable" -- uncomfortable for people working on AI who thought that LLMs or L"R"Ms were a path to AGI?

To me, that paper was reassuring that I wasn't taking crazy pills. I've worked with these tools to produce code, and they routinely make mistakes that no thinking entity (yes, I've worked with some dimwitted junior devs) ever would. Yes, they are powerful and useful tools, but they're not "thinking" in any meaningful sense (defined here as rigorously determining an algorithm and applying it correctly).

archon1410•7mo ago
The blog itself reads as if it was written by an LLM. (e.g. "This isn't about X, it's about Y." "... is timely ..." "X isn't Y".)

Weird.

And it has been discussed to death already:

Beware General Claims about “Generalizable Reasoning Capabilities” (of Modern AI Systems) [https://www.lesswrong.com/posts/5uw26uDdFbFQgKzih/beware-gen...]

Seven replies to the viral Apple reasoning paper and why they fall short [https://news.ycombinator.com/item?id=44278403]

antirez•7mo ago
The chain of thought is not where the reasoning capabilities of a model happen: models have reasoning capabilities that are part of next-token inference. What CoT does is search/sample the model's space of representations and notions in order to "ground" the final reply, putting into the context window, explicitly, all the related knowledge and ideas the model possesses about the question.

It is absolutely obvious that algorithmic problems like the Tower of Hanoi can't benefit from sampling. Also, algorithmic puzzles are a convenient domain for the paper's authors because they are verifiable, but they are very far from what we want the models to do and from what they are good at. Models would solve this kind of problem by implementing an algorithm in Python and calling a tool to execute it; that is how they can most easily solve such problems.
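For illustration, this is the kind of trivial program a model could write and hand to a code-execution tool instead of spelling out every move in its chain of thought (the snippet is mine, not from the paper):

  def hanoi(n, src=0, aux=1, dst=2):
      """Yield the optimal (src, dst) move sequence for n disks."""
      if n == 0:
          return
      yield from hanoi(n - 1, src, dst, aux)   # move n-1 disks out of the way
      yield (src, dst)                         # move the largest disk
      yield from hanoi(n - 1, aux, src, dst)   # move the n-1 disks back on top

  moves = list(hanoi(10))
  print(len(moves))  # 1023, i.e. 2**10 - 1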

Moreover: in most benchmarks CoT improves LLM performance a lot, because sampling helps immensely in producing a better reply. So this paper's negative result goes against a very vast body of experience of CoT being a powerful tool for LLMs, simply because most benchmarks operate on domains where sampling is very useful.

In short, the Apple paper mostly says things that were already obvious: it is as if they set out to reach a negative result. It was already a widespread view that CoT can't help with algorithmic work by concatenating tokens, except in the most trivial ways. Yet it helps a lot when the task is to combine knowledge/ideas that already exist inside the model to provide a better reply.

pyman•7mo ago
What they're saying is that pattern-matching isn't the path to AGI. Humans and AI can both solve the Tower of Hanoi, but once the number of disks goes up, we both struggle.

Apple's point is that if we want to build something smarter than us, we need to look at intelligence and reasoning from a different angle.

rcarmo•7mo ago
Exploring how to consistently arrive at a negative result is still a valid research goal. I don't think we've had enough of that kind of research regarding LLMs; everything is so positive that it defies basic statistics…
jsnell•7mo ago
This paper, rebuttals, and rebuttals to rebuttals have been on HN repeatedly over the last couple of weeks (including literally now). At this point a summary of the original paper doesn't seem like it's adding much.

E.g.

https://news.ycombinator.com/item?id=44203562

https://news.ycombinator.com/item?id=44221900

https://news.ycombinator.com/item?id=44234626

https://news.ycombinator.com/item?id=44278403

https://news.ycombinator.com/item?id=44286086

crowie•7mo ago
This might be a dumb question, and it will inevitably showcase my ignorance in this field, but I'll risk that: why can't an AI at a certain level execute algorithms whose solutions have been proven to work for a very long time? What I mean is, the solution to the Tower of Hanoi problem is known, and it does not take a lot of computational power to produce the result. What is stopping an AI such as the ones examined in the paper from executing such algorithms and gathering the solutions, like a human programmer would? Do they get sidetracked in the process due to the amount of tokens? (edit: typo)
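To put a rough number on the token angle (my own back-of-the-envelope, not from the paper, and the ~5 tokens per move is just an assumption): the algorithm is trivial, but the answer the model is asked to spell out grows exponentially with the number of disks.

  # An optimal Tower of Hanoi solution has 2**n - 1 moves.
  for n in (10, 15, 20):
      moves = 2**n - 1
      print(f"{n} disks: {moves:>7} moves, ~{5 * moves:>7} output tokens to write them all out")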
pyman•7mo ago
If humanity moves to Mars one day and leaves behind all the AI servers running on solar power, then comes back a billion years later, the AI would still be saying the same things. Why? Because no matter how powerful it is, AI doesn't evolve or grow on its own.
crowie•7mo ago
Gotcha, but I didn't mean it that way. What I meant is that problems like the ones in the case study don't need a revolutionary or original answer that would require growth; they can be solved with old solutions, which I would assume are in some way embedded in the training data of these models. Yeah, the scope of the problem is bigger, but the correct answer should in any case come down to a correct implementation of the known algorithm. What I'm asking is: what causes the hindrance that prevents these AIs from performing appropriately on old problems with old solutions?
ryandvm•7mo ago
I like your thought experiment and I think you're correct, but that's because we never gave it the physical possibility of a feedback loop (a.k.a. evolution).

I think if you added a step where the LLMs tweak their own build process and redeploy, your experiment would have wildly different results.

Yizahi•7mo ago
The so-called "reasoning" of LLM programs is really a sham, and the authors of those programs sometimes expose it themselves. Take, for example, Anthropic's article about Claude's "reasoning": in the math section they ask the program to add two numbers and then ask it to write out, step by step, how it did the addition. The LLM generates a human-style procedure, because that's what it copied from the training data, while the real process by which the LLM adds the numbers is vastly different.

Basically, so-called "reasoning" is just the generation of additional intermediate output that resembles real reasoning but isn't.

https://transformer-circuits.pub/2025/attribution-graphs/bio...

rsynnott•7mo ago
> Apple’s new paper, The Illusion of Thinking, quietly released ahead of WWDC 2025, challenges many of the assumptions we’ve come to rely on in the LLM space.

So... wait, were people _really_ assuming that these things were reasoning? Why? Like, because the marketing said so? I had the idea that that was generally viewed as puffery; obviously they're not reasoning.

It's an interesting paper, but its outcome is completely unsurprising. What would have been surprising is if it had shown something different.

> Perhaps the most unsettling finding is what failure looks like. Even when models are completely wrong, they sound persuasive.

Again... This has been a fairly well-known problem with LLMs since GPT-3 or so. I'm not sure why anyone would find it unsettling at this point; they're confident-sounding bullshit engines.