frontpage.

The first cut is to ResearchOps, and how to avoid it

https://medium.com/researchops-community/the-first-cut-is-to-researchops-and-how-to-avoid-it-with-kate-towsey-4b86eaccb5b9
1•adrianhoward•1m ago•0 comments

Not Yet

https://anandsanwal.me/not-yet/
1•herbertl•1m ago•0 comments

Trump Organization announces mobile plan, $499 smartphone

https://www.cnbc.com/2025/06/16/trump-mobile-phone-plan.html
3•srueg•3m ago•1 comments

Mozilla frets about Google's push to build AI into Chrome

https://www.theregister.com/2025/06/11/mozilla_worries_googles_browser_ai/
1•Vinnl•4m ago•0 comments

Trump Mobile Launches a Bold New Wireless Service for Americans

https://www.trump.com/media/trump-mobile-launches-a-bold-new-wireless-service
2•TowerTall•7m ago•2 comments

How the first electric grid was built

https://www.worksinprogress.news/p/how-the-worlds-first-electric-grid
3•bensouthwood•7m ago•0 comments

Securing Microservices with C# Records: The Immutability Advantage

https://medium.com/devsecops-ai/securing-microservices-with-c-records-the-immutability-advantage-0a7a4f09adbf
1•herbertmoroni•7m ago•0 comments

How to build the best keyboard in the world

https://www.theverge.com/tech/686441/norbauer-seneca-keyboard-creator
1•cainxinth•8m ago•0 comments

New York Requiring Companies to Reveal If AI Caused Layoffs

https://www.entrepreneur.com/business-news/new-york-requiring-companies-to-reveal-if-ai-caused-layoffs/493267
2•speckx•9m ago•0 comments

Objex Link S3LW ultra-low-power ESP32-S3 LoRaWAN board takes up to 100W DC input

https://www.cnx-software.com/2025/05/29/objex-link-s3lw-ultra-low-power-esp32-s3-lorawan-board-takes-up-to-100w-dc-input/
2•PaulHoule•11m ago•1 comments

AI Supercomputers

https://epoch.ai/data/ai-supercomputers
1•gmays•14m ago•0 comments

Ceramic optical diamond turning machine: design and development 1999 [pdf]

https://pure.tue.nl/ws/files/1328821/9903022.pdf
1•nill0•14m ago•0 comments

The Model Context Protocol (MCP)

https://www.youtube.com/watch?v=CQywdSdi5iA
1•Brysonbw•15m ago•0 comments

WhatsApp adds ads to the status screen

https://techcrunch.com/2025/06/16/whatsapp-is-adding-ads-to-the-status-screen/
2•chiwilliams•15m ago•2 comments

Show HN: Pipo360 – Now a Full Dev Workspace (Was Just AI Back End Gen Before)

https://pipo360.xyz/try
1•the_plug•17m ago•1 comments

Class Action: Drivers Sour on Lemonade for Exposing License Numbers

https://www.insurancejournal.com/news/east/2025/06/12/827280.htm
1•crescit_eundo•17m ago•0 comments

GoTo Group migrates digital payments unit to Alibaba Cloud

https://www.datacenterdynamics.com/en/news/goto-group-migrates-digital-payments-unit-to-alibaba-cloud/
1•nanankcornering•17m ago•1 comments

Erie Insurance Reports 'Information Security Event' Caused Network Outage

https://www.insurancejournal.com/news/east/2025/06/11/827295.htm
1•crescit_eundo•18m ago•0 comments

Show HN: I built a social task app that lets users post task progress

https://questmatesapp.com
1•newToTown•20m ago•1 comments

A map, a myth and a pre-Incan lagoon: the man who brought water back

https://www.theguardian.com/global-development/2025/jun/13/ecuador-indigenous-map-pre-inca-myths-ancient-lagoon-water-drought-
1•dxs•21m ago•0 comments

Introduction to Bash

https://cs.lmu.edu/%7Eray/notes/bash/
1•Brysonbw•21m ago•0 comments

Show HN: Ariana – Check what (AI generated) code did at runtime with 0 effort

1•anougaret•22m ago•0 comments

21 Day Experiment #2 – No Sugar

https://rory.codes/experiment-1-no-sugar/
2•mrroryflint•25m ago•0 comments

DOGE's Chaotic Takeover of Social Security

https://www.nytimes.com/2025/06/16/us/politics/doge-social-security.html
3•danso•26m ago•1 comments

Mathematical Illustrations: A Manual of Geometry and PostScript

https://personal.math.ubc.ca/~cass/graphics/text/www/
3•Bogdanp•26m ago•0 comments

Show HN: Instantly see job and skill demand trends across industries -Rolemetric

https://www.rolemetric.com/marketing/home
1•davidjbabs•28m ago•0 comments

Software Is Made of People

https://www.june.kim/software-is-made-of-people
1•fside•29m ago•0 comments

Slint 1.12 Released with WGPU Support, iOS Port, and Figma Variables Integration

https://slint.dev/blog/slint-1.12-released
2•madnirua•29m ago•1 comments

The Herbicide Diquat Poisons the Gut, Leading to Multiple Organ Dysfunction

https://thehighwire.com/editorial/the-herbicide-diquat-poisons-the-gut-leading-to-multiple-organ-dysfunction/
6•rachkovsky•30m ago•0 comments

Apple On-Device OpenAI API

https://github.com/gety-ai/apple-on-device-openai
1•fside•31m ago•0 comments

The Illusion of Thinking: A Reality Check on AI Reasoning

https://leotsem.com/blog/the-illusion-of-thinking/
21•leotsem•4h ago

Comments

leotsem•4h ago
Apple’s recent paper on the limits of AI reasoning is an uncomfortable but important read.

Instead of relying on standard benchmarks, the authors designed controlled environments—like Tower of Hanoi and River Crossing puzzles—to test how models handle increasing compositional complexity. The results: performance doesn’t taper off, it collapses. And even when the models fail, they continue to produce fluent, structured reasoning traces that sound convincing but fall apart logically.

If you’re building on top of LLMs or reasoning-augmented models, it’s well worth a look.
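(A quick back-of-the-envelope, not from the paper or the post: the Tower of Hanoi has a known minimal solution length of 2^n − 1 moves for n disks, so every extra disk doubles the length of a correct answer. That exponential growth is one plausible reason performance collapses abruptly rather than tapering off.)

```python
def hanoi_moves(n: int) -> int:
    """Minimal number of moves to solve Tower of Hanoi with n disks."""
    return 2**n - 1

# Each extra disk roughly doubles the length of a correct answer:
for n in (3, 7, 10, 15):
    print(f"{n} disks -> {hanoi_moves(n)} moves")
```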

salviati•4h ago
If you ask me to solve increasingly difficult Tower of Hanoi problems, I don't expect to be good at it. Neither would I expect a fellow human to be. So, based on this, should we question our intelligence?

I heard about that paper through an "AI explained" video [0], so I might be biased, but I agree with that video that the Apple paper is "meh" at best: it points out LLM limitations that are hardly a surprise.

[0] https://www.youtube.com/watch?v=wPBD6wTap7g

vincnetas•4h ago
Probably the difference between you and the AI is that you would acknowledge that it's too difficult for you, rather than bullshit your way through.
saithound•4h ago
That's _exactly_ what the LLM did: the article's authors decided to count that as a failure.
vincnetas•4h ago
Hm, I was only reading TFA, not the research paper. But TFA mentions this:

  Perhaps the most unsettling finding is what failure looks like. Even when models are completely wrong, they sound persuasive. The reasoning is fluent, the explanations are structured, and the conclusions are confidently delivered. But the logic doesn’t hold.
rcarmo•2h ago
That sounds a lot like a salesperson. And yes, there is a human tendency to twist reasoning to make the written word look polished, and I don’t think LLM training has fixed that bias.
ForHackernews•4h ago
Curious about the use of the word "uncomfortable" -- for people working on AI who thought that LLMs or L"R"Ms were a path to AGI?

To me, that paper was reassuring that I wasn't taking crazy pills. I've worked with these tools to produce code, and they routinely make mistakes that no thinking entity (yes, I've worked with some dimwitted junior devs) ever would. Yes, they are powerful and useful tools, but they're not "thinking" in any meaningful sense (defined here as rigorously determining an algorithm and applying it correctly).

archon1410•4h ago
The blog itself reads as if it was written by an LLM. (e.g. "This isn't about X, it's about Y." "... is timely ..." "X isn't Y".)

Weird.

And it has been discussed to death already:

Beware General Claims about “Generalizable Reasoning Capabilities” (of Modern AI Systems) [https://www.lesswrong.com/posts/5uw26uDdFbFQgKzih/beware-gen...]

Seven replies to the viral Apple reasoning paper and why they fall short [https://news.ycombinator.com/item?id=44278403]

antirez•4h ago
The chain of thought is not where a model's reasoning capabilities happen: those capabilities are part of next-token inference. What CoT does is search/sample the model's space of representations and notions in order to "ground" the final reply, putting into the context window, explicitly, all the related knowledge and ideas the model possesses about the question.

It is absolutely obvious that algorithmic problems like the Tower of Hanoi can't benefit from sampling. Also, algorithmic puzzles are a comfortable domain for the paper's authors because they are verifiable, but they are very far from what we want models to do and from what models are good at. A model would solve this by implementing the algorithm in Python and calling a tool to execute it; that is how they can most easily solve such problems.

Moreover, on most benchmarks CoT improves LLM performance a lot, because sampling helps immensely in producing a better reply. So this paper's negative result cuts against a vast body of experience of CoT being a powerful tool for LLMs, simply because most benchmarks operate in domains where sampling is very useful.

In short, the Apple paper mostly says things that were already obvious; it is as if the authors set out to reach a negative result. It was already a widespread view that CoT can't perform algorithmic work by concatenating tokens, except in the most trivial ways. Yet it helps a lot when existing (in-model) knowledge and ideas must be combined to produce a better reply.
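The tool-use route described above is just the textbook recursion; a minimal sketch (my own illustration, assuming the model emits code like this and a tool runs it):

```python
def solve_hanoi(n: int, src: str = "A", dst: str = "C", via: str = "B", moves=None):
    """Classic recursive Tower of Hanoi solver; returns the full move list."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    solve_hanoi(n - 1, src, via, dst, moves)  # park n-1 disks on the spare peg
    moves.append((src, dst))                  # move the largest disk
    solve_hanoi(n - 1, via, dst, src, moves)  # restack the n-1 disks on top
    return moves

print(len(solve_hanoi(5)))  # 31 moves, i.e. 2**5 - 1
```

A few lines of code solve any instance exactly, which is why tool calling sidesteps the failure mode the paper measures.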

pyman•4h ago
What they're saying is that pattern-matching isn't the path to AGI. Humans and AI can both solve the Tower of Hanoi, but once the number of disks goes up, we both struggle.

Apple's point is that if we want to build something smarter than us, we need to look at intelligence and reasoning from a different angle.

rcarmo•2h ago
Exploring how to consistently arrive at a negative result is still a valid research goal. I don’t think we’ve had enough of that kind of research regarding LLMs: everything is so positive that it defies basic statistics…
jsnell•4h ago
This paper, rebuttals, and rebuttals to rebuttals have been on HN repeatedly over the last couple of weeks (including literally now). At this point a summary of the original paper doesn't seem like it's adding much.

E.g.

https://news.ycombinator.com/item?id=44203562

https://news.ycombinator.com/item?id=44221900

https://news.ycombinator.com/item?id=44234626

https://news.ycombinator.com/item?id=44278403

https://news.ycombinator.com/item?id=44286086

crowie•4h ago
This might be a dumb question, and it will inevitably showcase my ignorance in this field, but I'll risk it: why can't an AI at a certain level execute algorithms whose solutions have been proven to work for a very long time? What I mean is, the solution to the Tower of Hanoi problem is known, and it does not take a lot of computational power to produce the result. What is stopping the models examined in the paper from executing such algorithms and gathering the solutions, like a human programmer would? Do they get sidetracked in the process due to the number of tokens? (edit: typo)
pyman•4h ago
If humanity moves to Mars one day and leaves behind all the AI servers running on solar power, then comes back a billion years later, the AI would still be saying the same things. Why? Because no matter how powerful it is, AI doesn't evolve or grow on its own.
crowie•4h ago
Gotcha, but I didn't mean it that way. What I meant is that problems like the case-study ones don't need a revolutionary or original answer that would require growth; they can be solved with old solutions, which I'd assume are in some way embedded in these models' training data. Yes, the scope of the problem is bigger, but the correct answer should still come down to a correct implementation of the known algorithm. What I'm asking is: what causes the hindrance that prevents these AIs from performing appropriately on old problems with old solutions?
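To put numbers on this (my own sketch, not from the thread): the known algorithm is tiny, but the answer a model must emit token by token grows exponentially with the number of disks, so per-token generation runs out of room long before the algorithm does.

```python
def hanoi(n: int, src: str = "A", dst: str = "C", via: str = "B") -> list[str]:
    """The long-known Tower of Hanoi solution, a handful of lines."""
    if n == 0:
        return []
    return hanoi(n - 1, src, via, dst) + [f"{src}->{dst}"] + hanoi(n - 1, via, dst, src)

# The program is tiny, but the answer it produces is not:
for n in (5, 10, 15):
    answer = ", ".join(hanoi(n))
    print(f"{n} disks: {len(hanoi(n))} moves, {len(answer)} characters to spell out")
```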
ryandvm•1h ago
I like your thought experiment and I think you're correct, but that's because we never gave it the physical possibility of a feedback loop (a.k.a. evolution).

I think if you added a step where the LLMs tweak their own build process and redeploy, your experiment would have wildly different results.

Yizahi•56m ago
The so-called "reasoning" of LLM programs is really a sham, and the authors of those programs sometimes expose it themselves. Take, for example, Anthropic's article about Claude's "reasoning": in the math section they ask the program to add two numbers, then ask it to write out, step by step, how it did so. The LLM generates a human-style procedure, because that is what it copied from its training data, while the actual process by which the model adds numbers is vastly different.

Basically, this so-called "reasoning" is just the generation of additional intermediate output that resembles real reasoning without being it.

https://transformer-circuits.pub/2025/attribution-graphs/bio...