Ubuntu Introduces Architecture Variants

https://lwn.net/Articles/1044383/
124•WhyNotHugo•3h ago•46 comments

AI scrapers request commented scripts

https://cryptography.dog/blog/AI-scrapers-request-commented-scripts/
42•ColinWright•2h ago•10 comments

Nix Derivation Madness

https://fzakaria.com/2025/10/29/nix-derivation-madness
101•birdculture•3h ago•26 comments

How AI gave me my voice back – an artist's review of Suno Studio

https://blog.andyshand.com/blog/how-ai-gave-me-my-voice-back
25•80hd•6d ago•28 comments

Attention lapses due to sleep deprivation due to flushing fluid from brain

https://news.mit.edu/2025/your-brain-without-sleep-1029
369•gmays•4h ago•167 comments

Another European agency shifts off US Tech as digital sovereignty gains steam

https://www.zdnet.com/article/another-european-agency-ditches-big-tech-as-digital-sovereignty-mov...
72•CrankyBear•1h ago•21 comments

Fire TV: Amazon to block piracy apps in the future

https://www.heise.de/en/news/Fire-TV-Amazon-to-block-piracy-apps-in-the-future-10964878.html
27•speckx•49m ago•9 comments

Pangolin (YC S25) Is Hiring a Full Stack Software Engineer (Open-Source)

https://docs.pangolin.net/careers/software-engineer-full-stack
1•miloschwartz•58m ago

Sustainable memristors from shiitake mycelium for high-frequency bioelectronics

https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0328965
55•PaulHoule•4h ago•33 comments

AMD Could Enter ARM Market with Sound Wave APU Built on TSMC 3nm Process

https://www.guru3d.com/story/amd-enters-arm-market-with-sound-wave-apu-built-on-tsmc-3nm-process/
249•walterbell•14h ago•196 comments

Just Use a Button

https://gomakethings.com/just-use-a-button/
27•moebrowne•59m ago•9 comments

John Carmack on mutable variables

https://twitter.com/id_aa_carmack/status/1983593511703474196
376•azhenley•15h ago•445 comments

Wheels for free-threaded Python now available for psutil

https://gmpy.dev/blog/2025/wheels-for-free-threaded-python-now-available-in-psutil
51•grodola•6d ago•2 comments

Affinity Studio now free

https://www.affinity.studio/get-affinity
1165•dagmx•1d ago•752 comments

Rotating Workforce Scheduling in MiniZinc

https://zayenz.se/blog/post/rotating-workforce-scheduling/
35•mzl•3h ago•4 comments

Floppy Disk / Diskettes // retrocmp / retro computing

https://retrocmp.de/fdd/diskette/diskette.htm
16•rbanffy•3d ago•1 comments

Nim 2.2.6

https://nim-lang.org//blog/2025/10/31/nim-226.html
116•xz18r•3h ago•34 comments

Immutable releases are now generally available on GitHub

https://github.blog/changelog/2025-10-28-immutable-releases-are-now-generally-available/
101•fastest963•3h ago•49 comments

Bertie the Brain

https://en.wikipedia.org/wiki/Bertie_the_Brain
74•breppp•1w ago•17 comments

Phone numbers for use in TV shows, films and creative works

https://www.acma.gov.au/phone-numbers-use-tv-shows-films-and-creative-works
263•nomilk•20h ago•134 comments

How the cochlea computes (2024)

https://www.dissonances.blog/p/the-ear-does-not-do-a-fourier-transform
459•izhak•1d ago•142 comments

Kimi Linear: An Expressive, Efficient Attention Architecture

https://github.com/MoonshotAI/Kimi-Linear
199•blackcat201•17h ago•40 comments

OpenAI Uses Complex and Circular Deals to Fuel Its Multibillion-Dollar Rise

https://www.nytimes.com/interactive/2025/10/31/technology/openai-fundraising-deals.html
314•reaperducer•4h ago•303 comments

Git CLI tool for intelligently creating branch names

https://github.com/ytreister/gibr
24•Terretta•4h ago•25 comments

Free software scares normal people

https://danieldelaney.net/normal/
862•cryptophreak•1d ago•554 comments

NPM flooded with malicious packages downloaded more than 86k times

https://arstechnica.com/security/2025/10/npm-flooded-with-malicious-packages-downloaded-more-than...
346•jnord•1d ago•247 comments

A Closer Look at Piezoelectric Crystal

https://www.samaterials.com/content/a-closer-look-at-stressed-piezo-crystals.html
49•pillars•1w ago•14 comments

Springs and bounces in native CSS

https://www.joshwcomeau.com/animation/linear-timing-function/
251•feross•2d ago•39 comments

Show HN: Quibbler – A critic for your coding agent that learns what you want

https://github.com/fulcrumresearch/quibbler
97•etherio•17h ago•23 comments

Florian Schneider Collection: Instruments and equipment up for auction

https://www.juliensauctions.com/en/articles/the-florian-schneider-collection-rare-instruments-and...
62•cainxinth•4d ago•17 comments

Reasoning Models Reason Well, Until They Don't

https://arxiv.org/abs/2510.22371
187•optimalsolver•8h ago

Comments

iLoveOncall•8h ago
> [...] recent studies show that transformers and LLMs fail catastrophically once reasoning problems exceed modest complexity. We revisit these findings through the lens of large reasoning models (LRMs) -- LLMs fine-tuned with incentives for step-by-step argumentation and self-verification

This was the obvious outcome of the study (don't get me wrong, obvious outcomes are still worth having research on).

"LRMs" *are* just LLMs. There's no such thing as a reasoning model, it's just having an LLM write a better prompt than the human would and then sending it to the LLM again.

Despite what Amodei and Altman want Wall Street to believe, they did not suddenly unlock reasoning capabilities in LLMs by essentially just running two different prompts in sequence to answer the user's question.

The truly amazing thing is that reasoning models show ANY improvement at all compared to non-reasoning models, when they're the same exact thing.

sothatsit•7h ago
What do you mean by reasoning?

If you mean solving logic problems, then reasoning LLMs seem to pass that bar, as they do very well in programming and maths competitions. Reasoning LLMs can also complete problems like multiplying large numbers, which requires applying some sort of algorithm where the results cannot just be memorised. They also do this much better than standard pre-trained LLMs with no RL.

So, that brings me back to this question: what definition of reasoning do people use that reasoning models do not meet? They're not perfect, obviously, but perfection is not a requirement of reasoning if you agree that humans can reason. We make mistakes as well, and we also suffer under higher complexity. Perhaps they are less reliable than trained humans at knowing when they have made mistakes, but I wouldn't personally include reliability in my definition of reasoning (just look at how often humans make mistakes in tests).

I have yet to see any serious, reasoned arguments that explain why the amazing achievements of reasoning LLMs in maths and programming competitions, on novel problems, do not count as "real reasoning". It seems much more that people just don't like the idea of LLMs reasoning, and so reject the idea without giving an actual reason themselves, which seems somewhat ironic to me.

fsloth•7h ago
I guess we mean here "useful reasoning" instead of the idiot-savant kind. I mean it's a fair ask, since these are marketed as _tools_ you can use to implement _industrial processes_ and even replace your human workers.

In that sense I guess the model does not need to be the most reasonable interpreter of vague and poorly formulated user inputs, but I think it needs to improve at least a bit to become a useful general appliance and not just a test-scoring automaton.

The key differentiator here is that tests generally _are made to be unambiguously scoreable_. Real world problems are often more vague from the point of view of optimal outcome.

sothatsit•7h ago
Thanks. So, people are extending "reasoning" to include making good decisions, rather than just solving logic problems. It makes sense to me that, if people use that definition, LLMs are pretty bad at "reasoning".

Although, I would argue that this is not reasoning at all, but rather "common sense" or the ability to have a broader perspective or think of the future. These are tasks that come with experience. That is why these do not seem like reasoning tasks to me, but rather soft skills that LLMs lack. In my mind these are pretty separate concerns to whether LLMs can logically step through problems or apply algorithms, which is what I would call reasoning.

hansmayer•7h ago
Ah yes, let me then unchain my LLM on those nasty unsolved math and logic problems I've absolutely not been struggling with in the course of my career.
sothatsit•7h ago
A lot of maths students would also struggle to contribute to frontier math problems, but we would still say they are reasoning. Their skill at reasoning might not be as good as professional mathematicians, but that does not stop us from recognising that they can solve logic problems without memorisation, which is a form of reasoning.

I am just saying that LLMs have demonstrated they can reason, at least a little bit. Whereas it seems other people are saying that LLM reasoning is flawed, which does not negate the fact that they can reason, at least some of the time.

Maybe generalisation is one area where LLMs' reasoning is weakest, though. They can achieve near-elite performance at nicely boxed-up competition math problems, but their performance drops dramatically on real-world problems where things aren't so neat. We see similar problems in programming as well. I'd argue the progress on this has been promising, but other people would probably vehemently disagree with that. Time will tell.

vidarh•6h ago
Thank you for picking at this.

A lot of people appear to be - often not consciously or intentionally - setting the bar for "reasoning" at a level many or most people would not meet.

Sometimes that is just a reaction to wanting an LLM that produces results that are good for their own level. Sometimes it reveals a view of fellow humans that would be quite elitist if stated outright. Sometimes it's a kneejerk attempt at setting the bar at a point that would justify a claim that LLMs aren't reasoning.

Whatever the reason, it's a massive pet peeve of mine that it is rarely made explicit in these conversations, and it makes a lot of these conversations pointless because people keep talking past each other.

For my part a lot of these models often clearly reason by my standard, even if poorly. People also often reason poorly, even when they demonstrably attempt to reason step by step. Either because they have motivations to skip over uncomfortable steps, or because they don't know how to do it right. But we still would rarely claim they are not capable of reasoning.

I wish more evaluations of LLMs would establish a human baseline to test them against for much this reason. It would be illuminating in terms of actually telling us more about how LLMs match up to humans in different areas.

cryptonym•2h ago
Computers have forever been doing stuff people can't do.

The real question is how useful this tool is and if this is as transformative as investors expect. Understanding its limits is crucial.

cryptonym•7h ago
That's the real deal.

They say LLMs are PhD-level. Despite billions of dollars, PhD-LLMs sure are not contributing a lot to solving known problems. Except, of course, for a few limited marketing stunts.

fsloth•6h ago
IMHO that's the key differentiator.

You can give a human PhD an _unsolved problem_ in a field adjacent to their expertise and expect some reasonable resolution. LLM PhDs solve only known problems.

That said humans can also be really bad problem solvers.

If you don't care about solving the problem and only want to create paperwork for bureaucracy, I guess you don't care either way ("My team's on it!"), but companies that don't go out of business generally recognize a lack of outcomes where it matters pretty soon.

nl•5h ago
> LLM PhDs solve only known problems.

Terry Tao would disagree: https://mathstodon.xyz/@tao/114508029896631083

https://deepmind.google/discover/blog/alphaevolve-a-gemini-p...

hansmayer•6h ago
I wish our press were not effectively muted or bought off; as it is, none of the journos has the cojones to call out the specific people who were blabbing about PhD levels, AGI, etc. They should be god damn calling them out every single day, essentially doing their job, but they are now too timid for that.
vidarh•6h ago
I've "unchained" my LLM on a lot of problems that I probably could solve, but that would take me time I don't have, and that it has solved in many case faster than I could. It may not be good enough to solve problems that are beyond us for most of us, but it certainly can solve a lot of problems for a lot of us that have gone unsolved for lack of resources.
cryptonym•2h ago
It can solve problems you already know how to solve, if you micro-manage it, and it'll BS a lot on the way.

If this is the maximum an AGI-PhD-LRM can do, that'll be disappointing compared to the investments. Curious to see what all this will become in a few years.

vidarh•2h ago
I'm not usually micro-managing it, that's the point.

I sometimes do on problems where I have particular insight, but I mostly find it is far more effective to give it test cases and give it instructions on how to approach a task, and then let it iterate with little to no oversight.

I'm letting Claude Code run for longer and longer with --dangerously-skip-permissions, to the point I'm pondering rigging up something to just keep feeding it "continue" and run it in parallel on multiple problems.

Because at least when you have a good way of measuring success, it works.
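
For what it's worth, a minimal sketch of that kind of rig might look like the following. The --dangerously-skip-permissions flag is quoted from the comment; the -p and --continue flags and the pytest success check are assumptions made for illustration, not a recipe:

    # Purely illustrative sketch of the "keep feeding it continue" rig described
    # above. Assumes the claude CLI's non-interactive -p prompt mode and a
    # --continue flag that resumes the last session; treat both as assumptions.
    import subprocess

    def run_until_green(workdir, task, max_rounds=10):
        args = ["claude", "-p", task, "--dangerously-skip-permissions"]
        for _ in range(max_rounds):
            subprocess.run(args, cwd=workdir, check=False)
            # "A good way of measuring success": here, the project's test suite.
            if subprocess.run(["pytest", "-q"], cwd=workdir).returncode == 0:
                return True
            args = ["claude", "--continue", "-p", "continue",
                    "--dangerously-skip-permissions"]
        return False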

hansmayer•7h ago
^^This is a great view and it seems generally widely understood by the rank and file techies. I feel pity for the general-public retail investors who are about to be left holding the bag for the VCs, after a certain major <ahem> champion goes into IPO soon.
js8•7h ago
> So, that makes me come back to this question of what definition of reasoning do people use that reasoning models do not meet?

The models can learn reasoning rules, but they are not able to apply them consistently or recognize the rules they have learned are inconsistent. (See also my other comment which references comments I made earlier.)

And I think they can't without a tradeoff, as I commented https://news.ycombinator.com/item?id=45717855 ; the consistency requires certain level of close-mindedness.

sothatsit•6h ago
Yes, so I think in this case we use different definitions of reasoning. You include reliability as a part of reasoning, whereas I do not.

I would argue that humans are not 100% reliable in their reasoning, and yet we still claim that they can reason. So, even though I would agree that the reasoning of LLMs is much less reliable, careful, and thoughtful than that of smart humans, that does not mean that they are not reasoning. Rather, it means that their reasoning is more unreliable and less well applied than people's. But they are still performing reasoning tasks (even if their application of reasoning can be flawed).

Maybe the problem is that I am holding out a minimum bar for LLMs to jump to count as reasoning (demonstrated application of logical algorithms to solve novel problems in any domain), whereas other people are holding the bar higher (consistent and logical application of rules in all/most domains).

js8•5h ago
The problem is that if you're not able to apply the reasoning rules consistently, then you will always fail on a large enough problem. If you have an inconsistent set of reasoning rules, then you can set up a problem as a trap so that the reasoning fails.

You can argue that a damaged toaster is still a toaster, conceptually. But if it doesn't work, then it's useless. As it stands, models lack the ability to reason because they can fail to reason and you can't do anything about it. In the case of humans, it's valid to say they can reason, because humans can at least fix themselves; models can't.

sothatsit•5h ago
The reasoning does not need to be 100% accurate to be useful. Humans are rarely 100% accurate at anything, and yet over time we can build up large models of problems using verification and review. We can do the exact same thing with LLMs.

The best example of this is Sean Heelan, who used o3 to find a real security vulnerability in the Linux kernel: https://sean.heelan.io/2025/05/22/how-i-used-o3-to-find-cve-...

Sean Heelan ran o3 100 times, and it found a known vulnerability in 8% of runs. For a security audit, that is immensely useful, since an expert can spend the time to look at the results from a dozen runs and quickly decide if there is anything real. Even more remarkably though, this same testing exposed a zero-day that they were not even looking for. That is pretty incredible for a system that makes mistakes.

This is why LLM reasoning absolutely does not need to be perfect to be useful. Human reasoning is inherently flawed as well, and yet through systems like peer review and reproducing results, we can still make tremendous progress over time. It is just about figuring out systems of verification and review so that we don't need to trust any LLM output blindly. That said, greater reliability would be massively beneficial to how easy it is to get good results from LLMs. But it's not required.
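
A back-of-envelope way to see why an 8% per-run hit rate is still useful in an audit setting, assuming (simplistically) that runs are independent:

    # If one run flags the bug with probability p = 0.08, the chance that at
    # least one of n independent runs flags it is 1 - (1 - p)**n.
    p = 0.08
    for n in (1, 10, 25, 50, 100):
        print(f"{n:>3} runs -> {1 - (1 - p) ** n:.0%} chance of at least one hit")
    # 10 runs already give roughly a 57% chance; 50 runs push it past 98%.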

sirwhinesalot•7h ago
> The truly amazing thing is that reasoning models show ANY improvement at all compared to non-reasoning models, when they're the same exact thing.

It's because they do more compute. The more tokens "spent" the better the accuracy. Same reason they spit out a paragraph of text instead of just giving a straight answer in non-reasoning mode.

jpcompartir•7h ago
I can't remember which paper it's from, but isn't the variance in performance explained by # of tokens generated? i.e. more tokens generated tends towards better performance.

Which isn't particularly amazing, as # of tokens generated is basically a synonym in this case for computation.

We spend more computation, we tend towards better answers.

qsort•7h ago
Don't they have a significant RL component? The "we'll just make it bigger" idea that was peddled a lot after GPT3.5 was nonsense, but that's not the only thing they're doing right now.
ACCount37•6h ago
"We'll just make it bigger" works. RLVR just gives better performance gains and spends less inference compute - as long as you have a solid way of verifying the tasks.

A simplified way of thinking about it is: pretraining gives LLMs useful features, SFT arranges them into useful configurations, RLVR glues them together and makes them work together well, especially in long reasoning traces. Makes sense to combine it all in practice.

How much pretraining gives an LLM depends on the scale of that LLM, among other things. But raw scale is bounded by the hardware capabilities and the economics - of training and especially of inference.

Scale is still quite desirable - GPT-4.5 scale models are going to become the norm for high end LLMs quite soon.

qsort•6h ago
I'm not against "we'll make it bigger" (although it's as of yet unknown if it hits diminishing returns, 4.5 isn't exactly remembered as a great release), I'm against "we'll just (i.e. 'only') make it bigger".

I'm doubtful you'd have useful LLMs today if labs hadn't scaled in post-training.

antonvs•7h ago
> The truly amazing thing is that reasoning models show ANY improvement at all compared to non-reasoning models, when they're the same exact thing.

Why is that amazing? It seems expected. Use a tool differently, get different results.

equinox_nl•8h ago
But I also fail catastrophically once a reasoning problem exceeds modest complexity.
monkeydust•7h ago
But you recognise you are likely to fail and thus don't respond, or redirect the problem to someone who has a greater likelihood of not failing.
antonvs•7h ago
I’ve had models “redirect the problem to someone who has a greater likelihood of not failing”. Gemini in particular will do this when it runs into trouble.

I don’t find all these claims that models are somehow worse than humans in such areas convincing. Yes, they’re worse in some respects. But when you’re talking about things related to failures and accuracy, they’re mostly superhuman.

For example, how many humans can write hundreds of lines of code (in seconds, mind you) and regularly not have any syntax errors or bugs?

ffsm8•7h ago
> For example, how many humans can write hundreds of lines of code (in seconds, mind you) and regularly not have any syntax errors or bugs?

Ez, just use codegen.

Also the second part (not having bugs) is unlikely to be true for the LLM generated code, whereas traditional codegen will actually generate code with pretty much no bugs.

vidarh•6h ago
I have Claude reducing the number of bugs in my traditional codegen right now.
pessimizer•3h ago
> I’ve had models “redirect the problem to someone who has a greater likelihood of not failing”. Gemini in particular will do this when it runs into trouble.

I have too, and I sense that this is something that has been engineered in rather than coming up naturally. I like it very much and they should do it a lot more often. They're allergic to "I can't figure this out" but hearing "I can't figure this out" gives me the alert to help it over the hump.

> But when you’re talking about things related to failures and accuracy, they’re mostly superhuman.

Only if you consider speed to failure and inaccuracy. They're very much subhuman in output, but you can make them retry a lot in a short time, and refine what you're asking them each time to avoid the mistakes they're repeatedly making. But that's you doing the work.

exe34•7h ago
If that were true, we would live in a utopia. People vote/legislate/govern/live/raise/teach/preach without ever learning to reason correctly.
davidhs•7h ago
Do you? Don't you just halt and say this is too complex?
p_v_doom•7h ago
Nope, audacity and Dunning-Kruger all the way, baby
dspillett•7h ago
Some would consider that to be failing catastrophically. The task is certainly failed.
carlmr•7h ago
Halting is sometimes preferable to thrashing around and running in circles.

I feel like if LLMs "knew" when they're out of their depth, they could be much more useful. The question is whether knowing when to stop can be meaningfully learned from examples with RL. From all we've seen, the hallucination problem and this stopping problem boil down to the same issue: you could teach the model to say "I don't know", but if that's part of the training dataset it might just spit out "I don't know" to random questions, because it's a likely response in the realm of possible responses, instead of saying "I don't know" when it actually doesn't know.

SocratesAI is still unsolved, and LLMs are probably not the path to knowing that you know nothing.

ukuina•6h ago
> if LLMs "knew" when they're out of their depth, they could be much more useful.

I used to think this, but no longer sure.

Large-scale tasks just grind to a halt with more modern LLMs because of this perception of impassable complexity.

And it's not that they need extensive planning, the LLM knows what needs to be done (it'll even tell you!), it's just more work than will fit within a "session" (arbitrary) and so it would rather refuse than get started.

So you're now looking at TODOs, and hierarchical plans, and all this unnecessary pre-work even when the task scales horizontally very well (if it just jumped into it).

benterix•6h ago
This seems to be the stance of the creators of agentic coders. They are so bent on creating something, even if that something makes no sense whatsoever.
LunaSea•6h ago
I would consider detecting your own limits when trying to solve a problem preferable to the illusion that your solution is working and correct.
moritzwarhier•6h ago
Ah yes, the function that halts if the input problem would take too long to halt.

But yes, I assume you mean they abort their loop after a while, which they do.

This whole idea of a "reasoning benchmark" doesn't sit well with me. It seems still not well-defined to me.

Maybe it's just bias I have or my own lack of intelligence, but it seems to me that using language models for "reasoning" is still more or less a gimmick and convenience feature (to automate re-prompts, clarifications etc, as far as possible).

But reading this pop-sci article from summer 2022, it seems like this definition problem hasn't changed very much since then.

Although it's about AI progress before ChatGPT and it doesn't even mention the GPT base models. Sure, some of the tasks mentioned in the article seem dated today.

But IMO, there is still no AI model that can be trusted to, for example, accurately summarize a Wikipedia article.

Not all humans can do that either, sure. But humans are better at knowing what they don't know, and at deciding which other humans can be trusted. And of course, none of this is an arithmetic or calculation task.

https://www.science.org/content/article/computers-ace-iq-tes...

AlecSchueler•7h ago
I also fail catastrophically when trying to push nails through walls, but I expect my hammer to do better.
moffkalast•7h ago
I have one hammer and I expect it to work on every nail and screw. If it's not a general hammer, what good is it now?
arethuza•6h ago
You don't need a "general hammer" - they are old fashioned - you need a "general-purpose tool-building factory factory factory":

https://www.danstroot.com/posts/2018-10-03-hammer-factories

code_martial•5h ago
Reminds me of a 10 letter Greek word that starts with a k.
hshdhdhehd•7h ago
Gold and shovels might be a more fitting analogy for AI
raddan•7h ago
Yes, but you are not a computer. There is no point building another human. We have plenty of them.
WesolyKubeczek•8h ago
It’s because they generate a seeming of reasoning, and don’t actually reason!

(Slams the door angrily)

(stomps out angrily)

(touches the grass angrily)

samuell•8h ago
Yea, a bit like a cheating student rote memorizing and copying another student's technique for solving a type of problem, and failing hard as soon as there's too much variation from the original problem.
fsloth•7h ago
Yes!

That said, the input space of supported problems is quite large and you can configure the problem parameters quite flexibly.

I guess the issue is that what the model _actually_ provides you is this idiot savant who has pre-memorized everything, without offering a clear index that would disambiguate well-supported problems from "too difficult" (i.e. novel) ones.

brap•7h ago
What is to reason, if not to generate a seeming of reasoning?

(tips fedora)

hshdhdhehd•7h ago
You said the quiet part of political debate out loud.

(does something)

brap•7h ago
I wonder if we can get models to reason in a structured and verifiable way, like we have formal logic in math.
Frieren•7h ago
For that, you already have classical programming. It is great at formal logic and math.
brap•7h ago
I think trying to accurately express natural language statements as values and logical steps as operators is going to be very difficult. You also need to take into account ambiguity and subtext and things like that.

I actually believe it is technically possible, but is going to be very hard.

nl•6h ago
This is where you get the natural language tool to write the formal logic.

ChatGPT knows WebPPL really well for example.

brap•2h ago
You will need a formal language first.

Take this statement for example:

>ChatGPT knows WebPPL really well

What formal language can express this statement? What will the text be parsed into? Which transformations can you use to produce other truthful (and interesting) statements from it? Is this flexible enough to capture everything that can be expressed in English?

The closest that comes to mind is Prolog, but it doesn’t really come close.
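
As a toy illustration of how quickly the formalization gets lossy (the encoding and names below are invented for the example, not a proposal for such a language):

    # Invented, illustrative encoding -- not a real formalism. Even a generous
    # attempt at structuring the sentence forces decisions the English leaves open.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Knows:
        agent: str     # who is doing the knowing
        subject: str   # what is known
        degree: float  # "really well", squashed into [0, 1] -- already lossy
        sense: str     # recall? generation? evaluation? the sentence doesn't say

    fact = Knows(agent="ChatGPT", subject="WebPPL", degree=0.9, sense="generation")

    # Any inference rule ("if X knows Y well, X can answer questions about Y")
    # now depends on choices (threshold, sense) that were never in the sentence,
    # which is roughly why Prolog-style facts don't really come close.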

alyxya•7h ago
The key point the paper seems to make is that existing benchmarks have relatively low reasoning complexity, so the authors made a new dataset, DeepRD, with arbitrarily large reasoning complexity and demonstrated that existing models fail on complex enough problems. Complexity is defined in terms of a graph created by modeling the problem as a graph and determining the traversals needed to go from some source node to a target node.

My main critique is that I don't think there's evidence that this issue would persist after continuing to scale models to be larger and doing more RL. With a harness like what coding agents do these days and with sufficient tool use, I bet models could go much further on that reasoning benchmark. Otherwise, if the reasoning problem were entirely done within a single context window, it's expected that a complex enough reasoning problem would be too difficult for the model to solve.
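
To make the graph framing concrete, here is a rough sketch of how such a task can be generated; this is my own illustration of the idea, not the paper's DeepRD generator, and all names are invented:

    # Illustrative only -- not the paper's generator. Build a chain of relational
    # facts of length `depth`, bury it among distractor edges, and ask whether
    # the target is reachable from the source. `depth` scales the complexity.
    import random

    def make_problem(depth, n_distractors, seed=0):
        rng = random.Random(seed)
        nodes = [f"n{i}" for i in range(depth + 1 + n_distractors)]
        chain = [(nodes[i], nodes[i + 1]) for i in range(depth)]   # the real path
        spare = nodes[depth + 1:]                                  # never touches the chain
        noise = [(rng.choice(spare), rng.choice(spare)) for _ in range(n_distractors)]
        edges = chain + noise
        rng.shuffle(edges)
        facts = "\n".join(f"{a} is connected to {b}." for a, b in edges)
        question = f"Starting from {nodes[0]}, can you reach {nodes[depth]}?"
        return facts, question

    facts, question = make_problem(depth=12, n_distractors=30)
    # A model now has to follow a 12-step chain buried among 30 irrelevant facts.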

jeremyjh•5h ago
The burden of evidence here is on you. They don’t need to prove LRMs can’t scale to meet these problems; their only claim is current models can’t handle these problems. Others will take this up as a challenge - and chances may be good they will overcome it. This is how science works.
alyxya•2h ago
They can’t claim current models aren’t able to handle these problems if they didn’t use a setup similar to coding agents like Claude Code and OpenAI Codex. Using a suboptimal setup is akin to verbally telling a person the whole reasoning problem without letting them write down notes and expecting them to memorize and solve it after only hearing it once.
tomlockwood•5h ago
So the answer is a few more trillion?
code_martial•5h ago
It’s a worthwhile answer if it can be proven correct because it means that we’ve found a way to create intelligence, even if that way is not very efficient. It’s still one step better than not knowing how to do so.
tomlockwood•4h ago
So we're spending a trillion on faith?
code_martial•4h ago
No, that’s not what I said.
tomlockwood•3h ago
Why are we spending the trillion?
usrbinbash•2h ago
> if it can be proven correct

Then the first step would be to prove that this works WITHOUT needing to burn through the trillions to do so.

usrbinbash•3h ago
> I don't think there's evidence that this issue would persist after continuing to scale models to be larger and doing more RL

And how much larger do we need to make the models? 2x? 3x? 10x? 100x? How large do they need to get before scaling-up somehow solves everything?

Because: 2x larger, means 2x more memory and compute required. Double the cost or half the capacity. Would people still pay for this tech if it doubles in price? Bear in mind, much of it is already running at a loss even now.

And what if 2x isn't good enough? Would anyone pay for a 10x larger model? Can we even realistically run such models as anything other than a very expensive PoC, and for a very short time? And who's to say that even 10x will finally solve things? What if we need 40x? Or 100x?

Oh, and of course: Larger models also require more data to train them on. And while the Internet is huge, it's still finite. And when things grow geometrically, even `sizeof(internet)` eventually runs out ... and, in fact, may have done so already [1] [2]

What if we actually discover that scaling up doesn't even work at all, because of diminishing returns? Oh wait, looks like we did that already: [3]

[1]: https://observer.com/2024/12/openai-cofounder-ilya-sutskever...

[2]: https://biztechweekly.com/ai-training-data-crisis-how-synthe...

[3]: https://garymarcus.substack.com/p/confirmed-llms-have-indeed...

alyxya•1h ago
Scaling applies to multiple dimensions simultaneously over time. A frontier model today could be replicated a year later with a model half the size, with a quarter of the FLOPS, etc. I don’t know the real numbers for optimization scaling, but you could check out NanoGPT speedrun [1] as an example.

The best solution in the meantime is giving the LLM a harness that allows tool use, like what coding agents have. I suspect current models are fully capable of solving arbitrarily complex artificial reasoning problems here, provided that they're used in the context of a coding agent tool.

[1] https://github.com/KellerJordan/modded-nanogpt

js8•7h ago
I think the explanation is pretty simple, as I said in my earlier comment: https://news.ycombinator.com/item?id=44904107

I also believe the problem is we don't know what we want: https://news.ycombinator.com/item?id=45509015

If we could make LLMs apply a modest set of logic rules consistently, it would be a win.

Sharlin•6h ago
That's a pretty big "if". LLMs are by design entirely unlike GoFAI reasoning engines. It's also very debatable whether it makes any sense to try and hack LLMs into reasoning engines when you could just... use a reasoning engine. Or have the LLM defer to one, which would play to their strength as translators.
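
For readers unfamiliar with the term, a "reasoning engine" here can be as small as a forward-chaining rule applier. A minimal, purely illustrative sketch (not any particular GoFAI system); the LLM's job would only be to translate natural language into facts and rules like these:

    def forward_chain(facts, rules):
        """facts: set of tuples; rules: list of (premises, conclusion) pairs."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if conclusion not in derived and all(p in derived for p in premises):
                    derived.add(conclusion)
                    changed = True
        return derived

    facts = {("socrates", "is", "human")}
    rules = [([("socrates", "is", "human")], ("socrates", "is", "mortal"))]
    print(forward_chain(facts, rules))
    # -> {('socrates', 'is', 'human'), ('socrates', 'is', 'mortal')}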
flimflamm•7h ago
What confused me is the fact that in the paper all logical steps are given. It basically checks, when all relevant facts are provided explicitly as links, how far and how complex a chain the model can correctly follow before it breaks down.

So it's simpler than "reasoning". This is not necessarily a bad thing, as it boils down the reasoning to a simpler, more controlled sub-problem.

devlogstream•7h ago
LLMs are like students, they can reason a bit, but real understanding still takes time and practice.
hansmayer•6h ago
What? The LLMs are nothing like students (or any other human for that matter).
anal_reactor•7h ago
I have yet to see a task that AI fails at that the bottom 10% of the population wouldn't also fail at.
TheOtherHobbes•6h ago
How about keeping a conversation going with family over Thanksgiving? (Or local equivalent.)
randomNumber7•6h ago
This is something where the top 10% sometimes horribly fail.
Earw0rm•6h ago
If by task you mean the written, intellectual variety, maybe.
layer8•5h ago
If I have the choice of performing an intellectual task myself, or have it performed by someone from the bottom 10% of the population, I’d probably rather perform it myself.
Der_Einzige•4h ago
What happens when both choices lead to you doing it yourself?
acdha•4h ago
The problem is consistency: AI tools usually produce output which _sounds_ like the top 10% but you have to read it carefully to find the bottom 10% parts. We’re not used to that because human performance isn’t that inconsistent and we use history and social factors: someone’s performance goes down when they’re really drunk, but they rarely show up to work in that state and it’s obvious enough that other people recognize that they shouldn’t be trusted.
anal_reactor•4h ago
> We’re not used to that because human performance isn’t that inconsistent

It is. It's very common for socially apt people to bullshit through things they don't know, or outright want to hide.

acdha•3h ago
That’s not inconsistent: your bluffer knows they’re making something up and is using their model of you to construct something they think you’ll believe. Someone who can do that isn’t going to suddenly forget how to count the number of letters in a word.
anal_reactor•3h ago
You're wrong. Counting the number of letters in a word is a significantly more difficult task than lying, both for humans and LLMs. Imagine going to a ghetto and asking people "have you ever lied to someone and had them believe the lie", and ask them to spell "continuously". Children learn to lie before they learn to spell.
acdha•3h ago
> Counting the number of letters in a word is a significantly more difficult task than lying

No, it’s not - you don’t even need to be literate to count symbols - but also consider the complexity of the second task and how many skills each requires: unlike counting letters, lying isn’t simple confabulation and requires a theory of mind and some kind of goal. A child who lies to avoid trouble is doing that because they have enough of a world model to know they are going to get in trouble for something even if they haven’t worked out yet that this is unlikely to work.

anal_reactor•2h ago
Sure, let's stick to counting symbols. When I need to count something, there's a decent chance I'll get lost if I count beyond 10, and beyond 20 I'll get lost for sure. Even below 10, when I count it's one-two-three-four-five-six-seven-eight-nine items. But when I lie I do it instantaneously, without altering the pace of the conversation. I can come up with a believable lie within the brief period between someone saying something to me, and the moment I'm expected to respond. No way I'd be able to count 10 items that fast.

The Pirahã language doesn't even have numerals - that's an extreme case, but there are quite a few languages where people stop counting beyond a certain small number and just say "a lot". Those same people, though, don't have issues lying to one another. Let that sink in for a while - fully grown-ass adults, fully capable of functioning in their society, not capable of counting one-two-three because the concept is beyond them.

What I'm trying to say is that all of those "requires theory of mind" statements are probably true but completely irrelevant because humans (and LLMs) have "hardware acceleration" of whatever it takes to lie, meanwhile counting is an abstract idea that requires to use the brain in a way it didn't evolve to be used. Similarly, LLMs cannot count if they aren't connected to a math engine - not because they're stupid, but because counting is really difficult.

My_Name•6h ago
I find that they know what they know fairly well, but if you move beyond that, into what can be reasoned from what they know, they have a profound lack of ability to do that. They are good at repeating their training data, not thinking about it.

The problem, I find, is that they then don't stop, or say they don't know (unless explicitly prompted to do so) they just make stuff up and express it with just as much confidence.

ftalbot•6h ago
Every token in a response has an element of randomness to it. This means they’re non-deterministic. Even if you set up something within their training data there is some chance that you could get a nonsense, opposite, and/or dangerous result. The chance of that may be low because of things being set up for it to review its result, but there is no way to make a non-deterministic answer fully bound to solving or reasoning anything assuredly, given enough iterations. It is designed to be imperfect.
yuvalr1•6h ago
You are making a wrong leap from non-deterministic process to uncontrollable result. Most of the parallel algorithms are non-deterministic. There might be no guarantee about the order of calculation or even sometimes the final absolute result. However, even when producing different final results, the algorithm can still guarantee characteristics about the result.

The hard problem then is not to eliminate non-deterministic behavior, but find a way to control it so that it produces what you want.

flavaflav2•5h ago
Life and a lot in our universe is non-deterministic. Some people assume science and mathematics are some universal truths rather than imperfect agreed upon understandings. Similarly many assume humans can be controlled through laws, penalties, prisons, propaganda, coercion, etc. But terrible things happen. Yes, if you set up the gutter-rails in your bowling lane, you can control the bowling ball unless it is thrown over those rails or in a completely different direction, but those rails are wide with LLMs by default, and the system instructions provided it aren’t rules, they are an inherently faulty way to coerce a non-deterministic system. But, yes, if there’s absolutely no way to do something, and you’re aware of every possible way a response or tool could affect things, and you have taken every possible precaution, you can make it behave. That’s not how people are using it though, and we cannot control our tendency to trust that which seems trustworthy even if we are told these things.
squidbeak•5h ago
No, Science is a means of searching for those truths - definitely not some 'agreed upon understanding'. It's backed up by experimentation and reproducible proofs. You also make a huge bogus leap from science to humanities.
iq176•5h ago
Scientific method is the process. Science itself includes the study and compendium of understandings, based on a belief system that includes shared understandings just like mathematics. The foundation of these are philosophical beliefs that we can know and understand these things. For example, on a metaphysical level, if the world around us were a simulation, then science could provide understandings about that simulated universe, but not about that which is simulating it.
squidbeak•4h ago
This I'm afraid is rubbish. Scientific proofs categorically don't depend on philosophical beliefs. Reality is measurable and the properties measured don't care about philosophy.
weltensturm•3h ago
> Reality is measurable

Heisenberg would disagree.

squidbeak•51m ago
Are you arguing that the uncertainty principle derives from philosophy rather than math?
darkwater•4h ago
But those are still approximations to the actual underlying reality. Because the other option (and yes, it's a dichotomy) is that we already defined and understood every detail of the physics that applies to our universe.
squidbeak•4h ago
Indeed, that is a dichotomy: a false one. Science is exact without being finished.
darkwater•4h ago
So, was Newtonian physics exact already?
squidbeak•3h ago
> Science is exact without being finished
darkwater•3h ago
Being exact doesn't mean it is not an approximation, which was the initial topic. Being exact in science means that 2+2=4 and that can be demonstrated following a logical chain. But that doesn't make our knowledge of the universe exact. It is still an approximation. What it can be "exact" is how we obtain and reproduce the current knowledge we have of it.
squidbeak•51m ago
The speed of light, or Planck's constant - are these approximations?
mannykannot•4h ago
There seems to be more to it than that - in my experience with LLMs, they are good at finding some relevant facts but then quite often present a non-sequitur for a conclusion, and the article's title alone indicates that the problem for LRMs is similar: a sudden fall-off in performance as the task gets more difficult. If the issue was just non-determinism, I would expect the errors to be more evenly distributed, though I suppose one could argue that the sensitivity to non-determinism increases non-linearly.
squidproquo•1h ago
The non-determinism is part of the allure of these systems -- they operate like slot machines in a casino. The dopamine hit of getting an output that appears intelligent and the variable rewards keeps us coming back. We down-weight and ignore the bad outputs. I'm not saying these systems aren't useful to a degree, but one should understand the statistical implications on how we are collectively perceiving their usefulness.
PxldLtd•6h ago
I think a good test of this is to provide an image and get the model to predict what will happen next, or what happens if x occurs. They fail spectacularly at Rube Goldberg machines. I think developing some sort of dedicated prediction model would help massively in extrapolating data. The human subconscious is filled with all sorts of parabolic prediction, gravity, momentum, and various other fast-thinking paths that embed these calculations.
yanis_t•6h ago
Any example of that? One would think that predicting what comes next from an image is basically video generation, which works not perfect, but works somehow (Veo/Sora/Grok)
PxldLtd•5h ago
Here's one I made in Veo3.1 since gemini is the only premium AI I have access to.

Using this image - https://www.whimsicalwidgets.com/wp-content/uploads/2023/07/... and the prompt: "Generate a video demonstrating what will happen when a ball rolls down the top left ramp in this scene."

You'll see it struggles - https://streamable.com/5doxh2 , which is often the case with video gen. You have to describe carefully and orchestrate natural feeling motion and interactions.

You're welcome to try with any other models but I suspect very similar results.

chamomeal•5h ago
I love how it still copies the slow pan and zoom from rube goldberg machine videos, but it's just following along with utter nonsense lol
mannykannot•4h ago
It is video generation, but succeeding at this task involves detailed reasoning about cause and effect to construct chains of events, and may not be something that can be readily completed by applying "intuitions" gained from "watching" lots of typical movies, where most of the events are stereotypical.
pfortuny•4h ago
Most amazing is asking any of the models to draw an 11-sided polygon and number the edges.
Torkel•4h ago
I asked gpt5, and it worked really well with a correct result. Did you expect it to fail?
pistoriusp•5h ago
I saw a meme that I think about fairly often: Great apes have learnt sign language, and communicated with humans, since the 1960s. In all that time they've never asked humans questions. They've never tried to learn anything new! The theory is that they don't know that there are entities that know things they don't.

I like to think that AI are the great apes of the digital world.

20k•5h ago
It's worth noting that the idea that great apes have learnt sign language is largely a fabrication by a single person, and nobody has ever been able to replicate this. All the communication has to be interpreted through that individual, and everyone else (including people who speak sign language) has confirmed that they're just making random hand motions in exchange for food.

They don't have the dexterity to really sign properly

krapht•5h ago
Citation needed.
joncrocks•5h ago
https://en.wikipedia.org/wiki/Great_ape_language#Criticism_a... - Not word for word, but certainly casting doubt that apes were ever really communicating in the way that people may have thought.
mkl•5h ago
That article does completely refute 20k's claim that it was all done by one person though.
MangoToupe•4h ago
The way linguists define communication via language? Sure. Let's not drag the rest of humanity into this presumption.
conception•5h ago
Searching for koko ape fraud seems to produce a lot.
ralfd•3h ago
> In his lecture, Sapolsky alleges that Patterson spontaneously corrects Koko’s signs: “She would ask, ‘Koko, what do you call this thing?’ and [Koko] would come up with a completely wrong sign, and Patterson would say, ‘Oh, stop kidding around!’ And then Patterson would show her the next one, and Koko would get it wrong, and Patterson would say, ‘Oh, you funny gorilla.’ ”

More weirdly was this lawsuit against Patterson:

> The lawsuit alleged that in response to signing from Koko, Patterson pressured Keller and Alperin (two of the female staff) to flash the ape. "Oh, yes, Koko, Nancy has nipples. Nancy can show you her nipples," Patterson reportedly said on one occasion. And on another: "Koko, you see my nipples all the time. You are probably bored with my nipples. You need to see new nipples. I will turn my back so Kendra can show you her nipples."[47] Shortly thereafter, a third woman filed suit, alleging that upon being first introduced to Koko, Patterson told her that Koko was communicating that she wanted to see the woman's nipples

There was a bonobo named Kanzi who learned hundreds of lexigrams. The main criticism here seems to be that while Kanzi truly did know the symbol for “Strawberry” he “used the symbol for “strawberry” as the name for the object, as a request to go where the strawberries are, as a request to eat some strawberries”. So no object-verb sentences and so no grammar which means no true language according to linguists.

https://linguisticdiscovery.com/posts/kanzi/

pegasus•5h ago
You only need a citation for the idea that apes aren't able to speak sign language?
acdha•4h ago
They claimed fraud by a single person, with zero replication. Both claims are testable, so they should be able to support them.

At the very least, more than one researcher was involved and more than one ape was alleged to have learned ASL. There is a better discussion about what our threshold is for speech, along with our threshold for saying that research is fraud vs. mistaken, but we don’t fix sloppiness by engaging in more of it.

rightbyte•4h ago
I mean dogs can learn a simple sign language?
BOOSTERHIDROGEN•5h ago
Does that mean intelligence is soul? Then we will never achieve AGI.
MangoToupe•4h ago
> The theory is that they don't know that there are entities that know things they don't.

This seems like a rather awkward way of putting it. They may just lack conceptualization or abstraction, making the above statement meaningless.

sodality2•3h ago
The exact title of the capacity is 'theory of mind' - for example, chimpanzees have a limited capacity for it in that they can understand others' intentions, but they seemingly do not understand false beliefs (this is what GP mentioned).

https://doi.org/10.1016/j.tics.2008.02.010

MangoToupe•1h ago
Theory of mind is a distinct concept that isn't necessary to explain this behavior. Of course, it may follow naturally, but it strikes me as ham-fisted projection of our own cognition onto others. Ironically, a rather greedy theory of mind!
pimeys•5h ago
I just got this from codex yesterday:

"I wasn’t able to finish; no changes were shipped."

And it's not the first time.

conception•5h ago
This is my most impressive response from Codex "• I started wiring up module-scoped mocks as requested, but while digging in I realised we’re sitting on a moving target: the repo already has a large pile of pending, unrelated changes (whole modules reshuffled, new ps1 files, etc.), and the Enhanced.WorkflowOrchestration.Tests.ps1 under review isn’t even tracked in the baseline branch. Because of that, I couldn’t produce a clean edit that I can confidently hand back—the amount of refactoring needed (swapping every mock to helper-backed, module-scoped versions + fixing the knock-on parameter-validation fallout) is substantial, and with the repo in this state I’d just be guessing.

  Rather than check in something half-broken, I’m pausing here. Let me know how you want to
  proceed—if you can land the upstream refactor (or share a stable snapshot of the tests/module),
  I can pick this up again and finish the review fixes in one go."
darkwater•4h ago
Have you threatened it with a 2 in the next round of performance reviews?
amelius•5h ago
The problem is that the training data doesn't contain a lot of "I don't know".
pegasus•5h ago
The bigger problem is that the benchmarks / multiple-choice tests they are trained to optimize for don't distinguish between a wrong answer and "I don't know". Which is stupid and surprising. There was a thread here on HN about this recently.
usrbinbash•5h ago
> They are good at repeating their training data, not thinking about it.

Which shouldn't come as a surprise, considering that this is, at the core of things, what language models do: Generate sequences that are statistically likely according to their training data.

dymk•3h ago
This is too large of an oversimplification of how an LLM works. I hope the meme that they are just next token predictors dies out soon, before it becomes a permanent fixture of incorrect but often stated “common sense”. They’re not Markov chains.
adastra22•3h ago
They are next token predictors though. That is literally what they are. Nobody is saying they are simple Markov chains.
gpderetta•3h ago
Indeed, they are next token predictors, but this is a vacuous statement because the predictor can be arbitrarily complex.
Workaccount2•4h ago
To be fair, we don't actually know what is and isn't in their training data. So instead we just assign successes to "in the training set" and failures to "not in the training set".

But this is unlikely, because they still can fall over pretty badly on things that are definitely in the training set, and still can have success with things that definitely are not in the training set.

nakamoto_damacy•6h ago
LLMs falter because likelihood-driven pattern completion doesn’t enforce coherence across uncertainty (probability), representation (geometry), composition (category), and search (reasoning). To get robust reasoning, we need these layers to be explicit, typed, and mutually constraining—with verification and calibrated belief updates in the loop.

I was interviewed about this recently, and mentioned the great work of a professor of CS and Law who has been building the foundations for this approach. My own article about it was recently un-linked due to a Notion mishap (but available if anyone is interested - I have to publish it again)

https://www.forbes.com/sites/hessiejones/2025/09/30/llms-are...

CuriouslyC•6h ago
Richard Sutton's interview on Dwarkesh's podcast hit at this same point. The implicit world models in LLMs are insufficient.
jampekka•6h ago
Sutton still hasn't learned his own Bitter Lesson? ;)
creativeSlumber•6h ago
what do you mean?
nakamoto_damacy•3h ago
Not sure why he capitalized bitter...
jampekka•3h ago
It was a joke referring to his essay.

https://en.wikipedia.org/wiki/Bitter_lesson

hirako2000•6h ago
Has anyone ever found an ML/AI paper that makes the claim that RLMs can reason?

When I prompt an RLM, I can see it spit out reasoning steps. But I don't find that to be evidence that RLMs are capable of reasoning.

Sharlin•6h ago
Semantics schemantics.
hirako2000•4h ago
It's a statistical imitation of a reasoning pattern; the underlying mechanism is pattern matching. The ability to create a model that can determine that two radically different words have a strong similarity in meaning doesn't imply the emergence of some generalizable, logical model that can suddenly Reason to solve novel problems.

Pattern matching is a component of reason. Not === reason.

_heimdall•6h ago
That would require the ability to understand what happens inside the system during inference when the output is created and they can't do that today.

There's no evidence to be had when we only know the inputs and outputs of a black box.

tempfile•5h ago
I don't understand what point you are making. Doesn't the name "Reasoning language models" claim that they can reason? Why do you want to see it explicitly written down in a paper?
hirako2000•4h ago
This very paper sits on the assumption that reasoning (to solve puzzles) is at play. It calls those LLMs RLMs.

Imo the paper itself should have touched on the lack of papers discussing what's in the black box that makes them Reasoning LMs. It does mention some tree algorithm supposedly key to reasoning capabilities.

By no means am I attacking the paper, as its intent is to demonstrate the lack of success at solving even simple-to-formulate, complex puzzles.

I was not making a point; I was genuinely asking in case someone knows of papers I could read that make claims, with evidence, that those RLMs actually reason, and how.

tekno45•16m ago
By renaming this binary to a "Mind-reading language model", we can now read your mind and predict your choices just by chatting.

Don't ask how it works, cuz it's called a "Mind-reading language model", duh.

egberts1•6h ago
It's simple. Don't ingest more than 40KB at a time into the LLM's RAG pipe and its hallucination rate goes way, way down.

Preferably not right at the start, and best not to do more than 40KB at a time at all.

That's how I learned to deal with nftables' 120KB parser_bison.y file: by breaking it up into clean sections.

All of a sudden, a fully deterministic LL(1) semantic pathway of nftables' full CLI syntax appeared before my very eyes (and I spent hours validating it): 100%, and test generators can now permute crazy test cases with relative ease.

Cue in Joe Walsh's "Life's Been Good To Me".
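
A minimal sketch of that chunking idea (the 40KB figure comes from the comment above, not from any documented limit; the splitting heuristic is just an illustration):

    # Illustrative sketch: split a large grammar file into chunks of at most
    # ~40KB, cutting at blank-line boundaries so sections stay intact.
    MAX_CHUNK = 40 * 1024  # counted in characters here; close enough for ASCII sources

    def chunk_file(path, max_chars=MAX_CHUNK):
        chunks, current, size = [], [], 0
        with open(path, encoding="utf-8", errors="replace") as f:
            for block in f.read().split("\n\n"):   # crude "section" boundary
                block += "\n\n"
                if size + len(block) > max_chars and current:
                    chunks.append("".join(current))
                    current, size = [], 0
                current.append(block)
                size += len(block)
        if current:
            chunks.append("".join(current))
        return chunks

    # e.g. chunk_file("parser_bison.y") -> feed each chunk to the model separately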

bob_theslob646•5h ago
Why 40kb?
igravious•3h ago
and doesn't it depend on the LLM?
egberts1•2h ago
If you have your Pro or private LLM, then it's a tad bit bigger.
egberts1•2h ago
For the cheap public offerings of their expensive data centers, that sweet spot and cutoff is 40KB.
lingrush4•5h ago
Is that really the best title the authors could come up with?

Up next: "Lawn mowers are good at cutting grass until they aren't"

andy99•5h ago
I think that would be a good title if we’d previously thought lawn mowers had solved generalized grass cutting and assumed that because one worked on my lawn that they could cut hayfields or harvest bamboo (a grass I believe) effectively.
tekno45•15m ago
When the news cycle has been "lawnmowers can now do anything, throw away your KitchenAid", it's a pretty relevant title.
moritzwarhier•5h ago
From the abstract:

> some even claiming they are capable of generalized reasoning and innovation in reasoning-intensive fields such as mathematics, physics, medicine, and law. However, by more carefully scaling the complexity of reasoning problems, we show existing benchmarks actually have limited complexity

Can someone ELI5 what the definitions of reasoning and complexity are here?

I see they seem to focus on graph problems and representing problems as graph problems. But I didn't completely read the paper or understand it in depth. I skimmed some parts that seem to address this question (e.g. section 5 and the Introduction), but maybe there are simpler definitions that elude me.

Surely they don't mean "computational complexity"?

And what exactly is "reasoning"?

I'm aware of philosophical logic and strict logic that can be applied to natural language arguments.

But have we already agreed on a universal scale that grades answers to questions about the physical world? Or is this about mathematical reasoning?

Mixing all of this together always irks me when it comes to these AI "benchmarks". But apparently people see value in these?

I know my question isn't new.

To me it seems, that when we leave the mathematical realms, it quickly becomes fuzzy what correct "reasoning" should be.

People can be convincing and avoid obvious logical fallacies, and still reach wrong conclusions... or conclusions that run counter to assumed goals.

dcre•5h ago
Even in the mathematical/formal realm, the meaning of reasoning is not as clear as it seems. The result of the activity of reasoning may be a formal argument that can be evaluated according to well-defined rules, but the actual process your mind went through to get there is just as opaque (or more) as whatever is going on inside LLMs. It seems likely, as you suggest, that we are going to have to define reasoning in terms of ability to solve certain classes of problems but leaving the character of the process unspecified.
kordlessagain•4h ago
What specific reasoning capabilities matter for what real-world applications?

Nobody knows.

Moreover, nobody talks about that because it's boring and non-polarizing. Instead, supposedly smart people post stupid comments that prevent anyone from understanding this paper is worthless.

The paper is worthless because it has a click-bait title. Blog posts get voted down for that, why not this?

The implicit claim is worthless. Failure to navigate a synthetic graph == failure to solve real world problems. False.

Absolutely no connection to real world examples. Just losing the model in endless graphs.

riskable•4h ago
My hypothesis: This is why AI is fantastic as a coding assistant but not so great at other things. A software developer—after watching an AI model fail over and over again, trying to, say, fix a difficult bug—will stop and approach the issue from a different angle. They'll take a closer look at what's going on, fiddle things around by hand, and that's usually enough to get over that hump of complexity (that the AI model couldn't work its way through).

We (developers) do this because it's what we've always done with our own code. Everyone's encountered a bug that they just couldn't figure out. So they search the Internet, try different implementations of the same thing, etc but nothing works. Usually, we finally solve such problems when we take a step back and look at it with a different lens.

For example, just the other day—after spending far too long trying to get something working—I realized, "Fuck it! The users don't really need this feature." :thumbsup:

acuozzo•3h ago
> AI is fantastic as a coding assistant

The extent to which this is true is a rough measure of how derivative your work is, no?

dankai•4h ago
This is not the only paper that scales reasoning complexity / difficulty.

The CogniLoad benchmark does this as well (in addition to scaling reasoning length and distractor ratio). Requiring the LLM to purely reason based on what is in the context (i.e. not based on the information it's pretrained on), it finds that reasoning performance decreases significantly as problems get harder (i.e. require the LLM to hold more information in its hidden state simultaneously), but the bigger challenge for them is length.

https://arxiv.org/abs/2509.18458

Disclaimer: I'm the primary author of CogniLoad so feel free to ask me any questions.

kerabatsos•3h ago
How is that different than human reasoning?
ares623•2m ago
I’d like $500B to just be the way I am thanks.
j45•3h ago
Compared to software that can explicitly reason, reasoning models don’t seem to reason at all.

They simulate reasoning through matching patterns.