Yeah, not all humans do it. It's too energy expensive, biological efficiency wins.
As for ML... maybe next time, when someone figures out how to combine deductive with inductive reasoning, in a zillion small steps, with falsification built in (instead of pitting 100% of one against 100% of the other).
The problem with the AI discourse is that the language games are all mixed up and confused. We're not just talking about capability, we're talking about significance too.
The author defines American-style intelligence as "the ability to adapt to new situations, and learn from experience".
The author then argues that the current type of machine-learning-driven AI is American-style intelligent because it is inductive, which is not what was supposedly (?) being argued for.
Of course, current AI/ML models cannot adapt to new situations and learn from experience outside the scope of their context window without a retraining or fine-tuning step.
Intelligence, in the real world, is the ability to reason about logic. If 1 + 1 is 2, and 1 + 2 is 3, then 1 + 3 must be 4. This is deterministic, and it is why LLMs are not intelligent and can never be intelligent no matter how much better they get at superficially copying the form of output of intelligence. Probabilistic prediction is inherently incompatible with deterministic deduction. We're years into being told AGI is here (for whatever squirmy value of AGI the hype huckster wants to shill), and yet LLMs, as expected, still cannot do basic arithmetic that a child could do without being special-cased to invoke a tool call. How is it that we can go about ignoring reality for so long?
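To make the incompatibility this commenter asserts concrete, here is a toy Python sketch; the answer counts below are invented for illustration, not taken from any real model. Deterministic evaluation always returns the same result, while sampling from a learned distribution need not:

```python
# A toy contrast between deterministic deduction and probabilistic prediction.
# The answer counts below are invented for illustration.
import random

def deduce(a, b):
    return a + b  # deterministic: same inputs, same answer, every time

# A count-based "next-token" view of the same question: even when the right
# answer dominates the distribution, sampling occasionally emits a wrong one.
answer_counts = {"4": 97, "5": 2, "13": 1}  # hypothetical training statistics
tokens, weights = zip(*answer_counts.items())

print(deduce(1, 3))                                   # always 4
print(random.choices(tokens, weights=weights, k=10))  # mostly "4", not always
```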
The calculations are internal but they happen due to the orchestration of specific parts of the brain. That is to ask, why can't we consider our brains to be using their own internal tools?
I certainly don't think about multiplying two-digit numbers in my head in the same manner as when playing a Dm to a G7 chord that begs to resolve to a C!
The key thing is modeling. You must model a situation in a useful way in order to apply logic to it. And then there is intention, which guides the process.
> With recent advances in AI, it becomes ever harder for proponents of intelligence-as-understanding to continue asserting that those tools have no clue and “just” perform statistical next-token prediction.
??????? No, that is still exactly what they do. The article then lists a bunch of examples in which this is trivially exactly what is happening.
> “The cat chased the . . .” (multiple connections are plausible, so how is that not understanding probability?)
It doesn't need to "understand" probability. "The cat chased the mouse" shows up in the distribution 10 times. "The cat chased the bird" shows up in the distribution 5 times. Absent any other context, with the simplest possible model, it now has a probability of 2/3 for the mouse and 1/3 for the bird. You can make the probability calculations as complex as you want, but how could you possibly trot this out as an example that an LLM completing this sentence isn't a matter of trivial statistical prediction? Academia needs an asteroid, holy hell.
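For what it's worth, the counting model described above fits in a few lines of Python; this is a minimal sketch using the frequencies from the comment:

```python
# A minimal count-based completion model, using the frequencies from the
# comment above: no "understanding", just relative frequency.
from collections import Counter

corpus = ["the cat chased the mouse"] * 10 + ["the cat chased the bird"] * 5

counts = Counter(line.split()[-1] for line in corpus
                 if line.startswith("the cat chased the"))
total = sum(counts.values())
for word, n in counts.items():
    print(f"P({word} | 'the cat chased the') = {n}/{total} = {n / total:.3f}")
# -> mouse: 10/15 ≈ 0.667, bird: 5/15 ≈ 0.333
```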
[I originally edited this into my post, but two people had replied by then, so I've split it off into its own comment.]
Prove that humans do it.
"Intelligence in AI" lacks any existential dynamic, our LLMs are literally linguistic mirrors of human literature and activity tracks. They are not intelligent, but for the most part we can imagine they are, while maintaining sharp critical analysis because they are idiot savants in the truest sense.
For example, we all have an internal physics model in our heads that's built up through our continuous interaction with our environment. That acts as our shared context. That's why if I tell you to bring me a cup of tea, I have a reasonable expectation that you understand what I requested and can execute this action intelligently. You have a conception of a table, of a cup, of tea, and critically, our conception is similar enough that we can both be reasonably sure we understand each other.
Incidentally, when humans end up talking about abstract topics, they often run into the exact same problem as LLMs, where the shared context is missing and we end up talking past each other.
The key problem with LLMs is that they currently lack this reinforcement loop. The system merely strings tokens together in a statistically likely fashion, but it doesn't really have a model of the domain it's working in to anchor them to.
In my opinion, stuff like agentic coding or embodiment with robotics moves us towards genuine intelligence. Here we have AI systems that have to interact with the world, and they get feedback on when they do things wrong, so they can adjust their behavior based on that.
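As a rough sketch of that loop (in Python; `propose_code` and `run_in_sandbox` are hypothetical stand-ins for a model call and an execution sandbox, not real APIs), the point is only the act, observe, adjust cycle:

```python
# A minimal sketch of the feedback loop described above. `propose_code` and
# `run_in_sandbox` are hypothetical stand-ins for a model call and an
# execution environment.
def agent_loop(task, propose_code, run_in_sandbox, max_tries=5):
    context = [task]
    for _ in range(max_tries):
        code = propose_code(context)        # the system acts on the world
        ok, output = run_in_sandbox(code)   # the world pushes back
        if ok:
            return code
        context.append(f"That failed with: {output}")  # feedback anchors the next attempt
    raise RuntimeError("goal not reached within budget")
```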
barishnamazov•1h ago
Day 1: Fed. (Inductive confidence rises)
Day 100: Fed. (Inductive confidence is near 100%)
Day 250: The farmer comes at 9 AM... and cuts its throat. Happy Thanksgiving.
The Turkey was an LLM. It predicted the future based entirely on the distribution of the past. It had no "understanding" of the purpose of the farmer.
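One simple way to quantify that rising confidence (my framing, not the comment's) is Laplace's rule of succession: after n consecutive fed days, P(fed tomorrow) = (n + 1) / (n + 2):

```python
# A sketch of the turkey's inductive confidence via Laplace's rule of
# succession: after n consecutive "fed" days, P(fed tomorrow) = (n+1)/(n+2).
for n in (1, 100, 249):
    print(f"day {n}: P(fed tomorrow) = {(n + 1) / (n + 2):.3f}")
# day 1: 0.667, day 100: 0.990, day 249: 0.996 -- and then day 250 arrives
```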
This is why Meyer's "American/Inductive" view is dangerous for critical software. An LLM coding agent is the Inductive Turkey example. It writes perfect code for 1000 days because the tasks match the training data. On Day 1001, you ask for something slightly out of distribution, and it confidently deletes your production database because it added a piece of code that cleans your tables.
Humans are inductive machines, for the most part, too. The difference is that, fortunately, fine-tuning them is extremely easy.
usgroup•1h ago
data: T T T T T T F
rule1: for all i: T
rule2: for i < 7: T else F
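A quick Python sketch of the underdetermination (my illustration of the example above): both rules agree with the first six observations, and only the seventh data point separates them:

```python
# Both rules fit the first six observations; only the seventh separates them.
data = [True] * 6 + [False]  # T T T T T T F (1-indexed below)

def rule1(i):  # "for all i: T"
    return True

def rule2(i):  # "for i < 7: T else F"
    return i < 7

for name, rule in [("rule1", rule1), ("rule2", rule2)]:
    fits_first_six = all(rule(i) == data[i - 1] for i in range(1, 7))
    fits_all_seven = all(rule(i) == data[i - 1] for i in range(1, 8))
    print(f"{name}: fits first six = {fits_first_six}, fits all seven = {fits_all_seven}")
```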
p-e-w•1h ago
But we already know that LLMs can do much better than that. See the famous “grokking” paper[1], which demonstrates that with sufficient training, a transformer can learn a deep generalization of its training data that isn’t just a probabilistic interpolation or extrapolation from previous inputs.
Many of the supposed “fundamental limitations” of LLMs have already been disproven in research. And this is a standard transformer architecture; it doesn’t even require any theoretical innovation.
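For context, here is a stripped-down sketch of the kind of experiment behind grokking: learning modular addition, with a small MLP standing in for the paper's transformer, assuming PyTorch. This is only an illustration of the setup; real runs need far more steps than a quick test to see the late jump in validation accuracy.

```python
# A minimal sketch of a grokking-style experiment, assuming PyTorch.
# Task: predict (a + b) % p. With heavy weight decay and enough steps,
# validation accuracy famously jumps long after the training split is
# memorized. (MLP used here for brevity; the paper uses a transformer.)
import torch
import torch.nn as nn

p = 97  # modulus
data = torch.tensor([(a, b, (a + b) % p) for a in range(p) for b in range(p)])
perm = torch.randperm(len(data))
train, val = data[perm[: len(data) // 2]], data[perm[len(data) // 2 :]]

def onehot(xy):  # encode the two operands as a concatenated one-hot vector
    return torch.cat([nn.functional.one_hot(xy[:, 0], p),
                      nn.functional.one_hot(xy[:, 1], p)], dim=1).float()

model = nn.Sequential(nn.Linear(2 * p, 256), nn.ReLU(), nn.Linear(256, p))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)

for step in range(50_000):  # grokking shows up late; short runs only memorize
    loss = nn.functional.cross_entropy(model(onehot(train)), train[:, 2])
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 5_000 == 0:
        with torch.no_grad():
            acc = (model(onehot(val)).argmax(1) == val[:, 2]).float().mean()
        print(f"step {step}: train loss {loss.item():.3f}, val acc {acc.item():.3f}")
```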
[1] https://arxiv.org/abs/2301.02679
encyclopedism•8m ago
LLMs are known quantities in that they are an algorithm! Humans are not. PLEASE at the very least grant that the jury is STILL out on what humans actually are in terms of their intelligence; that is, after all, what neuroscience is still figuring out.
barishnamazov•6m ago
Not that humans can't make these mistakes (in fact, I have nuked my home directory myself before), but I don't think it's a specific problem some guardrails can solve currently. I'm looking for innovations (either model-wise or engineering-wise) that'd do better than letting an agent run code until a goal is seemingly achieved.
glemion43•48m ago
Security is my only concern, and for that we have a team doing only this, but that's also just a question of time.
Whatever LLMs can do today doesn't matter. What matters is how fast they progress, and we'll see whether in 5 years we're still using LLMs, or AGI, or some kind of world models.
bdbdbdb•28m ago
"Humans aren't perfect"
This argument always comes up. The existence of stupid / careless / illiterate people in the workplace doesn't excuse spending trillions on computer systems that use more energy than entire countries and are still unreliable.