> Language doesn't just describe reality; it creates it.
I wonder if this is a statement from the discussed paper or from the blog author. Haven't found the original paper yet, but this blog post very much makes me want to read it.
I never understand these kinds of statements.
Does the sun not exist until we have a word for it, did "under the rock" not exist for dinosaurs?
Think of it this way, though: the divisions that humans make between objects in the world are largely linguistic ones. For example, we say that the Earth is such-and-such an ecosystem with certain species occupying it. But that is a convenient shorthand, not a fully accurate description of reality. A more accurate description would be something like: organisms are continually changing through the complex process we call evolution, so much so that the species concept itself is not really that clear once you dig into it.
https://plato.stanford.edu/entries/species/
Where it really gets interesting, IMO, is when these divisions (which originally were mostly just linguistic categories) start shaping what's actually in the world. The concept of property is a good example. Originally it's just a legal term, but over time, it ends up reshaping the actual face of the earth, ecosystems, wars, migrations, on and on.
Melanie Mitchell (2021) "Why AI is Harder Than We Think." https://arxiv.org/abs/2104.12871
That sentence is not from this paper.
I partially agree, but the idea with AI is that you only need to bump into things and hurt yourself once. Then you have a good driver you can replicate at will.
> Mitchell in her paper compares modern AI to alchemy. It produces dazzling, impressive results but it often lacks a deep, foundational theory of intelligence.
> It’s a powerful metaphor, but I think a more pragmatic conclusion is slightly different. The challenge isn't to abandon our powerful alchemy in search of a pure science of intelligence.
But alchemy was wrong, and chasing the illusions created by the frauds who promoted it held back the advancement of science for a long time.
We absolutely should have abandoned alchemy as soon as we saw that it didn't work, and moved to figuring out the science of what worked.
https://home.cern/news/news/physics/alice-detects-conversion...
Yet alchemists developed and refined many important chemical processes including crystallization, distillation, evaporation, synthesis of acids/bases/salts, etc., as well as many useful substances and compounds from gunpowder to aqua regia. Also various dyes, drugs, and poisons. Their ranks included the likes of Paracelsus, Tycho Brahe, Boyle, and Newton.
This reminds me of Douglas Hofstadter, of Gödel, Escher, Bach fame. He rejected all of these statistical approaches towards creating intelligence and dug deep into the workings of the human mind [1]. Often, in the most eccentric ways possible.
> ... he has bookshelves full of these notebooks. He pulls one down—it’s from the late 1950s. It’s full of speech errors. Ever since he was a teenager, he has captured some 10,000 examples of swapped syllables (“hypodeemic nerdle”), malapropisms (“runs the gambit”), “malaphors” (“easy-go-lucky”), and so on, about half of them committed by Hofstadter himself.
>
> For Hofstadter, they’re clues. “Nobody is a very reliable guide concerning activities in their mind that are, by definition, subconscious,” he once wrote. “This is what makes vast collections of errors so important. In an isolated error, the mechanisms involved yield only slight traces of themselves; however, in a large collection, vast numbers of such slight traces exist, collectively adding up to strong evidence for (and against) particular mechanisms.”
I don't know when, where, and how the next leap in AGI will come, but it's very likely it will be through brute-force computation (unfortunately). So much for fifty years of observing Freudian slips.
[1]: https://www.theatlantic.com/magazine/archive/2013/11/the-man...
A much better framework for thinking about intelligence is simply as the ability to make predictions about the world (including conditional ones like "what will happen if we take this action"). Whether it's achieved through "true understanding" (however you define it; I personally doubt you can) or "mimicking" has no bearing on most of the questions about the impact of AI we are trying to answer.
The most depressing thing about AI summers is watching tech people cynically try to define intelligence downwards to excuse failures in current AI.
It's clear that humans consider humans as intelligent. Is a monkey intelligent? A dolphin? A crow? An ant?
So I ask you, what is the lowest form of intelligence to you?
(I'm also a huge David Lynch fan by the way :D)
I don't know what "the lowest form of intelligence" is; nobody has a clue what cognition means in lampreys and hagfish.
Reductive arguments may not give us an immediate forward path to reproducing these emergent phenomena in artificial brains, but it's also the case that emergent phenomena are by definition impossible to predict - I don't think anyone predicted the current behaviours of LLMs for example.
He made it because he predicted that it would have some effects enjoyable to him. Without knowing David Lynch personally, I can assume that he made it because he predicted other people would like it. Although, of course, it might have been some other goal. But unless he was completely unlike anyone I've ever met, it's safe to assume that before he started he had a picture of a world with Mulholland Drive existing in it that is somehow better than the world without it. He might or might not have been aware of it, though.
Anyway, that's too much analysis of Mr. Lynch. The implicit question is how soon an AI will be able to make a movie that you, AIPedant, will enjoy as much as you've enjoyed Mulholland Drive. And I maintain that how similar AI is to human intelligence, or how much "true understanding" it has, is completely irrelevant to answering that question.
I mean ... he is David Lynch.
We seem to be defining "predicted" to mean "any vision or idea I have of the future". Hopefully film directors have _some_ idea of what their film should look like, but that seems distinct from predicting how it will actually end up.
Currently many of our legal systems are set up this way, if in a fairly arbitrary fashion. Consider for example how sentience is used as a metric for whether an animal ought to receive additional rights. Or how murder (which requires deliberate, conscious thought) is punished more harshly than manslaughter (which can be accidental or careless.)
If we just treat intelligence as a descriptive quality and apply it to LLMs, we quickly realize the absurdity of saying a chatbot is somehow equivalent, consciously, to a human being. At least, to me it seems absurd. And it indicates the flaws of grafting human consciousness onto machines without analyzing why.
The only real and measurable thing is performance. And the performance of AI systems only goes up.
Question for the author: how are SOTA LLMs not common sense machines?
> Its [Large Language Models] ability to write code and summarize text feels like a qualitative leap in generality that the monkey-and-moon analogy doesn't quite capture. This leaves us with a forward-looking question: How do recent advances in multimodality and agentic AI test the boundaries of this fallacy? Does a model that can see and act begin to bridge the gap toward common sense, or is it just a more sophisticated version of the same narrow intelligence? Are world models a true step towards AGI or just a higher branch in a tree of narrow linguistic intelligence?
I'd put the expression "common sense" on the same level as having causal connections, and I'd also assume that SOTA LLMs do not build an understanding based on causality. AFAICS this is known as the "reversal curse"[0]: a model trained that "A is B" often fails to infer that "B is A" (the classic example being a model that knows Tom Cruise's mother is Mary Lee Pfeiffer but can't say who Mary Lee Pfeiffer's son is).
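For context, a rough sketch of how one might probe for this (not from the cited paper; `ask_model` is a hypothetical stub standing in for whatever LLM client you actually use):

```python
# Toy reversal-curse probe: ask a fact in both directions and compare.
# `ask_model` is a hypothetical placeholder; swap in a real LLM call.

def ask_model(question: str) -> str:
    """Stand-in for an LLM call; returns canned answers for the demo."""
    canned = {
        "Who is Tom Cruise's mother?": "Mary Lee Pfeiffer",
        "Who is Mary Lee Pfeiffer's son?": "I'm not sure.",
    }
    return canned.get(question, "I'm not sure.")

probes = [
    # (forward question, backward question, expected backward answer)
    ("Who is Tom Cruise's mother?", "Who is Mary Lee Pfeiffer's son?", "Tom Cruise"),
]

for forward_q, backward_q, expected in probes:
    forward = ask_model(forward_q)
    backward = ask_model(backward_q)
    handled = expected.lower() in backward.lower()
    print(f"forward: {forward!r} | backward: {backward!r} | reversal handled: {handled}")
```

A model that answers the forward question confidently but shrugs at the backward one shows exactly the asymmetry the "reversal curse" label refers to.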
The core misconception here is that LLMs are autonomous agents parroting away. No, they are connected to humans, tools, reference data, and validation systems. They are in a dialogue, and in a dialogue you quickly get into a place where nobody has ever been before. Take any 10 consecutive words from a human or an LLM and chances are nobody on the internet has strung those words together the same way before (see the toy sketch below).
LLMs are more like pianos than parrots, or better yet, like another musician jamming together with you, creating something that neither would create individually. We play our prompts on the keyboard and they play their "music" back to us. Good or bad depends on the player at the keyboard, who retains most of the control. To say LLMs are Stochastic Parrots is to discount the contribution of the human using them.
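A toy illustration of that 10-consecutive-words claim, with a tiny hard-coded list standing in for "the internet" (in reality you'd need a web-scale index to test it properly):

```python
# Toy sketch: check whether any 10-word window of a new sentence already
# appears verbatim in a reference corpus. The corpus here is a stand-in;
# the actual claim is about web-scale text.

def ten_grams(text: str) -> set[str]:
    words = text.lower().split()
    return {" ".join(words[i:i + 10]) for i in range(len(words) - 9)}

corpus = [
    "the quick brown fox jumps over the lazy dog near the river bank",
    "we play our prompts on the keyboard and they play their music back to us",
]
corpus_grams: set[str] = set()
for doc in corpus:
    corpus_grams |= ten_grams(doc)

new_sentence = "the quick brown fox quietly composes new music about the lazy dog"
windows = ten_grams(new_sentence)
novel = [g for g in windows if g not in corpus_grams]
print(f"{len(novel)} of {len(windows)} ten-word windows are unseen in the corpus")
```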
Related to intelligence, I think we have a misconception that it comes from the brain. No, it comes from the feedback loop between brain and environment. The environment plays a huge role in exploration, learning, testing ideas, and discovery. The social aspect also plays a big role, parallelizing exploration and streamlining the exploitation of discoveries. We are not individually intelligent; it is a social, environment-based process, not a pure-brain process.
Searching for intelligence in the brain is like searching for art in the paint pigments and canvas cloth.
However, almost all models (ChatGPT being the worst) are made virtually useless in this respect, since they are basically sycophantic yes-men. Why on earth does an "autocorrect on steroids" pretend to laugh at my jokes?
The next step is not to build faster models or throw more computing power at them, but to learn to play the piano.
This is not the case for LLMs. We don't know what the full state space looks like. Just because the state space that LLMs (lossily) compress is unimaginably huge doesn't mean you can assume the state you want is in it. So yes, you might get a string of symbols that nobody has seen before, but you still have no way of knowing whether A) it's the string of symbols you wanted, and B) if it isn't, whether the string of symbols you wanted can ever be generated by the network at all.
I don't get the problem with this, really. I think "reasoning" is a very fair and proper name for what LLMs do here. The model takes time and spits out tokens that it recursively uses to get a much better output than it otherwise would have. Is it actually reasoning with a brain the way a human would? No. But it is close enough that I don't see the problem with calling it "reasoning". What's the fuss about?
I'd say, no, they aren't, and there is value in understanding the different processes (and labeling them as such), even if they have outputs that look similar/identical.
Reasoning models are simply answering the same question twice with a different system prompt. It's a normal LLM with an extra technical step. Nothing else.
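For reference, the two-pass pattern that comment describes would look roughly like this (a minimal sketch; `chat` is a hypothetical stub rather than a real API, and whether this captures what production reasoning models actually do is precisely what's in dispute):

```python
# Minimal sketch of "same question twice, different system prompt".
# `chat` is a hypothetical placeholder; wire it to a real LLM client to try it.

def chat(system: str, user: str) -> str:
    """Stand-in for an LLM call; just echoes its inputs for the demo."""
    return f"[model reply | system={system!r} | user={user[:60]!r}...]"

def answer_with_reasoning_pass(question: str) -> str:
    # Pass 1: ask the model to think out loud, no final answer yet.
    scratchpad = chat(
        system="Think step by step and write out your reasoning. Do not give a final answer.",
        user=question,
    )
    # Pass 2: same question again, with the reasoning fed back in.
    return chat(
        system="Use the provided reasoning to give a concise final answer.",
        user=f"Question: {question}\n\nReasoning:\n{scratchpad}",
    )

print(answer_with_reasoning_pass("Is 3599 prime?"))
```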
They don't need to reach human-level intelligence, they just need to reach an acceptable level of intelligence so corporations can reduce labor costs.
Sure, it's bad at certain things, but you know what? Most real-world jobs don't need a genius either.
That is also a fallacy that comes from being too immersed in a professional environment filled with deep reasoning and a deep-rooted tradition of logic.
In the greater human civilization you will find an abundance of individuals lacking both reasoning and common sense.
Someone should let Waymo, Zoox, Pony.ai, Apollo Go, and even Tesla know!
I honestly didn't understand the arguments. Could someone TL;DR, please?
Re: Tesla, this company paid me nearly $250,000 under multiple lemon law claims for "self driving" software issues I identified that affected safety.
We all know what happened with Cruise, which was after i declared myself constructively dismissed.
I think the characterization in the article is fair, “self driving” is not quite there yet.