The M or B game breaks down when you play with someone who knows obscure people you've never heard of. Either you can't recognize their references, or your sense of "semantic distance" differs from theirs. The solution is to match knowledge levels: experts play with experts, generalists with generalists.
The same applies to decoding ancient texts: if ancient civilizations focused on completely different concepts than we do today, our modern semantic models won't help us understand their writing.
_And that's the actual reason they work._ Underfit models don't just approximate: they interpolate, extrapolate, generalize a bit, and ideally smooth out the occasional total garbage mixed in with your data. In fact, diffusion models work so well because they can correct their own garbage! If extra fingers start to show up in step 5, then steps 6 and 7 still have a chance to reinterpret that as noise and correct back into distribution.
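For anyone who hasn't looked at the mechanics of that self-correction, here is a minimal sketch of DDPM-style ancestral sampling (generic pseudocode, not any particular model's implementation; `model` is assumed to be a network that predicts the noise present at step `t`). Because every step re-denoises the entire current state, an artifact introduced at one step is just more "noise" for later steps to explain away:

```python
# Minimal sketch of DDPM-style ancestral sampling (illustrative only).
# `model(x, t)` is assumed to predict the noise present in x at timestep t.
import torch

def sample(model, betas, shape):
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(shape)  # start from pure Gaussian noise
    for t in reversed(range(len(betas))):
        eps_hat = model(x, t)                              # predicted noise at this step
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps_hat) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise            # each step can undo earlier artifacts
    return x
```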
And then there's all the stuff you can do with diffusion models. In my research I hack into the model and use it to decompose images into the surface material properties and lighting! That doesn't make much sense as averaging of memorized patches.
Given all that, it is a very useful interpretation. But I wouldn't take it too literally.
The paper was published in December last year and addresses your concerns head-on. For example, from the introduction:
"if the network can learn this ideal score function exactly, then they will implement a perfect reversal of the forward process. This, in turn, will only be able to turn Gaussian noise into memorized training examples. Thus, any originality in the outputs of diffusion models must lie in their failure to achieve the very objective they are trained on: learning the ideal score function. But how can they fail in intelligent ways that lead to many sensible new examples far from the training set?"
Their answers to these questions are very good and also cover things like correcting the output of previous steps. But the proof is in the pudding: the outputs of their alternative procedure match the models they're explaining very well.
I encourage you to read it; maybe you'll even find a new way to decompose images into surface material properties and lighting as a result.
And I was impressed by the close fit to real CNNs/ResNets and even to UNets. But what that shows is that the real models are heavily overfit. The datasets they are using for evaluation here are _tiny_.
Edit: oh the talk is here btw, if anyone is curious https://youtu.be/c-eIa8QuB24
It just assumes that your answers are going to be reasonably bread-like or reasonably Mussolini-like, and doesn't think laterally at all.
It just kept asking me about varieties of baked goods.
edit: It did much better after I added some extra explanation -- that it could be anything, that it may be very unlike either choice, and that it shouldn't try to narrow down too quickly
If you used word2vec directly, it would be exactly the right thing to play this game with. Those embeddings exist inside an LLM too, but the LLM is trained to respond like text found online, not to play this game.
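For what it's worth, that version of the game is only a few lines; here's a sketch assuming gensim and its pretrained word2vec-google-news-300 vectors (the secret word has to be in the vocabulary, and capitalization matters for proper nouns):

```python
# Answer the "M or B" question purely by cosine similarity in word2vec space.
# Assumes gensim is installed; the pretrained vectors are a ~1.6 GB download.
import gensim.downloader as api

wv = api.load("word2vec-google-news-300")

def closer_to(secret: str, a: str = "bread", b: str = "Mussolini") -> str:
    return a if wv.similarity(secret, a) > wv.similarity(secret, b) else b

print(closer_to("croissant"))  # bread, presumably
print(closer_to("Franco"))     # Mussolini, presumably
```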
I agree with the gist of the article (which IMO is basically that universal computation is universal regardless of how you perform it), but there are two big issues that prevent this observation from helping us in a practical sense:
1. Not all models are equally efficient. We already have many methods to perform universal search (e.g., Levin's, Hutter's, and Schmidhuber's versions), but they are painfully slow despite being optimal in a narrow sense that doesn't extrapolate well to real world performance.
2. Solomonoff induction is only optimal for infinite data (i.e., it can be used to create a predictor that asymptotically dominates any other algorithmic predictor). As far as I can tell, the problem remains totally unsolved for finite data, due to the additive constant that results from the question: which universal model of computation should be applied to finite data? You can easily construct a Turing machine that is universal and perfectly reproduces the training data, yet nevertheless dramatically fails to generalize. No one has made a strong case for any specific natural prior over universal Turing machines (and if you try to define some measure to quantify the "size" of a Turing machine, you realize this method starts to fail once the transition table becomes large enough to start exhibiting redundancy).
But the second case is that you encounter some phenomenon here in our ordinary world. And in that case I think you can do way better by reasoning about the phenomenon and trying to guess at plausible mechanics based on your preexisting knowledge of how the world works. In particular, I think guessing that "there is some short natural language description of how the phenomenon works, based on a language grounded in the corpus of human writing" is a very reasonable prior.
Consider how every chair you've seen is different. Yet they are all chairs. What is 'chair-ness'? The Forms theory is that there is a single ideal chair, and this truly exists - not materially, but non-materially as a concept or way of understanding chairs, yet real. That is, the 'essence' of a chair exists. All chairs are imperfect representations of the essence of a chair, approaching the perfect chair, or representing the essence of a chair, to a greater or lesser degree.
You might find Neal Stephenson's book Anathem a wonderful read. It's one of my favorites. I want to tell you why without spoilers but I can't (it won't be what you expect) -- but if you read it perhaps you'll find it interesting re this topic.
> One explanation for why this game works is that there is only one way in which things are related
There is not; relatedness is a completely non-transitive relationship.
On another point: suppose you keep the same vocabulary but permute the meanings of the words. The neural network will still learn relationships, completely different ones, and its representation may converge toward a better compression for that set of words, but I'm dubious that this new compression scheme will resemble the previous one (?)
I would say that given an optimal encoding of the relationships, we can achieve an extreme compression, but not all encodings lead to the same compression at the end.
If I add 'bla' between every word in a text, that is easy to compress. But if I instead add an increasing sequence of words between each pair of words, the meaning is still there, yet the compression will not be the same, as the network will try to generate the words in between.
(thinking out loud)
There are billions of human-written texts, grounded in shared experience, that make our AI good at language. We don't have that for whales.
Thus, to your point, assuming communication, because "there's nothing really special about speech", does that mean we would be able to understand a lion, if the lion could speak? Wittgenstein would say probably not. At least not initially and not until we had built shared lived experiences.
I mean who knows, maybe their perception of these shared experiences would be different enough to make communication difficult, but still, I think it's undeniably shared experience.
I think that's the core question being asked and that's the one I have a hard time seeing how it'd work.
My thinking is that if something is capable of human-style speech, then we'd be able to communicate with them. We'd be able to talk about our shared experiences of the planet, and, if we're capable of human-style speech, likely also talk about more abstract concepts of what it means to be a human or lion. And potentially create new words for concepts that don't exist in each language.
I think the fact that human speech is capable of abstract concepts, not just concrete concepts, means that shared experience isn't necessary to have meaningful communication? It's a bit handwavy, depends a bit on how we're defining "understand" and "communicate".
I don't follow that line of reasoning. To me, in that example, you're still communicating with a human, who, regardless of culture or geographic location, still shares an immense amount of life experience with you.
Or, they're not. For example (an intentionally extreme one), I bet we'd have a super hard time talking about homotopy type theory with a member of an isolated Amazon rainforest tribe. Similarly, I'd bet they have their own abstract concepts that they would not be able to easily explain to us.
And if we're saying the lion can speak human, then I think it follows that they're capable of this abstract thought, which is what I think is making the premise confusing for me. Maybe if I change my thinking and let's just say the lion is speaking... But if they're speaking a "language" that's capable of communicating concrete and abstract concepts, then that's a human-style language! And because we share many concrete concepts in our shared life experience, I think we would be able to communicate concrete concepts, and then use those as proxies to communicate abstract concepts and hence all concepts?
Obviously it's impossible to communicate even 90% of human experience with lions or people with mental disabilities. But if a translation model increases communication even 1%, bringing everybody up to the level of a Kevin Richardson, it's a huge win. E.g., a pair of smart glasses that labels the mood of the cat. Nobody cares about explaining to a lion why humans wear hats, and of course no explanation beats being an old human who has worn hats for a variety of reasons.
I think it's unlikely you could make an LLM that gives a lion knowledge via audio only, but for other animals it's very possible.
Which isn’t saying much, it still couldn’t explain Lion Language to us, it could just generate statistically plausible examples or recognize examples.
To translate Lion speech you’d need to train a transformer on a parallel corpus of Lion to English, the existence of which would require that you already understand Lion.
Who knows; we don't really have good insight into how this information loss, or disparity, grows. Is it linear? Exponential? Presumably there is a threshold beyond which we simply have no ability to translate while retaining a meaningful amount of the original meaning.
Would we know it when we tried to go over that threshold?
Sorry, I know I'm rambling. But it has always been regularly on my mind and it's easy for me to get on a roll. All this LLM stuff only kicked it all into overdrive.
For example, given thousands of English sentences with the word "sun", the vector embedding encodes the meaning. Assuming the lion word for "sun" is used in much the same context (near lion words for "hot", "heat", etc.), it would likely end up in a similar spot, near the English word for sun. And because of our shared context of living on earth/being animals, I reckon many words will likely be used in similar contexts.
That's my guess though, note I don't know a ton about the internals of LLMs.
The reason I think this is from evidence in human language. Spend time with any translator and they'll tell you that some things just don't really translate. The main concepts might, but there's subtleties and nuances that really change the feel. You probably notice this with friends who have a different native language than you.
Even same-language communication is noisy. You even misunderstand your friends and partners, right? The people who have the greatest chance of understanding you. It's because the words you say don't convey all the things in your head. It's heavily compressed. Then the listener has to decompress from those lossy words. I mean, you can go to any Internet forum and see this in action: there's more than one way to interpret anything. Seems most internet fights start this way. So it's good to remember that there isn't an objective communication. We improperly encode as well as improperly decode. It's on us to try to find out what the speaker means, which may be very different from the words they say (take any story or song to see the more extreme versions of this; the feature is heavily used in art).
Really, that comes down to the idea of universal language[0]. I'm not a linguist (I'm an AI researcher), but my understanding is most people don't believe it exists and I buy the arguments. Hard to decouple due to shared origins and experiences.
But I think those ambiguous cases can still be understood/defined. You can describe how this one word in lion doesn't neatly map to a single word in English and is used in a few different ways, some of which we might not have a word for in English, in which case we would likely adopt the lion word.
Although note I do think I was wrong about embedding a multilingual corpus into a single space. The example I was thinking of was word2vec, and that appears to only work with one language. I did find some papers showing that you can do unsupervised alignment between the two spaces, but I don't know how successful that is, or how it would treat these ambiguous cases.
> I don't think a universal language is implied by being able to translate without a rosetta stone.
Depends what you mean. If you want a 1-to-1 translation then your languages need to be isomorphic. For lossy translation you still need some intersection within the embedding space, and that intersection will determine how well you can translate. It isn't unreasonable to assume that there are some universal traits here, as any being lives in this universe and we're all subject to these experiences at some level, right? But that could still result in some translations so lossy they're effectively impossible, right?

Another way you can think about it, though, is that language might not be dependent on experience. If it is completely divorced, we may be able to understand anyone regardless of experience. If it is mixed, then results can be mixed.
> The example I was thinking of was word2vec
Be careful with this. If you haven't actually gone deep into the math (more than 3Blue1Brown) you'll find some serious limitations. Play around with it and you'll experience these too. Distances in high dimensions are not well defined, and there aren't smooth embeddings here. You have a lot of the same problems as embedding methods like t-SNE. It certainly has uses, but it is far too easy to draw the wrong conclusions from them. Unfortunately, both of these are often spoken about incorrectly (think as incorrect as most people's understanding of things like Schrodinger's Cat or the Double Slit experiment, or really most of QM: there are some elements of truth, but it's communicated through a game of telephone).

Apparently one thing you could do is train a word2vec on each corpus and then align them based on proximity/distances. Apparently this is called "unsupervised" alignment and there's a tool by Facebook called MUSE to do it. (TIL, thanks ChatGPT!) https://github.com/facebookresearch/MUSE?tab=readme-ov-file
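For the curious, the core of that alignment (once you have, or have bootstrapped, a seed dictionary of word pairs) is just an orthogonal Procrustes problem; a rough sketch, with MUSE's adversarial bootstrapping of the dictionary left out:

```python
# Align two independently trained word2vec spaces with orthogonal Procrustes.
# X holds embeddings of seed words in language A; Y holds embeddings of their
# translations in language B (same row order). MUSE's unsupervised mode bootstraps
# this seed dictionary instead of assuming one.
import numpy as np

def procrustes_align(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Return the orthogonal matrix W minimizing ||X @ W - Y||_F."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# After computing W, nearest neighbours of (x @ W) among the language-B vectors
# are candidate translations for the language-A word with embedding x.
```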
Although I wonder if there are better embedding approaches now as well. Word2Vec is what I've played around with from a few years ago, I'm sure it's ancient now!
Edit: that's what I get for posting before finishing the article! The whole point of their research is to try to build such a mapping, vec2vec!
It's also pretty much how humans acquire language. No one is born knowing English or Spanish or Mandarin.
Reminds me of the quote:
“But people have an unfortunate habit of assuming they understand the reality just because they understood the analogy. You dumb down brain surgery enough for a preschooler to think he understands it, the little tyke’s liable to grab a microwave scalpel and start cutting when no one’s looking.”
― Peter Watts, Echopraxia
> In broad terms, the Hypothesis claims that the limits of the language one speaks are the limits of the world one inhabits (also in Wittgenstein), that the grammatical categories of that language define the ontological categories of the word, and that combinatory potentials of that language delimit the complexity of that world (this may be Jim Brown's addition to the complex Hypothesis.) The test then is to see what changes happen in these areas when a person learns a language with a new structure, are they broadened in ways that correspond to the ways the structure of the new language differs from that of the old?
I'd expect incomprehensible language from something that is wildly different from us, e.g. sentient space crystals that eat radiation.
On the other hand, we still haven't figured out dolphin language (the most interesting guess was that they shout 3D images at each other).
Would you care to expound?
- the % of shared experience/context with a mammal is greater than the % shared with a mollusk
- the gradient starts in communicating with other humans
and that Wittgenstein wasn't wrong in trying to use technology/science to bridge the context gap, he was just early.
I train my cat and while I can't always understand her I think one of the most impressive features of the human mind is to be able to have such great understanding of others. We have theory of mind, joint attention, triadic awareness, and much more. My cat can understand me a bit but it's definitely asymmetric.
It's definitely not easy to understand other animals. As Wittgenstein suggests, their minds are alien to us. But we seem to be able to adapt. I'm much better at understanding my cat than my girlfriend (all the local street cats love me, and I teach many of them tricks) but I'm also nothing compared to experts I've seen.
Honestly, I think everyone studying AI could benefit by spending some more time studying animal cognition. While not like computer minds, these are testable "alien minds" and can help us better understand the general nature of intelligence.
But Lion is not just animal, it is not just mammal, it is something more. Something which I have no idea how we would communicate with.
> But Lion is not just animal, it is not just mammal, it is something more.
Are you saying "lion" is a stand-in for "an arbitrary creature"? If so, yes, that is how I understand Wittgenstein and it doesn't change my comment.

But lions, and us, are not just animals + mammals. Being a lion or a human means more. Ultimately, there is a uniquely human or lion element. Wittgenstein is saying we cannot communicate this.
You probably didn't adapt to understanding cats as much as cats have adapted over millennia to be understood by humans. Working with and being understood by the dominant species that is humans is a big evolutionary advantage.
Understanding a wild animal like a lion is a different story. There is a reason why most specialists will say that keeping wild animals as pets is a bad idea: they tend to be unpredictable, which, in other words, means we don't understand them.
You or I? Yeah, we're probably not going to understand a lion very well. But someone who works at the zoo? A lion tamer? Someone studying lion cognition? Hell, people have figured out how to train hippos so that they can clean their teeth[0], and these are one of, if not the, most aggressive animals in the world. Humans have gotten impressively good at communicating with many different animals and training them. There are plenty of Steve Irwin types who have strong understandings of many creatures that would be quite alien to the rest of us. That requires at least one side to have a strong understanding of the other's desires and how they perceive the world. But me? I have no doubt that hippo would murder me.
My point isn't so much about would we understand the lion, but rather could we. Wittgenstein implied we wouldn't be able to. I'm pointing to evidence that we, to at least some degree, can. How much we ultimately will be able to, is still unknown. But I certainly don't think it is an impossible task.
OTOH feral cats are known for being highly social compared to other cats, forming large semi-collaborative colonies. And adult cats have much more difficulty socializing to humans than adult dogs, even if they don't have trauma/etc. I suspect the real story of cat domestication goes both ways: an unusually gregarious subspecies of African wildcat started forming colonies near human settlements and forming cross-carnivore collaborations with the humans who lived there. This was also true for dogs - it likely started with unusually peaceful Siberian wolves - but I believe cats were more "accidental." Humans have been deliberately creating dog breeds since antiquity, but with a tiny number of exceptions cat breeds are modern. I doubt ancient humans ever "bred" cats like they did dogs, it seems closer to natural selection.
But yes, accidental domestication happens, as does non-human cross-species collaboration. Another famous example is the cleaner fish and sharks. Animals also frequently collaborate with plants. Ants even have farms, of both fungi and other insects.
Huh. Apparently attention isn't all we need in order to parse that sentence.
Also, your comment still made me laugh. Women can be mysterious...
Ps. I am excited about Google’s Gemma dolphin project (https://blog.google/technology/ai/dolphingemma/), but I would prefer if they chose elephants instead of dolphins as the subject, since we live on land, not in water. This way, more emphasis could be placed on core research, and immediate communication feedback would be possible.
I don't care that it might be better, and I shall hold you personally responsible should this come to pass.
--- After looking up “Universal Grammar,” here are my thoughts: ---
Until we identify these features and prove that other species lack them, the concept of Universal Grammar remains uncertain. Only if we pinpoint what’s unique to humans and fail to help other species develop it can we confidently claim Universal Grammar is correct.
Personally, I think these innate features might not be entirely unique to humans. Even humans need sufficient social interaction and language exposure to truly behave like 'humans'. Feral children (https://en.wikipedia.org/wiki/Feral_child), for example, behave much like animals. I also imagine early humans had only a few words and symbols and acted similarly. However, if feral children or ancient humans were exposed to today’s social environment, they could likely learn our complex language quite quickly, because those words, symbols, and pronunciations are well-suited to humans for expressing our wills.
Perhaps, we humans are just lucky to have found effective ways to express ourselves -- symbols and words as we know them now -- and to pass them down through generations. This has allowed language and cognition to grow exponentially, giving us an edge over other species.
P.S. Our failure so far to teach animals structured language is exactly why I admire projects like Google’s Dolphin Gemma. To succeed, we need to tailor teaching methods to their species-specific traits—starting by understanding how they express their intentions/wills, and then hopefully helping enrich their naming systems and overall communication. We might even adapt this to help them learn from human contexts, much like large language models do.
The thing that makes humans nearly unique in the animal kingdom is structured language. I don't need separate words for "leopard in the bush", "snake in the bush", "leopard in the tree", and "snake in the tree", because we have grammar rules that allow me to combine the words for "leopard" and "tree" with a distinction of how the leopard relates to the tree.
I say nearly unique, because there is some evidence that whale songs are structured.
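The combinatorial payoff is easy to see with a toy example (purely illustrative):

```python
# Four content words plus one relational construction already yield four distinct
# messages, and the count multiplies as vocabulary grows; a word-per-situation
# lexicon would need a separate call for every combination.
from itertools import product

animals = ["leopard", "snake"]
places = ["bush", "tree"]

messages = [f"{a} in the {p}" for a, p in product(animals, places)]
print(messages)       # ['leopard in the bush', 'leopard in the tree', ...]
print(len(messages))  # 4 today; 10 animals x 10 places would give 100
```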
But what is “structured language”? I would argue it’s just another layer of naming—naming applied to already named names.
If animals have the ability to map leopard, snake, bush, tree in reality to names in their naming system, then why can’t they apply that ability again—create a new name like ‘leopard-bush’ or ‘snake-leopard’, simply by combining the existing solid names in their system? (Similarly, words like “in” could also be added.)
You might say they can’t—because they lack grammar rules. And I’d argue the issue runs even deeper: they don’t have a valid naming system to begin with, nor the ecological fitness or incentive needed to develop further naming.
If they can name ‘leopard’ within a valid system, that name should be reusable. They should be able to work on those names—as objects themselves. The process is the same. It requires the same abstraction: treat a thing as a nameable unit, then name it again. And that already starts to look like compositionality—or grammar.
The real difference is that humans ended up with a robust naming system. We got lucky—we gradually invented stable symbols that let us name on top of previous names. We could write them down, save them, pass them on—and then build new names on top of that naming recursively, iteratively, and divisively.
And that’s likely where the fitness gain came from. It’s a lucky, genuine self-bootstrap story.
The main difference seems to be in having sentences structured into verbs and their associated nouns (which may encode various roles, e.g. patients, agents, instruments, beneficiaries, subjects of intransitive or copulative verbs, nominal predicates, results etc.).
As far as we could determine until now, the language of most animals consists only of a set of nouns, which typically have an associated implicit action.
For example:
"Lion!" (meaning "Climb up the tree!");
"Eagle!" (meaning "Take cover under branches!");
"Children!" (meaning "Come here, to mother!");
"Bananas!" (meaning "Come here to eat good food!");
and so on.
Some apes have been trained to compose very simple sentences, as complex as agent-verb-patient, but it is not known whether a similar form of communication exists between wild apes.
To make it unsurprising, all you need is an alternative hypothesis that can explain the observed facts.
How about this one: humans got the ability to abstract deeper than other animals. While some animals can deal with concepts closely associated with real phenomena, and therefore can use words to name things or maybe even actions, they cannot go further and use abstractions over abstractions. As a result they cannot deal with a recursive grammar.
Or another hypothesis: humans got an ability to reflect on themselves, so they came to concepts like "motive (of action)", which made it possible for them to talk about actions, inventing word-categories that classify actions by their supposed motive. But it is not just that: people started to feel that simple words do not describe reality well enough, because a) actions have more than one motive, and b) they started to see actions everywhere (the stone is lying flat? that is the action of the stone, which strictly speaking cannot act). All this led them to invent complex grammars to describe the world where they live. Animals, on the other hand, can't get it, because their world is much simpler. They don't see the stone "lying": it is a stone, it cannot do anything.
These hypotheses don't seem to explain everything, but I have just invented them; I didn't really try to explain everything. Thinking about it, though, I'm in favor of the second one: all people are schizophrenics if you compare them with animals. They believe in things that don't exist, they see inanimate objects as animate, they talk with themselves (inner dialog). It just happens that the particular kind of schizophrenia most people are struggling with is a condition that allows them not to tear apart all connections with Reality. It is more like borderline schizophrenia.
I prefer to think that animals simply haven’t been lucky enough to invent a suitable naming system that could serve as the foundation of their civilization. If they were fortunate, they might bootstrap their own form of civilization through recursive or iterative divisive naming -- naming the act of naming itself. Naming is the foundation, and everything else naturally follows from there.
As Laozi said over 2,500 years ago in the Dao De Jing:
“The Tao that can be spoken is not the eternal Tao.
The name that can be named is not the eternal name.
The nameless is the origin of Heaven and Earth;
The named is the mother of all things.”
(Just days ago, AnthropicAI even mentioned Laozi in a tweet:
https://x.com/AnthropicAI/status/1925926102725202163 )

P.S. By the way, the entire Dao De Jing advocates thinking beyond symbols, transcending naming—not being constrained by it—and thus connecting with reality and following the Tao, which embodies the will of love, frugality, and humility. Yet few truly understand it, as Laozi emphasized in his book. Perhaps we humans can transcend naming altogether eventually; with advancements like Musk’s brain-communication chip, we might soon discover how human wills are encoded and potentially move beyond the limitations of naming.
Do we have hardware for playing Super Mario or dancing to pop music? Animals can't do that either.
If you read the criticism about the Koko project, it's that Patterson prompted her to make certain signs. I watched some clips of the signs, and it's very obvious that that is exactly what she did.
Animals can only communicate in a limited fashion. They do not have some "hidden" intelligence that you are trying to find.
I really doubt both their methods and your conclusion. The project tried to teach animals to adapt to the human naming system, and more importantly, it wasn’t a true social-level experiment with animals.
(That said, in my opinion, Koko was smart. Critics of Patterson’s claims have acknowledged that Koko learned a number of signs and used them to communicate her wants and needs.)
Imagine an alien taking a one- or two-year-old human baby and trying to teach him to communicate with the alien using vocal signals that humans have difficulty producing or perceiving—and, most importantly, where naming is not a social-level communication that boosts ecological fitness. That’s basically what they did with Koko. It’s like training an LLM on text corpora where the loss function (fitness measure) is inconsistent or even meaningless—would that produce a useful model?
We first need to understand how animals naturally name things, and then enrich that naming system in a way that fits their minds—not force them to learn human naming. Most importantly, this naming should improve their ecological fitness in a consistent way, so that they can ‘feel’ the fitness of certain naming and evolve their minds toward it.
That’s why Google DolphinGemma is so remarkable. If they succeed, they might mimic dolphin-like communication in a way other dolphins truly understand—gradually introducing naming and concepts that improve evolutionary fitness: finding food, recognizing others, being happy, and identifying suitable mates. If recursive naming develops, I believe it could even lead to real cognitive evolution.
This is just a rough idea, but I truly believe in it -- based on self-meta-cognition about how my own mind evolved and works, as well as other observations. Unfortunately, I don’t have the power or energy to explore it deeply, but I hope someone working on animal cognition research will take a closer look at using large language models like DolphinGemma and dive deeper.
What matters is whether there is a shared representation space across languages. If there is, you can then (theoretically; there might be a PhD and a Nobel or two to be had :) separate the underlying structure from the translation that maps underlying structure to language.
The latter - what they call the universal embedding inverter - is likely much more easily trainable. There's a good chance that certain structures are unique enough that you can map them to the underlying representation, and then leverage that. But even if that's not viable, you can certainly run unsupervised training on raw material and see if that same underlying "universal" structure pops out.
There's a lot of hope and conjecture in that last paragraph, but the whole point of the article is that maybe, just maybe, you don't need context to translate.
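One cheap sanity check for whether the same underlying structure "pops out" of two independently trained embedding spaces is to compare their similarity structure rather than their raw coordinates. A minimal sketch using linear CKA, assuming you can line the rows up (the same words, or items with a known correspondence, embedded by both models); the fully unsupervised setting the article is after is much harder:

```python
# Linear CKA between two embedding matrices whose rows refer to the same items.
# Values near 1 suggest the spaces share similarity structure up to rotation/scale.
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))
```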
the world around us is a very large part of that shared experience. It is shared among humans, shared among whales, and shared among whales and humans as well.
Is it closer to Mussolini or David Beckham? Uhh, I guess Mussolini. (Ok, they’re definitely thinking of a person.)
That reasoning doesn't follow. Many things besides people would have the same answers, for instance any animal that seems more like Mussolini than Beckham.
But that should also explain why pure embedding search is not sufficient for RAG.
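One common symptom is exact terms (product names, error codes) getting smoothed over by the embedding; a typical workaround is to blend in a lexical score. A rough sketch with made-up inputs, not any particular RAG stack:

```python
# Blend a lexical score (e.g., precomputed BM25) with cosine similarity so that
# exact-term matches the embedding smooths over can still win the ranking.
import numpy as np

def hybrid_scores(query_vec, doc_vecs, lexical_scores, alpha=0.5):
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    dense = d @ q                             # cosine similarity per document
    lex = np.asarray(lexical_scores, float)
    lex = lex / (lex.max() + 1e-9)            # crude normalization before blending
    return alpha * dense + (1 - alpha) * lex  # rank documents by the blended score
```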
I recently gave the "Veeam Intelligence" a spin.
Veeam is a backup system spanning quite a lot of IT systems with a lot of options - it is quite complicated but it is also a bounded domain - the app does as the app does. It is very mature and has extremely good technical documentation and a massive amount of technical information docs (TIDs) and a vibrant and very well informed set of web forums, staffed by ... staff and even the likes of Anton Gostev - https://www.veeam.com/company/management-team.html
Surely they have close to the perfect data set to train on?
I asked a question about moving existing VMware replicas from one datastore to another and how to keep my replication jobs working correctly. In this field, you may not be familiar with my particular requirements but this is not a niche issue.
The "VI" came up with a reasonable sounding answer involving a wizard. I hunted around the GUI looking for it (I had actually used that wizard a while back). So I asked where it was and was given directions. It wasn't there. The wizard was genuine but its usage here was a hallucination.
A human might have done the same thing with some half remembered knowledge but would soon fix that with the docs or the app itself.
I will stick to reading the docs. They are really well written and I am reasonably proficient in this field so actually - a decent index is all I need to get a job done. I might get some of my staff to play with this thing when given a few tasks that they are unfamiliar with and see what it comes up with.
I am sure that domain specific LLMs are where it is at but we need some sort of efficient "fact checker" system.
LLM "training" is just brute-forcing the same function into existence: "Human brain outputs X, LLM outputs Y; mutate it a billion times until X and Y start matching."
So, what is the Dao? Personally, I see it as will — something we humans could express through words. For any given will, even though we use different words in different languages — Chinese, Japanese, English — these are simply different representations of the same will.
Large language models learn from word tokens and begin to grasp these wills — and in doing so, they become the Dao.
In that sense, I agree: “All AI models might be the same.”
An infinite dimensional model with just one dim per concept would be sorta useless, but you need things tied together?
There isn't anything core to reality about Kentucky, its Derby, the Gregorian calendar, America, horse breeds, etc. These are all cultural inventions that happen to have particular importance in global human culture because of accidents of history, and are well-attested in training sets. At best we are seeing some statistical convergence on training sets because everyone is training on the same pile and scraping the barrel for any differences.
I'd say our current largest LLMs probably contain sufficient detail to explain a concept like a named race horse starting from QCD+gravity and ending up at cultural human events, given a foothold of some common ground to translate into a new unknown language. In a sense, that's what a model of reality is. I think it's possible because LLMs figure out translation between human languages by default with enough pretraining.
What? By substitution, this means you can translate it. As long as we're assuming a large enough basis of concept vectors of course it works.
> I'd say our current largest LLMs probably contain sufficient detail to explain a concept like a named race horse starting from QCD+gravity and ending up at cultural human events
What? I'm curious how you'd propose to move from gravity to culture. This is like TFA's assertion that the M+B game might be as expressive as 20 questions / universal. M or B is just (bad,sentient) or (good,object). Sure, entangling a few concepts is slightly more expressive than playing with a completely flattened world of 1/0 due to some increase in dimensionality. But trying to pinpoint anything like (neutral,concept) fails because the basis set isn't fundamentally large enough. Any explanation of how this could work will have to cheat, like when TFA smuggles new information in by communicating details about distance-from-basis. For example, to really get to the word or concept of "neutral" from the inferred good/bad dichotomy of bread/Mussolini, you would have to answer "Hmmmmmm, closer to bread I guess" in one iteration and then "Umm.. closer to Mussolini I guess" when asked again, and then have the interrogator notice the uncertainty/hesitation/contradiction and infer neutrality. This is just the simple case; physics to culture seems much harder.
Also, why QCD? Quantum chromodynamics, the quantized theory of the nuclear strong force? There is also QED, quantum electrodynamics, which is the quantized field theory for electrodynamics, and then also QFD (quantum flavordynamics) for the weak force. Does OP seriously mean to imply that the quantum field theory corresponding to ONLY the strong force, plus gravity, explains every emergent phenomenon from there to culture? Fully half of the fundamental forces we account for, in two disparate theoretical frameworks?
OP's comment is not serious speculation.
Just replace QCD with "known/understood quantum theories" and move on with your life. That's not the important part of the comment you're replying to.
Forgive me if I insist somebody show the most minute amount of competence before entertaining their absolutely wild speculation regarding whether the corpus of our species can explain physics-to-culture.
I mean expressing the relationships and abstractions between the different levels at which we model the world. If you need to explain horses to whales, you probably need to drop down to a biological level to at least explain keratin for hooves to e.g. the baleen whales. Other than that, common experiences of mammals probably suffices to explain social and mating differences (assuming sufficient abstractions in this hypothetical whale languages)
If you need to explain horses to aliens, you'd drop all the way down to mathematics and logic and go back up through particle physics to make sure both parties were grounded in the way we talk about objects and systems before explaining Earth biology and evolutionary history. Behavioral biology would have to be the base for introducing cultural topics and expressing any differences in where we lump behaviors in biology vs. culture, etc.
My basic claim is that if we had a pretrained model over human languages and one hypothetical alien language then either a human or an alien could learn to speak the other's language and understand the internal world model used by the other, because the amount of information contained in large LLMs covers enough of our human world model that it can translate between human languages and also answer questions about how our world-models at various levels of abstraction are related to and built upon each other via definitions.
I am less certain if that same hypothetical LLM could accurately translate between an alien language and human language; I think that the depth required to translate across potentially several layers of abstractions might not fit in the context windows and attention-span of LLMs. I think ~current LLMs will be able to accurately translate inter-species on earth if we can get enough animal language+behavior data into them.
If we had somehow discovered LLMs right after Newton discovered the theory of gravity, and a while later Einstein discovered General Relativity, then GR would not be in the training set of the neural net. That doesn't make GR any less of a description of reality! You also can't convert General Relativity into whalesong!
But you CAN explain General Relativity in English, or in Chinese to a person in China. So the fact that we can create a mapping from the concept of General Relativity in the neural network of the brain of a human in the USA using English, to someone in China using Chinese, to an ML model, is what makes it a shared statistical model of reality.
You also can't convert General Relativity to the language of "infant babble", does it make general relativity any less real?
Let's look at two examples of cultural reality:
Fan death in South Korea. Where people believe that a fan running while you sleep can kill you.
The book "Pure, White and Deadly". Where we discredited the author and his findings and spent decades blaming fat, while packing on the pounds with high fructose corn syrup.
An LLM isn't going to find some intrinsic truth, that we are ignoring, in its data set. An LLM isn't going to find issues in the reproducibility / replication crisis. I have not seen one discredit a scientific paper with its own findings.
To be clear, LLMs can be amazing tools, but garbage in, garbage out still applies.
When you layer the concept of awareness into the mix it does alter reality for an individual or llm. Awareness creates interesting blind spots into our statistical models of reality.
On the topic of compression, I am reminded of an anecdote about Heidegger. Apparently he had a bias towards German and Greek, claiming that these languages were the only suitable forms for philosophy. His claim was based on the "puns" in language, or homonyms. He had some intuition that deep truths about reality were hidden in these over-loaded words, and that the particular puns in German and Greek were essential to understand the most fundamental philosophical ideas. This feels similar to the idea of shared embeddings being a critical aspect of LLM emergent intelligence.
This "superposition" of meaning in representation space again aligns with my intuitions. I'm glad there are people seriously studying this.
Language does have constraints, yet it evolves via its users to encompass new meanings.
Thus those constraints are artificial, unless you artificially enforce static language use. And of course, for an LLM to use those new concepts, it needs to be retokenized by being trained on new data.
For example, if we trained an LLM only on books, encyclopedias, newspapers, and personal letters from 1850, it would have zero capacity to speak comprehensibly, or even seem cogent, on much of the modern world.
And it would forever remain in that disconnected position.
LLMs do not think, understand anything, or learn. Should you wish to call tokenization "learning", then you'd better call a clock "learning" from the gears and cogs that enable its function.
LLMs do not think, learn, or exhibit intelligence. (I feel this is not said enough).
We will never, ever get AGI from an LLM. Ever.
I am sympathetic to the wonder of LLMs. To seeing them as such. But I see some art as wondrous too. Some machinery is beautiful in execution and to use.
But that doesn't change truths.
I can't say that you are wrong, you might be right, especially about AGI. And I think it's unlikely that LLMs are the direct path to AGI. But, just looking at how human brains work, it seems unlikely that we would be intelligent either if we used your same reductionist logic.
An individual neuron doesn't "think" or "understand" anything. It's a biological cell that simply fires an electrochemical signal when its input threshold is met. It has no understanding of language or context. By your logic, since the fundamental components are just simple signal processors, the brain cannot possibly learn or be intelligent. Yet, from the complex interaction of ~86 billion of these simple biochemical machines, the emergent properties of thought, understanding, and consciousness arise.
Dismissing an LLM's capabilities because its underlying operations are basically just math operating on tokenized data is like dismissing human consciousness because it's "just electrochemistry" in a network of cells. Both arguments mistake the low-level mechanism for the high-level emergent phenomenon.
It won't prove intelligence, but at least it won't be static like a book.
> Tokenization has been the final barrier to truly end-to-end language models.
> We developed the H-Net: a hierarchical network that replaces tokenization with a dynamic chunking process directly inside the model, automatically discovering and operating over meaningful units of data
But something such as this is required to move towards actual intelligence.
LLMs have problems; in practice, being "static" ain't one of them.
> An individual neuron doesn't "think" or "understand" anything. It's a biological cell that simply fires an electrochemical signal when its input threshold is met.
is a wildly oversimplified representation of what a neuron is and how it interacts with the rest of the brain. Even what we know is far beyond this description and there's a vast amount more that we don't know.
One of the assumptions that people make (with or without realising they are making it) is that the brain is a componentised information processing device, that you can break it down into units that effectively communicate in a digital way. This may be true, but it may not. It may not be possible to decompose a mind into parts communicating via digital signals without sacrificing conscious experience.
Personally, I think it's likely that it isn't possible to do that, but I don't know for sure. Others clearly think it's likely that it is possible, which is a perfectly valid opinion. Neither should pretend that science has yet given us an unambiguous answer on the question.
---
My 2¢: If you replace 'LLMs' with 'Humans,' most of your statements above still make sense — which suggests AGI might be possible.
A human would immediately start learning, and understanding the differences. Forming new memories. People learn every, single day.
An LLM would never have a single new memory, nor understand a single change had happened. Without being retokenized via new data it trains on, nothing of its symbolic world view would even change in the tiniest bit.
Telling it so in a context window isn't the LLM learning, that's gone the second the context window is gone. In 5 years, 10 years, 1M years, that LLM would be precisely the same. It would view the world as 1850. Forever.
Meanwhile, being told "It's 2025!" would immediately update the human's expectations. They would see the changes around them, and start learning. Note that initial reaction isn't relevant here, a person would eventually deal with it, and move forward.
You have posted a link, in another response, where someone is working on dynamically updating an LLM. However that is not how current LLMs work, thus invalid in any attempt to refute how I describe LLMs currently.
Further, until we see how this works -- precisely, it may be a non-starter. It may not work as described. It may be hype. It may work, but still require retraining if large amounts of symbolic data needs to change.
There's a reason it takes massive farms of hardware to tokenize, and I'm skeptical that the quoted twitter link reforges all relationships. Again, we'll see.. and this is something required for an LLM to actually "learn" about changes to how tokens are weighted.
For short term memory, they can store information in their context window. Yes the context window is finite, but they can still be very large, and there exist techniques to fit more information within that window.
For medium term memory, there's not one single solution, but anecdotally it seems like solutions exist. My workplace has a chatbot that can ingest documents and save slack conversations, and then use the knowledge from those to answer questions indefinitely into the future. I haven't looked into how it works, but empirically it functions well across thousands of people for more than a year.
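My guess at how such a memory layer is typically built (I have no insight into that specific chatbot): chunk and embed whatever gets ingested, then pull the most similar chunks back into the prompt at question time. A sketch, where `embed` is assumed to be any sentence-embedding function returning a numpy vector:

```python
# Toy "medium-term memory": store (chunk, embedding) pairs, retrieve by cosine
# similarity, and prepend the winners to the prompt sent to the LLM.
import numpy as np

store = []  # list of (chunk_text, embedding) pairs

def ingest(document: str, embed, chunk_size: int = 500):
    for i in range(0, len(document), chunk_size):
        chunk = document[i:i + chunk_size]
        store.append((chunk, embed(chunk)))

def recall(question: str, embed, k: int = 3) -> str:
    q = embed(question)
    sims = [float(q @ e) / (np.linalg.norm(q) * np.linalg.norm(e)) for _, e in store]
    top = sorted(range(len(store)), key=lambda i: sims[i], reverse=True)[:k]
    context = "\n".join(store[i][0] for i in top)
    return f"Context:\n{context}\n\nQuestion: {question}"  # goes into the LLM prompt
```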
Between a human and an LLM transported 200 years into the future, a human will certainly have better critical thinking skills, but I'd bet an LLM system designed for the task will probably be able to more quickly learn to have coherent and symmetric conversations about the world with modern humans.
Overall, there's a lot that's unclear, and I think I'm pretty pessimistic about progress towards AGI, but I don't think we can deny that LLMs have something that is, at minimum, the best substitute for human intelligence that exists.
> Is it closer to Mussolini or David Beckham? Uhh, I guess Mussolini. (Ok, they’re definitely thinking of a person.)
This deduction is absurd! The only information you have at all is that it's more like a person than it is like bread (which could be almost anything).
They all got it "right", but Claude called out a second order effect that arose from the use case that the other two missed.
I get it, they might all be exploring the same space, but Claude went an extra, important, hop.
I am very curious to run real world engineering problems through SuperHeavy.
I think what this might be trying to say is something more like: ...there are many ways in which things can be related, but those relationships come from the underlying world we live in.
I.e. there are obviously many ways in which things can be related, but if we assume the quote is not entirely counterfactual, then it must be getting at something else. I suppose "way" here is being used in a different sense, but it isn't clear.
This phenomenon appears to occur for LLM learning as well, though it is less remarkable due to the fact that LLMs likely have significant overlap in their training data.
I believe this is good news for Alignment, because, as Plato pointed out, one of the most important forms is the Form of the Good - a (theoretical) universal human ideal containing notions of justice, virtue, compassion, excellence, etc. If this Form truly exists, and LLMs can learn it, it may be possible to train them to pursue it (or refuse requests that are opposed to it).
This "roughly" is doing a lot of heavy lifting to support the "Plato was right" thesis. The fact is we inhabit a shared reality with shared laws of physics and evolutionary pressures and so on; there are only so many ways to float a boat.
That doesn't necessarily mean Platonic Forms exist and that everyone arrives independently at the same Form.
Indeed you will find that people have fundamentally different ideas of what words like "freedom", "economy", "government" mean, despite using the same syntax.
When we arrive at AGI, you can be certain it will not contain a Transformer.
I once saw a LessWrong post claiming that the Platonic Representation Hypothesis doesn't hold when you only embed random noise, as opposed to natural images: http://lesswrong.com/posts/Su2pg7iwBM55yjQdt/exploring-the-p...
of course it matters
if I supply the ants in my garden with instructions on how to build tanks and stealth bombers they're still not going to be able to conquer my front room
I believe the current approach of using mostly a feed-forward in the inference stage, with well-filtered training data and backpropagation for discrete "training cycles" has limitations. I know this has been tried in the past, but something modelling how animal brains actually function, with continuous feedback, no explicit "training" (we're always being trained), might be the key.
Unfortunately our knowledge of "what's really going on" in the brain is still limited, and investigative methods are crude, as the brain is difficult to image at the resolution we need, and in real time. Last I checked, no one's quite figured out how memory works, for example. Whether it's "stored in the network" somehow through feedback (like an SR latch or flip-flop in electronics), or whether there's some underlying chemical process within the neuron itself (we know that chemicals definitely regulate brain function; we don't know how much it goes the other way or whether it can be used to encode state).