Generally when training image encoders/decoders, the input images are normalized so that there's some baseline commonality (when playing around with Flux Kontext image-to-image I've noticed subtle adjustments in image temperature), but the fact that it's piss yellow is baffling. The autoregressive nature of the generation wouldn't explain it either.
or perhaps we can further simplify and call it just sand?
or maybe atoms?
- one of my professors
[1] https://arxiv.org/abs/2405.15943
[2] https://x.com/OwainEvans_UK/status/1894436637054214509
[3] https://www.anthropic.com/research/tracing-thoughts-language...
Masked language modeling, with its need to understand inputs both forwards and backwards, is a more intuitive way to have a model learn a representation of the world, but causal language modeling goes brrrrrrrr.
I, and all humans, fit the definition of a best token predictor. Think about it.
Not a neuroscientist but this seems like a vast oversimplification to me based on what I've read
For one thing, the brain isn't sequential. There's a ton of parallel operations in there
It isn't a purely electrical network either; there are tons of chemicals involved
It sounds so easy to compare it to an electrical network but it really, truly is much more complex
There is literally physical space (the synaptic cleft) between one neuron's axon terminal and the next neuron's dendrites, where chemicals called neurotransmitters are released and then picked up by receptors.
So the pathway through the brain is not purely electrical.
That being said, I agree with your overall point: it's just signals and systems, and how the mechanism works is irrelevant to the fact that everything can be modelled by a neuron.
This is the sort of statement that seems like it is trivially true but I'm really not sure we can be so certain
I agree that the way humans experience time makes it seem like it must be sequential, and a lot of our science would be really messed up if time weren't sequential in the grand scheme
That said, aren't there some doubts about this? I'm no expert but my understanding is that some quantum theories suggest time is not as linear as we think
That's not my problem. That's your problem.
Just classic hype.
Workaccount2•16h ago
The observation that LLMs are just doing math gets you nowhere; everything is just doing math.
perching_aix•16h ago
However, I generally find it incredibly valuable to know that things aren't magic, and that there's a method to the madness.
For example, I had a bit of a spat with a colleague who was 100% certain that AI models are unreliable not only because, from a human perspective, insignificant changes to their inputs can cause significant changes to their outputs, but because, in his view, they were actually random, in the nondeterministic sense. When I took issue with this, he claimed I was speaking in hypotheticals, recalled my beliefs about superdeterminism, and inferred that "yeah if you know where every atom is in your processor and the state they're in, then sure, maybe they're deterministic, but that's not a useful definition of deterministic".
Me "knowing" that these models are not only not any more special than any other program, but that they're just a bunch of matrix math, gave me the confidence and resiliency necessary to reason my colleague out of his position, including busting out a local model to demonstrate the reproducibility of model interactions first hand, which he was then able to replicate on his end on completely different hardware. I even learned a bit about the "magic" involved myself along the way (that different versions of ollama may give different results, although not necessarily).
pxc•16h ago
TFA literally and unironically includes such phrases as "AI is awesome".
It characterizes AI as "useful", "impressive" and capable of "genuine technological marvels".
In what sense is the article dismissive? What, exactly, is it dismissive of?
perching_aix•14h ago
This does not contradict what I said.
> In what sense is the article dismissive? What, exactly, is it dismissive of?
Consider the following direct quotes:
> It’s like having the world’s most educated parrot: it has heard everything, and now it can mimic a convincing answer.
or
> they generate responses using the same principle: predicting likely answers from huge amounts of training text. They don’t understand the request like a human would; they just know statistically which words tend to follow which. The result can be very useful and surprisingly coherent, but it’s coming from calculation, not comprehension
I believe these examples are self-evidently dismissive, but to further put it into words, the article - ironically - rides on the idea that there's more to understanding than just pattern recognition at a large scale, something mystical and magical, something beyond the frameworks of mathematics and computing, and thus these models are no true Scotsmen. I wholeheartedly disagree with this idea; I find the sheer capability of higher-level semantic information extraction and manipulation to already be clear and undeniable evidence of an understanding. This is one thing the article is dismissive of (in my view).
They even put it into words:
> As impressive as the output is, there’s no mystical intelligence at play – just a lot of number crunching and clever programming.
Implying that real intelligence is mystical, not even just in the epistemological but in the ontological sense, too.
> But here at Zero Fluff, we don’t do magic – we do reality.
Please.
It also blatantly contradicts very easily accessible information on how a typical modern LLM works; no, they are not just spouting off a likely series of words (or tokens) in order, as if they were reciting from somewhere. This is also a common lie that this article just propagates further. If that's really how they worked, they'd be even less useful than they presently are. This is another thing the article is dismissive of (in my view).
JustinCS•13h ago
It's common to believe that we have a more mystical quality, a consciousness, whether due to a soul or just to being vastly more complex, but few can draw the line clearly.
That said, this article certainly gives a more accurate understanding of LLMs compared to thinking of them as if they had human-like intelligence, but I think it goes too far in insinuating that they'll always be limited due to being "just math".
On a side note, this article seems pretty obviously the product of AI generation, even if human edited, and I think it has lots of fluff, contrary to the name.
pxc•2h ago
Okay, I guess. But I wouldn't characterize that as "being dismissive of AI".
perching_aix•2h ago
I'd imagine we do not share the same subjective perspective on this (e.g. I don't think my views are particularly radical), so you wouldn't characterize it that way, whereas I do. Makes sense to me. Didn't intend to mislead you into thinking this wasn't one of these cases, apologies if this is not what you expected. I wrote under the assumption that you did.
I feel a lot of disagreements are just this; it's just that most often people need 30 comments to get here, if they even manage to without getting sidetracked or too emotionally invested / worked up.
pxc•1h ago
I'll say this against your perspective (or perhaps just use of language), though: it seems to leave little room for skepticism of the greatest general (not product-specific) claims made today in the AI industry. You either buy into the notion that the path we're on is contiguous with "AGI", or you're dismissive of AI! This is nearly as deflationary as your view of consciousness. ;)
I would expect "dismissive" to describe more categorically dismissive views, and not to extend, e.g., to views which admit that AI is in principle possible, non-eliminativist materialism (e.g., functionalists who say we just don't have good reason to say LLMs or other neural networks have the requisite structure for consciousness), etc.
Since you brought him up, Gödel himself seemed to have a much more "miraculous" notion of human cognition that came out in (IIRC) letters in which he explains why he doesn't think human mathematicians' work is hindered by his second incompleteness theorem. That, I would say is dismissive of AI.
But if any view not grounded in illusionism is dismissive of AI, what can a non-dismissive person possibly identify as AI hype? Just particular marketing claims about the concrete capabilities of particular models? If that's true, then rather than characterizing extreme or marginal views, a view is "dismissive" just for refusing to buy into the highest degree of contemporary AI hype.
perching_aix•47m ago
Maybe I can alleviate this to an extent by expanding on my views, since I believe that's not the case.
I tried alluding to this by saying that, in my view, models have an understanding [of things], but to put it in more explicit terms, for me "understanding" on its own is a fairly weak term. Like I personally consider the semantic diffing tool I use to diff YAMLs to have an understanding of YAML. Not in a metaphorical sense, but in a literal sense. The understanding is hardwired, sure, but to me that makes no difference. It may also not even be completely YAML standard-compliant, which would be the "equivalent" of an AI model understanding something to an extent, but not fully or not correctly.
This leaves a lot of room for criticism and skepticism, as it means models can have elementary understandings of things: understandings that, while still understandings, are nevertheless not e.g. meaningfully useful in practice, or that fail to live up to the claims and hype vendors spout. Which is sometimes exactly how I view a lot of the models available today. They are not capable of fully understanding what I write, and to the extent they are, they do not necessarily understand it the way I'd expect them to (i.e. as a human). But instead of classifying this as them not understanding, I still decidedly consider these tools to be just often on the immature, beginning side of a longer spectrum that to me is understanding. I hope this makes sense, even if you still do not find this view agreeable or relatable.
You may argue that my definition is too wide, and that then everything can "understand", but that's not necessarily how I think of this either. A "more rigorous" way of putting my thoughts would be that I think things can understand to the extent they can hold representations of some specific thing and manipulate them while keeping to that representation (pretty much what happens when you traverse a model's latent space along a specific axis). But I'm not sure I've spent enough time thinking about this thoroughly to be able to confidently tell you that this is a complete and consistent description, fully reflective of my understanding of understanding (pun intended).
Like when, in an image model, you can quite literally manipulate the gender, hairstyle, or clothing of the characters depicted by moving along specific directions in latent space, to me that is clear evidence of that model having an understanding of these concepts, and in the literal sense.
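To make that concrete, here's a toy, purely illustrative sketch of the "attribute direction" idea in numpy - the latents below are random stand-ins rather than anything from a real model, and in a real setup you'd get the latents from the model's encoder and run z_edited back through its decoder/generator to actually see the change:

    # Toy sketch of editing along an attribute direction in a latent space.
    # Everything here is a stand-in: no real image model is involved.
    import numpy as np

    rng = np.random.default_rng(0)
    dim = 512  # latent dimensionality, arbitrary for this toy example

    # Pretend latents for images labeled with / without some attribute
    # (e.g. "short hair" vs "long hair"), offset so a direction exists.
    z_with = rng.normal(size=(1000, dim)) + 0.5
    z_without = rng.normal(size=(1000, dim)) - 0.5

    # A simple attribute direction: the difference of class means, normalized.
    direction = z_with.mean(axis=0) - z_without.mean(axis=0)
    direction /= np.linalg.norm(direction)

    # "Editing": move a sample's latent along that direction by some strength.
    z = rng.normal(size=dim)
    z_edited = z + 2.0 * direction  # decoding z_edited would show the change

The point is just that the concept corresponds to a direction you can move along, which is what I mean by the model holding and manipulating a representation.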
captn3m0•15h ago
perching_aix•14h ago
xigoi•12h ago
perching_aix•12h ago
At least that's what I did, and then as long as the prompts were exactly the same, the responses remained exactly the same too. Tested with a quantized gemma3 using ollama; I'd say that's modern enough (barely a month or so old). Maybe lowering the temp is not even necessary as long as you keep the seed stickied; didn't test that.
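For reference, this is roughly what that check looks like with the ollama Python client - assuming the ollama server is running locally and a gemma3 tag has already been pulled; the model name and prompt here are just examples:

    # Two generations with the same prompt, a pinned seed and temperature 0;
    # on the same setup they come back identical.
    import ollama

    opts = {"temperature": 0, "seed": 42}
    prompt = "Write one sentence about determinism."

    a = ollama.generate(model="gemma3", prompt=prompt, options=opts)["response"]
    b = ollama.generate(model="gemma3", prompt=prompt, options=opts)["response"]

    print(a == b)  # same prompt + pinned options -> same output here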
Workaccount2•6h ago
perching_aix•4h ago
So even setting the temp to 0 is not actually needed. This is handy in case somebody makes the claim that the randomness (nonzero temp parameter) makes a model perform better.
The devil really is in the "insignificant for humans but significant for the model" details, basically. Not in the computational determinism.
xigoi•3h ago
perching_aix•3h ago
ninetyninenine•15h ago
It’s called mathematical modeling, and anything we understand in the universe can be modeled. If we don’t understand something that we feel a model should exist for, we just don’t know that model yet.
With AI we don’t have such a model. We have a model for atoms, and we know the human brain is made of atoms, so in that sense the brain can be modeled, but we don’t have a high-level model that can explain things in a way we understand.
It’s the same with AI. We understand it from the perspective of prediction and best-fit curves at the lowest level, but we don’t fully understand what’s going on at a higher level.
ncarlson•14h ago
So, some engineers just stumbled upon LLMs and said, "Holy smokes, we've created something impressive, but we really can't explain how this stuff works!"
We built these things. Piece by piece. If you don't understand the state-of-the-art architectures, I don't blame you. Neither do I. It's exhausting trying to keep up. But these technologies, by and large, are understood by the engineers that created them.
ijidak•13h ago
This is an emergent behavior that wasn’t predicted prior to the first breakthroughs, which were intended for translation, not for this type of higher-level reasoning.
Put it this way: if we truly understood perfectly how LLMs think, we could predict the maximum number of parameters that would achieve peak intelligence and go straight to that number.
Just as we now know exactly the boundaries of mass density that yield a black hole, etc.
The fact that we don’t know when scaling will cease to yield new levels of reasoning means we don’t have a precise understanding of how the parameters are yielding higher levels of intelligence.
We’re just building larger and seeing what happens.
ncarlson•13h ago
I'm curious what you mean by higher level thought (or reasoning). Can you elaborate or provide some references?
ninetyninenine•11h ago
All techniques to build AI stem from an understanding of AI from that perspective.
The thing is… that analogy applies to the human brain as well. Human brains can be characterized as a best-fit curve in a multidimensional space.
But if we can characterize the human brain this way, does that mean we completely understand the human brain? No. There is clearly another perspective, another layer of abstraction, that we don’t fully comprehend. Yes, when the human brain is responding to a query it is essentially plugging the input into a curve function and providing an output, but even when this is true, a certain perspective is clearly missing.
The human brain is clearly different from an LLM. BUT the missing insight that we lack about the human brain is also the same insight we lack about the LLM. Both intelligences can be characterized as a multidimensional function, but so far we can't understand anything beyond that. This perspective we can't understand or characterize can be referred to as a higher level of abstraction... a different perspective.
https://medium.com/@adnanmasood/is-it-true-that-no-one-actua...
twelve40•13h ago
It's a bit of a strange argument to make. We've been making airplanes for 100+ years; we understand how they work and there is absolutely no magic or emergent behavior in them, yet even today nobody can instantly produce the perfect airframe shape. It's still a very long and complicated process of calculations, wind tunnel tests, and basically trial and error. It doesn't mean we don't understand how airplanes work.
ninetyninenine•11h ago
The very people who build LLMs do not know how they work. They cannot explain it. They admit they don’t know how these things work.
Ask the LLM to generate a poem. No one on the face of the earth can predict what poem the LLM will generate nor can they explain why that specific poem was generated.
Workaccount2•6h ago
In a similar way we know the framework of LLMs, but we don't know the "fractal" that grows from it.
ninetyninenine•11h ago
If they did understand LLMs, why do they have so much trouble explaining why an LLM produced a certain output? Why can’t they fully control an LLM?
These are algorithms running on computers, which are deterministic machines that in theory we have total and absolute control over. The fact that we can’t control something running on this type of machine points to the sheer complexity of the thing we are trying to run, and to our lack of understanding of it.
stevenhuang•9h ago
Simply incorrect. Look into the field of AI interpretability. The learned weights are black boxes; we don't know what goes on inside them.
Workaccount2•6h ago
ninetyninenine•2h ago
The fact that you don’t agree indicates you literally don’t get it. It also indicates you aren’t in any way an engineer who works on AI, because what I am talking about here is an unequivocal viewpoint universally held by literally the people who build these things.
112233•12h ago
This may seem inconsequential and pretentious at first, but it feels like a "land grab" by the AI-adjacent people, trying to claim authority over anything that numerically minimizes a differentiable function's value.