I have reimplemented Stable Diffusion 3.5 from scratch in pure PyTorch

https://github.com/yousef-rafat/miniDiffusion
210•yousef_g•5h ago•27 comments

Inside the Apollo "8-Ball" FDAI (Flight Director / Attitude Indicator)

https://www.righto.com/2025/06/inside-apollo-fdai.html
68•zdw•3h ago•14 comments

Unsupervised Elicitation of Language Models

https://arxiv.org/abs/2506.10139
92•kordlessagain•6h ago•5 comments

Solar Orbiter gets world-first views of the Sun's poles

https://www.esa.int/Science_Exploration/Space_Science/Solar_Orbiter/Solar_Orbiter_gets_world-first_views_of_the_Sun_s_poles
99•sohkamyung•2d ago•9 comments

Self-driving company Waymo's market share in San Francisco exceeds Lyft's

https://underscoresf.com/in-san-francisco-waymo-has-now-bested-lyft-uber-is-next/
66•namanyayg•2h ago•33 comments

Peano arithmetic is enough, because Peano arithmetic encodes computation

https://math.stackexchange.com/a/5075056/6708
191•btilly•1d ago•79 comments

Last fifty years of integer linear programming: Recent practical advances

https://inria.hal.science/hal-04776866v1
136•teleforce•12h ago•34 comments

How the Final Cartridge III Freezer Works

https://www.pagetable.com/?p=1810
5•ingve•2h ago•0 comments

The Many Sides of Erik Satie

https://thereader.mitpress.mit.edu/the-many-sides-of-erik-satie/
102•anarbadalov•6d ago•21 comments

SIMD-friendly algorithms for substring searching (2018)

http://0x80.pl/notesen/2016-11-28-simd-strfind.html
167•Rendello•15h ago•28 comments

SSHTron: A multiplayer lightcycle game that runs through SSH

https://github.com/zachlatta/sshtron
41•thunderbong•2h ago•7 comments

Endometriosis is an interesting disease

https://www.owlposting.com/p/endometriosis-is-an-incredibly-interesting
278•crescit_eundo•20h ago•173 comments

Waymo rides cost more than Uber or Lyft and people are paying anyway

https://techcrunch.com/2025/06/12/waymo-rides-cost-more-than-uber-or-lyft-and-people-are-paying-anyway/
70•achristmascarl•2d ago•104 comments

Slowing the flow of core-dump-related CVEs

https://lwn.net/SubscriberLink/1024160/f18b880c8cd1eef1/
67•jwilk•3d ago•10 comments

Peeling the Covers Off Germany's Exascale "Jupiter" Supercomputer

https://www.nextplatform.com/2025/06/11/peeling-the-covers-off-germanys-exascale-jupiter-supercomputer/
13•rbanffy•2d ago•2 comments

Solidroad (YC W25) Is Hiring

https://solidroad.com/careers
1•pjfin•7h ago

"Language and Image Minus Cognition." Leif Weatherby on LLMs

https://www.jhiblog.org/2025/06/11/language-and-image-minus-cognition-an-interview-with-leif-weatherby/
18•Traces•3d ago•6 comments

TimeGuessr

https://timeguessr.com/
204•stefanpie•4d ago•40 comments

Me an' Algernon – grappling with (temporary) cognitive decline

https://tidyfirst.substack.com/p/me-an-algernon
79•KentBeck•4d ago•48 comments

Liquid Glass – WWDC25 [video]

https://developer.apple.com/videos/play/wwdc2025/219
143•lnrd•4d ago•254 comments

Filedb: Disk-based key-value store inspired by Bitcask

https://github.com/rajivharlalka/filedb
99•todsacerdoti•16h ago•9 comments

Self-Adapting Language Models

https://arxiv.org/abs/2506.10943
192•archon1410•1d ago•52 comments

Implementing Logic Programming

https://btmc.substack.com/p/implementing-logic-programming
169•sirwhinesalot•21h ago•53 comments

Python argparse has a limitation on argument groups that makes me sad

https://utcc.utoronto.ca/~cks/space/blog/python/ArgparseAndNestedGroups
14•zdw•3d ago•1 comment

Student discovers fungus predicted by Albert Hofmann

https://wvutoday.wvu.edu/stories/2025/06/02/wvu-student-makes-long-awaited-discovery-of-mystery-fungus-sought-by-lsd-s-inventor
141•zafka•3d ago•112 comments

The Army’s Newest Recruits: Tech Execs From Meta, OpenAI and More

https://www.wsj.com/tech/army-reserve-tech-executives-meta-palantir-796f5360
181•aspenmayer•1d ago•159 comments

The international standard for identifying postal items

https://www.akpain.net/blog/s10-upu/
91•surprisetalk•2d ago•19 comments

Strace Tips for Better Debugging

https://rrampage.github.io/2025/06/13/strace-tips-for-better-debugging/
23•signa11•11h ago•0 comments

Mollusk shell assemblages as a tool for identifying unaltered seagrass beds

https://www.int-res.com/abstracts/meps/v760/meps14839
13•PaulHoule•2d ago•0 comments

If the moon were only 1 pixel: A tediously accurate solar system model (2014)

https://joshworth.com/dev/pixelspace/pixelspace_solarsystem.html
833•sdoering•1d ago•248 comments

AI Isn't Magic, It's Maths

https://zerofluff.substack.com/p/ai-isnt-magic-its-maths
30•squircle•2d ago

Comments

Workaccount2•16h ago
Man, people in the "it's just maths and probability" camp are in for a world of hurt when they learn that everything is just maths and probability.

The observation that LLMs are just doing math gets you nowhere; everything is just doing math.

perching_aix•16h ago
I largely agree, and upon reading it, this article is sadly also in that camp of applying this perspective to be dismissive.

However, I find it incredibly valuable generally to know things aren't magic, and that there's a method to the madness.

For example, I had a bit of a spat with a colleague who was 100% certain that AI models are unreliable not only because insignificant (from a human perspective) changes to their inputs can cause significant changes to their outputs, but because, in his view, they were actually random, in the nondeterministic sense. He held that I was speaking in hypotheticals when I took issue with this, recalled my beliefs about superdeterminism, and inferred that "yeah, if you know where every atom in your processor is and the state it's in, then sure, maybe they're deterministic, but that's not a useful definition of deterministic".

Me "knowing" that they're not only not any more special than any other program, but that it's just a bunch of matrix math, provided me with the confidence and resiliency necessary to reason my colleague out of his position, including busting out a local model to demonstrate the reproducibility of model interactions first hand, that he was then able to replicate on his end on a completely different hardware. Even learned a bit about the "magic" involved myself along the way (that different versions of ollama may give different results, although not necessarily).

pxc•16h ago
> [The] article is sadly also in that camp of applying this perspective to be dismissive.

TFA literally and unironically includes such phrases as "AI is awesome".

It characterizes AI as "useful", "impressive" and capable of "genuine technological marvels".

In what sense is the article dismissive? What, exactly, is it dismissive of?

perching_aix•14h ago
> TFA literally and unironically includes such phrases as "AI is awesome". It characterizes AI as "useful", "impressive" and capable of "genuine technological marvels".

This does not contradict what I said.

> In what sense is the article dismissive? What, exactly, is it dismissive of?

Consider the following direct quotes:

> It’s like having the world’s most educated parrot: it has heard everything, and now it can mimic a convincing answer.

or

> they generate responses using the same principle: predicting likely answers from huge amounts of training text. They don’t understand the request like a human would; they just know statistically which words tend to follow which. The result can be very useful and surprisingly coherent, but it’s coming from calculation, not comprehension

I believe these examples are self-evidently dismissive, but to further put it into words: the article, ironically, rides on the idea that there's more to understanding than just pattern recognition at a large scale, something mystical and magical, something beyond the frameworks of mathematics and computing, and that these models are thus no true Scotsmen. I wholeheartedly disagree with this idea; I find the sheer capability of higher-level semantic information extraction and manipulation to be already clear and undeniable evidence of an understanding. This is one thing the article is dismissive of (in my view).

They even put it into words:

> As impressive as the output is, there’s no mystical intelligence at play – just a lot of number crunching and clever programming.

Implying that real intelligence is mystical, not just in the epistemological sense but in the ontological one, too.

> But here at Zero Fluff, we don’t do magic – we do reality.

Please.

It also blatantly contradicts very easily accessible information on how a typical modern LLM works; no, they are not just spouting off a likely series of words (or tokens) in order, as if they were reciting from somewhere. This is also a common lie that this article just propagates further. If that's really how they worked, they'd be even less useful than they presently are. This is another thing the article is dismissive of (in my view).

JustinCS•13h ago
I agree, real intelligence may also potentially be explained as all "math and probability", whether it's neurons or atoms. A key difference between our brains and LLMs is that the underlying math behind LLMs is still substantially more comprehensible to us, for now.

It's common to believe that we have a more mystical quality, a consciousness, whether due to a soul or just to being vastly more complex, but few can draw a clear line.

That said, this article certainly gives a more accurate understanding of LLMs compared to thinking of them as if they had human-like intelligence, but I think it goes too far in insinuating that they'll always be limited due to being "just math".

On a side note, this article seems pretty obviously the product of AI generation, even if human edited, and I think it has lots of fluff, contrary to the name.

pxc•2h ago
Oh, so the issue isn't that the article is dismissive of AI as a technology, but that its author doesn't share your radical views in philosophy of mind (eliminative materialism/illusionism).

Okay, I guess. But I wouldn't characterize that as "being dismissive of AI".

perching_aix•2h ago
Well, I did. I do not recognize dismissiveness as something objective or formally defined (it's a natural-language property, after all), which leaves us with subjectivity, or at best shared subjectivity. It's also plainly context dependent, on that much I hope we agree, so indeed if you do not accept the philosophical view underpinning why I'd find it dismissive, clearly you won't appreciate it as dismissive. It's a lot like Gödel's "proof of God": it only holds as long as you subscribe to the axioms being built upon.

I'd imagine we do not share the same subjective perspective on this (e.g. I don't think my views are particularly radical), so you wouldn't characterize it that way, whereas I do. Makes sense to me. Didn't intend to mislead you into thinking this wasn't one of these cases, apologies if this is not what you expected. I wrote under the assumption that you did.

I feel a lot of disagreements are just this; it's just that most often people need 30 comments to get here, if they even manage to without getting sidetracked or too emotionally invested / worked up.

pxc•1h ago
Sometimes it's useful, imo, to pick apart the differences in intellectual temperament, intuition, foundational beliefs, etc., that drive disagreements. But I agree, sometimes it's a lot of clumsy fumbling.

I'll say this against your perspective (or perhaps just use of language), though: it seems to leave little room for skepticism of the greatest general (not product-specific) claims made today in the AI industry. You either buy into the notion that the path we're on is contiguous with "AGI", or you're dismissive of AI! This is nearly as deflationary as your view of consciousness. ;)

I would expect "dismissive" to describe more categorically dismissive views, and not to extend, e.g., to views which admit that AI is in principle possible, non-eliminativist materialism (e.g., functionalists who say we just don't have good reason to say LLMs or other neural networks have the requisite structure for consciousness), etc.

Since you brought him up, Gödel himself seemed to have a much more "miraculous" notion of human cognition that came out in (IIRC) letters in which he explains why he doesn't think human mathematicians' work is hindered by his second incompleteness theorem. That, I would say, is dismissive of AI.

But if any view not grounded in illusionism is dismissive of AI, what can a non-dismissive person possibly identify as AI hype? Just particular marketing claims about the concrete capabilities of particular models? If that's true, then rather than characterizing extreme or marginal views, a view is "dismissive" just for refusing to buy into the highest degree of contemporary AI hype.

perching_aix•47m ago
> I'll say this against your perspective (or perhaps just use of language), though: it seems to leave little room for skepticism of the greatest general (not product-specific) claims made today in the AI industry.

Maybe I can alleviate this to an extent by expanding on my views, since I believe that's not the case.

I tried alluding to this by saying that, in my view, models have an understanding [of things], but to put it in more explicit terms, for me "understanding" on its own is a fairly weak term. Like I personally consider the semantic diffing tool I use to diff YAMLs to have an understanding of YAML. Not in a metaphorical sense, but in a literal sense. The understanding is hardwired, sure, but to me that makes no difference. It may also not even be completely YAML standard-compliant, which is what would be the "equivalent" of an AI model understanding something to an extent but not fully or not right.

This leaves a lot of room for criticism and skepticism, as it means models can have elementary understandings of things, understandings that, while real, are nevertheless not, for example, meaningfully useful in practice, or fail to live up to the claims and hype vendors spout. Which is sometimes exactly how I view a lot of the models available today. They are not capable of fully understanding what I write, and to the extent they are, they do not necessarily understand it the way I'd expect them to (i.e. as a human would). But instead of classifying this as them not understanding, I still decidedly consider these tools to be on the immature, beginning side of a longer spectrum that, to me, is understanding. I hope this makes sense, even if you still do not find this view agreeable or relatable.

You may argue that my definition is too wide, and that then everything can "understand", but that's not necessarily how I think of this either. A "more rigorous" way of putting my thoughts would be that I think things can understand to the extent they can hold representations of some specific thing and manipulate them while keeping to that representation (pretty much what happens when you traverse a model's latent space along a specific axis). But I'm not sure I've spent enough time thinking about this thoroughly to be able to confidently tell you that this is a complete and consistent description, fully reflective of my understanding of understanding (pun intended).

Like when, in an image model, you can quite literally manipulate the gender, hairstyle, or clothing of the characters depicted by moving along specific directions, to me that is clear evidence of that model having an understanding of these concepts, and in the literal sense.

captn3m0•15h ago
I also had to argue with a lawyer on the same point - he held a firm belief that "Modern GenAI systems" are different from older ML systems in that they are non-deterministic and random, and that this inherent randomness is what makes them both unexplainable (you can't guarantee what they would type) and useful (they can be creative).
perching_aix•14h ago
I honestly find this kinda stuff more terrifying than the models themselves.
xigoi•12h ago
Last time I checked, modern LLMs could give you a different answer for the same sequence of prompts each time. Did that change?
perching_aix•12h ago
They can, but won't necessarily. If you use a managed service, they likely will, due to batched inference. Otherwise, it's simply a matter of configuring the seed to a fixed value and the temperature to 0.

At least that's what I did, and then as long as the prompts were exactly the same, the responses remained exactly the same too. Tested with a quantized gemma3 using ollama; I'd say that's modern enough (barely a month or so old). Maybe lowering the temp is not even necessary as long as you keep the seed stickied; I didn't test that.
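
Roughly what that check looks like with the ollama Python client (a minimal sketch; the gemma3 model name, prompt, and seed here are just placeholders):

    # Same prompt, pinned seed, temperature 0: the output should come back
    # byte-for-byte identical across runs on the same model and version.
    import ollama

    PROMPT = "Write a two-line poem about matrix multiplication."
    OPTIONS = {"seed": 42, "temperature": 0}

    outputs = [
        ollama.generate(model="gemma3", prompt=PROMPT, options=OPTIONS)["response"]
        for _ in range(3)
    ]
    print("identical across runs:", len(set(outputs)) == 1)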

Workaccount2•6h ago
Just a note for everyone: people run these fixed-prompt, fixed-seed, zero-temperature tests to track model versions. A cool application of the technique.
perching_aix•4h ago
Small update: now that I brought it up, I figured I should just test it. And indeed, as long as the seed is pinned, the replies are bit-for-bit identical run to run (note that the conversation context was cleared before each attempt, of course).

So even setting the temp to 0 is not actually needed. This is handy in case somebody makes the claim that the randomness (nonzero temp parameter) makes a model perform better.

The devil really is in the "insignificant for humans but significant for the model" details, basically. Not in the computational determinism.

xigoi•3h ago
A random number generator also gives deterministic results when you fix the seed. Does that make it not a random number generator?
perching_aix•3h ago
Yes, it does. That's exactly why the literature calls them pseudo-random number generators (PRNGs) instead.
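
The distinction is easy to see with, for example, Python's standard-library PRNG:

    # Two generators seeded identically produce the same "random" stream every
    # time; the randomness is a statistical property, not unpredictability.
    import random

    a = random.Random(1234)
    b = random.Random(1234)
    print([a.randint(0, 9) for _ in range(10)])
    print([b.randint(0, 9) for _ in range(10)])  # identical to the line above
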
ninetyninenine•15h ago
Yeah your brain is also maths and probability.

It’s called mathematical modeling and anything we understand in the universe can be modeled. If we don’t understand something we feel a model should exist we just don’t know it yet.

AI we don’t have a model. Like we have a model for atoms and we know the human brain is made of atoms so in that sense the brain can be modeled but we don’t have a high level model that can explain things in a way we understand.

It’s the same with AI. We understand it from the perspective of prediction and best fit curve at the lowest level but we don’t fully understand what’s going on at a higher level.

ncarlson•14h ago
> AI we don’t have a model.

So, some engineers just stumbled upon LLMs and said, "Holy smokes, we've created something impressive, but we really can't explain how this stuff works!"

We built these things. Piece by piece. If you don't understand the state-of-the-art architectures, I don't blame you. Neither do I. It's exhausting trying to keep up. But these technologies, by and large, are understood by the engineers that created them.

ijidak•13h ago
Not true. How the higher level thought is occurring continues to be a mystery.

This is an emergent behavior that wasn’t predicted prior to the first breakthroughs which were intended for translation, not for this type of higher level reasoning.

Put it this way, if we truly understood how LLMs think perfectly we could predict the maximum number of parameters that would achieve peak intelligence and go straight to that number.

Just as we now know exactly the boundaries of mass density that yield a black hole, etc.

The fact that we don’t know when scaling will cease to yield new levels of reasoning means we don’t have a precise understanding of how the parameters are yielding higher levels of intelligence.

We’re just building larger and seeing what happens.

ncarlson•13h ago
> How the higher level thought is occurring continues to be a mystery. This is an emergent behavior that wasn’t predicted prior to the first breakthroughs which were intended for translation, not for this type of higher level reasoning.

I'm curious what you mean by higher level thought (or reasoning). Can you elaborate or provide some references?

ninetyninenine•11h ago
The analogy used to build artificial neural networks is statistical prediction and best-fit curves.

All techniques to build AI stem from an understanding of AI from that perspective.

The thing is… that analogy applies to the human brain as well. Human brains can be characterized as a best-fit curve in a multi-dimensional space.

But if we can characterize the human brain this way, does that mean we completely understand the human brain? No. There is clearly another perspective, another layer of abstraction, that we don't fully comprehend. Yes, when the human brain responds to a query it is essentially plugging the input into a curve function and providing an output, but even so, a certain perspective is clearly missing.

The human brain is clearly different from an LLM. BUT the missing insight that we lack about the human brain is also the same insight we lack about the LLM. Both intelligences can be characterized as a multi-dimensional function, but we so far can't understand anything beyond that. This perspective we can't understand or characterize can be referred to as a higher level of abstraction... a different perspective.

https://medium.com/@adnanmasood/is-it-true-that-no-one-actua...
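
To make the "best fit curve" framing concrete in its most literal, low-dimensional form (a sketch with synthetic data, nothing specific to LLMs):

    # Least-squares fit of a line to noisy data: the most literal "best fit
    # curve". Neural network training is the same kind of fitting, only with
    # vastly more parameters and dimensions.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 50)
    y = 3.0 * x + 1.0 + rng.normal(0.0, 0.1, size=x.shape)  # noisy observations

    slope, intercept = np.polyfit(x, y, deg=1)  # fitted parameters
    print(slope, intercept)                     # close to 3.0 and 1.0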

twelve40•13h ago
> if we truly understood how LLMs think perfectly we could predict the maximum number of parameters that would achieve peak

It's a bit of a strange argument to make. We've been making airplanes for 100+ years; we understand how they work, and there is absolutely no magic or emergent behavior in them. Yet even today nobody can instantly produce the perfect airframe shape; it's still a very long and complicated process of calculations, wind-tunnel tests, and basically trial and error. That doesn't mean we don't understand how airplanes work.

ninetyninenine•11h ago
It’s not a strange argument. You just lack insight.

The very people who build LLMs do not know how it works. They cannot explain it. They admit they don’t know how it works.

Ask the LLM to generate a poem. No one on the face of the earth can predict what poem the LLM will generate nor can they explain why that specific poem was generated.

Workaccount2•6h ago
Fractals are a better representation: a simple equation that, iterated upon, gives these fantastically complex patterns. Even knowing the equation, you could spend years investigating why boundaries between unique fractal structures appear where they do, and why they melt from arches to columns and spirals.

In a similar way we know the framework of LLMs, but we don't know the "fractal" that grows from it.

ninetyninenine•11h ago
The engineers who built these things in actuality don't understand how they work. Literally. In fact you can ask them and they say this readily. I believe the CEO of Anthropic is quoted as saying this.

If they did understand LLMs why do they have so much trouble explaining why an LLM produced certain output? Why can’t they fully control an LLM?

These are algorithms running on computers which are deterministic machines that in theory we have total and absolute control over. The fact that we can’t control something running on this type of machine points to the sheer complexity and lack of understanding of the thing we are trying to run.

stevenhuang•9h ago
> But these technologies, by and large, are understood by the engineers that created them.

Simply incorrect. Look into the field of AI interpretability. The learned weights are black boxes, we don't know what goes on inside them.

Workaccount2•6h ago
Models are grown, not built. The ruleset is engineered and the training framework built, but the model itself, which grows through training, is incredibly dense in its complexity.
ninetyninenine•2h ago
Put it this way, Carlson. If you were building LLMs, if you understood machine learning, if you were one of these engineers who work at OpenAI, you would agree with me.

The fact that you don't agree indicates you literally don't get it. It also indicates you aren't in any way an engineer who works on AI, because what I am talking about here is an unequivocal viewpoint universally held by literally the people who build these things.

112233•12h ago
Not only that, I have observed people reversing the flow and claiming everything is AI because it uses similar maths. E.g. I saw a guy with an "AI degree" argue at length that weather forecast models are AI because the numerical solver works similarly to gradient descent.

This may seem inconsequential and pretentious at first, but it feels like a "land grab" by the AI-adjacent people, trying to claim authority over anything that numerically minimizes a differentiable function's value.

lucaslazarus•16h ago
On a tangentially-related note: does anyone have a good intuition for why ChatGPT-generated images (like the one in this piece) are getting increasingly yellow? I often see explanations attributing this to a feedback loop in training data but I don't see why that would persist for so long and not be corrected at generation time.
minimaxir•16h ago
They aren't getting increasingly yellow (I don't think the base model has been updated since the release of GPT-4o Image Generation), but the fact that they are always so yellow is bizarre, and I am still shocked OpenAI shipped it knowing that effect exists, especially since it has the practical effect of letting you instantly clock an image as AI-generated.

Generally when training image encoders/decoders, the input images are normalized so some base commonality is possible (when playing around with Flux Kontext image-to-image I've noticed subtle adjustments in image temperature), but the fact that it's piss yellow is baffling. The autoregressive nature of the generation would not explain it either.

Workaccount2•5h ago
Perhaps they do it on purpose to give the images a characteristic look.
israrkhan•16h ago
A computer (or a phone) is not magic, it's just billions of transistors.

or perhaps we can further simplify and call it just sand?

or maybe atoms?

4b11b4•16h ago
You're just mapping from distribution to distribution

- one of my professors

esafak•5h ago
The entirety of machine learning fits into the "just" part.
hackinthebochs•16h ago
LLMs are modelling the world, not just "predicting the next token". They are not akin to "stochastic parrots". Some examples here[1][2][3]. Anyone claiming otherwise at this point is not arguing in good faith. There are so many interesting things to say about LLMs, yet somehow the conversation about them is stuck in 2021.

[1] https://arxiv.org/abs/2405.15943

[2] https://x.com/OwainEvans_UK/status/1894436637054214509

[3] https://www.anthropic.com/research/tracing-thoughts-language...

minimaxir•16h ago
LLMs are still trained to predict the next token: gradient descent just inevitably converges on building a world model as the best way to do it.

Masked language modeling, with its need to understand inputs both forwards and backwards, is a more intuitive way to have a model learn a representation of the world, but causal language modeling goes brrrrrrrr.
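
A minimal sketch of that next-token objective in PyTorch (a toy embedding-plus-linear stand-in rather than a real transformer; the shapes and vocabulary size are made up):

    # Causal LM training: at every position, predict the token that follows.
    import torch
    import torch.nn.functional as F

    vocab_size, dim = 100, 32
    embed = torch.nn.Embedding(vocab_size, dim)
    head = torch.nn.Linear(dim, vocab_size)

    tokens = torch.randint(0, vocab_size, (1, 16))   # one sequence of 16 tokens
    inputs, targets = tokens[:, :-1], tokens[:, 1:]  # targets are inputs shifted by one

    logits = head(embed(inputs))                     # (1, 15, vocab_size)
    loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
    loss.backward()                                  # gradient descent does the rest
    print(loss.item())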

ninetyninenine•14h ago
In theory one can make a token predictor virtually indistinguishable from a human. In fact… I myself am a best token predictor.

I and all humans fit the definition of what a best token predictor is. Think about it.

tracerbulletx•14h ago
Yeah, the brain obviously has types of circuits and networks that are doing things LLMs don't do; they have timing, and rhythm, and extremely complex feedback loops, so there's no justification to call LLMs sentient. But all the people trying to say they're categorically different are wrong. Brains process sequential electrical signals from the senses, and send sequential signals to the muscles. That's it. The fact that modern neural networks can trivially shift modes between audio, language, image, video, 3D, or any other symbolic representation is obviously a significant development, and something significant has happened in our understanding of intelligence.
bluefirebrand•14h ago
> Brains process sequential electrical signals from the senses, and send sequential signals to the muscles. That's it.

Not a neuroscientist but this seems like a vast oversimplification to me based on what I've read

For one thing, the brain isn't sequential. There's a ton of parallel operations in there

It also isn't a purely electrical network either, there's tons of chemicals involved

It sounds so easy to compare it to an electrical network but it really, truly is much more complex

Workaccount2•5h ago
Not to nitpick, but the chemicals are ultimately just modulators of the electronic circuits.
bluefirebrand•5h ago
I suppose, but I think the important part is that they ultimately make the circuit pretty non-deterministic and difficult to predict, which is not really a feature that we normally expect a circuit to have
ninetyninenine•2h ago
No this is not true.

There is literally physical space between neurons where chemicals called neurotransmitters are released and then picked up by receptors.

So the pathway through the brain is not purely electrical.

That being said, I agree with your overall point: it's just signals and systems, and how the mechanism works is irrelevant to the fact that everything can be modelled by a neuron.

tracerbulletx•3h ago
Time is sequential. The inputs and outputs are sequential. The processing is not sequential; in fact it can be spatial: binaural hearing works because of the precise length of the neuronal path between the ears and a processing area. We know a lot about the brain; many neural networks are based on well-described brain regions. It is both true that our existing neural networks are nowhere near as complex as the brain, and ALSO that they share many similarities and are functionally doing basically the same type of thing.
bluefirebrand•2h ago
> Time is sequential

This is the sort of statement that seems like it is trivially true but I'm really not sure we can be so certain

I agree that the way humans experience time makes it seem like it must be sequential, and a lot of our science would be really messed up if time weren't sequential in the grand scheme

That said, aren't there some doubts about this? I'm no expert but my understanding is that some quantum theories suggest time is not as linear as we think

ninetyninenine•11h ago
Both the human brain and LLMs can be characterized as a best-fit curve; it's just that the algorithm used to render that curve is different. There are similarities and there are differences, and in both cases we fundamentally don't understand what's going on beyond the analogy of a best-fit curve.
blahburn•16h ago
Yeah, but it’s kinda magic
ncarlson•14h ago
A lot of things are magic when you don't understand the underlying principles of operation.
ncarlson•14h ago
There's a LOT of pushback against the idea that AI is not magic. Imagine if there was a narrative that said, "[compilers|kernels|web browsers] are magic. Even though we have the source code, we don't really know what's going on under the hood."

That's not my problem. That's your problem.

mrbungie•14h ago
AI is magic, at times, when it is convenient, and it is also extremely scientific and mathy, at times, when that is convenient. Don't you dare doubt those thoughts at the wrong time, though.

Just classic hype.

xigoi•12h ago
The difference is that neural networks are uninterpretable. You may understand how LLMs in general work, but you can pretty much never know what the individual weights in a given model do.
112233•12h ago
How is this narrative different from the way cryptographic hash functions are thought of? We have the source code, but we cannot understand how to reverse the function. The way the modern world functions depends on that assumption.
senectus1•11h ago
Maths is magic. It's the cheat source code to this universe.