Announcing that one line of the piece made you mad without providing any other thought is not very constructive.
And there will be more compute for the rest of us :)
are you being serious with this one
> Pariyatti’s nonprofit mission, it should be noted, specifically incorporates a strict code of ethics, or sīla: not to kill, not to steal, not to engage in sexual misconduct, not to lie, and not to take intoxicants.
Not a whole lot of Pali in most LLM editorials.
I must remember to add this quality guarantee to my own software projects.
My software projects are also uranium-free.
No they're not. They're starving, struggling to find work, and lamenting that AI is eating their lunch. It's quite ironic that, after complaining LLMs are plagiarism machines, the author thinks using them for translation is fine.
"LLMs are evil! Except when they're useful for me" I guess.
I can't imagine why someone would want to openly advertise that they're so closed-minded. Everything after this paragraph is just anti-LLM ranting.
But I agree that it is self limiting to not bother to consider the ways that LLM inference and human thinking might be similar (or not).
To me, they seem to do a pretty reasonable emulation of single-threaded thinking.
I would say the exact same about you. Rejecting an absolutely accurate and factual statement like that as closed-minded strikes me as the same as the people who insist that medical science is closed-minded about crystals and magnets.
I can't imagine why someone would want to openly advertise that they think LLMs are actual intelligence, unless they're in a position to benefit financially from the LLM hype train, of course.
Because humans often anthropomorphize completely inert things? E.g. a coffee machine or a bomb disposal robot.
So far, whatever behavior LLMs have shown is basically fueled by sci-fi stories of how a robot should behave in such and such a situation.
> "They are robots. Programs. Fancy robots and big complicated programs, to be sure — but computer programs, nonetheless."
This is totally misleading to anyone less familiar with how LLMs work. They are only programs inasmuch as they perform inference from a fixed, stored statistical model. It turns out that treating them theoretically in the same way as other computer programs gives a poor representation of their behaviour.
This distinction is important, because no, "regurgitating data" is not something that was "patched out", like a bug in a computer program. The internal representations became more differentially private as newer (subtly different) training techniques were discovered. There is an objective metric in the theory by which one can measure this "plagiarism", and it isn't nearly as simple as "copying" vs. "not copying".
It's also still an ongoing issue and an active area of research; see [1] for example. It is impossible for the models to never "plagiarize" in the sense we usually mean while remaining useful. But humans repeat little snippets verbatim all the time, too. So there is some threshold below which no one seems to care anymore; think of it like the % threshold in something like Turnitin. That's the point researchers would like to target.
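As a rough illustration of what that kind of threshold looks like (my own sketch, not any particular paper's metric): measure what fraction of a model output's n-grams appear verbatim in a reference corpus, and only flag outputs above some tolerance. The n-gram size, toy corpus, and 15% figure below are all assumptions for illustration only.

```python
# Toy verbatim-overlap check, in the spirit of a Turnitin-style % threshold.
# All names and numbers here are illustrative, not a metric from the
# memorization literature.

def ngrams(tokens, n):
    """Return the set of consecutive n-grams in a list of tokens."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def verbatim_overlap(output_text, corpus_texts, n=5):
    """Fraction of the output's n-grams that appear verbatim in the corpus."""
    out = ngrams(output_text.split(), n)
    if not out:
        return 0.0
    corpus = set()
    for doc in corpus_texts:
        corpus |= ngrams(doc.split(), n)
    return len(out & corpus) / len(out)

# Toy data: a few shared 5-word snippets push the score up, but only outputs
# above some chosen tolerance would be treated as "plagiarism".
corpus = ["the quick brown fox jumps over the lazy dog near the river bank"]
output = "a quick brown fox jumps over the lazy dog and then runs away home"
score = verbatim_overlap(output, corpus)
print(f"verbatim 5-gram overlap: {score:.0%}")  # ~40% here; flag only above, say, 15%
```

Real measurements work on model tokens and account for near-duplicates rather than whitespace-split words, but the thresholding idea is the same.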
Of course, this is separate from all of the ethical issues around training on data collected without explicit consent, and I would argue that's where the real issues lie.
The larger, and I'd argue more problematic, plagiarism is when people take this composite output of LLMs and pass it off as their own.
The same could be said of humans too. Humans are made of cells that work deterministically. Sure, humans are fancy, big complicated combinations of cells - but they're cells, nonetheless.
That view of humans - and LLMs - ignores the fact that when you combine large numbers of simple building blocks, you can get completely novel behavior. Protons, neutrons and electrons come together to create chemistry. Molecules come together to create biological systems. A bunch of neurons taken together created the poetry of Shakespeare.
Unless you have a dualistic view of the world, in which the mind is a separate realm that exists independently of matter and does not arise from neurons interacting in our brains, you have to accept that robots can be intelligent. Just to put this more sharply: Would a perfect simulation of a human brain be intelligent or not? If you answer "no," then you believe that thought comes from some other, immaterial realm, not from our brains.
I can bang smooth rocks to get sharper rocks; that doesn't make sharper rocks more intelligent. Makes them sharper, though.
Which is to say, novel behavior != intelligence.
40 years?
Virtually nobody cares about this already... today.
(I'm not refuting the author's claim that LLMs are built on plagiarism, just noting how the world has collectively decided to turn a blind eye to it)