It's not, it's a statistical model of existing text. There's no "genAI", there is "genMS" - generative machine statistics.
When you look at it that way, it's obvious the models are built on enormous amounts of work done by people publishing on the internet. This amount of work is many orders of magnitude more hours than it took to produce the training algorithms.
As a result, most of the money people pay to access these models should go to the original authors.
And that even ignores the fact that a model trained on AGPL code should be licensed under AGPL, as should its output - even if no single training input can be identified in the output, it's quite straightforwardly _derived_ from enormous amounts of training data input and a tiny (barely relevant) bit of prompt input. It's _derivative_ work.
fwiw, I mostly agree with you (ai training stinks of some kind of infringement), but legal precedent is not favouring copyright holders at least for now.
In Bartz v. Anthropic and Kadrey v. Meta "judges have now held that copying works to train LLMs is “transformative” under the fair use doctrine" [1]
i.e. no infringement - bearing in mind this applies only in the US. The EU and the rest of the world are setting their own precedents.
Copyright can only be contested in the jurisdiction that the alleged infringement occurred, and so far it seems that fair use is holding up. I'm curious to watch how it all plays out.
It might end up similarly to Uber vs The World. They used their deep pockets to destabilise taxis globally and now that the law is catching up it doesn't matter any more - Uber already won.
[1] https://www.ropesgray.com/en/insights/alerts/2025/07/a-tale-...
I know. I am describing how it should be.
Copyright was designed in a time when concealing plagiarism was time-consuming. Now it's a cheap mechanical operation.
What I'm afraid of is that this is being decided by people who don't have enough technical understanding and who might be swayed by everyone calling it "AI" and thinking there's some kind of intelligence behind it. After all, they call genMS images/sounds/videos "AI" too, which is obviously nonsense.
Also, not sure what you mean by “statistics”.
If you mean that a parameter for a parameterized probability distribution is chosen in order to make the distribution align with a dataset, ok, that’s true.
That’s not generally what I think of when I hear “statistics” though?
Maybe but it should - see sibling comment.
Statistics as in taking a large input and processing it into much fewer values which describe the input in some relevant ways (and allow reproducing it). Admittedly it's pretty informal.
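That informal sense can be made concrete with a toy sketch (my own illustration, not anything from the thread): choose a couple of parameters so a distribution aligns with a large dataset, then use those few values both to describe the data and to generate more of it. Real model training fits billions of parameters, not two, but the shape of the operation is the same.

```python
# Toy illustration: reduce a large input to a few summary values
# that describe it and allow (approximately) reproducing it.
import random
import statistics

random.seed(0)
# A large input: 100,000 samples from an unknown-to-us process.
data = [random.gauss(5.0, 2.0) for _ in range(100_000)]

# "Fit": pick parameters so a Gaussian aligns with the dataset.
# For a normal distribution these are the maximum-likelihood estimates.
mu = statistics.fmean(data)
sigma = statistics.pstdev(data)

# Two numbers now stand in for 100,000 - and can generate new,
# statistically similar samples.
new_sample = random.gauss(mu, sigma)
print(mu, sigma)
```

The fitted `mu` and `sigma` land very close to the true 5.0 and 2.0, which is the whole point: a tiny summary that both describes the input and regenerates data like it.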
Often when tech comes out that does something better than people, it makes sense for people to stop doing it. But in the case of "books explaining things", AI only learned how to explain things by examining the existing corpus - and there won't be any more human-generated content to continue to learn and evolve from, so the explanatory skills of AI could wind up frozen in 2025.
An alternative would of course be that humans team up with AI to write better books of this sort, and are able to develop new and better ways of explaining things at a more rapid pace as a result.
A relatively recent example that sticks in my mind is how data visualization has improved. Documents from the second half of the 1900s are shockingly bad at data presentation, and the shock is due to how much the standard of practice has improved in the last few decades. AI probably wouldn't have figured this out on its own, but is now able to train on many, many examples of good visualization.
I mean that's just nature. Darwin etc.
Maybe you should try the DPRK?
Nature does give everyone the right to exist without justification.
I'm not sure what Darwin has to say on the matter. He doesn't strike me as prescriptive.
You are a child of the universe no less than the trees and the stars; you have a right to be here.
It’s not a question of whether capitalism allows our existence - it’s very obviously the other way around.
The resulting content lacks consistency and coherency. The generated prose is rather breathless: in carefully articulated content, every sentence should have a place. Flagship models (e.g. Opus 4) don't seem to understand the value of a sentence.
I’ve tried to prompt engineer this behavior (one should carefully attend to each sentence: how is it contributing to the technical narrative in this section, and overall?), but didn’t have much success.
I suspect this might be solved by research on grounding generation against world models: much of verifying “is this sentence correct here?” has to do with my sharing of a world model of some domain with my audience. I use that world model to debug my own writing.
My fear isn't that LLMs will fail to meet our standards; my fear is that LLMs will drag us down to a new low of homogenized, dull, lifeless writing.
Perhaps I'm biased, but it feels as if the few times I am impressed by someone's writing, the material is 20 years old or older, even if said author is still producing work.
I recently used an LLM to help scaffold a piece of technical writing. I showed it to several of my peers (sharp and accomplished PhD students), and they immediately identified the paragraphs (even sentences in captions!) written by an LLM. I had spent a good amount of time prompt engineering Opus 4 to try and produce a coherent piece of content aligned with a narrative which I had already developed, but it was still _immediately obvious_ what was not consistent ... it stood out like a sore thumb to experts.
By the time we finished the content, there was not a sentence of LLM-generated writing in the final product.
Possibly an incorrect extrapolation, but I think my experience here is the current status quo. Code is a bit easier for these systems, because the LLM can align against test suites, type systems, etc. But for writing ... you just can't trust the current capabilities to pass the sniff test by experts.
After all, how does one communicate "coherence" as a test which the LLM might align its generations against? That's why you need a shared world model (with your audience) -- and if we could produce a computational representation of such a thing, we might have a chance at generating more coherent and consistent technical writing.
I don't think it will ever counter the change, but I suspect there will be some interesting developments in culture worldwide caused by this.
I suppose it will also depend on how affordable/accessible these models will be.
Sure, for some niche artwork and prostitution there will always be demand for human labor.
Also, I just purchased LazyVim For Ambitious Developers. I've used the online edition a number of times in recent months. Thanks for your work!
I think it's safe to say it is pretty clear.
As an example, you can power 10 developers with the highest tier of Claude Code Max for a year for less than the price of one new developer. At this point, having plenty of personal experience with the tool, I'd pick the former option.
There, one less job for a developer.
It's the income for me. This career was my only ticket out of poverty; it actually saved me from an otherwise horrible life. Am I now supposed to cheer that I might very well be replaced in the foreseeable future? It's the only line of work I'm skilled at or qualified to do, and I honestly don't know what awaits me if I lose it.
Maybe this will change at some point in the future, but for now there's no way I would substitute a well-written book on a subject for AI slop. These models are trained on human-written material anyway, why not just go straight to the source?
> So what am I good for anymore?
And here I was thinking this question is why writers write at all. Who else would do something requiring so much work for so little reward but those who fundamentally think they aren't worth much? It's what unites us.
But I don't think I worry about being replaced, not because I'm irreplaceable, but because if I could be completely replaced I think that might be quite a delightful experience. Imagine all those people who need something from you satisfied. That's what being replaced would entail: not a single person demanding a single thing from you. But unfortunately no, I'm still needed here, annoyingly.
As a job? Income. Like every other job we do in this insufferable world.
This seems like the most dystopian statement: writers taking pride in the fact that they are under-rewarded - basically exploited by the system - and you are taking this exploitation as a source of unity?
I understand that you didn't mean any harm with the statement but I feel like this statement is true and it just shows what a bloody dystopian nightmare we live in man.
Exploitation has become the norm.
This has been the case since the dawn of time.
I had said the same thing after writing this comment, and the conclusion I came to was: the world has gotten so good at propaganda that even though we could change it for the better, nobody wants to, because such propaganda - and the algorithms in general - make us worry about smaller things than large-scale change. Clippy came to my mind too; maybe it's a symbol of change. I might write a blog post about it some day, but I hope you get the idea.
> I'm still needed here, annoyingly.
Indeed you are. Indeed you are.
I'd like a bit more of whatever it is you're taking :) Seriously, don't you need to know you create some value for the world? If we all stop contributing, why the hell would Zuckerberg let us live? We're ruining his earth in his eyes, probably... I bet his AI is already telling him it's not great to let 8 billion people consume so many resources...
Is this Zuckerberg in the room with us right now?
You need to seek therapy, seriously.
I promise to seek therapy if you promise to try to understand sarcasm on the internet.
I'm actually saddened by this because it was a place that was so foundational to my development as an engineer. I visited it once in the last month or so, when I had an AWS Lambda - Playwright issue that needed specific settings to solve, and Claude and Gemini gave me a mash of the answers. I'm not complaining, but I "grew up" with Stack.
Our space no longer looks like a pyramid; I'd liken it to something diamond-ish. New grads will simply not be able to find jobs[0] and I'm worried for anyone entering the E in STEAM (can't speak for the others).
It's happening now and we're only at the tip of the iceberg. I have 2 young children and am uncertain about how to help them navigate the future.
The sword of efficiency doesn't care about my children. I mean, that's my whole job, right? To help people become efficient? It's just that the speed and scale are insane.
More questions than answers...
[0] https://www.nytimes.com/2025/08/10/technology/coding-ai-jobs...
This thought is quite common and widespread, I’ve heard it multiple times, and it always baffles me. The very idea that human beings would stop exploiting each other and just live a peaceful, content life with their AI helpers is so hopelessly distant from the way I understand the world. I wish I was wrong, but human beings don’t exploit each other because we need to, as if it was an unfortunate thing; we exploit each other because we want and because we can. Even if AI robots solved most of our problems, we will never simply accept that and let others live in peace. Some human beings will always look for opportunities to rise above others, as long as they somehow can. I’d go even further as to say that, in case a future like that became actually feasible and predictable, lots of people currently in positions of power would fight very hard to keep that from happening.
[1] https://en.wikipedia.org/wiki/Humankind:_A_Hopeful_History
Agreed, it doesn't. Nothing about that fact seems reason for optimism, however. It's a gigantic leap to go from "the current level of exploiting each other doesn't come naturally for people" to "that level is going to decrease, or stop rising, any time soon".
But don’t mistake “natural” with “good”. Actually that is much more natural (in a wild sense) than having a complex society full of moral and philosophical constraints. I myself believe very strongly in ethics and try to be ethical as much as I can, but that doesn’t mean ethical behaviour comes “naturally”. If you can’t accept that not everyone will be ethical, and some will act “savagely”, then you’re being naive and opening yourself to, well, opportunities of exploitation.
Edit - you know what, after a bit of reading, I do get the point being made. Thanks for the book reference, I’m going to place a hold at my library if they have it.
The issue is not people exploiting one another to rise above each other. The issue is a few people exploiting AI's they own to keep themselves above everyone en masse.
They don't lose the need to rise above you, rather staying above you is the entire point of the AIs they're creating. If you want to compete in the economy in the future, you will have no choice but to also exploit their AIs to try to rise above others. Thereby helping the owning class rise further above you.
Why not make your own? Because you don't happen to have an army of AI researchers to create it for you. Why not use open source models? Again, you can, but you're betting those models will be better than commercial models. And right now the capability gap there is widening.
People in power wouldn't fight AIs to keep themselves in power. Rather you, yourself, using the AIs they own to make a living, would more deeply entrench their already extant power.
Never, of course, you just have to do more work in less time and the win materializes in lower costs and higher profits for the capital owners. Workers may get some crumbs that fell from the table as an unintended side effect.
I'm sure a worker at the peak of the industrial revolution had a much worse time than I do now. But I'm also equally sure my parents and grandparents had it much easier in terms of purchasing power, work/life balance, job stability, being able to afford a home on a single income, being able to afford kids, being able to retire at a somewhat reasonable age, &c.
Basically everything after the digital revolution disproportionately benefited a very small percentage of people, while previous advances benefited the masses (agriculture, trains/cars, factory automation, &c.). We got a lot of new shiny bells and whistles to regularly pump up the dopamine, but we lost a lot of basics.
> According to skeptics of the "late capitalism" idea, so far there just has not been any real evidence of:
> (1) long-term economic stagnation or prolonged negative economic growth in the advanced capitalist countries;
> (2) pervasive social decay and persistent cultural degeneration that just keeps getting worse and worse, and
> (3) pervasive and persistent rejection of capitalism and business culture by the majority of the population
(all three of the skeptic points seem ripe to be re-analyzed)
> For many Western Marxist scholars since that time, the historical epoch of late capitalism starts with the outbreak (or the end[9]) of World War II (1939–1945), and includes the post–World War II economic expansion, the world recession of the 1970s and early 1980s, the era of neoliberalism and globalization, the 2008 financial crisis and the aftermath in a multipolar world society. Particularly in the 1970s and 1980s, many economic and political analyses of late capitalism were published.
Late Capitalism (1973)
Here's Kurzweil on things getting better https://www.youtube.com/watch?v=uEztHu4NHrs&t=376s
It's easy to be cynical, but productivity gains in agriculture are the main reason why we have enough to eat. Less obviously, they led to a huge improvement in quality of life for workers across all levels of society. The effect played out over hundreds of years.
US farms produce more than enough food for the entire population with less than 2% of the work force. [0] Surpluses of capital as well as increases in available labor from improvements in agricultural productivity were among the many factors that enabled the industrial revolution. [1] The root causes of the English industrial revolution were many, but it's hard to escape the importance of agricultural productivity in the mix.
[0] https://www.ers.usda.gov/data-products/chart-gallery/chart-d...
[1] https://en.wikipedia.org/wiki/Industrial_Revolution#Causes
That question will haunt many over the next two decades, especially once tech gets good enough to replace most manual labour. Suddenly a billion-plus people will be thinking exactly that.
k310•5mo ago
The writer seems to assume that people can learn entirely from computer displays (including glasses). That would be a world where our entire lives, or a great deal of them, are devoted to computer-generated facts, experiences, well, everything.
There are still both creative and mundane experiences like the door molding that needs a fix.
I've been through a few revolutions that "changed everything": from the phone without a dial (true) to cell phones that solve the formerly horrifying "I couldn't find you at the airport" situation.
And so on.
I guess that the existential question is: "What is the purpose, meaning, and joy in life?"
It's not defined by the gadgets and technologies outside us, but truly by the relationships we have with people. So, if AI "replaces" my highly personal photographic experiences captured on film or memory, they're still shared with close friends who know the story behind each one and its value as a shared experience. I rarely post my photos anywhere, thanks to the image bureaus and AI dragnets. They commoditize and destroy that personal value. Art is self-expression. Read Joseph Conrad's preface [0]
Likewise, as long as life is full of experiences away from computer screens (and glasses), those are real life, and the technology is just ornament.
I had a plan for after-school education in which kids would go out and measure things and use computers to analyze the data. Like Kepler, but a lot faster. And the learning is in the doing.
Very long ago - 50 years, to be exact - I wanted to get a third class Electrician's Mate rating, and in those days you could strike for a rating: pass some exams and show expertise with real gear. One of the Warrant Officers was tickled that I did it "the old way" instead of "A" school, because it's all "A" school now. Probably computerized.
Now, I'm retired, and relationships with people mean even more. I have more experiences and wisdom to share, and in a way that's unique to each person, not a multidimensional "profile" which might even be more complete than my understanding, but in a personal way that comes from the shared experience of being human and cares deeply about feelings, because we've had similar ones.
Not defined by the technology of the time - Mom and Dad's old "Operator" phone or '53 Ford - but by what we shared as persons, equal in humanity though as different as a kid and parent can be, and through evolving lives and times.
[0] https://standardebooks.org/ebooks/joseph-conrad/the-nigger-o...