If they did actually stumble on AGI (assuming it didn’t eat them too) it would be used by a select few to enslave or remove the rest of us.
No one in power is going to help unless there's money in it.
Also who's this Dario?
This technology, like every prior technology, will cause some people to lose their jobs and some new jobs to be created. This will annoy people who have to learn new skills instead of coasting until retirement as they planned.
It is no different than the buggy whip manufacturers being annoyed at Henry Ford. They were right that it was bad for their industry, but wrong about it being the death of... well all the million things they claimed it would be the death of.
Did you know that 2/3rds of the people alive today wouldn't be here if it hadn't been for the invention of the Haber-Bosch process? Technology isn't just a toy; it's our life-support mechanism. The only way our population gets to keep growing is if our technology continues to improve.
Will there be some unintended consequences? Absolutely. Does that mean we can (or even should) stop it? Hell no. Being pro-human requires you to be pro-technology.
Sycophancy is for more than just LLMs.
Example 1 is bad; StackOverflow had clearly plateaued and was well into freefall by the time ChatGPT was released.
Example 2 is apparently "open source" but it's actually just Tailwind, which unfortunately had a particularly vulnerable business model.
And I don't really think the framing here that it's eating its own tail makes sense.
It's also confusing to me why they're trying to solve the problem of it eating its own tail - there's a LOT of money being poured into the AI companies. They can try to solve that problem.
What I mean is - a snake eating its own tail is bad for the snake. It will kill it. But in this case the tail is something we humans valued and don't want eaten, regardless of the health of the snake. And the snake will probably find a way to become independent of the tail after it ate it, rather than die, which sucks for us if we valued the stuff the tail was made of, and of course makes the analogy totally nonsensical.
The actual solutions suggested here are not related to it eating its own tail anyway. They're related to the sentiment that the greed of AI companies needs to be reeled in, they need to give back, and we need solutions to the fact that we're getting spammed with slop.
I guess the last part is the part that ties into it "eating its own tail", but really, why frame it that way? Framing it that way means it's a problem for AI companies. Let's be honest and say it's a problem for us and we want it solved for our own reasons.
Essentially, Reddit is also eating its own tail to survive, as the flood of low quality irrelevant content is making the platform worse for speakers of all languages, but nobody cares because "line go up."
> For each response, the GenAI tool lists the sources from which it extracted that content, perhaps formatted as a list of links back to the content creators, sorted by relevance, similar to a search engine
This literally isn't possible given the architecture of transformer models, and there's no indication it ever will be. If it were to give you a model-only response, it could not determine where the information in it was sourced from.
1. We built a machine that takes a bunch of words on a piece of paper, and suggests what words fit next.
2. A lot of people are using it to make stories, where you fill in "User says 'X'", and then the machine adds something like "Bot says 'Y'". You aren't shown the whole thing; a program finds the Y part and sends it to your computer screen (there's a rough sketch of this after the list).
3. Suppose the story ends, unfinished, with "User says 'Why did the chicken cross the road?'". We can use the machine to fix up the end, and it suggests "Bot says: 'To get to the other side!'"
4. Funny! But when the User character asks where the answer came from, the machine doesn't have a brain to think "Oh, wait, that means ME!". Instead, it keeps making the story longer in the same way as before, so that you'll see "words that fit" instead of words that are true. The true answer is something unsatisfying, like "it fit the math best".
5. This means there's no difference between "Bot says 'From the April Newsletter of Jokes Monthly'" versus "Bot says 'I don't feel like answering.'" Both are made-up the same way.
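Here's a minimal sketch of that loop in Python, purely illustrative - complete() is a canned stand-in for whatever model API actually extends the text:

    def complete(transcript: str) -> str:
        """Stand-in for a language model: appends whatever 'fits next'."""
        # A real system would call the model here; this is canned for the sketch.
        return transcript + "\nBot says: 'To get to the other side!'"

    def chat(user_message: str) -> str:
        # 1. Build the unfinished "story".
        transcript = f"User says: '{user_message}'"
        # 2. Let the machine extend it with whatever fits.
        extended = complete(transcript)
        # 3. An ordinary program snips out the part after "Bot says:" and shows it.
        reply = extended.split("Bot says:", 1)[1].strip().strip("'")
        return reply

    print(chat("Why did the chicken cross the road?"))
    # -> To get to the other side!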
> Google's search result AI summary shows the links for example.
That's not the LLM/mad-libs program answering what data flowed into it during training; that's the LLM generating document text like "Bot runs do_web_search(XYZ) and displays the results." A regular, non-AI program looks for "Bot runs", snips out that text, does a real web search right away, and then substitutes the results back inside.
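Roughly, the plumbing looks something like this sketch (do_web_search(), the regex, and the canned model output are all made up for illustration, not any real API):

    import re

    def do_web_search(query: str) -> str:
        # Stand-in for a real search call.
        return f"[results for {query!r}: link1, link2]"

    def model_continue(transcript: str) -> str:
        # Stand-in for the language model extending the "story".
        return transcript + "\nBot runs: do_web_search(best pizza)"

    def run_turn(transcript: str) -> str:
        extended = model_continue(transcript)
        # The ordinary program, not the model, spots the "Bot runs" text...
        match = re.search(r"Bot runs: do_web_search\((.+)\)", extended)
        if match:
            # ...does a real web search right away...
            results = do_web_search(match.group(1))
            # ...and substitutes the results back into the story.
            extended += f"\nSearch results: {results}"
        return extended

    print(run_turn("User says: 'Where can I get pizza?'"))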
But even the part that is coming from the context is only being produced by the weights. As I said, every token is some mathematical combination of the weights and the context.
So it can produce text that does not correctly summarize the content in its context, or incorrectly reproduce the link, or incorrectly map the link to the part of its context that came from that link, or more generally just make shit up.
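A toy illustration of that point - the vocab, sizes, and weights here are invented for the sketch, but it shows how a "citation" is produced by the same weights-times-context sampling as every other token, so nothing forces it to match what's actually in the context:

    import numpy as np

    np.random.seed(0)
    vocab = ["the", "answer", "is", "42", "source:", "example.com", "madeup.org"]
    weights = np.random.randn(len(vocab), 8)    # stand-in model parameters

    def next_token(context_vector):
        logits = weights @ context_vector       # weights combined with context
        probs = np.exp(logits) / np.exp(logits).sum()
        return np.random.choice(vocab, p=probs) # sample whatever "fits best"

    context = np.random.randn(8)                # stand-in for the prompt
    # "source:" followed by "madeup.org" can come out of this just as easily
    # as a correct link would.
    print([next_token(context) for _ in range(5)])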
Well, they could always try actually paying content creators. Unlike - for instance - StackOverflow.
There isn't any clean way to do "contributor gets paid" without adding in an entire mess of "ok, where is the money coming from? Paywalls? Advertising? Subscriptions?" and then also getting into the mess of international money transfers (how do you pay someone in Iran from the US?)
And then add in the "ok, now the company is holding payment information of everyone(?) ..." and data breaches and account hacking is now so much more of an issue.
Once you add money to it, the financial incentives and gamification collide to make it simply awful.
Actually we can. And we will.
There's also huge financial momentum shoving AI down the world's throat. Even if AI were proven to be a failure today, it would still be pushed for many years because of the momentum.
I just don't see how that can be reversed.
Also just like SEO to game search engines, "democratized RLHF" has big trust issues.
Copyright was predicated on the notion that ideas and styles can not be protected, but that explicit expressive works can. For example, a recipe can't be protected, but the story you wrap around it that tells how your grandma used to make it would be.
LLMs are particularly challenging to grapple with because they perform language alchemy. They can (and do) re-express the core ideas, styles, themes, etc. without violating copyright.
People deem this 'theft' and 'stealing' because they are trying to reconcile the myth of intellectual property with reality, and are also simultaneously sensing the economic ladder being pulled up by elites who are watching and gaming the geopolitical world disorder.
There will be a new system of value capture that content creators need to position for, which is to be seen as a more valuable source of high-quality materials than an LLM, serving a specific market, and effectively drawing attention to owned properties and products.
It will not be pay-per-crawl. Or pay-per-use. It will be an attention game, just like everything in the modern economy.
Attention is the only way you can monetize information.
The ONLY things that matter when determining whether copyright was infringed are "access" and "substantial similarity". The first refers to whether the alleged infringer did, or had a reasonable opportunity to, view the copyrighted work. The second is more vague and open-ended. But if these two, alone, can be established in court, then absent a fair use or other defense (for example, all of the ways in which your work is "substantially similar" to the infringed work are public domain), you are infringing. Period. End of story.
The Tetris Company, for example, owns the idea of falling-tetromino puzzle video games. If you develop and release such a game, they will sue you and they will win. They have won in the past and they can retain Boies-tier lawyers to litigate a small crater where you once stood if need be. In fact, the ruling in the Tetris vs. Xio case means that look-and-feel copyrights, thought dead after Apple v. Microsoft and Lotus v. Borland, are now back on the table.
It's not like this is even terribly new. Atari, license holders to Pac-Man on game consoles at the time, sued Philips over the release of K.C. Munchkin! on their rival console, the Magnavox Odyssey 2. Munchkin didn't look like Pac-Man. The monsters didn't look like the ghosts from Pac-Man. The mazes and some of the game mechanics were significantly different. Yet, the judge ruled that because it featured an "eater" who ate dots and avoided enemies in a maze, and sometimes had the opportunity to eat the enemies, K.C. Munchkin! infringed on the copyrights to Pac-Man. The ideas used in Pac-Man were novel enough to be eligible for copyright protection.
Stack Overflow peaked in 2014 before beginning its decline. How is that at all related to GenAI? GPT-4 is when we really started seeing these things get used to replace SO, etc., and that would be early 2023 - and indeed the drop gets worse there - but after the COVID-era spike, SO was already crashing hard.
Tailwind's business model was providing a component library built on top of their framework. It's a business model that relies on the framework being good enough for people to want to use it to begin with, but being bad enough that they'd rather pay for the component library than build it themselves. The more comfortable it is to use, the more productive it is, the worse the value proposition is for the premium upsell. Even other "open core" business models don't have this inherent dichotomy, much less open source on the whole, so it's really weird to try and extrapolate this out.
The thing is, people turn to LLMs to solve problems and answer questions. If they can't turn to the LLM to solve that problem or answer that question, they'll either turn elsewhere, in which case there is still a market for that book or blog post, or they'll drop the problem and question and move on. And if they were willing to drop the problem or question and move on without investigating post-LLM, were they ever invested enough to buy your book, or check more than the first couple of results on google?
I always found it very frustrating that, for a person at the start of the learning curve, it was "read only".
Actually asking a naive question there was a good way to get horribly flamed. The site, and the people using it, were very keen to explain how stupid you were being.
LLMs, on the other hand, are sweet and welcoming (to a fault) of the naive newbie.
I have been learning to write shell scripts with the help of LLMs; I could not achieve that using SO.
Good riddance
The ONLY reason we are here today is that OpenAI, and Anthropic by extension, took it upon themselves to launch chatbots trained on whatever data sources they could get in a short amount of time to quickly productize their investments. Their first versions didn't include any references to the source material, and just acted as if they knew everything.
When CoPilot was built as a better auto-complete engine, trained on open-source projects, it was an interesting idea, because it was doing what people already did: they searched GitHub for examples of the solution, or it nudged them in that direction. However, the biggest difference was that using another project's code was stable, because it came with a LICENSE.md that you then agreed to and paid forward (i.e. "I used code from this project").
CoPilot initially would just inject snippets for you, without you knowing the source. It was only later that they walked that back; if you do use CoPilot now, it shows you the most likely source of the code it used. This is exactly the direction all of the platforms seem headed.
It's not easy to walk back the free-for-all system (i.e. Napster), but I'm optimistic that over time it'll become a fairer, pay-to-access system.
I do not know what will replace it, but I will not miss websites trying to monetise my attention
People today may have a better sense of the downsides of ad-based services than we did when the internet was becoming mainstream. Back then, the minor inconvenience of seeing a few ads seemed worth the benefit of access to everything the internet had to offer. And it probably was. But today the public has more experience with the downsides of relentless advertising optimization and audience capture, so there might be more business models based on something other than advertising. Either way, GenAI advertising is certainly coming.
mrcwinn•1h ago
1. I pay OpenAI
2. OpenAI rev-shares to StackOverflow
3. StackOverflow mostly keeps that money, but shares some with me for posting
4. I get some money back to help pay OpenAI?
This is nonsense. And if the frontier labs are right about simulated data, as Tesla seems to have been right with its FSD simulated visualization stack, does this really matter anyway? The value I get from an LLM far exceeds anything I have ever received from SO or an O'Reilly book (as much as I genuinely enjoy them collecting dust on a shelf).
If the argument is "fairness," I can sympathize but then shrug. If the argument is sustainability of training, I'm skeptical we need these payment models. And if the argument is about total value creation, I just don't buy it at all.
lbrito•1h ago
That seems to be the argument: LLM adoption leads to a drop in organic training data, leading LLMs to eventually plateau, and we'll be left without the user-generated content we relied on for a while (like SO) and with subpar LLMs. That's what I'm getting from the article anyway.
mapontosevenths•1h ago
That said, what it misses is that the AI prompts themselves become a giant source of data. None of these companies are promising not to use your data, and even if you don't opt in, the person you sent the document/email/whatever to will, because they want it paraphrased or need help understanding it.
lbrito•1h ago
Good point, but can it match the old organic data? I'm skeptical. For one, the LLM environment lacks any truth or consensus mechanism that the old SO-like sites had. 100s of users might have discussed the same/similar technical problem with an LLM, but there's no way (afaik) for the AI to promote good answers and demote bad ones, as it (AI) doesn't have the concept of correctness/truth. Also, the old sites were two-sided, with humans asking _and_ answering questions, while with AI humans are only on the asking side.
cthalupa•57m ago
The LLM doesn't, but reinforcement does. If someone keeps asking the model how to fix the problem after being given an answer, the answer is likely wrong. If someone deletes the chat after getting the answer, it was probably right.
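A rough sketch of that heuristic - the field names and thresholds are made up, just to show how usage behavior could be turned into a weak reward label:

    from dataclasses import dataclass

    @dataclass
    class Conversation:
        follow_up_asks: int          # times the user re-asked the same question
        deleted_after_answer: bool   # user deleted the chat once answered

    def implicit_reward(convo: Conversation) -> float:
        """Guess whether the answer helped, from behavior alone."""
        if convo.deleted_after_answer and convo.follow_up_asks == 0:
            return 1.0    # got what they needed and moved on: likely correct
        if convo.follow_up_asks >= 2:
            return -1.0   # kept pushing after the answer: probably wrong
        return 0.0        # ambiguous; don't use it as a training label

These weak labels would stand in for the explicit up- and down-votes a site like SO provided.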
_DeadFred_•54m ago
Those AI prompts that become data for the AI companies are yet another thing human creators used to use to understand what people wanted: topics to explore, feedback on what they hadn't communicated well enough. That 'value' is AI stealing yet more energy from the system, resulting in even less (and less valuable) human creation.
TeMPOraL•1h ago
Still, for the one about organic data (or "pre-war steel") drying up, it's not a threat to model development at all. People repeating this point don't realize that we already have way more data than we need. We got to where we are by brute-forcing the problem - throwing more data at a simple training process. If new "pristine" data were to stop flowing now, we still a) have decent pre-trained base models, and a dataset that's more than sufficient to train more of them, and b) have lots of low-hanging fruit to pick in training approaches, architectures, and data curation that will let us get more performance out of the same base data.
That, and the fact that synthetic data turned out to be quite effective after all, especially in the later phases of training. No surprise there; for many classes of problems, this is how we learn as well. Anyone who has studied math for a maturity exam or university entrance exam knows this: the best way to learn is to solve lots of variations of the same set of problems. These variations are all synthetic data, until recently generated by hand, but even their trivial nature doesn't make them less effective at teaching.
zzzeek•1h ago
that is the argument, yes.
Claude clearly got an enormous amount of its content from Stack Overflow, which has mostly ceased to be a source of new content. However, unlike the author, I don't see any way to fix this; Stack Overflow was only there because people had technical questions that needed answers.
Maybe if the LLMs do indeed start going stale because there isn't enough training data for new technologies, Q&A sites like Stack Overflow would still have a place, since people would still resort to asking each other questions rather than asking LLMs that don't have training data for a newer technology.