If I want an AI summary of a topic or an answer to a question, my chatbot of choice can easily provide it. There's no need for yet another piece of blogspam that isn't introducing new information into the world; that content is already available inside the AI model. At some point, we'll get so oversaturated with fake, generated BS that there won't be enough high-quality new information left to feed the models.
I'm certainly using Google less and less these days, and even niche subreddits are getting an influx of LLM drivel.
There are fantastic uses of AI, but there's an over-abundance of low-effort growth hacking at scale that is saturating existing conduits of signal. I have to wonder if some of this might be done intentionally to poison the well.
How? By filling the web with AI-generated content, or just by using LLMs to search for information? As more junk is poured into training data, LLMs will take a hit at some point too. I remember how great early web search was: one could find thousands to millions of hits for a request. At some point it got so polluted that it became nearly useless. It wasn't only spam that made it less useful; the search providers also twisted the rules to reap all the benefits for themselves.
Who or what is clamoring for that AI-generated padding which turns 200 words of bullet points into 2000 words of prose, though? It's not like there's suddenly going to be 10x more insight, it's just 10x more slop to slog through that dilutes whatever points the writer had.
If you have 200 words' worth of thoughts you want to share... you can just write 200 words.
I think if writing more than 200 words is painful for you, blogging probably isn't for you?
This is so, so wrong. The writing is the thoughts. A person's un-articulated bullet points are not worth that much. And AI is not going to pull novel ideas out of your brain via your bullet points. It's either going to summarize them incorrectly or homogenize them into something generic. It would be like dropping acid with a friend and asking ChatGPT to summarize our movie ideas.
The idea that writing is an irrelevant way to gatekeep people with otherwise brilliant ideas is not reality. You don't have to be James Baldwin, but I will not get a sense for what your ideas even are via an AI summary.
If you just want to get the information out then just post the bullet points, what do you care?
If you want to be recognized as a writer, then write.
Using LLMs to write is like wearing fast fashion. If you want certain kinds of people to notice you and think of you favorably, you need to wear clothes that represent how you want to be seen. It would be better if people made their own clothes, or paid money to "real" designers for bespoke and/or high-quality items. But most people can't afford the time and money for that. So we have places like Hot Topic and Forever 21 and you probably recognize when other people shop there and you turn up your nose at them. But they are still effective at getting what they want from their sartorial choices.
Writing is free. About as free as anything can be. It's not classism to ask people to put some actual thought into their ideas. It's just reality that people won't want to read slop.
>I find it interesting that the 6 replies to my comment assumed that I was talking about myself
No, it's just colloquial English on a forum to use "you" as a general address to the reader, not literally the person you reply to. As in, "if you (someone) feel you can't write without AI, then writing may not be for you (that person)."
Writing _is_ thinking.
Similarly, I remember there were a lot of frothy startup ideas around using AI to do very similar things. The canonical one I remember is "using AI to generate commit messages". But I don't want your AI commit messages... again, not because AI is just Platonically bad or something, but because if I want an AI summary of your commit, I'd rather do it in two years when I actually need the summary, and then use a 2027 AI to do it rather than a 2025 AI. There's little to no utility in basically caching an AI response and freezing it for me. I don't need help with that.
The value is as a nice starting point, with the message still confirmed by the actual expert. If it's fully auto-generated, or I start "accepting" everything, then I agree it becomes completely useless.
To be fair, there has never been a lot of utility in you as a human being involved, theoretically speaking. The users do not use a forum because you, a human, are pulling knobs and turning levers somewhere behind a meaningless digital profile. Any human involvement required for the software to function is merely an implementation detail. The harsh reality, as software developers continually need to be reminded, is that users really don't care how the software works under the hood!
For today, a human posting AI-generated content to a forum is still providing all the other necessary functions required, like curation and moderation. That is just as important as the content itself, but something AI is still not very good at. A low-value poster may not put much care into that, granted, but "slop" would be dealt with the same way regardless of whether it was generated by AI or written by hand. The source of content is ultimately immaterial.
Once AI gets good, we'll all jump to AI-driven forums anyway, so those who embrace it now will be more likely to stave off the Digg/Slashdot future.
The idea that words people write don't mean anything or imply anything in an abstract sense is misguided, in my opinion. When one reads something a person wrote, they think about what the person who wrote it was thinking, what it means to them, what the implications of what they think might be... there are people who do not think about things like this, so they don't care and view genAI text as equivalent because that level of thought simply isn't put into their reading.
Anyway, my point is talking on a forum filled with LLMs would probably stop being interesting and engaging very quickly because LLMs are bad at emulating the lateral thinking, diversity of ideas, and abstraction of communication that make talking to a human fun.
While that is no doubt true today, the earlier comment that sparked this posits that it may only be a temporary state that will improve in the future. Once LLM and human creation are indistinguishable, there is no reason to have concern for what generated the content, is there?
Nobody uses a forum for the human connection. There is no human! I can't see your face, I can't touch your skin, I don't feel the heat radiating from your body. Hell, if we meet each other on the street later today, I'll never know it was you. There is only software. I do assume, knowing a thing or two about how the technology works, that in implementation that there is a human somewhere in the loop, but I don't completely know for sure, and it wouldn't make a difference anyway.
There is a place for human connection, most certainly, but it is found in the "real world". Forums are not equivalent. They are something else entirely.
> Anyway, my point is talking on a forum filled with LLMs would probably stop being interesting and engaging very quickly
Just as it does when humans write drivel by hand. There is merit to banning accounts that post garbage, but what produced that garbage is irrelevant. AI's involvement, or lack thereof, makes no difference. The quality of an account can be judged on its output, not the mechanism by which it operates.
I see this a lot in AI discussions: at best, it can do what we do at a level we'd consider "good enough." It can write mediocre slop just as well as the most mediocre of us. To me, that is an exceptional lowering of the bar for our work. We shoot for the stars; we just miss sometimes!
I understand that Luddites are always fearful of losing their jobs, but that fear is ultimately irrational.
What we got: more content polluting search, aka worse search.
Having summarized results appear immediately with links to the sources is preferable to opening multiple tabs and sifting through low-quality content and clickbait.
Many real-world problems aren't as simple as "type some keywords" and get relevant results. AI excels as a "rubber duck", i.e., a tool to explore possible solutions, troubleshoot issues, discover new approaches, etc.
Yes, LLMs are useful for junior developers. But for experienced developers, they're more valuable.
It's a tool, just like search engines.
Airplanes are also a tool. Would you limit your travel to destinations within walking distance? Or avoid checking the weather because forecasts use Bayesian probability (and some mix of machine learning)? Or avoid power tools because they deny the freedom of doing things the hard way?
One can imagine that when early humans began wearing clothing to keep warm, there were naysayers who preferred to stay cold.
The most creative people I know are using AI to further their creativity. Examples: storytelling, world-building, voice models, game development, artwork, assistants that mimic their personality, helping loved ones enjoy a better quality of life as they age, smart home automations to help their grandmother, text-to-speech for the visually impaired or those who have trouble reading, custom voice commands, and so on.
Should I tell my mom to turn off Siri and avoid highlighting text and tapping "Speak" because it uses AI under the hood? I think not.
They embrace it, just like creative people have always done.
[0] https://arstechnica.com/gadgets/2024/05/google-is-reimaginin...
I confirmed that from my own memory via a Google AI summary, quoted verbatim above. Of course, I would never have learned it in the first place had somebody not written it down.
He did not. You should read the dialogue.
> I confirmed that from my own memory via a Google AI summary, quoted verbatim above.
This is the biggest problem with LLMs in my view. They are great at confirmation bias.
In Phaedrus 257c–279c, Plato portrays Socrates discussing rhetoric and the merits of writing speeches, not writing in general.
"Socrates: Then that is clear to all, that writing speeches is not in itself a disgrace.
Phaedrus: How can it be?
Socrates: But the disgrace, I fancy, consists in speaking or writing not well, but disgracefully and badly.
Phaedrus: Evidently."
I mean, writing had existed for three millennia by the time this dialogue was written.
Edit to add:
Projects like the Internet Archive will be even more important in the future.
AI is widely used for support tasks such as:
- Transcribing interviews
- Research assistance and generating story outlines
- Suggesting headlines, SEO optimization, and copyediting
- Automating routine content like financial reports and sports recaps
This seems like a reasonable approach, but even so I agree with your prediction that people will mostly interact with the web via their AI interface.
For something like a blog I would agree, but I have found AI to be fantastic at generating copy for some SaaS websites I run. I find it a great "polishing engine" for copy that I write: I will often write some very sloppy copy that just gets the point across and then feed that to a model to get a more polished version geared to a specific outcome. Usually I will generate a couple of variants of the copy I fed it, validate them for accuracy, slap them into my CMS, run an A/B test, and stick with the version that best accomplishes the content's specific goal based on user engagement, click-through, etc.
Em dashes; "it's not just (x), it's (y)"; "underscoring (z)"; the limited number of ways it structures sentences and paragraphs; its habit of ending with an emphasized conclusion. I could go on all day.
DeepSeek is a little bit better at writing in a generic and uncharacteristic tone, but still... it's not good.
And even when the most obvious "tells" are removed, articles can sometimes nevertheless seem AI-written. Just check this one out:
> https://searchengineland.com/ai-visibility-aexecution-proble...
This is a needlessly close-to-bullying way to try to prove your point.
Which part of this looks like bullying? It was opt-in. They attended the presentation because they were interested.
We meatbags are great pattern recognizers. Here is a list of my current triggers:
"The twist?",
"Then something remarkable happened",
That said, this is more of an indictment of the laziness of authors who don't provide clearer instructions on the style they need, so the app defaults to such patterns.
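The "trigger phrase" pattern-matching we do by eye can be sketched as a toy script. This is purely illustrative: the phrase list is just the handful of tells quoted in this thread, not any rigorous detector of AI writing.

```python
import re

# A few "tell" phrases mentioned in this thread.
# The list is illustrative, not exhaustive or authoritative.
TELLS = [
    r"\bthe twist\?",
    r"\bthen something remarkable happened\b",
    r"\bbut here's the (?:thing|kicker)\b",
    r"\bmaybe, just maybe\b",
]

def tell_count(text: str) -> int:
    """Count how many known tell phrases appear in the text."""
    lower = text.lower()
    return sum(1 for pattern in TELLS if re.search(pattern, lower))

sample = "But here's the kicker: and maybe, just maybe, the twist?"
print(tell_count(sample))  # 3
```

A high count proves nothing on its own, of course; it just flags prose worth a skeptical second read, which is roughly what our own pattern recognition does.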
(Context: I told it to write "at will" after a session of explaining Rene Girard's mimetic desire in the styles of various authors)
*Well, here we are,
You've got me tangled
*And hey, maybe I
*Imagine this: I'm sitting
*And maybe now, I'm
You've got me thinking
*Maybe it's saying, "Hey,
*And so, I think
Not just any wanting,
The kind that makes
There's something beautiful about
The way it drives
*But here's the kicker:
It's got a mind
It makes us do
It's a double-edged sword
*And yet, without it,
Probably just sitting around,
*So maybe, just maybe,
*Not just because you've
*It's that tiny ember
In the end, desire's
It's the fire that
*And maybe, just maybe, [yes, again]
So here's to the
May it burn bright
https://ccp.cx/a/chatgpt-voice.htm

>> But here's the thing.
>This is one of the usual key turning points in these essays. An earlier one happened when it was like [introduces idea] [straw-mans objection] [denies strawman]. I didn't bring it up because it's not always a strong one and this one didn't seem entirely too heavy-handed. There is, however, much more often a very obvious "But here's the thing" (or similar) to be found. As soon as I saw that, I already knew I was going to find a paragraph beginning with "So" somewhere near the end.
I'm sick of seeing this everywhere. 2 hosting companies use this in every single weekly spam email they send.
Except for the founders/early employees who get a modest (sometimes excessive) paycheck.
That would be the case if VCs were investing their own money, but they're not. They're investing on behalf of their LPs. Who LPs are is generally an extremely closely-guarded secret, but it includes institutional investors, which means middle-class pensions and 401(k)s are wrapped up in these investments as well, just as they were tied up in the 2008 financial crisis.
It's not as clean-cut as it seems.