It’s one thing that the internet is full of slop now, but something about this feels deeply worse. This is something that has, for all of history, involved real human authors. Now it seems some human authors use AI and don’t even bother to check that the results make sense.
It can also be seen in newer fanfics. On AO3 you'll find fics that are written entirely with LLMs, and the authors sometimes don't even read them themselves before publishing. The lower quality is almost always apparent from the get-go.
LLMs can be very useful for writing, but I don't see serious writers using them except perhaps for checking facts or as a knowledge base. The lowest tier of readers was already one-shot by LLMs over a year ago.
Feels so wrong in physical book form.
The actions in the train scene didn't seem so bizarre to me, and even if they did, can't we still write bizarre events and characters? Similarly with the spiderweb skin: where's the line between 'no human would write this' and writing a strange character?
I don't have anything like the OOP's experience with literature or even AI, but I'm not really convinced. It's nevertheless interesting that they believe they've identified it, and worth thinking through the ramifications, whether or not the analysis is correct.
> If you want to read something very similar to The Hunger Games, this is your book. It goes reaping → dress-up → training → rating → games. The characters are different, but the plot is virtually the same.
> the book reads like a hunger games #1 rewrite
https://www.goodreads.com/book/show/214331246-sunrise-on-the...
Although most fans seem to love it.
> It would be considered standard industry practice to hire a ghostwriter for these prequels
The use of ghostwriters is the main problem, I think. AI is just another ghostwriter.
I recently read the 3rd book in a trilogy. I loved the first 2 books but didn't like the 3rd at all and stopped reading after about 100 pages. It felt like the 3rd book was just a perfunctory response to the publisher's request for another sequel in the series, a mere money grab. But now, suddenly, I'm starting to wonder whether the 3rd book was even written by the author...
I was aware that non-writers, e.g., politicians, use ghostwriters when they publish a book, but it would never have occurred to me that experienced, accomplished fiction writers would also use them.
It's not good at all, but it makes financial sense.
(But then why stop there, have the estate of the esteemed author go on contracting ghostwriters! Does it only work if you keep the death a secret, or would a licensed P.G. Wodehouse ghostwriter do as well today as if he were a recluse and never proclaimed dead?)
I think the distinction, for me, is that when I pay for a book I want access to the author's creative thoughts and personality, not just their particular "brand". I realize that a lot of readers don't care, especially in the YA space, but I'd rather read a worse novel from the person who conceived The Hunger Games than a perfect imitation from someone who's merely imitating the brand.
Tom Clancy has new books coming out every year and he's been dead for over a decade. They don't hide the "ghostwriter" but they also put Tom Clancy in huge letters at the top even though he had less than nothing to do with it.
https://www.amazon.com/Clancy-Line-Demarcation-Jack-Novel-eb...
precompute•2h ago
The smoking gun for LLM-written text is when the text is a "linked list". It can only ever directly reference the previous thing. That's not the case here. And the latest Hunger Games book isn't yet another amazon-published slopfest. It's been through a couple of rounds of editing, at the very least.
I'm not saying RedditOP is completely off-kilter. There might be something to what he's saying. Maybe Suzanne Collins (the author of the book) has been consuming a lot of LLM-generated content. Or maybe she's just ahead of the curve and writing in a style that's likely to catch fire (no pun intended) [1].
[1]: Yes, I wrote this myself! And the entire reply!
lloydatkinson•1h ago
Did we read the same extracts? The nonsensical actions and movements of the lovers in the entire train scene? The obnoxious call-and-response structure? The absurd comparison between a grandmother's skin and a spider's web because "silk"?
> It can only ever directly reference the previous thing. That's not the case here. And the latest Hunger Games book isn't yet another amazon-published slopfest. It's been through a couple of rounds of editing, at the very least.
I don't agree with your assessment here; "the previous thing" can be literally anything the user prompts. Are you suggesting that because none of the previous books in the series were written by AI, the latest in the series can't be?
precompute•1h ago
If an LLM was indeed used, the output was likely massaged to the point that its origin wouldn't be immediately obvious.
Now, my 2c: writing is sometimes atrocious, and sometimes authors jam in stupid things to maintain flow. If one person could make the connection between "silk", "spider", "weaving", and "grandmother", then another person could as well (even if one is "verifying" while the other is "proving"). Using those connections properly, in context, and succinctly is far beyond most LLMs; it would require a fair amount of gambling, which would be out of character for someone whose writing prowess was established pre-LLMs.
As for what I mean by "directly reference the previous thing": LLMs can bolt well-known (to them) phrases, sentences, and structures onto an idea/request. However, they are unable to iterate on the particulars of that idea/request in a coherent manner, which leads to slopification at large output sizes and shows us the ceiling of the quality of writing an LLM can produce.
dns_snek•9m ago
You claimed, with a high level of confidence, that the text wasn't written by AI because it lacks the obvious "tells" you believe to be present in all LLM-generated text. But if the absence of those "tells" reliably indicated human writing, then LLM detectors would have a false negative rate of approximately 0%. Do they?
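To make that statistic concrete, here's a minimal sketch (plain Python, with made-up numbers) of the false negative rate the argument turns on: the share of genuinely LLM-written texts that a detector waves through as human.

    # Minimal sketch of the false-negative-rate argument.
    # Counts are hypothetical; "positive" = text actually written by an LLM.
    false_negatives = 38  # hypothetical: LLM texts the detector labeled "human"
    true_positives = 62   # hypothetical: LLM texts the detector labeled "LLM"

    # False negative rate = FN / (FN + TP)
    fnr = false_negatives / (false_negatives + true_positives)
    print(f"false negative rate: {fnr:.0%}")
    # Prints "false negative rate: 38%" -- far from the ~0% that
    # perfectly reliable "tells" would imply.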