Think of interview candidates rejected by AI, employees fired by AI, or that case where AI identified a snack pack in a student's pocket as a weapon. This will lead to a demand for "organic decision making".
Why?
A recording of the entire process of its creation is one possible answer (though how are deep fakes countered?).
But maybe there is some cryptographic solution involving one-way, provable timestamps...
Does anyone know of anyone working on such a thing?
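For what it's worth, here is a crude sketch of the "provable timestamps" idea, not any existing product: hash each draft into a chain, so the final digest commits to the whole ordered history, then anchor only that final digest externally (e.g. with an RFC 3161 timestamping authority or a public ledger; that part is assumed and omitted here). The chaining in Python:

```python
import hashlib

def chain_drafts(drafts):
    """Fold each draft into a running SHA-256 chain.

    Each digest covers the previous digest plus the new draft, so the
    final value commits to the entire ordered history of drafts; you
    can't later reorder or swap in a draft without breaking the chain.
    """
    digest = b"\x00" * 32  # arbitrary genesis value
    for draft in drafts:
        digest = hashlib.sha256(digest + draft.encode("utf-8")).digest()
    return digest.hex()

history = [
    "First rough outline.",
    "Chapter one, second pass.",
    "Full draft with the ending rewritten.",
]
final = chain_drafts(history)
print(final)  # this is the value you would timestamp externally
```

Of course, this only proves the drafts existed by a certain time and in a certain order; it says nothing about whether an AI wrote them, which is exactly the hard part.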
It's a social problem at heart and piling on yet more technology won't fix it.
Did the author come up with the main ideas, character arcs, or plot devices himself? Did he ever seek assistance from AI to come up with new plot points, rewrite paragraphs, or create dialogue?
The only thing which really matters is trust.
(I don't have an answer, just wondering.)
I'm so happy I'm not doing any school/academic work anymore, because AI writing detection tools (I learned English through reading technical docs, so of course my writing style is a bit clinical) and checking the edit history in a Google Docs document would've both fucked me over.
Are thoughts and ideas creations? Or do you just mean the literal typing?
How do you prove an idea is original and that you developed it in a vacuum, influenced _by nothing at all_?
If anything, The Hunger Games is the perfect example that you can get away with anything you want, and that was almost 20 years ago.
Everything is a remix https://www.youtube.com/watch?v=nJPERZDfyWc or if you hate your life https://tvtropes.org/
People will know by reputation alone, which cannot be fabricated.
Self certification backed by a war chest to sue those who lie.
So yeah, simply filtering by year published could be a start.
I also wrote an article on my blog arguing that these days you are mainly writing for yourself and your family, friends, and followers; the algorithm is very unlikely to get you beyond that word-of-mouth audience unless you pay $$, go all-in promoting on social media (which may backfire), or are extremely lucky. With AI, the algorithm has become the enemy, and finding genuine indie authors is unfortunately getting harder.
Writing a book is, in most cases, something which happens between the author and their writing medium; how could any publisher verify anything about AI use, except in the most obvious cases?
The one thing which matters here is honesty and trust and I do not see how an outside organization could help in creating that honesty and maintaining that trust.
So this is the thing that Zitron and Doctorow are always talking about? Naked grifting in the AI industry?
Why? Can't it be done the same way it's done with copyrighted material: by checking the author's process?
(Because, at least in the EU, the law permits writing basically the same thing if both authors reached it organically: you keep a trail of drafts and other writing-process documents, and as long as you can prove you came upon it without influence from the other author, you're fine.)
Proving that you did it without AI could be similar. For example, just videotape the whole writing process.
Now, whether anyone cares about such proofs is another topic.
Which proves very little. It is also something which authors would absolutely loathe to do.
An independent certification body is quite an old-world solution for a problem like this, but I’m not sure this is something that can be done mathematically. A web of trust may be all we have.
I disagree. AI use is diffuse. An author is specific. Having people label their work as AI free is accountable in a way trying to require AI-generated work be labeled is not.
> similar to those found in cigarettes
Hyperbole undermines your argument. We have decades of rigorous and international evidence for the harms from cigarettes. We don’t for AI.
For example, I stumbled on https://www.amazon.com/dp/B0DT4TKY58 and had never heard of the author. Their page (https://www.amazon.com/stores/author/B004LUETE8) suggested they were incredibly prolific in a huge number of areas which already felt off. No information about "Robert Johnson" was available either. The publisher, HiTeX Press (https://www.amazon.com/s?k=HiTeX+Press) has a few other authors with similarly generic names and no information available about them, each the author of numerous books spanning a huge array of topics.
It feels even more bewildering and disheartening to see AI slop come into the physical world like this.
Reportedly, Kindle has already been flooded with "AI" generated books. And I've heard complaints from authors, of AI superficial rewritings of their own books being published by scammers. (So, not only "AI, write a YA novel, to the market, about a coming of age vampire young woman small town friends-to-lovers romance", but "AI, write a new novel in the style of Jane Smith, basically laundering previous things she's written" and "AI, copy the top-ranked fiction books in each category on Amazon, and substitute names of things, and how things are worded.")
For now, Kindle is already requiring publishers/authors to certify which aspects of their books AI tools were used for (e.g., text, illustrations, covers), something about how the tools were used (e.g., outright generation vs. assistive with heavy human work), and which tools were used. So that self-reporting is already being done somewhere, just not exposed to buyers yet.
That won't stop the dishonest, but at least it will help keep the honest writers honest. For example, if you, an honest writer, consider for a moment using generative AI to first-draft a scene, an awareness that you're required to disclose that generative AI use will give you pause, and maybe you decide that's not a direction you want to go with your work, nor how you want to be known.
Incidentally, I've noticed a lot of angry anti-generative-AI sentiment among creatives like writers and artists. Much more than among us techbros. Maybe the difference is that techbros are generally positioning ourselves to profit from AI, from copyright violations, selling AI products to others, and investment scams.
If a person who I know has taste signs off on a 100% AI book, I'll happily give it a spin. That person, to me, becomes the author as soon as they say that it's work that they would put their name on. The book has become an upside-down urinal. I'm not sure AI books are any different than cut-ups, other than somebody signed a cut-up. I've really enjoyed some cut-ups and stupid experiments, and really projected a lot onto them.
My experience in running things I've written through GPT-5 is that my angry reactions to its rave reviews and its clumsy attempts to expand or rewrite are stimulating in and of themselves. They often convince me to rewrite in order to throw the LLM even farther off the trail.
Maybe a lot of modern writers are looking for a certification because a lot of what they turn out is indistinguishable cliché, drawn from their experiences watching television in middle-class suburbs and reading the work of newspaper movie critics.
Lastly, everything about this site looks like it was created by AI.
Not so sure. Books are not all just entertainment; they also develop one's outlook on life, relationships, morality, etc. I mean, of course books can also be written by "bad" people to propagate their view of things, but at least you're still peeking into the views and distilled experience of a fellow human who lived a personal life.
Who knows what values are implicitly espoused by a book that has no author and was optimized for being liked by readers. Do that at a large enough scale and it's really hard to tell what kind of effect it has.
There is some of this even without AI. Plenty of modern pulpy thriller and romance books for example are highly market-optimised by now.
There are thousands of data points out there for what works and doesn't and it would be a very principled author who ignores all the evidence of what demonstrably sells in favour of pure self-expression.
Then again, AI allows you to turbocharge the analysis and pluck out the variables that statistically trigger higher sales. I'd be surprised if someone isn't right now explicitly training a Content-o-matic model on the text of books along with detailed sales data and reviews. Perhaps a large pro-AI company with access to all the e-book versions, 20 years of detailed sales data, as well as all the telemetry such as highlighted passages and page turns on their reader devices? Even if you didn't or couldn't use it to literally write the whole thing, you could have it optimize the output against expected sales.
JumpCrisscross•2h ago
You’re really arguing greed (EDIT: and bad risk evaluation) didn’t exist before capitalism?
Is my cat a capitalist?
asmor•2h ago
Capitalism does, however, incentivize unhealthy and self-destructive exploitation, including of generative AI.