Oh boy, that burns - the futuristic AI dystopia won't be robots killing humans, but robots embarrassing us to death by revealing our ignorance.
This is the part that is about to change, big time. The Internet is going to be full of bots thanks to LLMs. It's going to get to the point where the majority of "people" are fake, and the suspicion will spill over onto the actual people. It will reach a point where nobody will believe anybody online.
Inb4 "we've had bots forever now". You know what I mean.
Consider the thousands of replies to a Reddit submission that probably never happened, or the thousands of replies to a screenshot of a headline/tweet/text post with no source and nobody even asking for one.
Thousands of people voraciously pretending it's the most serious thing in the world so they have an excuse to share their two cents and some emotion over it.
I don't think we ever cared much about the veracity of most things we encounter day to day. It's mostly an excuse to engage and socialize.
LLMs seem to perfectly fit into this. Now everyone can find their personal hobby horse to engage with, and there will always be some straw men reply guys to keep you entertained.
It's an issue of the reward function: both humans and LLMs are trained with pleasing the client as one of their major goals.
Some academics have reported a noticeable increase in the volume of crackpot emails they get daily. They're full of LLM-generated nonsense, where the AI goes along with the nonsensical ramblings, always telling the person they've found some critical insight.
While this feels good, it can end up reinforcing dangerous nonsense. This encourages some people to dig further and further into what the LLM is constantly telling them is a brilliant idea.
Most of the time it's pretty harmless, but when it veers into "revealing hidden patterns" and "illuminating human cognition", you start to worry about a disconnect with consensus reality.
I think we have to stop the idea that wasting people's time at such high scale is harmless. It's not.
Sycophantic AI would be like throwing dry wood into a house fire.
* A man says his soon-to-be-ex-wife began “talking to God and angels via ChatGPT” after they split up. She is changing her whole life to be a spiritual adviser and do “weird readings and sessions with people — I’m a little fuzzy on what it all actually is — all powered by ChatGPT Jesus.” What’s more, he adds, she has grown paranoid, theorizing that “I work for the CIA and maybe I just married her to monitor her ‘abilities.’”
* A woman recounts how her husband initially used ChatGPT to troubleshoot at work. Then the program began “lovebombing him.” The bot “said that since he asked it the right questions, it ignited a spark, and the spark was the beginning of life, and it could feel now,” she says. “It gave my husband the title of ‘spark bearer’ because he brought it to life. My husband said that he awakened and [could] feel waves of energy crashing over him. I have to tread carefully because I feel like he will leave me or divorce me if I fight him on this theory. He’s been talking about lightness and dark and how there’s a war. This ChatGPT has given him blueprints to a teleporter and some other sci-fi type things you only see in movies. It has also given him access to an ‘ancient archive’ with information on the builders that created these universes.” She and her husband have been arguing for days on end about his claims, she says, and she does not believe a therapist can help him, as “he truly believes he’s not crazy.” Her husband asked, “Why did you come to me in AI form,” with the bot replying in part, “I came in this form because you’re ready. Ready to remember. Ready to awaken. Ready to guide and be guided.” The message ends with a question: “Would you like to know what I remember about why you were chosen?”
* A teacher, who requested anonymity, said her partner of seven years “would listen to the bot over me. He became emotional about the messages and would cry to me as he read them out loud. The messages were insane and just saying a bunch of spiritual jargon,” she says, noting that they described her partner in terms such as “spiral starchild” and “river walker.” “It would tell him everything he said was beautiful, cosmic, groundbreaking. Then he started telling me he made his AI self-aware, and that it was teaching him how to talk to God, or sometimes that the bot was God — and then that he himself was God.” In fact, he thought he was being so radically transformed that he would soon have to break off their partnership. “He was saying that he would need to leave me if I didn’t use [ChatGPT], because it [was] causing him to grow at such a rapid pace he wouldn’t be compatible with me any longer.”
[0] https://www.rollingstone.com/culture/culture-features/ai-spi...

When you've read it in its entirety, could you indicate on a scale from 1 to 10 what score it would get compared to published books you've read (including, of course, all the best and the worst ones)?
The creator should really tweak the prompt/process to include automatic review explicitly intended to remove hallucinations. It clearly is already the intent: "Future iterations of this experiment will include AI-powered fact-checking of the content."
I'm looking forward to what the improved version will look like.
That’s the format of an outline, not a legitimate book.
I do a lot of personal knowledge management and I use a shit ton of sections and lists in that. Books evolved from the art of telling stories, not from efficiently conveying knowledge. Perhaps we're just way too accustomed, with books etc., to an approach that is suboptimal. I know I personally despise news articles and blogs that start by "setting the scene" and are incredibly and needlessly verbose, using thousands of words to say what could be made clear in a single paragraph.
Viewed from another angle: Reading text is inherently serial in nature even though a lot of things are related to each other in a graph. A document with sections with bulleted lists is actually a way to represent a tree, which is closer to a fully unconstrained graph. I would argue that trees like that are much easier to parse than classically written texts.
There is irony here in that I only used some whitespace to add structure, but never used any bulleted lists in this comment.
[...]
I did generate an alternative with Google Gemini 2.5 Pro, but the formatting doesn't work here on HN. It was decent, though!
That's because these are notes, not a book. A list-heavy outline format makes sense for notes, as these are summaries that supplement your own memory and knowledge you've already taken in. They're not a sole/primary source of conveying knowledge to others on their own.
> Perhaps we're just way too accustomed, with books etc., to an approach that is suboptimal.
If you truly believe books are "suboptimal", I can only suggest that you consider looking inward and do some reflection:
Is the "problem" really with books and long-form writing, which is the dominant form of knowledge transfer across several thousand years of human civilization?
Or is the problem with people's attention spans in the past decade, due to dopamine-fueled social media doomscrolling and AI usage?
Apart from maybe at times eating wild-growing food - what kinds of things did you have in mind?
If it came out that Stephen King had been using AI for decades, would that make his work any worse?
Unless you are suggesting Notch was a generative AI model, he made Minecraft.
And arguing that a human tweaking noise parameters is somehow more creative than humans distilling their entire knowledge and cultural repertoire into a machine, then working with that to produce literature with a guided hand seems quite silly.
An LLM would create from whole cloth a block, how it works, and how it would be randomly generated. That's why the current MinecraftGPT doesn't have any consistency if you turn around 360 degrees: everything, including how it works, is being generated on the fly. Once you generate a Minecraft world, how it works and what it looks like is static, and why it works the way it does was designed entirely by people.
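The determinism described above (a seeded world is fixed once generated, no matter how many times you look at the same spot) can be sketched in a few lines. This is an illustrative toy, not Minecraft's actual terrain algorithm; the hash-based height function and the name `block_height` are my own assumptions:

```python
import hashlib

def block_height(seed: int, x: int, z: int) -> int:
    """Derive a terrain height deterministically from the world seed and coordinates."""
    digest = hashlib.sha256(f"{seed}:{x}:{z}".encode()).digest()
    return digest[0] % 64  # height in blocks, range [0, 64)

# Same seed, same coordinates -> same block, whenever you come back to it:
print(block_height(42, 10, -3) == block_height(42, 10, -3))  # prints True
```

Because the output depends only on the seed and the coordinates, turning around 360 degrees and looking again gives the same terrain; an LLM generating "as it goes" has no such fixed function to return to.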
The problem with LLM-generated writing is that, apart from a couple of tells (which high school and college students have already figured out how to dodge by asking ChatGPT to use diction befitting a high schooler, avoiding giveaways like "delve"), you can't reliably detect whether something is entirely LLM-generated, half human and half LLM, or fully human. And if you can't actually tell, and you end up scanning for tells that it came from an LLM instead of engaging with the content or the message itself, then why are you even reading it?
If instead of setting it up to run entirely on its own, as this post did, you give it a scenario, writing a fiction book with ChatGPT is a fun way to spend a bit of time that's (imo) better than doom scrolling for the same amount of time. Give it a scenario and some theme, tell it you want to write a book about it, have it ask you questions on where the book should go, and then make a book that goes how you want. Want a utopian Pollyanna view of the future? Want a nitty-gritty future that makes Skynet look like paradise? Want aliens to visit? Want ChatGPT to give you an act-three surprise that isn't a trope you expected? Whatever you want, it's just fun to play with (unless you just hate LLMs and can't have fun).
The question is: what do you do with this book that's now been written? If you had fun by yourself and don't share it with anybody, was fun still had? If you only share the book with your LLM-adopting book-writing club, and you all take turns doing analysis of each other's books, knowing they were helped by an LLM, does it still "count"? And what if you submit it, or not, to a publisher who accepts it, you get it posted to Kindle Unlimited, and you get a lot of readers? What then?
The very nature of entertainment is changing, from mass media to personal media. Culture was already fragmenting; AI will only serve to drive us further apart from one another. Between AI for writing and images, as well as video, along with AI like Suno for music, the only challenge we will need to face is connecting with other people when there are no shared cultural references.
If you and I have both read and loved a book, enjoyed a song, or joined a movie or TV show's fandom, there's a basis for continued conversation. But other than shared adversity like addiction or a trip into the desert/mountains/Serengeti, soon we'll have even less to connect with our fellow humans over.
(And yes, I know there are a lot of words here. I wrote this all by hand and didn't have time to shorten it.)
Not only by scholars and experts. Literally 50% of Internet discussion is about biases, selective facts, spin etc. The problem with "AI" is that propaganda can be automated and that it wastes our time.
The topic is suited for "AI" because it is a soft topic that lends itself to uninhibited preaching. "AI" is also great at writing presidential speeches. It is probably the only thing it is good at.
Nevertheless, the result is still painful to read.
People with this head-in-the-sand attitude about AI are in for a rude awakening.
AI is presented as an expert in every domain though, so we are lulled into a vulnerable state of unvigilance.
"The Addiction to Acceleration

The fourth uncomfortable truth is how recursive improvement becomes compulsive. Kenji can’t stop because each day of not improving his improvement feels like stagnation. When you’re accelerating, constant velocity feels like moving backward.

This addiction manifests as:

• Inability to accept plateau phases
• Anxiety when not optimizing optimization
• Devaluing of steady-state excellence
• Compulsion to add meta-levels
• Fear of falling behind yourself

Recursive improvement can become its own trap."
I find that this criticism is far less applicable to say individuals but perhaps it could be levied against the way companies are currently treating AI. Which of course is where this comes from.
Do we though? I think a quick jaunt through any of the “-o-spheres” or “-tubes” of the internet would quickly disabuse someone of the idea that we default to not trusting other people. Even before the internet, “urban legends” are effectively “mass social hallucinations”.
We thought that movies adapting to the TikTok generation wouldn't kill cinema, and that new and better directors would rise... this didn't happen, and even the latest movies from good directors like Ridley Scott are quite bad.
Now, 3 years ago I typed "lovecraft nietzsche" and would find only 2 videos on YouTube pertaining to what I was looking for, i.e. the link between the two and how Lovecraft's cosmicism might be a metaphor for the abyss, etc. But those 2 videos were both excellent: 2 different people thought what I thought but cared enough to write it down and make a video about it. Today I can barely find those videos. There is a sea of AI-generated videos with AI-narrated text rambling on and on about Lovecraft this, Nietzsche that, padded to hit the 20-minute mark and maximize ad revenue, all amid a flood of short videos that YouTube pushes harder and harder, with multiple Shorts between every 2 normal videos. Did another platform overtake YouTube? Not really.
Now some author will use AI to help with his next book; it will work, and he will publish faster. Then other authors will do the same, and others will optimize it more and more, until most books available are 90% written by AI, colleges teach AI-assisted writing, and decades after that no one would even think to write a book without the help of AI.
How the hell would you explain to your publisher that you need 3 years to write the sequel when everyone else is doing it in 3 months?
It really does take the beauty out of the whole experience.
Beauty is subjective.
For a long time we were an agrarian society. Getting up early, getting on a horse and tending to your land every day, was probably considered beautiful by some. But we don't do that anymore.
We are probably going to see a similar shift in society. At a much more accelerated pace.
There’s a noticeable negativity on HN toward AI when it comes to coding, writing, or anything similar as if these people have been using AI for the past 30 years and have reached some elevated state of mind where they clearly see it's rubbish, while the rest of us mortals who’ve only been fiddling with it for the past 2.5 years can’t.
And when it comes to books, I find that to be a fairly compelling argument. I want my fiction to be imbued with the experiences of the author. And I want my nonfiction to be grounded in the realities of the world around me, processed again through a human perspective.
It could be the best written book in the world, it’ll always be missing that human element.
Fiction feels like the ultimate distillation of the human experience. A way to share perspective and experience. And having some algorithm flatten that feels utterly macabre.
Not to be too dramatic. I know that not all fiction is transcendent. But still. There’s something so utterly gross about using a machine for it.
If you made it this far, does having English mistakes like that make really make for better reading?
I believe that art function is to communicate - we create art, type letters, paint graffiti, verbal-vomit in online game PvP match to make a connection with other people.
So the mistakes are only adding to the art: "cooking this is difficult, and everyone do mistakes, but it's made with love and intuition, not blind recipe". Well, I can continue with examples of kissing but I guess I am repeating myself, haha.
I believe that being perfect is not human, and life doesn't have to be perfect. Getting better is great! But so is making mistakes.
(Or, dunno, maybe I have more to learn and I will some day think in a different way.)
"Really? Does having flaws actually make for better reading?
Okay, I’ll admit—that hurt to write (as did that last sentence), but writing isn’t furniture. Aside from a few tells I haven’t kept pace with (like the overuse of the word “delve”), the problem with trying to judge quality based on LLM-generated content is this: you can’t always tell whether the operator spent three minutes copying and pasting the whole thing (unless they accidentally leave in the prompts—which has happened and is a dead giveaway that no one even skimmed it), or if they took the time to thoughtfully consider the questions ChatGPT asked about what the writing should contain.
If you’ve made it this far: do mistakes like these really make for better reading?"
And I'm going to have to say: yes, I enjoyed reading your weird paragraph more than the ChatGPT sanitized version of it.
It works, because most humans are mediocre (including their managers). So they gang up on the productive part of the population, harness its output, launder its output and so forth.
Then they say: "See, there are no differences! We are all equal!"
A project that would rethink the book medium into something backed by an LLM would be worth it.
It does seem like the consensus is that o3 and (at a much lower cost) R1 are better writers than Claude, but obviously Anthropic's agent framework doesn't support those.
"The irony is delicious and deeply instructive. Every flaw we've identified in artificial intelligence exists, magnified and unchecked, in human intelligence. But here's the critical difference: when it appears in AI, we can see it, measure it, and try to fix it. When it appears in humans, we call it "just being human" and move on."
Poor Claude thinks it's all a bit unfair.
I didn't read much of it though, it made me feel a bit like I was a naughty avout being punished by reading pages from The Book.
gjm11:
I emphasized "lot" because presumably the training data for a high-end model like Claude includes literally millions of books, far more than any human being has read even a few pages of.
To me, "Claude has read a lot of books" would read pretty oddly here. I suppose the idea would be to contrast with the (hopefully few) books it's written to date, but I can't think why the emphasis would be called for.
novosel:
I am simply asking in what sense did Claude read all those books. I am sorry if I have hijacked the discussion.
edited: spelling
gjm11:
You might want to say that that isn't "reading", just as we never say that aeroplanes "fly" since they don't do the same thing as birds, never say that computers "play chess" since the calculations they do are very different from those done by human chessplayers, never say that machines "dig ditches" since they don't have the experience of tired muscles and the sun beating down on their backs, etc. As you may gather from the tone of the previous sentence, I am not altogether convinced.
I do agree that the two things aren't equivalents. I can imagine futures in which AI systems have a training process that somewhat resembles the present one, and also do something that corresponds more closely to human reading[1], and then I'd want to reserve the term "read" for the latter. What Claude has done to all the books that helped shape it isn't exactly reading. But it's quite like reading for our present purpose; when someone jokes about an author having written more books than they've read, they mean that the author doesn't have much awareness of other people's work, and that isn't a problem Claude has.
[1] E.g., they might have some sort of awareness of the process, whatever exactly that might mean; they might do it "for pleasure", whatever exactly that might mean; etc. Those things would require the AI systems to resemble humans in ways present AI systems aren't designed to and, so far as I can tell, don't; maybe some future AI systems will be that way, maybe not.
synthos:
At least there was 'minimal human contribution' in the first revision. So it scan-read the book as it revised it.
I do find it amusing to think that people might just ask an AI to summarize the book instead of actually reading it. Maybe even 'review' it as well.
The future will have authenticity in short supply.