Heck, I, too, have noticed that nobody reads anything: what does that have to do with AI? At least with AI, people could read a summary of a 30-page corporate memo and ask it questions.
I repeat: that people do not read is not a new problem, nor is it made one iota worse by AI.
In practice, I find that if I don't format something as a bulleted/numbered list, nobody is going to look at it.
I’m one of those people, if I’m honest. If I’m reading for work, I want the minimum words necessary to get the point across. I’m looking for information, not a story.
If I’m reading for enjoyment though, it’s another thing entirely.
Alice: Hey ChatGPT, please take this bullet list of points and turn it into a polite, but assertive and persuasive, email to Bob.
Bob: Hey, ChatGPT, please take Alice's email and turn it into a succinct list of bullet points.
Although, the next step on this ladder is going to be that people don't even double-check the facts in the original document, and just take what the LLM said as truth, which is perhaps scarier to me than people not reading the original documents in the first place...
This for example is fascinating https://youtu.be/Ca_RbPXraDE?si=WmqTH-DONchvlJjE
Writing and reading are like writing and talking. It feels like multitasking, but in reality you're switching between the two. Really fast switching, but not true parallelism.
Here on HN, short comments are more appreciated than longer comments. People are skimming, not reading. The ability to say a lot with very few words is what is appreciated the most.
That's nothing new, btw. As the old line goes (often attributed to Mark Twain, though it likely predates him): “I didn’t have time to write a short letter, so I wrote a long one instead.”
Rephrasing things more efficiently is a good use of LLMs. People are getting used to better signal-to-noise ratios in written text and higher standards of writing. And they'll mercilessly use LLMs to deal with long form drivel produced by their colleagues.
That's not actually true. It may be true now when everyone still has context, but if you built a sound system that will outlast your own contributions to it, the documentation becomes invaluable.
> People are skimming, not reading.
Yes. The cause of much suffering and misery in the modern world.
> And they'll mercilessly use LLMs to deal with long form drivel produced by their colleagues.
otoh, they also use them to generate more noise and drivel than we ever imagined possible. When it took human effort to pump out boring corp-speak, that at least put a cap on the amount of useless documentation and verbiage being emitted. Now the ceiling has been completely blown off. People who have been incapable of even crafting a single sentence their entire lives can now shovel volumes of AI-generated garbage down our throats.
The myth is that Amazon's culture runs contrary to that. Is it true?
I definitely feel like writing clarifies my own thoughts and helps me find inconsistencies or other problems with what I think I want to say. If everyone is letting the LLM do most of the work writing and reading, their thought process and eventual conclusions are definitely going to be strongly influenced by that LLM. A great justification for the alignment efforts, I guess, or a great opportunity for propagandists.
We've invented the equivalent of a "calculator for words" and we're going through the growing pains of discovering that putting words together is a separate activity from thinking. We've never needed to conceptualize them as separate activities until now so we don't have the conceptual distinction and language to even describe them that way.
I think proving theorems is a better analogy than rote calculation—like the writing process, it is creative.
While mathematicians don't sit down and smash random symbols together to concoct proofs, many of them will stress the importance of good notation. Chewing on notation a bit can help reveal connections in the hunt for a proof.
When people claim that "writing is thinking" they mean something similar. The free-associative process of writing out our thoughts can help bring an initial clarity that is difficult to achieve without an external medium. Most of the time, we ought to polish those thoughts and connections just as a mathematician ought to polish his proof sketch, but there is some extent to which the actual activity does in fact make up a significant portion of the thinking process. This is why offloading all of that activity to LLMs is dangerous. You can judge the string of propositions produced by an LLM, but you haven't thought them.
Title is about AI. Content is just rambling on how people dislike reading business reports.
If writing is thinking, I think the author is having trouble thinking coherently.
Or how the thoughts we have as we are writing shape our understanding, and so we come out with not only a written composition, but a new frame of mind; AI generated writing allows us to preserve our frame of mind—for better or worse!
There is something to be said for the ways in which coding, too, is an exercise in self-authoring, ie authoring-of-the-self.
I'm currently using an LLM to rewrite a fitness book. It takes ~20 pages of rambling text by a professional coach // amateur writer and turns it into 4 crisp, clear pages of LaTeX with informative diagrams, flow charts, color-coding, tables, etc. I sent it out to friends and they all love the new style. Even the ones who hate the gym.
My experience is LLMs can write very very well; we just have to care.
Hubert Humphrey (US VP) was asked how long it would take him to prepare a 15-minute talk: "one week". Asked how long to prepare a two-hour talk: "I am ready right now".
Although, I agree with the author, since many of the emails and messages on LinkedIn I get these days are just long AI-generated posts. I'm not reading them anymore; some other AI summarizes them, because no human talks or writes the way basic AI prompting does. So, so difficult to read.
My experience is that people who think this are really bad writers. That's fine, because most human writing is bad too. So if your goal is just to put more bad writing into the world for commercial reasons, then there's some utility in it for sure.
I haven’t seen many examples of anything in either visual or prose arts coming out of an AI that I’ve liked, and the ones I have seen seem like they took a human doing a lot of prompting to the point that they are essentially human made art using the AI as a renderer. (Which is fine. I’m just saying the AI didn’t make it autonomously.)
Most people are very bad readers, too.
For example, most of my coworkers don't read books at all, and the few that do, only read tech or work-related books. (Note that most don't even read that).
FWIW I think there's a kernel of truth when you worry about reading skills, but 1) that's a longer trend involving all kinds of political and cultural issues, and 2) right now I'm happy with any improvement to technical communication. I think people might read more if books were better written & more respectful of their time.
Thanks for replying so quickly! Just to clarify, what do you mean by latex?
So you don't use AI-generated illustrations? Those are real?
What tool are you using for this?
I too have used an LLM to do writing that, frankly, I wouldn't have done without it. Often I don't even take what it says, but it helps to get the ideas out in written form, and then I can edit it to sound more like how I want to sound.
But your consumer changed as well.
https://www.youtube.com/watch?v=SC2eSujzrUY
Inventions created for convenience decades ago have now become health concerns. I wonder how AI might affect our intellectual well-being in the decades to come.
I feel the movie well captures the tone of the current moment.
Worth a watch.
We’re already seeing the regression to the mean: basically a fanatical clinging to myths, and a historicism that favors whatever period or place flatters the lifestyle people personally want.
Personally: All my best business has been done in front of a whiteboard.
I don't use LLMs at all for writing. Mainly for checking stuff and for the most boilerplate of code.
It's that (already) old joke: we give the LLM 5 bullet points to write a memo and the recipient uses an LLM to turn it back to 5 bullet points.
Some plausible (to me) possibilities:
1. Bifurcation: Maybe a subset of knowledge workers continue to write and read and therefore drive the decisions of the business. The remainder just do what the LLM says and eventually get automated away.
2. Augmentation: Thinking is primarily done by humans, but augmented by AI. E.g., I write my thoughts down (maybe in 5 bullet points or maybe in paragraphs) and I give it to the LLM to critique. The LLM helps by poking holes and providing better arguments. The result can be distributed to everyone else by LLMs in customized form (some people get bullet points, some get slide decks, some get the full document).
3. Transformation: Maybe the AI does the thinking. Would that be so bad? The board of directors sets goals and approves the basic strategy. The executive team is far smaller and just oversees the AI. The AI decides how to allocate resources, align incentives, and communicate plans. Just as programmers let the compiler write the machine code, why bother with the minutiae of resource allocation? That sounds like something an algorithm could do. And since nobody reads anyway, the AI can direct people individually, but in a coordinated fashion. Indeed, the AI can be far more coordinated than an executive team.
This already happens. Being the person who writes the doc [for what we wanna do next] gives you ridiculous leverage and sway in the business. Everyone else is immediately put in the position of giving feedback instead of driving and deciding.
Being the person who gives the feedback gives you incredible leverage over the people who just follow instructions from the final version.
This is already how we moved from stupidly long and formal emails to Slack messages. And from messages to reactions.
I understand not every field went there, but I think it's just a matter of time we collectively cut the traditional boilerplate, which would negate most of what the LLMs are bringing to the table right now.
> 2. Augmentation
I see it as the equivalent of IntelliSense but expanded to everything. As a concept, it doesn't sound so bad?
Where I am conflicted is creative writing -- it's something I have been interested in but never pursued... and now I am able to pursue it with AI's help. There is a degree of embarrassment when confiding to folks that, yes, a piece was AI-assisted... see here what I mean: https://humancurious.substack.com/p/the-architect-and-the-cr...
I hate this thing, it's so soul-less.
I sort of feel like it would blunt the downsides of AI rewriting everything if it had to explain why it was making all the changes. Being told the rationale would allow users to make better decisions about whether to accept/reject a change, and also help the user avoid making the same writing mistakes in the future.
My manager uses AI when generating docs and emails. I think he does it because English isn't his native language and he goes for the "polished" look.
Frankly, I prefer the grammar errors and the authentic version. The AI polish is always impersonal and reeks of low effort.
I also love how everyone thinks coworkers don't notice the AI touch... of course we do.
It's like when you are growing up and a certain type of behaviour that would work for socializing when you're 14 years old suddenly doesn't work anymore when you are 21. You learn about it when someone you trust brings it to your attention, and suddenly you have the opportunity to reflect and change your behaviour.
The thing I really fear with AI is the same as with mind-altering drugs like Adderall: in some places you just can't afford the luxury of not using it without losing competitiveness (I think; I've never used it, but I know of people who do with regularity).
So maybe we ourselves don't want to skip reading what we write, but sometimes there is a middle manager making you do it. Then it's a problem of context that awareness by itself doesn't solve, except maybe in the long run.
This is strange to me. You could give me 100 chances to guess which “mind-altering drug” you are thinking of and Adderall wouldn’t cross my mind. Amphetamine is a stimulant; it’s plainly not mind-altering in the way that psychedelics are. Adderall is mind-altering in the same way that caffeine is. Which is to say, it isn’t.
I'm unfortunately not too optimistic about this. There are plenty of things that are bad for you that everyone is aware of: not exercising, eating junk food, spending all day online, etc. But so many people do these things anyways; the human mind is incredible at cheating itself to make things easier on itself, and I don't think this is an exception.
I'll take the flip side of the argument and say AI lets humans get back to raw, first-principles research and writing.
In many ways, the middlemen of literature (i.e. re-bloggers, article writers, etc.) are now moot: groups who don't actually add value but run insane amounts of ads on the page. This pushes people to actually write original content.
Is this perfect? no way. Is it dangerous? yes.
There are so many problems, but for better/worse, too many of us have changed the way we operate. It's here to stay.
Is the post part of a series? I will be looking forward to the part that addresses possible solutions.
My memory gets exercised a lot less frequently than it would need to without writing.
But memory is also not thinking. It is a component of thinking, but it is not thinking itself. Discourse, arguably, whether in natural or symbolic language, is thinking. If we offload all of that onto machines, we'll do less of it, and yes, our expectations will change. But I actually think the scenario here is different from the one Socrates faced, and the stakes are slightly higher. Socrates wasn't wrong; we just needed internal memory less than we thought once external memory became feasible, as cool and badass as it may seem to "own" Socrates in retrospect.
"AI" is using an algorithm and statistics in the same way---it's just more accurate at making intelligible sentences than the example above. I wouldn't call either thinking, would you?
No one should call it writing, either
Now we can churn out text at unprecedented rates, and the original problem, that no one reads, is left untouched.
The author wonders what happens when the weird lossy gap in-between these processes gets worse.
There’s lots of evidence that writing helps formulate good thinking. Interestingly, CoT reasoning mirrors this even if the underlying mechanisms differ. So while I wouldn’t call this thinking, I also don’t think reducing LLM output to mere algorithmic output exactly captures what’s happening either.
https://www.youtube.com/watch?v=sqm4-B07LsE
I think you miss my point a bit.
Any text that can be churned out at unprecedented rates likely isn't worth reading (or writing, or looking at, or listening to), and anyone consuming this stuff already isn't doing much thinking.
You can lead a horse etc etc