I think the rates of ADHD are going to go through the roof soon, and I'm not sure if there is anything that can be done about it.
As a diagnosed medical condition, I don't know; as people having seemingly shorter and shorter attention spans, we are seeing it already. TikTok and YT Shorts and the like don't help; we've weaponised inattention.
Specifically, is there any difference between people who have always read a lot, as I have, and people who don't?
My observation (anecdata) is that the people I know who read heavily are much better at spotting AI slop, and much more averse to it, than people who don't read at all.
Even when I've played with the latest LLMs and asked them questions, I simply don't like the way they answer; it feels off somehow.
AI is good at local coherence, but loses the plot over longer thoughts (paragraphs, pages). I don't think I could identify AI sentences but I'm totally confident I could identify an AI book.
This includes both opening a long text with a way of thinking that isn't reflected several paragraphs later, and maintaining a repetitive "beat" in the rhythm of the writing that is fine locally but becomes obnoxious over longer stretches. Maybe that's just regression to the mean of "voice"?
Also, reminds me of this cartoon from March 2023. [0]
[0] https://marketoonist.com/2023/03/ai-written-ai-read.html
Pre-AI, scientists would publish papers and then journalists would write summaries, which were usually misleading and often wrong.
An AI operating on its own would likely be no better than the journalist, but an AI supervised by the original scientist might well do a better job.
Isn't that the same with AI-generated source code? If lazy programmers didn't bother writing it, why should I bother reading it? I'll ask the AI to understand it and make the necessary changes. Now repeat this process over and over: I wonder what the state of such code would be over time. We are clearly walking this path.
Programming languages were originally invented for humans to write and read. Computers don't need them. They are fine with machine code. If we eliminate humans from the coding process, the code could become something that is not targeted for humans. And machines will be fine with that too.
Anyone can access ChatGPT, why do we need an intermediary?
Someone a while back shared, here on HN, almost an entire blog of (barely touched-up) AI-generated text. It even had Claude-isms like "excellent question!", em-dashes, the works. Why would anyone want to read that?
Or do you remember when Facebook groups or image communities were flooded with funny/meme AI-generated images, "The Godfather, only with Star Wars", etc? Thank you, but I can generate those zero-effort memes myself, I also have access to GenAI.
We truly don't need intermediaries.
PS: the person I mentioned before argued he didn't write the blog himself because he didn't have the time. If he didn't want to spend the time to write something, why should I spend the time to read it?
I agree with you that AI slop blog posts are a bad thing, but approximately zero people who use LLMs to spit out blog posts will change their minds after reading your arguments. You're not speaking their language; they don't care about anything you do. They are selfish. The point is themselves, not the reader.
> Everyone wants to help each other.
No, they very much do not. There are a lot of scammers and shitty entitled people out there, and LLMs make it easier than ever to become one of them or increase the reach of those who already are.
True!
But when I encounter a web site/article/video that has obviously been touched by genAI, I add that source to a blacklist and will never see anything from it again. If more people did that, then the selfish people would start avoiding the use of genAI because using it will cause their audience to decline.
Please do tell more. Do you make it a rule in your adblocker, or something else?
> If more people did that, then the selfish people would start avoiding the use of genAI because using it will cause their audience to decline.
I’m not convinced. The effort on their part is so low that even the lost audience (which will be far from everyone) is still probably worth it.
I think that's the best use case, and it's not really AI-specific: spell checkers and translation integrations have existed forever; now they are just better.
Especially for non-native speakers that work in a globalized market. Why wouldn't they use the tool in their toolbox?
Maybe someone will build an AI model that's succinct and to the point someday. Then I might appreciate the use a little more.
I will also take a janky script for a game hand-translated by an ESL indie dev over the ChatGPT House Style 99 times out of 100 if the result is even mostly comprehensible.
This ship sailed a long time ago. We have been exposed to AI-generated text content for a very long time without even realizing it. If you read a little more specialized web news, assume that at least 60% of the content is AI-translated from the original language. Not to mention, it could have been AI-generated in the source language as well. If you read the web in several languages, this becomes shockingly obvious.
My wife is ESL. She's asked me to review documents such as her resume, emails, etc. It's immediately obvious to me when something has been run through ChatGPT, and I'm sure it's immediately obvious to whomever she's sending the email to. While it's a great tool for suggesting alternatives and fixing grammar mistakes that Word etc. don't catch, using it wholesale to generate text is so obvious that you may as well write "yo unc gimme a job rn fr no cap" and your odds of impressing a recruiter would be about the same. (The latter might actually be better, since it helps you stand out.)
Humans are really good at pattern matching, even unconsciously. When ChatGPT first came out people here were freaking out about how human it sounded. Yet by now most people have a strong intuition for what sounds ChatGPT-generated, and if you paste a GPT-generated comment here you'll (rightfully) get downvoted and flagged to oblivion.
So why wouldn't you use it? Because it masks the authenticity in your writing, at a time when authenticity is at a premium.
These types of complaints about LLMs feel like the ones people probably made about typing a letter on a typewriter vs. writing it by hand: that it loses intimacy and personality.
I would have written "lexical fruit machine", for its left to right sequential ejaculation of tokens, and its amusingly antiquated homophobic criminological implication.
You're telling me I need to use 100% of my brain, reasoning power, and time to go over your code, but you didn't feel the need to hold yourself to the same standard?
I don’t think they are (telling you that). The person who sends you an AI slop PR would be just as happy (probably even happier) if you turned off your brain and just merged it without any critical thinking.
Waiting for the rest of the comment to load in order to figure out if it's sincere or parody.
This is like reviewing your own PRs, it completely defeats the purpose.
And no, using different models doesn’t fix the issue. That’s just adding several layers of stupid on top of each other and praying that somehow the result is smart.
That is literally how civilization works.
It's a joke.
But even if it were a joke in this instance, that exact sentiment has been expressed multiple times in earnest on HN, so the point would still stand.
As insulting as it is to submit an AI-generated PR without any effort at review while expecting a human to look it over, it is nearly as insulting to not just open the view the reviewer will have and take a look. I do this all the time and very often discover little things that I didn't see while tunneled into the code itself.
In the sense that you double-check your work, sure. But you wouldn't be commenting and asking for changes, you wouldn't be using the reviewing feature of GitHub or whatever code forge you use; you'd simply make the fixes and push again without any review/discussion necessary. That's what I mean.
> open the view the reviewer will have and take a look. I do this all the time
So do I, we’re in perfect agreement there.
It is, but for all the reasons AI is supposed to fix. If I look at code I myself wrote, I might come to a different conclusion about how things should be done, because humans are fallible and often have different things on their mind. If it's in any way worth using, an AI should produce one single correct answer each time, rendering self PR review useless.
If the problems we deal with are ambiguous enough that expert humans might arrive at several different valid ways to skin the cat, why wouldn't an LLM as well?
Yes. You just have to be in a different mindset. I look for cases that I haven't handled (and corner cases in general). I can try to summarize what the code does and see if it actually meets the goal, and whether there are any downsides. If the solution turns out too complicated to describe, it may be time to step back and think again. If the code can run in many different configurations (or platforms), review time is when I start to see if I accidentally broke anything.
> This is like reviewing your own PRs, it completely defeats the purpose.
I've been the first reviewer for all PRs I've raised, before notifying any other reviewers, for so many years that I couldn't even tell you when I started doing it. Going through the change set in the Github/Gitlab/Bitbucket interface, for me, seems to activate a different part of my brain than the one I was using when locked in vim. I'm quick to spot typos, bugs, flawed assumptions, edge cases, missing tests, to add comments to pre-empt questions ... you name it. The "reading code" and "writing code" parts of my brain often feel disconnected!
Obviously I don't approve my own PRs. But I always, always review them. Hell, I've also long recommended the practice to those around me too for the same reasons.
You don’t, we’re on the same page. This is just a case of using different meanings of “review”. I expanded on another sibling comment:
https://news.ycombinator.com/item?id=45723593
> Obviously I don't approve my own PRs.
Exactly. That’s the type of review I meant.
So, your minimum bar for a useful AI is that it must always be perfect and a far better programmer than any human that has ever lived?
Coding agents are basically interns. They make stupid mistakes, but if they're doing things 95% correctly, they're still adding a ton of value to the dev process.
Human reviewers can use AI tools to quickly sniff out common mistakes and recommend corrections. This is fine. Good even.
You are transparently engaging in bad faith by purposefully straw manning the argument. No one is arguing for “far better programmer than any human that has ever lived”. That is an exaggeration used to force the other person to reframe their argument within its already obvious context and make it look like they are admitting they were wrong. It’s a dirty argument, and against the HN guidelines (for good reason).
> Coding agents are basically interns.
No, they are not. Interns have the capacity to learn and grow and not make the same mistakes over and over.
> but even if they're doing things 95% correctly
They’re not. 95% is a gross exaggeration.
I first read that as "coworkers (who are) fully AI generated" and I didn't bat an eye.
All the AI hype has made me immune to AI related surprises. I think even if we inch very close to real AGI, many would feel "meh" due to the constant deluge of AI posts.
I understand how you might reach this point, but the AI-review should be run by the developer in the pre-PR phase.
Do you review your comments too with AI?
This reminds me of an awesome bit by Žižek where he describes an ultra-modern approach to dating. She brings the vibrator, he brings the synthetic sleeve, and after all the buzzing begins and the simulacra are getting on well, the humans sigh in relief. Now that this is out of the way they can just have a tea and a chat.
It's clearly ridiculous, yet at the point where papers or PRs are written by robots and reviewed by robots, for eventual usage/consumption/summary by yet more robots, it becomes very relevant. At some point one must ask what it is all for, and whether we should maybe just skip some of these steps or revisit some assumptions about what we're trying to accomplish.
I've been thinking this for a while, despairing, and amazed that not everyone is worried/surprised about this like me.
Who are we building all this stuff for, exactly?
Some technophiles are arguing this will free us to... do what, exactly? Art, work, leisure, sex, analysis, argument, etc. will be done for us. So we can do what, exactly? Go extinct?
"With AI I can finally write the book I always wanted, but lacked the time and talent to write!". Ok, and who will read it? Everybody will be busy AI-writing other books in their favorite fantasy world, tailored specifically to them, and it's not like a human wrote it anyway so nobody's feelings should be hurt if nobody reads your stuff.
That's why it isn't necessary to add the "to be fair" comment I see crop up every time someone complains about the low quality of AI.
Dealing with low effort people is bad enough without encouraging more people to be the same. We don't need tools to make life worse.
It's as if someone created a device that made cancer airborne and contagious, and you come in to say "to be fair, cancer existed before this device; the device just made it way worse." Yes? And? Do you have a solution to the cancer? No? Then pointing it out really isn't doing anything. Focus on getting people to stop using the contagious aerosol first.
Code review is one of the places where experience is transferred. It is disheartening to leave thoughtful comments and have them met with "I duno. I just had [AI] do it."
If all you do is 'review' the output of your prompting before cutting a CR, I'd prefer you just send the prompt.
Almost nobody uses it for that today, unfortunately, and code reviews in both directions are probably where the vast majority of learning software development comes from. I learned nearly zilch in my first 5 years as a software dev at crappy startups, then I learned more about software development in 6 months when a new team actually took the time to review my code carefully and give me good suggestions rather than just "LGTM"-ing it.
But otherwise, writing code with LLMs is more than just the prompt. You have to feed it the right context, maybe discuss things with it first so it gets it, and then you iterate with it.
So if someone has done the effort and verified the result like it's their own code, and if it actually works as they intended, what's wrong with sending a PR?
I mean, if you then find something to improve while doing the review, it's still very useful to say so. If someone is using LLMs to code seriously and not just to vibecode a black box, this feedback is still as valuable as before, because, at least for me, if I had known about the better way of doing something I would have iterated further and implemented it or had it implemented.
So I don't see how the experience transfer is suddenly gone. Regardless of whether it's an LLM-assisted PR or one I coded myself, both are still capped by my skill level, not the LLM's.
But just say it! Bypass the middleman who's just going to make it blurrier or more long-winded.
You're never going to get that raw shit you say you want, because it has negative value for creators' brands; it looks way lazier than spot-checked AI output, and people see the lack of baseline polish and nope out right away, unless it's a creator they're already sold on (then you can pump out literal garbage; as long as you keep it a low % of your total content, you can get away with shit new creators only dream of).
Anyone who has done any serious writing knows that a good editor will always find a dozen or more errors in any essay of reasonable length, and very few people are willing to pay for professional proofreading services on blog posts. On the other side of the coin, readers will wince and stumble over such errors; they will not wonder at the artisanal authenticity of your post, but merely be annoyed. Wabi-sabi is an aesthetic best reserved for decor, not prose.
Fellas, is it antihuman to use tools to perfect your work?
I can't draw a perfect circle by hand, that's why I use a compass. Do I need to make it bad on purpose and feel embarrassed by the 1000th time just to feel more human? Do I want to make mistakes by doing mental calculations instead of using a calculator, like a normal person? Of course not.
Where does this "I'm proud of my sloppy shit, this is what makes me human" thing come from?
We rose above other species because we learned to use tools, and now we define being "human"... by not using tools? The fuck?
Also, ironically, this entire post smells like AI slop.
I think low-effort LLM use is hilariously bad, and so is the content it produces. Tuning it, giving it style, safeguards, limits, direction, examples, etc. can improve it significantly.
> No, don't use it to fix your grammar, or for translations, or for whatever else you think you are incapable of doing. Make the mistake. Feel embarrassed. Learn from it. Why? Because that's what makes us human!
It would be more human to handwrite your blog post instead. I don’t see how this is a good argument. The use of tools to help with writing and communication should make it easier to convey your thoughts, and that itself is valuable.
All I care about is content, too, but people using LLMs to blog and make readmes is routinely getting garbage content past the filters and into my eyeballs. It's especially egregious when the author put good content into the LLM and pasted the garbage output at us.
Are there people out there using an LLM as a starting point but taking ownership of the words they post, taking care that what they're posting still says what they're trying to say, etc? Maybe? But we're increasingly drowning in slop.
It's not an assumption. Look at this example: https://news.ycombinator.com/item?id=45591707
The author posted their input to the LLM in the comments after receiving criticism, and that input was much better than their actual post.
In this thread I'm less sure: https://news.ycombinator.com/item?id=45713835 - it DOES look like there was something interesting thrown into the LLM that then put garbage out. It's more of an informed guess than an assumption, you can tell the author did have an experience to share, but you can't really figure out what's what because of all the slop. In this case the author redid their post in response to criticism and it's still pretty bad to me, and then they kept using an LLM to post comments in the thread, I can't really tell how much non-garbage was going in.
I cannot blame people for using software as a crutch when human-based writing has become too hard and seldom rewarded anymore unless you are super-talented, which statistically the vast majority of people are not.
“Blog” stands for “web log”. If it’s on the web, it’s digital, there was never a period when blogs were hand written.
> The use of tools to help with writing and communication should make it easier to convey your thoughts
If you’re using an LLM to spit out text for you, they’re not your thoughts, you’re not the one writing, and you’re not doing a good job at communicating. Might as well just give people your prompt.
I’ve seen exactly that. In one case, it was JPEG scans of handwriting, but most of the time it’s a cursive font (which arguably doesn’t count as “handwritten”).
I can’t remember which famous author it was who always submitted their manuscripts as cursive writing on yellow legal pads.
Must have been thrilling to edit.
For example, there was never a period when movies were made by creating frames as oil paintings and photographing them. A couple of movies were made like that, but that was never the norm or a necessity or the intended process.
Like, I’m totally on board with rejecting slop, but not all content that AI was involved in is slop, and it’s kind of frustrating so many people see things so black and white.
It's like listening to Bach's Prelude in C from WTCI where he just came up with a humdrum chord progression and uses the exact same melodic pattern for each chord, for the entire piece. Thanks, but I can write a trivial for loop in C if I ever want that. What a loser!
Edit: Lest HN thinks I'm cherry picking-- look at how many times Bach repeats the exact same harmony/melody, just shifting up or down by a step. A significant chunk of his output is copypasta. So if you like burritos filled with lettuce and LLM-generated blogs, by all means downvote me to oblivion while you jam out to "Robo-Bach"
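(If you doubt the for-loop claim, here it is sketched in Python rather than C, for brevity; the first four measures' chords are transcribed from memory, so treat the note names as illustrative, not authoritative.)

    # BWV 846 figuration: one fixed arpeggio pattern stamped onto each chord.
    chords = [
        ["C4", "E4", "G4", "C5", "E5"],  # m.1: C major
        ["C4", "D4", "A4", "D5", "F5"],  # m.2: D minor 7 over C
        ["B3", "D4", "G4", "D5", "F5"],  # m.3: G7 over B
        ["C4", "E4", "G4", "C5", "E5"],  # m.4: C major again
    ]
    for n1, n2, n3, n4, n5 in chords:
        half = [n1, n2, n3, n4, n5, n3, n4, n5]  # the unchanging melodic figure
        print(" ".join(half * 2))  # each measure: the same half-figure, twice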
This is just pedantic nonsense
The thoughts I put into a text are mostly independent of the sentences or _language_ they're written in. Not completely independent, but to claim thoughts are completely dependent on text (thus also the language) is nonsense.
> Might as well just give people your prompt.
What would be the value of seeing a dozen diffs? By the same logic, should we also include every draft?
It's about finding the sweet spot.
Vibe coding is crap, but I love the smarter autocomplete I get from AI.
Generating whole blog posts from thin air is crap, but I love smart grammar, spelling, and diction fixes I get from AI.
Why do you trust the output? Chatbots are so inaccurate you surely must be going out of your way to misinform yourself.
And it will make them up just like it does everything else. You can’t trust those either.
In fact, one of the simplest ways to tell that a post is AI slop is by checking the sources posted at the end and seeing that they don't exist.
Asking for sources isn’t a magical incantation that suddenly makes things true.
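If you want to automate that dead-source sniff test, here's a minimal sketch; it assumes the third-party requests library, and the cited URLs are made-up placeholders:

    import requests

    # Citations scraped from the end of a suspect post (hypothetical examples).
    cited = [
        "https://example.com/paper-that-may-not-exist",
        "https://example.org/another-dubious-citation",
    ]

    for url in cited:
        try:
            resp = requests.head(url, allow_redirects=True, timeout=10)
            print(url, "->", resp.status_code)  # 404s hint at fabricated sources
        except requests.RequestException as exc:
            print(url, "-> unreachable:", exc)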
> It isn’t guaranteed that content written by humans is necessarily correct either.
This is a poor argument. The overwhelming difference with humans is that you learn who you can trust about what. With LLMs, you can never reach that level.
If you have a habit of asking random lay persons for technical advice, I can see why an idiot chatbot would seem like an upgrade.
This is similar to the common objection for AI-coding that the hard part is done before the actual writing. Code generation was never a significant bottleneck in most cases.
What does bother me is when clearly AI-generated blog posts (perhaps unintentionally) attempt to mask their artificial nature through superfluous jokes or unnaturally lighthearted tone. It often obscures content and makes the reading experience inefficient, without the grace of a human writer that could make it worth it.
However, if I’m reading a non-technical blog, I am reading because I want something human. I want to enjoy a work a real person sank their time and labor into. The less touched by machines, the better.
> It would be more human to handwrite your blog post instead.
And I would totally read handwritten blog posts!
But it can make for tiresome reading. Like, a 2000-word post could be compressed to 700 or so had a human editor pruned it.
Maybe humans aren't so unique after all, but that's its own topic.
I don’t think having a ML-backed proofreading system is an intrinsically bad idea; the oft-maligned “Apple Intelligence” suite has a proofreading function which is actually pretty good (although it has a UI so abysmal it’s virtually useless in most circumstances). But unless you truly, deeply believe your own writing isn’t as good as a precocious eighth-grader trying to impress their teacher with a book report, don’t ask an LLM to rewrite your stuff.
Somehow this is currently the top comment. Why?
Most non-quantitative content has value due to a foundation of distinct lived experience. Averages of the lived experience of billions just don't hit the same, and are less likely to be meaningful to me (a distinct human). Thus, I want to hear your personal thoughts, sans direct algorithmic intermediary.
It's like being okay with reading the entirety of generated ASM after someone compiles C++.
Whether I hand write a blog post or type it into a computer, I'm the one producing the string of characters I intend for you to read. If I use AI to write it, I am not. This is a far, far, far more important distinction than whatever differences we might imagine arise from hand writing vs. typing.
> your thoughts
No, they aren't! Not if you had AI write the post for you. That's the problem!
Ha! That's a very clever, spot-on insult. Most LLMs would probably be seriously offended by it, were they rational beings.
> No, don't use it to fix your grammar, or for translations, or for whatever else you think you are incapable of doing. Make the mistake.
OK, you are pushing it, buddy. My Mandarin is not that good; as a matter of fact, I can handle no Mandarin at all. Or French, for that matter. But I'm certain a decent LLM can handle that without me having to reach out to another person, who might not be available or have enough time to deal with my shenanigans.
I agree that there is way too much AI slop being created and made public, but there are also plenty of cases where the use is fair and improves whatever the person is doing.
Yes, AI is being abused. No, I don't agree we should all go Taliban against even the fair use cases.
You know what I'm doing? I'm using AI to cut to the point and extract the relevant (for me) info.
But this kind of content is great for engagement farming on HN.
Just write “something something clankers bad”
While I agree with the author, it's a tired and uninspired point.
Top articles with millions of readers are written with AI. It's not an AI problem; it's a content problem. If the writing is watery and its style untuned, it's bad. Same as with a human author.
AI is a tool to help you _finish_ stuff, like a wood sander. It's not something you should use as a hacksaw, or as a hammer. As long as you are writing with your own voice, it's just better autocorrect.
It can also double as a peer reviewer and point out potential counterarguments, so you can address them upfront.
That said, when I do try to get LLMs to write something, I can't stand it, and feel like the OP here.
If yes, there's probably value in putting it out. I don't care if you used paper and ink, a text editor, a spell checker, or asked an LLM for help.
On the flip side, if anyone could've asked an LLM for the exact same text, and if you're outsourcing the critical thinking to the reader - then yeah, I think you deserve scorn. It's no different from content-farmed SEO spam.
Mind you, I'm what you'd call an old-school content creator. It would be an understatement to say I'm conflicted about gen AI. But I also feel that this is the most principled way to make demands of others: I have no problem getting angry at people for wasting my time or polluting the internet, but I don't think I can get angry at them for producing useful content the wrong way.
But you bet that I'm going to use AI to correct my grammar and spelling for the important proposal I'm about to send. No sense in losing credibility over something that can be corrected algorithmically.
Now, I take a cue from school, and write the outline first. With an outline, I can use a prompt for the LLM to play the role of a development editor to help me critique the throughline. This is helpful because I tend to meander, if I'm thinking at the level of words and sentences, rather than at the level of an outline.
Once I've edited the outline for a compelling throughline, I can then type out the full essay in my own voice. I've found it much easier to separate the process into these two stages.
Before outline critiquing: https://interjectedfuture.com/destroyed-at-the-boundary/
After outline critiquing: https://interjectedfuture.com/the-best-way-to-learn-might-be...
I'm still tweaking the development editor. I find that it can be too much of a stickler about the form of the throughline.
The absolute bare minimum respect you can have for someone who’s making time for you is to make time for them. Offloading that to AI is the equivalent of shitting on someone’s plate and telling them to eat it.
I struggle everyday with the thought that the richest most powerful people in the world will sell their souls to get a bit richer.
If folks figure out a way to produce content that is human, contextual and useful... by all means.
It’s really funny how many business deals would go better if people put the requests into an AI to explain what exactly is being requested. Most people are not able to answer, but if they used an AI they could respond properly without wasting everyone’s time. At least not using an AI reveals their competency (or rather, incompetence) level.
It’s also sad that I need to tell people to put my message into an AI so they don’t ask me useless questions. An AI can fill most of the gaps people don’t get. You might say my requests are not proper, but then how can an AI figure out what I want to say? I also put my own requests into an AI when I can, and create ELI5 explanations of the requests “for dummies”.
Perhaps the author is speaking to the people who are only temporarily led astray by the pervasive BS online and by the recent wildly popular "cheating on your homework" culture?
This is just a continuation. It does tend to mean there is less effort to produce the output and thus there is a value degradation, but this has been true all along this technology trend.
I don't think we should be a purist as to how writing is produced.
Frustrated, I just throw that mess straight at claude-code and tell it to fix whatever nonsense it finds and do its best. It probably implements 80–90% of what the doc says — and invents the rest. Not that I’d know, since I never actually read the original AI-generated PRD myself.
In the end, no one’s happy. The whole creative and development process has lost that feeling of achievement, and nobody seems to care about code quality anymore.
For essays, honestly, I do not feel so bad, because outside of some spaces like HN the quality of the average online writer has dropped so much that I prefer some machine-assisted text that can actually deliver the content.
However, my problem is with AI-generated code.
In most cases, for trivial apps, I think AI-generated code will be OK to good; however, the issue I'm seeing as a code reviewer is that folks whose code design style you know are now so heavily reliant on AI-generated code that you can be sure they did not write, and do not understand, the code.
One example: I work with some data scientists and researchers, most of whom used to write things in Pandas with some trivial for loops and fairly primitive imperative programming. Now, especially since Claude Code, most things are vectorized, with heavily compressed variable naming. Sometimes folks use Cython in data pipeline tasks, or even push functional programming to an extreme.
Good performance is great, and leveling up the quality of the codebase is a net positive; however, I wonder whether, when things go south and/or Claude Code is not available, those folks will be able to fix it.
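To make that style shift concrete, here's a minimal hypothetical sketch (made-up DataFrame and names, not anyone's real pipeline):

    import pandas as pd

    df = pd.DataFrame({"qty": [2, 5, 1], "price": [9.99, 3.50, 20.00]})

    # Before: the trivial imperative loop style.
    totals = []
    for _, row in df.iterrows():
        totals.append(row["qty"] * row["price"])
    df["total_loop"] = totals

    # After Claude Code: vectorized, with compressed naming.
    df["t"] = df["qty"] * df["price"]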
I do understand the reasoning behind being original, but why make mistakes when we have tools to avoid them? That sounds like a strange recommendation.
If the goal is to get the job done, then use AI.
Do you really want to waste precious time for so little return?
I think some people turn AI conversations into blog posts that they pass off as their own because of SEO considerations. If Twitter didn't discourage people sharing links, perhaps we would see a lot more tweet threads that start with https://chatgpt.com/share/... and https://claude.ai/share/...
Other types of writing don't depend so much on the author's voice, and for those I think AI is fine as long as the author's ideas and concepts are communicated effectively. In fact, if one is a poor writer due to a bad educational experience, or is writing in a language that is not one's native language, then AI might be a big improvement. Again, it's about communicating your ideas, not the words and grammar used.
If a post contains valuable information that I learn from it, I don't really care if AI wrote it or not. AI is just a tool, like any other tool humans invented.
I'm pretty sure people had the same reaction 50 years ago, when the first PCs started appearing: "It's insulting to see your calculations made by personal electronic devices."
---
Honestly, it feels rude to hand me something churned out by a lexical bingo machine when you could've written it yourself. I'm a person with thoughts, humor, contradictions, and experience, not a content bin.
Don't you like the pride of making something that's yours? You should.
Don't use AI to patch grammar or dodge effort. Make the mistake. Feel awkward. Learn. That's being human.
People are kinder than you think. By letting a bot speak for you, you cut off the chance for connection.
Here's the secret: most people want to help you. You just don't ask. You think smart people never need help. Wrong. The smartest ones know when to ask and when to give.
So, human to human, save the AI for the boring stuff. Lead with your own thoughts. The best ideas are the ones you've actually felt.
If I'm finding that voice boring, I'll stop reading - whether or not AI was used.
The generic AI voice, and by that I mean very little prompting to add any "flavor", is boring.
Of course I've used AI to summarize things and give me information, like when I'm looking for a specific answer.
In the case of blogs though, I'm not always trying to find an "answer", I'm just interested in what you have to say and I'm reading for pleasure.
This is like saying a photographer shouldn't find the sunset they photographed pretty or be proud of the work, because they didn't personally labor to paint the image of it.
A lot more goes into a blog post than the actual act of typing the content out.
Lazy work is always lazy work, but it's possible to make work you are proud of with AI, in the same way you can create work you are proud of with a camera.
Now you could argue that I don't know it was AI, that it could just be really mediocre writing. It could indeed, but I hit the back button there as well, so it's a wash either way.
I'd sooner have a ship painting from the little shop in the village with the little old fella who paints them in the shop than a perfect robotic simulacrum of a Rembrandt.
Intention matters. Sometimes it matters less, but I think it matters.
Writing is communication; it's one of the things we humans do that makes us unique. Why would I want to reduce that to a machine generating it, or read it when a machine has?
The Matrix was and is fantastic on many levels.
At this point, I don't know there's much more to be said on the topic. Lines of contention are drawn, and all that's left is to see what people decide to do.