How can we tell that this wasn't written by an LLM?
As always, we have to lean on evaluating based on quality. You can produce quality using an LLM, but it's much easier to produce slop, which is why there's so much of it now.
At this point, I'm not sure whether you're a clawdbot running amok...
> I need to know there was intention behind it. [...] That someone needed to articulate the chaos in their head, and wrestle it into shape.
If forced to choose, I'd rather use coherence as evidence of care than as a refutation of humanity.
a large part of the business models of these systems is going to consist of dealing with these systems... it's a wonderful scheme
I'll want to communicate something to my team. I'll write 4 bullet points, plug it into an LLM, which will produce a flowing, multi-paragraph e-mail. I'll distribute it to my co-workers. They will each open the e-mail, see the size, and immediately plug it into an LLM, asking it to make a 4-bullet summary of what I've sent. Somewhere off in the distance a lake will dry up.
I believe it's already in place, making the internet a bit more wasteful.
So I get the frustration that "ai;dr" captures. On the other hand, I've also seen human writing incorrectly labeled AI. I wrote (using AI!) https://seeitwritten.com as a bit of an experiment on that front. It basically is a little keylogger that records your composition of the comment, so someone can replay it and see that it was written by a human (or a very sophisticated agent!). I've found it to be a little unsettling, though, having your rewrites and false starts available for all to see, so I'm not sure if I like it.
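The core idea is tiny: log timestamped snapshots of the text box as you type, then play the log back with the original pauses (sped up if you like). Here's a toy sketch of the idea in Python (not the actual seeitwritten.com code, which lives in the browser; the event format and speed factor are made up for illustration):

    import time

    def record(events, text):
        # Append a (timestamp, full-text snapshot) pair on every edit.
        events.append((time.monotonic(), text))

    def replay(events, speed=1.0):
        # Re-print each snapshot after the same pause the writer took,
        # divided by `speed` (e.g. 5.0 to fast-forward the playback).
        start = events[0][0]
        prev = start
        for t, text in events:
            time.sleep((t - prev) / speed)
            prev = t
            print(f"[{t - start:6.2f}s] {text!r}")

    # Example: a comment being composed, false starts and all.
    events = []
    for snapshot in ["I think", "I think this is", "", "Actually, written by hand."]:
        record(events, snapshot)
        time.sleep(0.3)  # stand-in for real typing delays
    replay(events, speed=5.0)

The unsettling part isn't the mechanism, it's that the false starts become part of the record.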
The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it. (Brandolini)
AI writing might suck, but if the author doesn't change their style, they get categorized as a lazy AI user, unless the rest of their writing is so spectacular that it's obvious an AI didn't write it.
My personal situation is fine though. AI writing usually has better sentence structure, so it's pretty easy (to me at least) to distinguish my own writing from AI because I have run-on sentences and too many commas. Nobody will ever confuse me with a lazy AI user, I'm just plain bad at writing.
No, you are writing for people who see LLM-signals and read on anyway.
Not sure that that's a win for you.
There's your trouble: most internet users set their baseline for "standard issue human writing" at exactly the level they themselves write. And more and more people don't draw a line between casual and professional writing, so they balk at perfectly normal professional writing as potentially AI-driven.
Blame OS developers for making it easy—SO easy!—to add all manner of special characters while typing if you wish, but the use of those characters, once they were within easy reach, grew well before AI writing became a widespread thing. If it hadn't, would AI be using it so much now?
\s
There are a lot of people like me in software. I’m tempted to say we are “shouted down”, but honestly it’s hard to be shouted down when you can talk circles around some people. But we are definitely in a minority. There are actually a lot of parallels between creative writing and software and a few things that are more than parallel. Like refactoring.
If you’re actually present when writing docs instead of monologuing in your head about how you hate doing “this shit”, then there’s a lot of rubber ducking that can be done while writing documentation. And while I can’t say that “let the AI do it” will wipe out 100% of this value, because the AI will document what you wrote instead of what you meant to write, I do think you will lose at least 80% of that value by skipping out on these steps.
It’s literal content expansion, the opposite of gzip’ing a file.
It’s like a kid who has a 500 word essay due tomorrow who needs to pad their actual message up to spec.
I agree that reading an LLM-produced essay is a waste of time and (human) attention. But in the case of overly-verbose human writing, it's the human that's wasting my time[1], and the LLM is gzip'ing the spew.
[1] Looking at you, New Yorker magazine.
Fun! I'd make the playback speed something like 5x, or whatever feels appropriate; I don't think anybody truly wants to watch those at 1x.
https://news.ycombinator.com/item?id=557191
I can't believe etherpad lost this item...
edit: oh, I found the one I was looking for: https://byronm.com/13sentences.html
They want all this artisanal hand-written prose by candlelight with the moon in the background. And you are a horrible person for using AI, blablabla.
But ask for feedback? And you get Inky, Blinky, Pinky, and Clyde. Aka ghosted. But boy, do they tell a good story. Just ain't fucking true.
Counter: companies deserve the same amount of time invested in their application as they spend on your response.
I've noticed that attitude a lot. Everyone thinks their use of AI is perfectly justified while the others are generating slop. In gamedev it's especially prominent: artists think generating code is perfectly OK but have an acute stress response when someone suggests generating art assets.
Communication is for humans. It's our super power. Delegating it loses all the context, all the trust-building potential from effort signals, and all the back-and-forth discussion in which ideas and bonds are formed.
But of course it doesn't do that because we can't trust it the way we do a traditional compiler. Someone has to validate its output, meaning it most certainly IS meant for humans. Maybe that will change someday, but we're not there yet.
Of more concern to me is that when it's unleashed on the ephemera of coding (Jira tickets, bug reports, update logs) it generates so much noise you need another AI to summarize it for you.
[1] Code as design, essays by Jack Reeves: https://www.developerdotstar.com/mag/articles/reeves_design_...
I don’t think either is inherently bad because it’s AI, but it can definitely be bad if the AI is less good at encoding those ideas into their respective formats.
Some code I cobbled together to pass a badly written assignment at school. Other code I curated to be beautiful for my own benefit or someone else’s.
I think the better analogy in writing would be… using an LLM to draft a reply to a hawkish car dealer you’re trying to not get screwed by is absolutely fine. Using it to write a birthday card for someone you care about is terrible.
No doubt, but I think there's a bit of a difference between AI generating something utilitarian vs something expected to at least have some taste/flavor.
AI generated code may not be the best compared to what you could hand craft, along almost any axis you could suggest, but sometimes you just want to get the job done. If it works, it works, and maybe (at least sometimes) that's all the measure of success/progress you need.
Writing articles and posts is a bit different - it's not just about the content, it's about how it's expressed, and whether someone bothered to make it interesting to read and put some of their own personality into it. Writing is part communication, part art, and even the utilitarian communication part of it works better if it keeps the reader engaged and displays good theory of mind as to where the average reader may be coming from.
So, yeah, getting AI to do your grunt work programming is progress, and a post that reads like a washing machine manual can fairly be judged as slop in a context where you might have hoped for/expected better.
It's worth pointing out that AI is not a monolith. It might be better at writing code than making art assets. I don't work with gaming, but I've worked with Veo 3, and I can tell you, AI is not replacing Vince Gilligan and Rhea Seehorn. That statement has nothing to do with Claude though...
Shouldn’t we bother to write these things?
A blog post is for communicating (primarily, these days) to humans.
They’re not the same audience (yet).
Because writing is a dirty, scratched window with liquid between the panes, and an LLM can be the microfiber cloth and degreaser that makes it just a bit clearer.
Outsourcing thinking is bad. Using an LLM to assist in communicating thought is or at least can be good.
The real problem I think the author has here is that it can be difficult to tell the difference, and therefore difficult to judge if it is worth your time. However, I think author/publisher reputation is a far better signal than looking for AI tells.
Homogenization is good for milk, but not for writing.
If you use an LLM to generate the ideas and justification and formatting and etc etc, you're just delegating your part in the convo to a bot.
I keep seeing this and I don't think I agree. We outsource thinking every day. Companies do this every day. I don't study weather myself; I check an app and bring an umbrella if it says it's gonna rain. My team trusts each other to do some thinking in their area and present bits sideways / upwards. We delegate lots of things. We collaborate on lots of things.
What needs to be clear is who owns what. I never send something I wouldn't stand by. Not in a correctness sense (I have been, am, and likely will be wrong about any number of things) but more in a "yeah, that is my output, and I stand by it now" kind of way. Tomorrow it might change.
Also remember that Google quip: "it's hard to edit an empty file". We have always used tools to help us, from scripts saved here and there, to shortcuts, to macros, IDE setups, extensions and so on. We "think once" and then try not to "think" about every little detail. We'd go nowhere if we had to.
There's a strong overlap between things which are bad (unwise, reckless, unethical, fraudulent, etc.) in both cases.
> We outsource thinking everyday. [...] What needs to be clear is who owns what.
Also once you have clarity, there's another layer where some owning/approval/delegation is not permissible.
For example, a student ordering "make me a 3 page report on the Renaissance." Whether the order went to another human or an LLM, it is still cheating, and that wouldn't change even if they carefully reviewed it and gave it a stamp of careful approval.
However, if I had an idea and just fobbed the idea off to an LLM who fleshed it out and posted it to my blog, would you want to read the result? Do you want to argue against that idea if I never even put any thought into it and maybe don’t even care?
I’m like you in this regard. If I used an LLM to write something I still “own” the publishing of that thing. However, not everyone is like this.
Hardly seems mutually exclusive. Surely you should generally consider the reputation of someone who posts LLM-responses (without disclosing it) to be pretty low.
A lot of people don't particularly want to waste time reading the LLM-responses to someone else's unknown/unspecified prompts. Someone who would trick you into that doesn't have a lot of respect for their readers and is unlikely to post anything of value.
Don’t get me wrong. I don’t want to read (for example) AI fiction because I know there’s no actual mind behind it (to the extent that I can ever know this).
But AI is going to get better, and the only thing that's even going to work going forward is to trust publishers and authors who deliver high value, regardless of how integral LLMs are to the process.
I don't understand how they can think it's a good idea; I instantly classify them as lazy and inauthentic. I'd rather get texts full of mistakes coming straight out of their heads than this slop.
So when someone wants to know something about the topic that my website is focused on, chances are it will not be the material from the website they see directly, but a summary of what the LLM learned from my website.
Ergo, if I want to get my message across, I have to write for the LLM. It's the only reader that really matters, and it is going to have its stylistic preferences (I suspect bland, corporate, factual, authoritative, controversy-avoiding). This will be the new SEO.
We meatbags are not the audience.
A simple query like "Ford Focus wheel nut torque" gives pages with blah blah like:
> Overview Of Lug Nut Torque For Ford Focus
> The Ford Focus uses specific lug nut torque to keep wheels secure while allowing safe driving dynamics. Correct torque helps prevent rotor distortion, brake heat transfer issues, and wheel detachment. While exact values can vary by model year, wheel size, and nut type, applying the proper torque is essential for all Ford Focus owners.
And the site probably has this text for each car model.
Somehow the ways the ad industry destroyed the Internet got very varied...
And I know it's different, but I'm surprised the overall sentiment is so pessimistic on HN. So maybe we will communicate through yet another black box on top of hundreds of existing ones already. But probably mostly when seeking specific information and wanting to get it efficiently. Yes this one is different, it makes human contact over text much more difficult, but the big part of all of this was happening already for years and now it's just widely available.
When posting on HN you don't see the other person typing, like you would with the talk command on Unix, but it is still meaningful.
Ideally we would like to preserve what we have untouched and only have new stuff as an option, but it's never been like this. Did we all enjoy Win 3.11? I mean, it was interesting... but clicking... so inefficient (and of course there are tons of people who will likely scream from their GUIs that it still is and Windows sucks; I'd gladly join, but we have our keyboard bindings, other operating systems, and get by somehow).
Perception of new things stays relatively constant over the years though.
It's a problem to use a blender to polish your jewelry. However, it's perfectly alright to use a blender to make a smoothie. It's not cognitive dissonance to write a blog post imploring people to stop polishing jewelry using a blender while also making a daily smoothie using the same tool.
I cry every time somebody tries to frame it one dimensionally.
I can take the other person's prompt and run it through an LLM myself and proceed from there.
Doesn't ai;dr kind of contradict AI-generated documentation? If I want to know what Claude thinks about your code I can just ask it. IMO documentation is the thing least amenable to AI. As the article itself says, I want to read some intention and see how you shape whatever you're documenting.
(AI adding tests seems like a good use, not sure what's meant by scaffolding)
> Why should I bother to read something someone else couldn't be bothered to write?
and
> I can't imaging writing code by myself again, specially documentation, tests and most scaffolding.
So they expect nobody to read their documentation.
And you're wrong for suggesting that's the first use of ai;dr and further assuming that the author "stole" it from that post. https://rollenspiel.social/@holothuroid/113078030925958957 - September 4, 2024
Edit: ok, I've checked your profile and now I see that this is your website that you're astroturfing every thread you reply to. Stop doing that.
These blanket binary takes are tiresome. There are nuances and rough edges.
If you care about your voice, don't let LLMs write your words. But that doesn't mean you can't use AI to think, critique, and draft lots of words for you. It depends on what purpose you're writing for. If you're writing an impersonal document, like a design document, briefing, etc., then who cares; in some cases you already have to write them in a voice that is not your own. Go ahead and write those with AI. But if you're trying to say something more personal, then the words should be your own. AI will always try to 'smooth' out your voice, and if you care about it, you gotta write it yourself.
Now, how do you use AI effectively and still retain your voice? Here's one technique that works well: start with a voice memo. Just record yourself, maybe during a walk, and talk about a subject you want, free form; skip around, jump sentences, just get it all out of your brain.

Then open up a chat, add the recording or transcript, clearly state your intent in one sentence, and ask the AI to consider your thoughts and your intent and to ask clarifying questions. Like, what does the AI not understand about how your thoughts support the clearly stated intent of what you want to say? That'll produce a first draft, which will be bad.

Then tell the AI all the things that don't make sense to you, that you don't like; just comment on the whole doc and get a second draft. Ask the AI if it has more questions for you. You can use live chat to make this conversation go smoother as well: when the AI is asking you questions, you can talk freely by voice. Repeat this one or two more times, and a much finer draft will take shape that is closer to what you want to say. During this drafting stage, the AI will always try to smooth or average out your ideas, so it is important to keep pointing out all the ways in which it is wrong.
This process helps because all the thinking gets done up front. Once you've read and critiqued several drafts, all your ideas will be much clearer and sort of 'cached', ready to be used in your head. Then sit down and write your own words from scratch; they will come much more easily after all your thoughts have been exercised during the drafting process.
This is the root cause of the problem: labeling all things as just "content". "Content" entering the lexicon marks a mind shift in people. People are not looking for information, or art, just content. If all you want is content, then AI is acceptable. If you want art, it becomes less good.
> Why should I bother to read something someone else couldn't be bothered to write?
Interesting mix of sentiments. Is this code you're generating primarily as part of a solo operation? If not, how do coworkers/code reviewers feel about it?
This take is baffling to me when I see it repeated. It's like asking why people should use Windows if Bill Gates did not write every line of it himself; we won't be able to see into Bill's mind. Why should you read a book if the author couldn't be bothered to write it properly and had an editor come in and fix things?
The main purpose of a creative work is not seeing intimately into the creator's mind. And the idea that it is only people who don't care who use LLMs is wrong.
What? It’s nothing like that, at all. I don’t know that Gates has claimed to have written even a single line of Windows code. I’m not asking for the perfect analogy, but the analogy has to have some tie to reality or it’s not an analogy at all. I’m only half-joking when I wonder if an AI wrote this comment.
I haven't even really tried to use LLMs to write anything in a work context, because of the ideas you talk about here.
IMO it’s lazy and bad for expressive writing, but for certain things it’s totally fine.
I think it's the size of the audience that the AI-generated content is for, is what makes the difference. AI code is generally for a small team (often one person), and AI prose for one person (email) or a team (internal doc) is often fine as it's hopefully intentional and tailored. But what's even the point for AI content (prose or code) for a wide audience? If you can just give me the prompt and I can generate it myself, there's no value there.
I don't have any solutions though. Sometimes I don't call out an article - like the Hashline post today - because it genuinely contains some interesting content. There is no doubt in my mind that I would have greatly preferred the post if it were just whatever the author prompted the LLM with rather than the LLM's output; it would have better communicated their thoughts to me. But it also would have died on /new and I never would have seen it.
But of course, like producing code with AI, it's very easy to produce cheap slop with it if you don't put in the time. And, unlike code, the recipient of your work will be reading it word by word and line by line, so you can't just write tests and make sure "it works" - it has to pass the meaningfulness test.
For me too, and for writing it has the upside that it's sooo relaxing to just type away and not worry much about the small errors anymore.
I no longer feel joy in reading things, as most writing now seems the same and pale to me, as if everyone is putting their thoughts down in the same way.
Having your own way of writing always felt personal; it was how you expressed your feelings most of the time.
The saddest part for me is that I am no longer able to understand someone's true feelings (which were already hard to express in writing, since articulation is hard).
We see it being used by our favourite sports person in their retirement post, or by someone who has lost their loved ones, or by someone who just got their first job, and it's just sad that we can never have those old pre-AI days back again.
However, I agree that ordinary people filtering and flattening their communication into a single style is a great loss.
> I can't imaging writing code by myself again
After that, you say that you need to know the intention for "content".
I think it's pretty inconsistent. You have a strict rule in one direction for code and a strict rule in the opposite direction for "content".
I don't think that writing code unassisted should be taken for granted. Addy Osmani covered that in a talk (https://www.youtube.com/watch?v=FoXHScf1mjA). I also don't think all "content" is the sort of content where you need to know the intention. I'll grant that some of it is, for sure.
Edit: I do like intentional writing. However, when AI is generating something high quality, it often seems like it has developed an intention for what it's building, whether one that was conceived and communicated clearly by the person working with the AI or one that emerged unexpectedly through the interaction. And this applies not just to prose but to code.
Conclusion:
Dismissing arguments solely because they are AI-generated constitutes a class of genetic fallacy, which should be called 'Argumentum ad machina'.
Premises:
1. The validity of a logical argument is determined by the truth of its premises and the soundness of its inferences, not by the identity of the entity presenting it.
2. Dismissing an argument based on its source rather than its content constitutes a genetic fallacy.
3. The phrase 'that's AI-generated' functions as a dismissal based on source rather than content.
Assumptions:
1. AI-generated arguments can have true premises and sound inferences
2. The genetic fallacy is a legitimate logical error to avoid
3. Source-based dismissals are categorically inappropriate in logical evaluation
4. AI should be treated as equivalent to any other source when evaluating arguments
> ..and call me an AI luddite
Oh please do call me an AI luddite. It's an honor for me.
I think using AI for writing feedback is fine, but if you're going to have it write for you, don't call it your writing.
Also, you have long been able to use "logit_bias" in the API of models which supported it to ban the em dash, ban the word "not", ban semicolons, and ban the "fancy quotes" that were clearly added by "those who need to watch" to make sure that they can clearly figure out whether you used an LLM or not.
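Roughly, it looks like this; a minimal sketch assuming the OpenAI Python SDK and tiktoken, with the model name and the list of banned variants purely illustrative (it only blocks tokens that literally contain the em dash, so the character can still slip in via other merged tokens):

    import tiktoken
    from openai import OpenAI

    client = OpenAI()
    enc = tiktoken.encoding_for_model("gpt-4o")  # illustrative model choice

    # Build a logit_bias map that forbids tokens containing an em dash.
    banned = {}
    for candidate in ["—", " —", "— "]:
        for tid in enc.encode(candidate):
            if "—" in enc.decode([tid]):
                banned[str(tid)] = -100  # -100 effectively bans the token

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Tighten this paragraph: ..."}],
        logit_bias=banned,
    )
    print(resp.choices[0].message.content)

The same trick works for semicolons, the "fancy quotes", or any other tell, though it only polices tokens, not tone.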
But if the post was generated through a long process of back-and-forth with the model, where significant modifications/additions were made by a human? I don't think there's anything wrong with that.
I do agree with your core point - the thinking is what matters. Where I've found LLMs most useful in my own writing is as a thinking tool, not a writing tool.
Using them to challenge my assumptions, point out gaps in my argument, or steelman the opposing view. The final prose is mine, but the thinking got sharper through the process.
But AI-generated content is here to stay, and it's only going to get harder to distinguish the two over time. At some point we probably just have to judge text on its own merits regardless of how it was produced.
I do think there's a great deal wrong with that, and I won't read it at all.
Human can speak unto human unless there's a language barrier. I am not interested in anyone's mechanically-recovered verbiage, no matter how much they massaged it.