
ai;dr

https://www.0xsid.com/blog/aidr
284•ssiddharth•1h ago

Comments

alontorres•1h ago
I think that this requires some nuance. Was the post generated with a simple short prompt that contributed little? Sure, it's probably slop.

But if the post was generated through a long process of back-and-forth with the model, where significant modifications/additions were made by a human? I don't think there's anything wrong with that.

yabones•1h ago
I don't see what value the LLM would add - writing itself isn't that hard. Thinking is hard, and outsourcing that to an LLM is what people dislike.
Zambyte•1h ago
Using an LLM to ask you questions about what you wrote can help you explore assumptions you are making about the reader, and can help you find what might be better written another way, or elaborated upon.
alontorres•33m ago
I'd push back a bit on "writing itself isn't that hard." Clear writing is difficult, and many people with good ideas struggle to communicate them effectively. An LLM can help bridge that gap.

I do agree with your core point - the thinking is what matters. Where I've found LLMs most useful in my own writing is as a thinking tool, not a writing tool.

Using them to challenge my assumptions, point out gaps in my argument, or steelman the opposing view. The final prose is mine, but the thinking got sharper through the process.

fwip•1h ago
One problem is that it's exceedingly difficult to tell, as a reader, which scenario you have encountered.
alontorres•29m ago
This is the strongest argument against it, I think. Sometimes you can't easily tell from the output whether someone thought deeply and used AI to polish, or just prompted and published. That adds another layer of cognitive burden to parsing text, which is frustrating.

But AI-generated content is here to stay, and it's only going to get harder to distinguish the two over time. At some point we probably just have to judge text on its own merits regardless of how it was produced.

lproven•1h ago
You do you.

I do think there's a great deal wrong with that, and I won't read it at all.

Human can speak unto human unless there's a language barrier. I am not interested in anyone's mechanically-recovered verbiage, no matter how much they massaged it.

soperj•1h ago
> specially documentation

How can we tell that this wasn't written by an LLM?

micromacrofoot•1h ago
You can't.

Like always we have to lean on evaluating based on quality. You can produce quality using an LLM, but it's much easier to produce slop, which is why there's so much of it now.

ssiddharth•1h ago
I'm too poor for Claude Max 20x. Not that it needs that firepower but eh, there's no real way. As I mentioned, almost every single quirk can be willed away with a little bit of attention and effort.

At this point, I'm not sure whether you're a clawdbot running amok..

elischleifer•1h ago
"The less polished and coherent something is, the more value I assign to it." - maybe a bit of an overstatement ;)
arscan•1h ago
This absolutely has been the case for me for the last few months. But what’s disheartening is that this signal will just be mimicked through simple prompting if too many people start tuning in to it. Or maybe that’s already happened?
jugglinmike•53m ago
It also contradicts the author's earlier argument:

> I need to know there was intention behind it. [...] That someone needed to articulate the chaos in their head, and wrestle it into shape.

If forced to choose, I'd rather use coherence as evidence of care than use it as a refutation of humanity.

numbers•1h ago
I remember this back in 2023, when ChatGPT had first launched: I had a manager whose English was not very good. He started sending emails that felt like they were written by a copywriter, and the messaging was hard to parse because there was so much ChatGPT fluff around it. We quickly realized that what he was saying was usually buried somewhere in the middle, but we'd have to read through the intro and the ending of the emails to make sure we didn't miss anything. It felt like wasting 2-3 extra minutes per team member.
micromacrofoot•1h ago
the solution is simple, ask ChatGPT to summarize it

a large part of the business models of these systems is going to consist of dealing with these systems... it's a wonderful scheme

afavour•1h ago
I have long believed that LLMs will herald a new corporate data transfer format. Unlike most new formats, which boast efficiency gains and compression, this one will be incredibly wasteful and bloat transmission sizes.

I'll want to communicate something to my team. I'll write 4 bullet points, plug it into an LLM, which will produce a flowing, multi paragraph e-mail. I'll distribute it to my co-workers. They will each open the e-mail, see the size, and immediately plug it into an LLM asking it to make a 4 bullet summary of what I've sent. Somewhere off in the distance a lake will dry up.

entuno•1h ago
And hopefully they're the same four bullet points…
the_af•56m ago
Ah, yes, the LLM Exchange Protocol.

I believe it's already in place, making the internet a bit more wasteful.

losvedir•1h ago
I really like Oxide's take on AI for prose: https://rfd.shared.oxide.computer/rfd/0576 and how it breaks the "social contract" where usually it takes more effort to write than to read, and so you have a sense that it's worth it to read.

So I get the frustration that "ai;dr" captures. On the other hand, I've also seen human writing incorrectly labeled AI. I wrote (using AI!) https://seeitwritten.com as a bit of an experiment on that front. It basically is a little keylogger that records your composition of the comment, so someone can replay it and see that it was written by a human (or a very sophisticated agent!). I've found it to be a little unsettling, though, having your rewrites and false starts available for all to see, so I'm not sure if I like it.

benob•1h ago
You could totally make a believable timing generation model from a few hundred recordings of human writing. Detecting AI is hard...
mikestew•1h ago
Years ago I wrote something similar to test a biometric security piece that used keystroke timings (dwell and stroke) to determine if the person typing the password is the same person who owns the account. Short version of a long story is that it would be trivial to get data for AI to reproduce human typing. Because I did it years ago using something only slightly more sophisticated than urandom.
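To make the point concrete, here is a toy sketch of the kind of timing generation benob and mikestew describe. Everything in it is an assumption on my part (the log-normal distribution, the pause probability, all parameters); it is not the biometric system from the comment above, just an illustration of how little sophistication "human-like" typing data requires.

```python
import math
import random

def human_like_delays(text, mean_ms=180, jitter=0.4, seed=None):
    """Generate a plausible per-keystroke delay (milliseconds) for each
    character of `text`.

    Toy model: inter-key delays drawn from a log-normal distribution
    (always positive, right-skewed, like real typing), with an
    occasional longer pause at a word boundary to mimic "thinking".
    """
    rng = random.Random(seed)
    delays = []
    for ch in text:
        d = rng.lognormvariate(math.log(mean_ms), jitter)
        if ch == " " and rng.random() < 0.1:
            d += rng.uniform(300, 1500)  # rare between-word hesitation
        delays.append(d)
    return delays

if __name__ == "__main__":
    print(human_like_delays("hello world", seed=42))
```

A replay tool fed these delays would be hard to distinguish from a recorded human session, which is the crux of the "detecting AI is hard" point.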
usefulposter•1h ago
LLM-generated prose undermines a social contract of sorts: absent LLMs, it is presumed that of the reader and the writer, it is the writer that has undertaken the greater intellectual exertion. (Cantrill)

≈

The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it. (Brandolini)

https://en.wikipedia.org/wiki/Brandolini's_law

ssiddharth•1h ago
My biggest sorrow right now is the fact that my beloved emdash is a major signal for AI generated content. I've been using it for decades now but these days, I almost always pause for a second.
archagon•1h ago
To quote Office Space, “Why should I change? He’s the one who sucks.”
parsimo2010•56m ago
Mostly because when I see an em dash now, I assume that it was written by AI, not that the author is one of the people who puts enough effort into their product that they intentionally use specific sized dashes.

AI might suck, but if the author doesn't change, they get categorized as a lazy AI user, unless the rest of their writing is so spectacular that it's obvious an AI didn't write it.

My personal situation is fine though. AI writing usually has better sentence structure, so it's pretty easy (to me at least) to distinguish my own writing from AI because I have run-on sentences and too many commas. Nobody will ever confuse me with a lazy AI user, I'm just plain bad at writing.

wrs•48m ago
If you’re judging my writing so shallowly, I don’t think I’m writing for you.
lelanthran•38m ago
> If you’re judging my writing so shallowly, I don’t think I’m writing for you.

No, you are writing for people who see LLM-signals and read on anyway.

Not sure that that's a win for you.

98codes•30m ago
> assume

There's your trouble. The real problem is that most internet users set their baseline for "standard issue human writing" at exactly the level they themselves write. More and more people do not draw a line between casual and professional writing, and so they balk at perfectly normal professional writing as potentially AI-driven.

Blame OS developers for making it easy—SO easy!—to add all manner of special characters while typing if you wish, but the use of those characters, once they were within easy reach, grew well before AI writing became a widespread thing. If it hadn't, would AI be using it so much now?

catoc•56m ago
Exactly this! I love(d) using em dashes. Now they’ve become ehm dashes, experiencing exactly that pause — that moment of hesitation — that you describe
deron12•10m ago
AI never uses em dashes in a pair like this, whereas most people who like em dashes do. Anyone who calls paired em dash writing AI is only revealing themselves to be a duffer.
tkzed49•44m ago
I've gone back to using two dashes--LLMs typically don't write them that way.
Bukhmanizer•38m ago
You’re absolutely right. I hate AI writing — it’s not that I hate AI, it’s that it makes everything it says sound a specific combination of smug and authoritative — No matter the content. Once you realize it’s not saying anything, that’s the real aha moment.

\s

4b11b4•11m ago
Also, unfortunately I have in my global instructions to never use em dashes...
4b11b4•11m ago
Maybe I'll get over it eventually.
nxobject•4m ago
What I do – and I know this isn't conventional style – is use en dashes. (Or, you could use spaces around em dashes, as incorrect as it is.)
woopwoop•1h ago
I like the idea that various communications media have implicit social contracts that can be broken. In my opinion, PowerPoint presentations break an implicit social contract that is honored in handwritten talks: if a piece of information is worth displaying, so that I the listener feel the need to take it in or even copy it down, it has to be worth your time to actually physically write it on the board. With PowerPoint talks this is not honored, and the average PowerPoint talk is much, much worse than the average chalk talk. I bet there are lots of other examples.
hinkley•1h ago
Based on the programs I was nudged to as a child, it was a surprise to no one but me that I scored higher verbal on the SATs than I did math, which I would have told you was my favorite subject, despite the fact that French was my easiest one. I can still picture the look on my French teacher's face if I'd mentioned this in front of him.

There are a lot of people like me in software. I’m tempted to say we are “shouted down”, but honestly it’s hard to be shouted down when you can talk circles around some people. But we are definitely in a minority. There are actually a lot of parallels between creative writing and software and a few things that are more than parallel. Like refactoring.

If you’re actually present when writing docs instead of monologuing in your head about how you hate doing “this shit”, then there’s a lot of rubber ducking that can be done while writing documentation. And while I can’t say that “let the AI do it” will wipe out 100% of this value, because the AI will document what you wrote instead of what you meant to write, I do think you will lose at least 80% of that value by skipping out on these steps.

steveBK123•58m ago
The problem with AI writing is that it's a waste of everyone's time.

It's literal content expansion, the opposite of gzip'ing a file.

It's like a kid with a 500-word essay due tomorrow, padding their actual message up to spec.

AnimalMuppet•43m ago
Well, LLMs can be either side of that. They can also be used to turn something verbose into a series of bullet points.

I agree that reading an LLM-produced essay is a waste of time and (human) attention. But in the case of overly-verbose human writing, it's the human that's wasting my time[1], and the LLM is gzip'ing the spew.

[1] Looking at you, New Yorker magazine.

steveBK123•40m ago
Right we are headed towards LLM generated slop summarized by another LLM. Wire format is expanded slop.
comboy•50m ago
> https://seeitwritten.com

Fun, I'd make playback speed something like 5x or whatever feels appropriate, I think nobody truly wants to watch those at 1x.

miniatureape•37m ago
I had a take on this same thing a number of years ago. Much simpler, but the idea was just to see it at a glance. https://miniatureape.github.io/sprezzatura/
comboy•30m ago
yeah the idea is not new at all:

https://news.ycombinator.com/item?id=557191

I can't believe etherpad lost this item...

edit: oh, I found the one I was looking for: https://byronm.com/13sentences.html

mystraline•47m ago
To be fair, Oxide is a joke.

They want all this artisanal hand-written prose under the candle light with the moon in the background. And you are a horrible person for using AI, blablabla.

But ask for feedback? And you get Inky, Blinky, Pinky, and Clyde. Aka ghosted. But boy, do they tell a good story. Just ain't fucking true.

Counter: companies deserve the same amount of time invested in their application as they spend on your response.

unglaublich•42m ago
This can only be fixed by authors paying humans to read instead of the other way around.
raincole•1h ago
> AI-generated code feels like progress and efficiency, while AI-generated articles and posts feel low-effort

I've noticed that attitude a lot. Everyone thinks their use of AI is perfectly justified while the others are generating slop. In gamedev it's especially prominent - artists think generating code is perfectly ok but get an acute stress response when someone suggests generating art assets.

joshuaissac•1h ago
AI-generated code is meant for the machine, or for the author/prompter. AI-generated text is typically meant for other people. I think that makes a meaningful difference.
jvanderbot•1h ago
This is precisely correct IMHO.

Communication is for humans. It's our super power. Delegating it loses all the context, all the trust-building potential from effort signals, and all the back-and-forth discussion in which ideas and bonds are formed.

acedTrex•1h ago
Compiled code is meant for the machine, Written code is for other humans.
philipp-gayret•1h ago
Compiled natural language is meant for the machine, Written natural language is for other humans.
CivBase•31m ago
If AI is the key to compiling natural language into machine code like so many claim, then the AI should output machine code directly.

But of course it doesn't do that because we can't trust it the way we do a traditional compiler. Someone has to validate its output, meaning it most certainly IS meant for humans. Maybe that will change someday, but we're not there yet.

gordonhart•1h ago
For better or worse, a lot of people seem to disagree with this, and believe that humans reading code is only necessary at the margins, similarly to debugging compiler outputs. Personally I don't believe we're there yet (and may not get there for some time) but this is where comments like GP's come from: human legibility is a secondary or tertiary concern and it's fine to give it up if the code meets its requirements and can be maintained effectively by LLMs.
hinkley•1h ago
And Sturgeon tells us 90% of people are wrong, so what can you do.
threetonesun•28m ago
I rarely see LLMs generate code that is less readable than the rest of the codebase it's been created for. I've seen humans who are short on time or economic incentive produce some truly unreadable code.

Of more concern to me is that when it's unleashed on the ephemera of coding (Jira tickets, bug reports, update logs) it generates so much noise you need another AI to summarize it for you.

ripe•1h ago
Code can be viewed as design [1]. By this view, generating code using LLMs is a low-effort, low-value activity.

[1] Code as design, essays by Jack Reeves: https://www.developerdotstar.com/mag/articles/reeves_design_...

everforward•33m ago
A lot of writing (maybe most) is almost the same. Code is a means of translating a process into semantics a computer understands. Most non-fiction writing is a means of translating information or an idea into semantics that allow other people to understand that information or idea.

I don’t think either is inherently bad because it’s AI, but it can definitely be bad if the AI is less good at encoding those ideas into their respective formats.

acedTrex•1h ago
Yeah, I hate the idea that there's a difference. Code to me has always been as expressive about a person as normal prose. With LLMs you lose a lot of vital information about the programmer's personality. It leads to worse outcomes because it makes the failures less predictable.
jama211•47m ago
Code _can_ be expressive but it also can not, it depends on its purpose.

Some code I cobbled together to pass a badly written assignment at school. Other code I curated to be beautiful for my own benefit or someone else’s.

I think the better analogy in writing would be… using an LLM to draft a reply to a hawkish car dealer you’re trying to not get screwed by is absolutely fine. Using it to write a birthday card for someone you care about is terrible.

acedTrex•22m ago
All code is expressive, if a person emitted it, it is expressive about their state of mind, their values and their context.
hinkley•1h ago
A flavor of the Primary Attribution Error perhaps? It’s not a snug fit, but it’s close.
Blackthorn•56m ago
Turns out it's only slop if it comes from anyone else, if you generated it it's just smart AI usage.
jama211•45m ago
Generating art is worse than generating code though IMO. It’s more personal. Everything exists on a spectrum, even slop.
HarHarVeryFunny•42m ago
> Everyone thinks their use of AI is perfectly justified while the others are generating slops

No doubt, but I think there a bit of a difference between AI generating something utilitarian vs something expected to at least have some taste/flavor.

AI generated code may not be the best compared to what you could hand craft, along almost any axis you could suggest, but sometimes you just want to get the job done. If it works, it works, and maybe (at least sometimes) that's all the measure of success/progress you need.

Writing articles and posts is a bit different - it's not just about the content, it's about how it's expressed and did someone bother to make it interesting to read, and put some of their own personality into it. Writing is part communication, part art, and even the utilitarian communication part of it works better if it keeps the reader engaged and displays good theory of mind as to where the average reader may be coming from.

So, yeah, getting AI to do your grunt work programming is progress, and a post that reads like a washing machine manual can fairly be judged as slop in a context where you might have hoped for/expected better.

dfxm12•31m ago
The author is a blogger (creator and consumer) and coder though. They are speaking from experience in both cases, so it's not apt to your metaphor.

It's worth pointing out that AI is not a monolith. It might be better at writing code than making art assets. I don't work with gaming, but I've worked with Veo 3, and I can tell you, AI is not replacing Vince Gilligan and Rhea Seehorn. That statement has nothing to do with Claude though...

9999gold•1h ago
> I can't imaging writing code by myself again, specially documentation, tests and most scaffolding

Shouldn’t we bother to write these things?

post-it•1h ago
Documentation, maybe. Tests and scaffolding, no way. 99% of my time writing tests is figuring out how to make this particular React component testable. It's a waste of time. It's very easy to verify that a test is correct, making them the ideal thing to use AI for.
twoodfin•1h ago
Other than documentation (where I agree!), those are for communicating desired actions (primarily) to a machine.

A blog post is for communicating (primarily, these days) to humans.

They’re not the same audience (yet).

jama211•40m ago
Nah.
0gs•1h ago
all writing is developer documentation.
cgriswald•1h ago
> For me, writing is the most direct window into how someone thinks, perceives, and groks the world. Once you outsource that to an LLM, I'm not sure what we're even doing here. Why should I bother to read something someone else couldn't be bothered to write?

Because writing is a dirty, scratched window with liquid between the frames and an LLM can be the microfiber cloth and degreaser that makes it just a bit clearer.

Outsourcing thinking is bad. Using an LLM to assist in communicating thought is or at least can be good.

The real problem I think the author has here is that it can be difficult to tell the difference and therefore difficult to judge if it is worth your time. However, I think author/publisher reputation is a far better signal than looking for AI tells.

JoshTriplett•1h ago
> Because writing is a dirty, scratched window with liquid between the frames and an LLM can be the microfiber cloth and degreaser that makes it just a bit clearer.

Homogenization is good for milk, but not for writing.

trollbridge•1h ago
I'm not sure I'd agree with the statement "homogenization is good for milk". What makes it "good"?
JoshTriplett•1h ago
Fair enough, tastes vary. Many people prefer that milk not be chunky or lumpy, and want it to be uniform and consistent. Perhaps some do not.
cgriswald•16m ago
Clarity is good for writing and homogenization can increase clarity. There is a reason technical writing doesn’t read like journalism doesn’t read like fiction. There’s a reason we have dictionaries and editors. There’s a reason we have style guides. Including an LLM in writing in any of these roles or others isn’t ipso facto bad. I think many people who think it is just don’t like the style. And that’s okay, but the article isn’t about the style per se but about effort. Both lazy writing and effortful writing can be done with or without an LLM.
jvanderbot•1h ago
If you use an LLM to refine your ideas, you're basically adding a third party to the chat. There's really no need to copy-paste anything - you are the one that changes before you speak.

If you use an LLM to generate the ideas and justification and formatting and etc etc, you're just delegating your part in the convo to a bot.

NitpickLawyer•1h ago
> Outsourcing thinking is bad.

I keep seeing this and I don't think I agree. We outsource thinking every day. Companies do this every day. I don't study weather myself, I check an app and bring an umbrella if it says it's gonna rain. My team trusts each other to do some thinking in their area, and present bits sideways / upwards. We delegate lots of things. We collaborate on lots of things.

What needs to be clear is who owns what. I never send something I wouldn't stand by. Not in a correctness sense (I have, am and likely will be wrong on any number of things) but more in a "yeah, that is my output, and I stand by it now" kind of way. Tomorrow it might change.

Also remember that google quip "it's hard to edit an empty file". We have always used tools to help us. From scripts saved here and there, to shortcuts, to macros, IDE setups, extensions and so on. We "think once" and then try not to "think" on every little detail. We'd go nowhere with that approach.

Terr_•37m ago
IMO it helps to take a scenario and then imagine every task is being delegated to a randomized impoverished human remote contractor, with the same (lack of) oversight and involvement by the user.

There's a strong overlap between things which are bad (unwise, reckless, unethical, fraudulent, etc.) in both cases.

> We outsource thinking everyday. [...] What needs to be clear is who owns what.

Also once you have clarity, there's another layer where some owning/approval/delegation is not permissible.

For example, a student ordering "make me a 3 page report on the Renaissance." Whether the order went to another human or an LLM, it is still cheating, and that wouldn't change even if they carefully reviewed it and gave it a stamp of careful approval.

pohl•25m ago
Managers and business owners outsource thinking to their employees and they deserve huge paychecks for it. Entrepreneurs do it and we celebrate them. But an invention that allows the peon to delegate to an automaton? That’s where I draw the line.
cgriswald•6m ago
Right. I don’t think I disagee with anything you’ve said here.

However, if I had an idea and just fobbed the idea off to an LLM who fleshed it out and posted it to my blog, would you want to read the result? Do you want to argue against that idea if I never even put any thought into it and maybe don’t even care?

I’m like you in this regard. If I used an LLM to write something I still “own” the publishing of that thing. However, not everyone is like this.

jmull•54m ago
> author/publisher reputation is a far better signal than looking for AI tells

Hardly seems mutually exclusive. Surely you should generally consider the reputation of someone who posts LLM-responses (without disclosing it) to be pretty low.

A lot of people don’t particularly want to waste time reading the LLM-responses to someone else’s unknown/unspecified prompts. Someone who would trick you in to that doesn’t have a lot of respect for their readers and is unlikely to post anything of value.

phito•1h ago
I roll my eyes every time I see a coworker post a very long message full of emojis, obviously generated by an LLM with zero post-editing. Even worse when it's for social communication such as welcoming a new member to the team. It just feels so fake and disingenuous, I might even say gross.

I don't understand how they can think it's a good idea; I instantly classify them as lazy and inauthentic. I'd rather get texts full of mistakes coming straight out of their head than this slop.

dontwannahearit•1h ago
It's pretty much over for the human-internet. Search was gamed, its usefulness has plummeted, so humans will increasingly ask their LLM of choice and that LLM will have been trained on the content of the internet.

So when someone wants to know something about the topic that my website is focused on, chances are it will not be the material from the website they see directly, but a summary of what the LLM learned from my website.

Ergo, if I want to get my message across, I have to write for the LLM. It's the only reader that really matters, and it is going to have its stylistic preferences (I suspect bland, corporate, factual, authoritative, avoiding controversy; this will be the new SEO).

We meatbags are not the audience.

netsharc•59m ago
Tragedy of the attention economy. Ad networks give you money if you place their ads on your site, so people got machines to generate fluff to earn some money. Now all search results are just bullshit pages to capture your attention until the banner ad…

A simple query like "Ford Focus wheel nut torque" gives pages with blah blah like:

> Overview Of Lug Nut Torque For Ford Focus

> The Ford Focus uses specific lug nut torque to keep wheels secure while allowing safe driving dynamics. Correct torque helps prevent rotor distortion, brake heat transfer issues, and wheel detachment. While exact values can vary by model year, wheel size, and nut type, applying the proper torque is essential for all Ford Focus owners.

And the site probably has this text for each car model.

Somehow the ways the ad industry destroyed the Internet got very varied...

malfist•36m ago
And that site never actually lists the manufacturer recommended torque either. It's just all slop to get eyeballs.
dionian•13m ago
I think there is a huge market for quality detection in the future. Imagine a browser plugin that could filter AI slop like an ad blocker does ads. I'm sure it exists already, but I'm sure it needs to get more advanced.
comboy•40m ago
There there, remember when all images were hand painted? (me neither)

And I know it's different, but I'm surprised the overall sentiment on HN is so pessimistic. So maybe we will communicate through yet another black box on top of the hundreds that already exist, but probably mostly when seeking specific information and wanting to get it efficiently. Yes, this one is different; it makes human contact over text much more difficult. But a big part of all of this was already happening for years, and now it's just widely available.

When posting on HN you don't see the other person typing, like you would with the Unix talk command, but it is still meaningful.

Ideally we would like to preserve what we have untouched and only have new stuff as an option but it's never been like this. Did we all enjoy win 3.11? I mean it was interesting.. but clicking.. so inefficient (and of course there are tons of people who will likely scream from their GUIs that it still is and windows sucks, I'd gladly join, but we have our keyboard bindings, other operating systems, and get by somehow)

kjkjadksj•27m ago
There is a mountain of difference between photography and AI.
comboy•24m ago
This argument works against any new thing. Yes it is totally different than the thing that happened before and perhaps something that has never happened before, I don't deny that at all.

Perception of new things stays relatively constant over the years though.

Starlevel004•1h ago
I laugh every time somebody qualifies their anti-AI comments with "Actually I really like AI, I use it for everything else". The problem is bad, but the cause of the problem (and especially paying for the cause of the problem)? That's good!
Kerrick•1h ago
I laugh every time somebody thinks every problem must have a root cause that pollutes every non-problem it touches.

It's a problem to use a blender to polish your jewelry. However, it's perfectly alright to use a blender to make a smoothie. It's not cognitive dissonance to write a blog post imploring people to stop polishing jewelry using a blender while also making a daily smoothie using the same tool.

the_af•51m ago
Why laugh? Why can't a tool have good and bad uses, and why can't one be disappointed about the bad uses but embrace the good ones?
4b11b4•5m ago
It's not as one-dimensional as good vs bad. Transformers generally are extremely useful. Do I want to read your transformer generated writing? Fuck no. Is code generation/understanding/natural language interfaces to a computer good? I'd have to argue yes, certainly.

I cry every time somebody tries to frame it one dimensionally.

trollbridge•1h ago
I would be glad to read anyone's prompts they use to generate AI text. I don't see why I need to necessarily read the output, though.

I can take the other person's prompt and run it through an LLM myself and proceed from there.

stevenjgarner•1h ago
I think that may be an insight to something quite profound. We used to measure the "doubling of knowledge" against number of peer-reviewed papers of scientific research etc. Now not so much. "Knowledge" has become more proprietary, and condensed into the AI models replacing the libraries of training data. We now measure "doubling of knowledge" as the next version or iteration of a model. In some kind of real sense, the prompt IS more powerful than the output.
petetnt•1h ago
I agree with the general statement, if you didn’t spend time on writing it, I am not going to spend time reading it. That includes situations where the writer decides to strip all personality by letting AI format the end product. There’s irony in not wanting to read AI content, but still using it for code and especially documentation though, where the same principle should apply.
jimmaswell•1h ago
I find AI is great at documenting code. It's a description of what the code does and how to use it - all that matters is that it's correct and easy to read, which it almost certainly will be in my experience.
archagon•1h ago
Documentation is needed for intent. For everything else you could just read the code. With well-written code, “what the code does and how to use it” should be clear.
dematz•1h ago
>I can't imaging writing code by myself again, specially documentation, tests and most scaffolding.

Doesn't ai;dr kind of contradict ai generated documentation? If I want to know what claude thinks about your code I can just ask it. Imo documentation is the least amenable thing to ai. As the article itself says, I want to read some intention and see how you shape whatever you're documenting.

(AI adding tests seems like a good use, not sure what's meant by scaffolding)

fmbb•56m ago
The article is definitely contradicting itself. There are only two sentences between

> Why should I bother to read something someone else couldn't be bothered to write?

and

> I can't imaging writing code by myself again, specially documentation, tests and most scaffolding.

So they expect nobody to read their documentation.

jama211•41m ago
That’s not a contradiction - documentation often needs to be written with no expectation anyone will ever read it.
xvector•1h ago
Many engineers suck at writing. I'm fine with AI prose if it's more organized and information-dense than human prose. I'm sick of reading 6 page eng blogs to find a paragraph's worth of information.
Handy-Man•1h ago
OP took it from here without credit https://www.threads.com/@raytray4/post/DUmB657FR4P
dqv•14m ago
... attributionism for such a trivial thing is a waste of time. Multiple people can come up with a term like this independently because it's not that creative. People have been doing this with the ";dr" suffix for as long as it has been popular.

And you're wrong for suggesting that's the first use of ai;dr and further assuming that the author "stole" it from that post. https://rollenspiel.social/@holothuroid/113078030925958957 - September 4, 2024

FrankRay78•1h ago
Pop quiz. How much of the following article is AI generated versus hand written intention? Come on, tell me if you actually can tell anymore. https://bettersoftware.uk/2026/01/31/the-business-analyst-ro...
meindnoch•21m ago
I just skimmed the article, but I can already tell it's chock-full of LLMisms. In other words: ai;dr

Edit: ok, I've checked your profile and now I see that this is your website that you're astroturfing every thread you reply to. Stop doing that.

logicprog•1h ago
Yeah, I use LLM agents extensively for coding, but I have never once allowed an LLM to write anything for me. In the past month, I literally wrote 40,000 words of researched essays on various topics, and every single word was manually written, and every source manually read, myself. Writing is how I think, how I process information, and it's also an activity where efficiency is really not the goal.
TheChelsUK•1h ago
Thoughts with the people who use AI to help construct their thoughts because cognitive decline impacts their ability to construct words and sentences, but who still enjoy producing content, blogging and the indieweb.

These blanket binary takes are tiresome. There is nuance and rough edges.

martythemaniak•1h ago
I use a technique where LLMs help me write, but the final output is manual and entirely mine. It's a bit of heavy process, but I think it blends the power of LLM and authenticity of my thoughts fairly well, I'll paste in my blog post below (which wasn't produced using this method, hence the rambly nature of it):

If you care about your voice, don't let LLMs write your words. But that doesn't mean you can't use AI to think, critique and draft lots of words for you. It depends on what purpose you're writing it for. If you're writing an impersonal document, like a design document, briefing, etc then who cares. In some cases you already have to write them in a voice that is not your own. Go ahead and write these in AI. But if you're trying to say something more personal then the words should be your own, AI will always try to 'smooth' out your voice, and if you care about it, you gotta write it yourself.

Now, how do you use AI effectively and still retain your voice? Here's one technique that works well: start with a voice memo, just record yourself maybe during a walk, and talk about a subject you want, free form, skip around, jump sentences, just get it all out of your brain. Then open up a chat, add the recording or transcript, clearly state your intent in one sentence and ask the AI to consider your thoughts, your intent and ask clarifying questions. Like, what does the AI not understand about how your thoughts support the clearly stated intent of what you want to say? That'll produce a first draft, which will be bad. Then tell the AI all the things that don't make sense to you, that you don't like, just comment on the whole doc, get a second draft. Ask the AI if it has more questions for you, you can use live chat to make this conversation go smoother as well, when the AI is asking you questions, you can talk freely by voice. Repeat this one or two more times, and a much finer draft will take shape that is closer to what you want to say. During this drafting stage, the AI will always try to smooth or average out your ideas, so it is important to keep pointing out all the ways in which it is wrong.

This process helps by making all the thinking involved more up-front. Once you've read and critiqued several drafts, all your ideas will be much clearer and sort of 'cached' and ready to be used in your head. Then, sit down and write your own words from scratch; they will come much easier after all your thoughts have been exercised during the drafting process.

ecshafer•1h ago
> When it comes to content..

This is the root cause of the problem. Labeling all things as just "content". Content entering the lexicon is a mind shift in people. People are not looking for information, or art, just content. If all you want is content then AI is acceptable. If you want art then it becomes less good.

Tycho•1h ago
When people put together memos or decks in the past, even if they weren't read very carefully, at least they reassured management that someone had actually thought things through. But that is no longer a reliable signal.
mikemarsh•1h ago
> I can't imaging writing code by myself again, specially documentation, tests and most scaffolding

> Why should I bother to read something someone else couldn't be bothered to write?

Interesting mix of sentiments. Is this code you're generating primarily as part of a solo operation? If not, how do coworkers/code reviewers feel about it?

improbableinf•1h ago
That’s exactly my thoughts. Code and documentation are among the primary types of „content” by/for engineers. That kind of goes against the main topic of the article.
phyzome•1h ago
Seems pretty silly to me to rail against AI-generated writing and then say it's good for documentation.
Daishiman•37m ago
Documentation can be fairly rote, straightforward and can have a uniform style that doesn't benefit from being opinionated.
nubg•1h ago
ai;dr
charcircuit•1h ago
>Why should I bother to read something someone else couldn't be bothered to write?

This take is baffling to me when I see it repeated. It's like saying why should people use Windows if Bill Gates did not write every line of it himself. We won't be able to see into Bill's mind. Why should you read a book if the author couldn't be bothered to write it properly and had an editor come in to fix things?

The main purpose of a creative work is not seeing intimately into the creator's mind. And the idea that it is only people who don't care who use LLMs is wrong.

ssiddharth•1h ago
What is creative about generating an article from a stub? The kernel of the article around which the LLM constructs the content? I'm not trying to be an ass, just curious.
mikestew•57m ago
It's like saying why should people use Windows if Bill Gates did not write every line of it himself.

What? It’s nothing like that, at all. I don’t know that Gates has claimed to have written even a single line of Windows code. I’m not asking for the perfect analogy, but the analogy has to have some tie to reality or it’s not an analogy at all. I’m only half-joking when I wonder if an AI wrote this comment.

esafak•1h ago
The purpose of communication is to reduce the cost of obtaining information; I tell you what I have already figured out and vice versa. If we're both querying the same oracle, there is nothing gained beyond the prompt itself (which can be valuable).
dizhn•1h ago
Short and sweet to have coined the term? Or did it exist already?
pwillia7•54m ago
I am 100% right there with you. Writing in my voice is maybe the last thing I have that I can do differently and 'better' than an LLM in a couple of years' time, or even right now if I'm really being honest.

I haven't even really tried to use LLMs to write anything from a work context because of the ideas you talk about here.

jama211•42m ago
Writing to express yourself sure. Using an llm for a birthday card would be a terrible sin. However if someone used it for, I dunno, drafting an email because you’re in a dispute with an evil real estate agent and you’re trying not to get screwed, I wouldn’t have any qualms about it.

IMO it’s lazy and bad for expressive writing, but for certain things it’s totally fine.

dsign•53m ago
Ever worried that ChatGPT would rat you out to the authorities because there is such a thing as thought crime? For that reason, there is a vast, unexplored territory where abhorrent ideas and pornographic vulgarity combine with literary prose (or convoluted, defective, god-awful prose, like the one I'm using right now) and entertaining story-telling that will remain human-only for a while. May we all find a next read that we love. Also, we all may need to (re-)learn to draw phalli.
BobAliceInATree•53m ago
> I'm having a hard time articulating this but AI-generated code feels like progress and efficiency, while AI-generated articles and posts feel low-effort and make the dead internet theory harder to dismiss.

I think it's the size of the audience that the AI-generated content is for, is what makes the difference. AI code is generally for a small team (often one person), and AI prose for one person (email) or a team (internal doc) is often fine as it's hopefully intentional and tailored. But what's even the point for AI content (prose or code) for a wide audience? If you can just give me the prompt and I can generate it myself, there's no value there.

exe34•53m ago
I tell all my friends: send me your prompts. Don't send me the resulting slop.
furyofantares•52m ago
I call out articles on here constantly and have gotten kind of tired of it. Well, very tired of it. I am in full agreement with this post.

I don't have any solutions though. Sometimes I don't call out an article - like the Hashline post today - because it genuinely contains some interesting content. There is no doubt in my mind that I would have greatly preferred the post if it were just whatever the author prompted the LLM with rather than the LLM output, and it would have better communicated their thoughts to me. But it also would have died on /new and I never would have seen it.

smallerfish•48m ago
I've been doing a lot of AI writing for a site - to do it well takes effort. I have a research agent, a fact check agent, a logical flow agent, a narrative arc analyzing agent, etc etc. Once I beat the article roughly into the shape I want it to be, I then read through end to end, either making edits myself or instructing an editor agent to do it. You can create some high quality writing with it, and it is still quicker than doing it the human-only way. One thing I like (which is not reason enough by itself) is that it gives you a little distance from the writing, making it easier to be ruthless about editing...it's much harder to cut a few paragraphs of precious prose that you spent an hour perfecting by hand. Another bonus is that you have fewer typos and grammatical issues.

But of course, like producing code with AI, it's very easy to produce cheap slop with it if you don't put in the time. And, unlike code, the recipient of your work will be reading it word by word and line by line, so you can't just write tests and make sure "it works" - it has to pass the meaningfulness test.

weinzierl•46m ago
"Growing up, typos and grammatical errors were a negative signal. Funnily enough, that’s completely flipped for me."

For me too and for writing it has the upside that it's sooo relaxing to just type away and not worry about the small errors much anymore.

hinkley•25m ago
Autocorrect is going to make me sound like a dumbass anyway.
ravirajx7•45m ago
AI has kind of ruined the internet for me.

I no longer feel joy in reading things, as almost all of the writing seems the same and pale to me, as if everyone is putting their thoughts down in the same way.

Having your own way of writing always felt personal; it was how you expressed your feelings most of the time.

The saddest part for me is that I can no longer make out someone's true feelings (which were always hard to express in writing, as articulation is hard).

We see it being used by our favourite sports person in their retirement post, by someone who has lost a loved one, by someone who just got their first job, and it's just sad that we can never have those old pre-AI days back again.

hinkley•24m ago
I used to daydream of a 'dark web' but for humans not criminals. But at this point I don't know how you'd keep slop out, given how high human collusion has gotten of late.
platinumrad•13m ago
Things like retirement posts have always been vetted (if not written by) PR agencies, so in that sense I think it would be a good thing if the mass delusion of parasocial relationships with celebrities were indirectly broken by AI-created skepticism.

However, I agree that ordinary people filtering and flattening their communication into a single style is a great loss.

benatkin•43m ago
Don't worry, author, I don't think you're a luddite. You make that quite clear with this:

> I can't imaging writing code by myself again

After that, you say that you need to know the intention for "content".

I think it's pretty inconsistent. You have a strict rule in one direction for code and a strict rule in the opposite direction for "content".

I don't think that writing code unassisted should be taken for granted. Addy Osmani covered that in this talk: https://www.youtube.com/watch?v=FoXHScf1mjA I also don't think all "content" is the sort of content where you need to know the intention. I'll grant that some of it is, for sure.

Edit: I do like intentional writing. However, when AI is generating something high quality, it often seems like it has developed an intention for what it's building, whether one that was conceived and communicated clearly by the person working with the AI or one that emerged unexpectedly through the interaction. And this applies not just to prose but to code.

nate•42m ago
i am absolutely on the fence here. I do like the ai cleanup of my rambling can do. but yes, i'm tempted to just leave it rambly, misspelled, etc. i find myself swearing more in my writing, just to give it more signal that: yeah, this probably aint an ai talking (writing) like this to you :) and yes, caps, barely.
fleebee•41m ago
I wasn't about to call them a luddite. This is a pretty poorly veiled attempt at drumming up the inevitability of AI coding. Did they really need to defend their preference for not reading LLM prose with "I will never write code manually again"?
extra__tofu•38m ago
said “groks the world”; didn’t read
andrewdb•36m ago
We are getting to a point where AI will be able to construct sound arguments in prose. They will make logical sense. Dismissing them only because of their origin is fallacious thinking.

Conclusion:

Dismissing arguments solely because they are AI-generated constitutes a class of genetic fallacy, which should be called 'Argumentum ad machina'.

Premises:

1. The validity of a logical argument is determined by the truth of its premises and the soundness of its inferences, not by the identity of the entity presenting it.

2. Dismissing an argument based on its source rather than its content constitutes a genetic fallacy.

3. The phrase 'that's AI-generated' functions as a dismissal based on source rather than content.

Assumptions:

1. AI-generated arguments can have true premises and sound inferences

2. The genetic fallacy is a legitimate logical error to avoid

3. Source-based dismissals are categorically inappropriate in logical evaluation

4. AI should be treated as equivalent to any other source when evaluating arguments

grishka•35m ago
> Before you get your pitchforks out..

> ..and call me an AI luddite

Oh please do call me an AI luddite. It's an honor for me.

giancarlostoro•35m ago
The correct way to use AI for writing is to ask for feedback, not the entire output. This is my personal opinion, English is not my first language, so sometimes I miss what's obvious to a native speaker. I've always used tools that tell me what's wrong with my writing as an opportunity to learn to do better next time. When I finally had Firefox on my computer and it corrected my spelling, it helped me to improve my spelling 100-fold. I still have weird grammar issues with punctuation here and there, and don't ask me where to put a coma (comma?) - that's another one, because I always forget.

I think using AI for writing feedback is fine, but if you're going to have it write for you, don't call it your writing.

ef2k•30m ago
I really liked this post. It's concise and gets straight to the point. When it comes to presenting ideas, I think this is the best way to counter AI slop.
