Why are we getting upset when someone is doing what we all do? Is it the obvious (over)use?
It has gotten to the point where I no longer revise my writing as I used to, so it remains "authentic" (leaving minor spelling and perhaps some grammatical errors, common for non-native English speakers).
There is no question we are going to deal with a growing amount of AI-generated content, and by "we" I mean us, humans, and our agents. The question is: when will it balance out? When will we all accept it as a fact?
Is it related to the fact that we are willing to pay extra for "handcrafted" goods? (As an analogy: it is obvious that a machine made my shoe.)
(*) Perhaps not all, but 99.999% of us.
-- For the irony in it, below is the "Polished by Claude" version of the above --
ASK HN: Balancing AI use in human-to-human communication
We're all using LLMs throughout our workflows. That's not a controversial claim. And yet there's a near-universal irritation when you receive an email that was clearly "polished" by one — or read a comment that was obviously AI-rephrased. The frustration is real, and so is the hypocrisy.
Is it the obviousness of it? The lack of effort? Or something more uncomfortable — that we recognize the mask because we wear it ourselves? It's gotten to the point where I've stopped revising my own writing, deliberately leaving minor spelling errors and grammatical quirks intact. Common for non-native speakers. I keep them as proof of presence.
The volume of AI-generated content will only grow. Not just from humans making choices about it — but from our agents, acting on our behalf, communicating with each other's agents. At what point does the frustration normalize? When do we simply accept it as infrastructure, the way we accepted email or autocorrect?
There's a useful analogy in handcrafted goods. We know the shoe was machine-made. We still pay more for the one that wasn't. The premium isn't about the shoe — it's about the signal. Proof that a human chose to spend time.
Maybe that's what we're really reacting to: not the AI assistance itself, but the absence of the signal.