
Ask HN: How do you motivate your humans to stop AI-washing their emails?

12•causal•2h ago
I see it more and more in email, Slack, text, etc: People too scared to share their own thoughts so they AI-wash it and send an exhausting page of "It's not X, it's Y!" slop instead.

I'm not the CEO, I can't order people to stop. The CEO does it too.

I try talking to people directly, but people get defensive and there's always the chance they didn't use AI. I need indirect means of socializing change.

Looking for anything I can use to socialize against AI-washing: articles, memes, policies that other companies have successfully used, whatever.

Comments

jjgreen•2h ago
A noble but essentially Sisyphean goal; you might as well try to get people to stop playing with their phones.
causal•1h ago
Fair but I have seen workplaces keep phone use largely curtailed. Surely it's not so impossible with AI...right? Right...? :/
alexdobrenko•1h ago
is this written by AI
causal•56m ago
No sir
theorchid•1h ago
I tried to write my first blog posts using AI. I created dozens of restrictions and rules so that it would produce human-like text, which I then edited. The text contained only my thoughts, but the AI formatted them. However, no matter how much I tried to prohibit constructions such as "It's not X, it's Y!", it still added them. I had to revise 10 drafts before I had the final version. When I stopped using AI for my texts, my productivity increased, and I can now complete an essay in 1-2 drafts, which is 5 times faster than when using AI.

This is strikingly different from development. In development, AI increases my productivity fivefold, but in texts, it slows me down.

I thought: maybe the problem is simply that I don't know how to write texts, but do know how to develop? But the thing is, AI development uses standard code, with recognized patterns, techniques, and architecture. It does what (almost) the best programmer in their field would do. And its code can be checked with linters and tests. It's verifiable work.

But AI is not yet capable of writing text the way a living person does. Because text cannot be verified.

causal•50m ago
Verifiability is part of it, but I think the "semantic ablation" article on the front page really captures my problem with AI-washed writing: https://www.theregister.com/2026/02/16/semantic_ablation_ai_...

I think any use of AI "unrolls" the prompt into a longer but thinner form. This is true of code too I think, but it's still useful because so much of coding is boilerplate and methods that have been written a thousand times before. Great, give me the standard implementation, who cares.

But if you're doing hard algorithmic work and really trying to do novel "computer science", I suspect semantic ablation would take an unacceptable toll.

svilen_dobrev•59m ago
the important word is "scared".

if the incentive / whiff / hint from-the-top is "those not using AI are out"... there's no stopping that..

causal•47m ago
Agreed... I'm not at the top.
butlike•58m ago
The last thing I want to do is have my emails glossed over with AI to make my boss think I'm MORE replaceable haha
11101010010001•58m ago
You answered your own question. People are 'too scared' to share their thoughts, so they share the AI's instead. I suspect that if you scared people about the use of AI, there may be an increase in usage.
causal•48m ago
Did you mean decrease in your last sentence? Or do you simply mean any solution will make the problem worse?
mixmastamyk•57m ago
Block *.ai at the router, and all major sites. Someone has probably made a comprehensive blocklist by now.
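[A wildcard rule like "block *.ai" boils down to hostname pattern matching. As a minimal sketch of that idea (the patterns and hostnames below are illustrative assumptions, not a real or comprehensive blocklist), here is the kind of filter a router or DNS proxy would apply:]

```python
from fnmatch import fnmatch

# Hypothetical wildcard blocklist in the spirit of "block *.ai at the router".
# Each pattern is matched against a full hostname.
BLOCKLIST = ["*.ai", "chatgpt.com", "*.openai.com", "gemini.google.com"]

def is_blocked(host: str) -> bool:
    """Return True if the hostname matches any blocklist pattern."""
    host = host.lower().rstrip(".")  # normalize case and trailing dot
    return any(fnmatch(host, pattern) for pattern in BLOCKLIST)
```

[In practice the same effect is usually configured directly in the router's DNS layer rather than in code, e.g. dnsmasq's `address=/ai/0.0.0.0`-style domain rules, which match a domain and all its subdomains.]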
causal•49m ago
I mean most of us certainly don't have that kind of authority, and that's not really going to stop AI use when it comes embedded in every service these days.
Lionga•49m ago
You’re describing a real coordination problem: over-polished, abstraction-heavy “AI voice” increases cognitive load and reduces signal. Since you don’t have positional authority—and leadership models the behavior—you need norm-shaping, not enforcement. Here are practical levers that work without calling anyone out:

1. Introduce a “Clarity Standard” (Not an Anti-AI Rule) Don’t frame it as anti-AI. Frame it as decision hygiene. Propose lightweight norms in a team doc or retro:

TL;DR (≤3 lines) required

One clear recommendation

Max 5 bullets

State assumptions explicitly

If AI-assisted, edit to your voice

This shifts evaluation from how it was written to how usable it is. Typical next step: Draft a 1-page “Decision Writing Guidelines” and float it as “Can we try this for a sprint?”

2. Seed a Meme That Rewards Brevity Social proof beats argument. Examples you can casually share in Slack:

“If it can’t fit in a screenshot, it’s not a Slack message.”

“Clarity > Fluency.”

“Strong opinions, lightly held. Weak opinions, heavily padded.”

Side-by-side: AI paragraph → Edited human version (cut by 60%)

You’re normalizing editing down, not calling out AI. Typical next step: Post a before/after edit of your own message and say: “Cut this from 300 → 90 words. Feels better.”

3. Cite Credible Writing Culture References Frame it as aligning with high-signal orgs:

High Output Management – Emphasizes crisp managerial communication.

The Pyramid Principle – Lead with the answer.

Amazon – Narrative memos, but tightly structured and decision-oriented.

Stripe – Known for clear internal writing culture.

Shopify – Publicly discussed AI use, but with expectations of accountability and ownership.

You’re not arguing against AI; you’re arguing for ownership and clarity. Typical next step: Share one short excerpt on “lead with the answer” and say: “Can we adopt this?”

4. Shift the Evaluation Criteria in Meetings When someone posts AI-washed text, respond with:

“What’s your recommendation?”

“If you had to bet your reputation, which option?”

“What decision are we making?”

This conditions brevity and personal ownership. Typical next step: Start consistently asking “What do you recommend?” in threads.

5. Propose an “AI Transparency Norm” (Soft) Not mandatory—just a norm:

“If you used AI, cool. But please edit for voice and add your take.”

This reframes AI as a drafting tool, not an authority. Typical next step: Add a line in your team doc: “AI is fine for drafting; final output should reflect your judgment.”

6. Run a Micro-Experiment Offer:

“For one sprint, can we try 5-bullet max updates?”

If productivity improves, the behavior self-reinforces.

Strategic Reality If the CEO models AI-washing, direct confrontation won’t work. Culture shifts via:

Incentives (brevity rewarded)

Norms (recommendations expected)

Modeling (you demonstrate signal-dense writing)

You don’t fight AI. You make verbosity socially expensive.

If helpful, I can draft:

A 1-page clarity guideline

A Slack post to introduce it

A short internal “writing quality” rubric

A meme template you can reuse

Which lever feels safest in your org right now?

causal•47m ago
Very funny
impendia•45m ago
If people are scared to share their thoughts, then that seems like the problem.

Also, how much of this communication is actually necessary? If someone doesn't care about an issue enough to write their own email, then why are they sending an email about it in the first place?

kylehotchkiss•30m ago
With slack and text, "Edit Message" exists. People need to get over their fear.

Email, being send-once (what you said persists forever), is a little scarier. It'd be nice to have a messaging protocol at work where a typo or a wrongly pasted URL isn't so consequential. I've been at this for 14 years now, and I still re-read emails I send to clients 10+ times to make sure I'm not making even the most minor of mistakes.

tacostakohashi•28m ago
Sometimes I ask in chats / emails etc.: "Are there any new proposals that I missed here? All I'm seeing is AI slop."

I think it's totally legit to ask, and specify that you are looking for new insights, proposals, etc. and not regurgitated AI summaries.
