How do I tell my colleagues to stop contributing unverified AI output without creating tension between us?
I've never done that so far because I feel like I'm either exposing their serious lack of professionalism or, if I wrongly assumed it was AI, plainly telling them that their work looks like bad AI slop.
The most interesting incident for me was having someone take our Discourse thread, paste it into an AI to validate their hurt feelings (it took a follow-up prompt to go full sycophancy), and then post the response, which lambasted me, back to the thread. The mods handled that one before I was aware, but I later did the same thing myself, with different prompts, and never shared the output. It was an intriguing experience and exploration. I've since been even more mindful of my writing, sometimes using similar prompts to adjust my tone or call me out. I still write the first pass myself, rarely relying on AI for editing.
I've found success having sidebar conversations with the colleague (i.e., not in the main public thread where they pasted slop), explaining why it was disruptive and suggesting how they might change their behavior. It may also be useful to see if you can propose or contribute to a broader policy on appropriate AI use, and then lean on that policy as justification for the conversation.
Address the pattern rather than the person? General team reviews or the like. As long as it's not tech leadership pressing for it...
Make them realise they're replacing themselves if they continue down that path. "What value do you have if you're just acting as a pipe to the AI?"
I do find it interesting that people don't mind AI content, as long as it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral...like they're being hoodwinked somehow.
I suspect the endgame of this is probably the fulfillment of Dead Internet Theory, where it's just AI creating content and AI browsing the internet for content, and users will never engage with it directly. That person who spent 10 seconds getting AI to write something will be consumed by AI as well, only to be surfaced to you when you ask the AI to summon and summarize.
And if that fills people with horror at the inefficiency of it all, well, like I said, it isn't like the internet was a bastion of efficiency before. We smiled and laughed for years that all of this technology and power is just being used to share cat videos.
Isn't it obvious? If I'd wanted to see an AI response to my question, I'd have asked it myself (maybe I already did). If I'm asking humans, I want to see human responses. I eat fast food sometimes, but if I were served a Big Mac at a sit-down restaurant I'd be properly upset.
Because it's probably not actually about the content but the sense of connection. People want to feel like they're connecting to people. That they're worthy of someone else's time and attention.
And if that's what people are seeking, slack and social media are probably not the platforms for it (and, arguably, never were).
But it doesn't? I'm more than capable of using Google and ChatGPT myself. If I were looking for a machine-generated answer to my question, I would have already found it myself and never made the post in the first place. If I went to the effort of posting the question, it means that either the slop answer is not sufficient for some reason or that I want to hear from actual humans who have subjective experiences that an LLM cannot.
Posting an AI response verbatim basically says "I think you're too stupid to click a couple of buttons, so let me show you how it's done". I think it's very reasonable to get upset at the implication.
I honestly don't mind someone else's AI as long as I can trust it/them. One problem I have with sloppypasta specifically is that it reads as raw LLM output and the user isn't transparent about how they worked with the AI or what they verified. "ChatGPT says" isn't enough; for me to avoid inheriting a verification burden, I'd also need to understand what they were prompting for, if they iterated with the AI, and if/what/how they validated.
(the other problem is that dumping a multi-paragraph response in the midst of a chat thread is just obnoxious, but that's true even if it's artisanal human-written text)
Well, cat videos make people happy.
It is easy to do on social media because the context is global, but in enterprises it is a bit harder.
Something like "flagged as very likely untrue by AI" is something I would really appreciate.
I see many posts and comments throughout the internet that can easily be dispelled by a single LLM prompt. But this should only be used when the confidence is really high.
It's also very impolite to dump 5 pages of text on someone, because now you're asking _them_ to validate it.
When I ask a question in Slack I want people's input. Part of my work is also consulting the GPTs and seeing if the information makes sense.
And it shows up the most with people who answer questions in domains they're not 100% familiar with.
They did disclose AI usage which is good: https://github.com/ahgraber/stopsloppypasta?tab=readme-ov-fi...
However, the essay and the guidelines were all human-written!
I'm possibly too jaded / cynical already...
So trace* through ninerealmlabs and ahgraber and sure enough:
I used AI:
- to help build this website.
- to help generate examples of sloppypasta based on my original guidance
- to proofread and review the human-written copy to provide a critical review
- to improve my arguments and ensure clarity.
Kudos for being forthright.

---
* Turns out clicking "Open Source" bottom right gets there faster!
Happy to take suggestions on this!
They are stealing our work, turning it into a model, and then renting our decisions to less intelligent people.
They (tech companies) don’t want us to be smart any more. They are commodifying intelligence.
But I think the deeper problem is that sloppypasta is a symptom of something we haven't named yet: the collapse of the signal that someone has thought about something. Before LLMs, a long, detailed response in Slack implied the person had spent time thinking. Now it implies nothing — it could be 30 seconds of prompting. We've lost the ability to distinguish effort from output, and that breaks the social contract of professional communication.
The fix isn't etiquette guides (the people who need them won't read them). It's cultural norms enforced through friction — the same way code review catches sloppy PRs. If your team starts routinely asking "did you verify this?" when someone pastes a wall of text, the behavior self-corrects fast.
sloppypasta: Verbatim LLM output copy-pasted at someone, unread, unrefined, and unrequested. From slop (low-quality AI-generated content) + copypasta (text copied and pasted, often as a meme, without critical thought). It is considered rude because it asks the recipient to do work the sender did not bother to do themselves.