
Stop Sloppypasta

https://stopsloppypasta.ai/
61•namnnumbr•6h ago

Comments

namnnumbr•6h ago
Tired of people at work pasting raw ChatGPT output into chats, I coined the term "sloppypasta" and wrote this rant to explain why it's rude, along with some guidelines for what to do instead.

sloppypasta: Verbatim LLM output copy-pasted at someone, unread, unrefined, and unrequested. From slop (low-quality AI-generated content) + copypasta (text copied and pasted, often as a meme, without critical thought). It is considered rude because it asks the recipient to do work the sender did not bother to do themselves.

ares623•28m ago
I'm glad that the term "slop" really caught on. It's such a succinct way to describe the phenomenon, and at the same time it's so malleable. Sloppypasta, Microslop, Workslop, Ensloppification, etc.
stabbles•1h ago
I wouldn't call "ChatGPT says" an equivalent of LMGTFY. The former is people in awe of the oracle; the latter is people tired of having to look something up for others.
verdverm•1h ago
I would say LMAAFY is like LMGTFY, whereas sloppypasta is more like pasting a list of search results without vetting them. That is, there are two phases to this phenomenon: query and results.
uniq7•1h ago
This article's proposal for stopping sloppypasta is to convince the people who do it to stop doing it, but I am more interested in what someone who receives sloppypasta can do.

How do I tell my colleagues to stop contributing unverified AI output without creating tension between us?

I've never done that so far because I feel like I am either exposing their serious lack of professionalism or, if I wrongly assumed it was AI, plainly telling them that their work looks like bad AI slop.

verdverm•1h ago
I've had some luck pointing out where the AI is wrong in their sloppypasta, as delicately as one can. Avoiding shame or embarrassment can be a powerful motivator.

The most interesting incident for me was having someone take our Discourse thread, paste it into an AI to validate their feelings being hurt (it took a follow-up prompt to go full sycophancy), and then post the response that lambasted me back to the thread. The mods handled that one before I was aware, but I then did the same thing, giving different prompts, and never sharing the output. It was an intriguing experience and exploration. I've since been even more mindful of my writing, sometimes using similar prompts to adjust my tone or call me out. I still write the first pass myself, rarely relying on AI for editing.

namnnumbr•1h ago
I wrote this intending it to be directly sharable and/or to provide a framework for how to have that discussion, kind of like a nohello.net or dontasktoask.com.

I've found success having sidebar conversations with the colleague (i.e., not in the main public thread where they pasted slop), explaining why it was disruptive and suggesting how they might alter their behavior. It may also be useful to propose or contribute to a broader policy on appropriate AI use, and leverage that policy to justify the conversation.

kace91•59m ago
>How do I tell my colleagues to stop contributing unverified AI output without creating tension between us?

Pattern rather than person? General team reviews or the like. As long as it's not tech leadership pressing for it...

userbinator•44m ago
> How do I tell my colleagues to stop contributing unverified AI output without creating tension between us?

Make them realise they're replacing themselves if they continue down that path. "What value do you have if you're just acting as a pipe to the AI?"

incognito124•1h ago
Related: https://news.ycombinator.com/item?id=44617172
namnnumbr•1h ago
100%: this was inspired by, and quotes, "It's rude to show AI output to people". Thanks for linking the discussion!
madrox•1h ago
I find that I don't have a lot of sympathy for people angry at this type of behavior, even though I share the disdain for someone else's AI output. The people doing this kind of thing are not the kind of people to be reading this manifesto. We've been creating bait content for a long time, and humans have never been given the tools to manage this in any sophisticated fashion. The internet was not a bastion of high quality content or discourse pre-AI. We need better tools as content consumers to filter content. Ironically, AI is what may actually make this possible.

I do find it interesting that people don't mind AI content, as long as it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral... like they're being hoodwinked somehow.

I suspect the endgame of this is probably the fulfillment of Dead Internet Theory, where it's just AI creating content and AI browsing the internet for content, and users will never engage with it directly. That person who spent 10 seconds getting AI to write something will be consumed by AI as well, only to be surfaced to you when you ask the AI to summon and summarize.

And if that fills people with horror at the inefficiency of it all, well, like I said, it isn't like the internet was a bastion of efficiency before. We smiled and laughed for years that all of this technology and power is just being used to share cat videos.

valicord•58m ago
> I do find it interesting that people don't mind AI content, as long it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral.

Isn't it obvious? If I'd wanted to see an AI response to my question, I'd ask it myself (maybe I already did). If I'm asking humans, I want to see human responses. I eat fast food sometimes, but if I were served a Big Mac at a sit-down restaurant I'd be properly upset.

madrox•52m ago
> If I'm asking humans, I want to see human responses

I find this fascinating, honestly. It shouldn't matter as long as it addresses your ask, yet it does. I also wish I could filter social media on "it's not X. It's Y"

Because it's probably not actually about the content but the sense of connection. People want to feel like they're connecting to people. That they're worthy of someone else's time and attention.

And if that's what people are seeking, slack and social media are probably not the platforms for it (and, arguably, never were).

valicord•47m ago
> It shouldn't matter as long as it addresses your ask, yet it does.

But it doesn't? I'm more than capable of using Google and chatgpt myself. If I was looking for a machine generated answer to my question I would have already found it myself and never made the post in the first place. If I went to the effort of posting the question, it means that either the slop answer is not sufficient for some reason or that I want to hear from actual humans that have subjective experiences that an LLM cannot.

Posting an AI response verbatim basically says "I think you're too stupid to click a couple of buttons, so let me show you how it's done". I think it's very reasonable to get upset at the implication.

namnnumbr•42m ago
I acknowledge that those likely to copypaste slop aren't likely to find this article themselves, but I built the page to be shared or guide discussions around etiquette like nohello.net or dontasktoask.com. IMO a common understanding of AI etiquette would provide social pressure to halt some of these behaviors.

I honestly don't mind someone else's AI as long as I can trust it/them. One problem I have with sloppypasta specifically is that it reads as raw LLM output and the user isn't transparent about how they worked with the AI or what they verified. "ChatGPT says" isn't enough; for me to avoid inheriting a verification burden, I'd also need to understand what they were prompting for, if they iterated with the AI, and if/what/how they validated.

(the other problem is that dumping a multi-paragraph response in the midst of a chat thread is just obnoxious, but that's true even if it's artisanal human-written text)

Aeolun•19m ago
Yes, I can replace the link to nohello in my automated responses now :)
mcphage•34m ago
> We smiled and laughed for years that all of this technology and power is just being used to share cat videos.

Well, cat videos make people happy.

jjgreen•26m ago
This Firefox extension replaces Daily Mail pages by pictures of kittens https://addons.mozilla.org/en-US/firefox/addon/kitten-block/
simianwords•48m ago
I've been thinking about this: what if an AI ran autonomously and flagged claims that are factually incorrect?

It is easy to do in social media because the context is global but in enterprises it is a bit harder.

Something like "flagged as very likely untrue by AI" is something I would really appreciate.

I see many posts and comments throughout the internet that can easily be dispelled by a single LLM prompt. But this should only be used when the confidence is really high.
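The thresholded flagging idea above could be sketched roughly as follows. This is a minimal illustration, not a real moderation system: `query_model` is a hypothetical stand-in for an actual LLM call (stubbed here with canned verdicts), and the point is the gating logic, which stays silent unless the model's self-reported confidence clears a high bar.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    likely_false: bool
    confidence: float  # model's self-reported confidence in [0, 1]


def query_model(claim: str) -> Verdict:
    # Hypothetical stub: a real implementation would prompt an LLM and
    # parse a structured verdict. Canned answers here for illustration.
    canned = {
        "The moon is made of cheese.": Verdict(True, 0.99),
        "This library is probably faster.": Verdict(True, 0.55),
    }
    return canned.get(claim, Verdict(False, 0.0))


def flag_if_confident(claim: str, threshold: float = 0.9):
    """Return a flag string only when the model is highly confident;
    otherwise return None rather than risk a false accusation."""
    v = query_model(claim)
    if v.likely_false and v.confidence >= threshold:
        return f"flagged as very likely untrue by AI (confidence {v.confidence:.2f})"
    return None
```

With these canned verdicts, the clear-cut falsehood gets flagged while the hedged, low-confidence claim passes silently, which matches the "only when confidence is really high" constraint.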

OptionOfT•46m ago
It's very weird how many people take the output of ChatGPT/Gemini/Claude as gospel, and don't question it at all.

It's also very impolite to dump 5 pages of text on someone, because now you're asking _them_ to validate it.

When I ask a question in Slack I want people's input. Part of my work is also consulting the GPTs and seeing if the information makes sense.

And it shows up the most with people who answer questions in domains they're not 100% familiar with.

Aeolun•17m ago
I don’t mind this so much if they don’t know anything about the subject themselves. What bothers me is when they then paste it at domain experts as if it makes them qualified to talk.
rrr_oh_man•40m ago
It's ironic, because the site has all the hallmarks of an LLM generated website.
spondyl•24m ago
From what I've seen in the past, Claude Code is quite a fan of serif fonts in its frontend designs.

They did disclose AI usage which is good: https://github.com/ahgraber/stopsloppypasta?tab=readme-ov-fi...

namnnumbr•23m ago
Oh, I 100% acknowledge the site itself was LLM generated. I'm not a web designer, so I needed a lot of help making a visually appealing site, even if that design language is at this point LLM trope.

However, the essay and the guidelines were all human-written!

rrr_oh_man•11m ago
Credit to you for your candor!

I'm possibly too jaded / cynical already...

Terretta•11m ago
Hits you in the first row of buttons with the classic gen-AI slop "Why It Matters".

So trace* through ninerealmlabs and ahgraber and sure enough:

  I used AI:
  - to help build this website.
  - to help generate examples of sloppypasta
    based on my original guidance
  - to proofread and review the human-written
    copy to provide a critical review
  - to improve my arguments and ensure clarity.
Kudos for being forthright.

---

* Turns out clicking "Open Source" bottom right gets there faster!

namnnumbr•5m ago
I talked myself in circles on that "why it matters" heading but ultimately couldn't come up with a better one. "The problem" has similar ai-slop feel, and "the rant" // "the rules" didn't really evoke the feeling I wanted.

Happy to take suggestions on this!

chewbacha•40m ago
When you must remind someone to “think” when using a technology, because the path of least resistance is to not think… it feels like the technology isn’t really helping.

They are stealing our work, turning it into a model, and then renting our decisions to less intelligent people.

They (tech companies) don’t want us to be smart any more. They are commodifying intelligence.

paseante•22m ago
The asymmetry is the core issue and it maps perfectly to a concept from economics: externalities. The sender gets 100% of the benefit (appears knowledgeable, responds quickly) and externalizes 100% of the cost (verification, parsing, filtering) to the recipient. It's pollution — you're dumping cognitive waste into someone else's attention.

But I think the deeper problem is that sloppypasta is a symptom of something we haven't named yet: the collapse of the signal that someone has thought about something. Before LLMs, a long, detailed response in Slack implied the person had spent time thinking. Now it implies nothing — it could be 30 seconds of prompting. We've lost the ability to distinguish effort from output, and that breaks the social contract of professional communication.

The fix isn't etiquette guides (the people who need them won't read them). It's cultural norms enforced through friction — the same way code review catches sloppy PRs. If your team starts routinely asking "did you verify this?" when someone pastes a wall of text, the behavior self-corrects fast.
