The AI slop epidemic has a simple cause: people are asking LLMs to create instead of using them to amplify. Prompt "write me a poem about loss" and you get generic output, because there's no complexity for the model to work with. Garbage in, garbage out.

But when I fed Claude my raw, messy 33-post Bluesky poem and asked it to unpack what I'd written, something different happened. Like rubber duck debugging, the act of articulating my fragmented ideas to the LLM forced me to see patterns I'd missed, contradictions I'd avoided, emotional layers I couldn't access alone. The LLM didn't create anything. It amplified what was already there by giving me a structured way to externalize and examine my own thinking. The more entropy (complexity, density, messiness) I provided, the more useful the output became.

LLMs aren't generators that create signal from nothing; they're amplifiers that can only magnify what you feed them. If your AI output is slop, check your input first. The breakthrough isn't in the model. It's in learning to articulate your problem densely enough that the solution emerges in the telling.