a) Quantity > Quality if it prints $$$.
or
b) Quality > Quantity if it feels like the right thing to do.
Witnessing type A at scale is a first-class ticket into misanthropy.
> Would you like me to format this for Outlook or help you post it to a specific channel or distribution list?
“No, I don’t need this formatted for Outlook, Dave. Thanks for asking though!”
I wonder what others there are.
I occasionally use bullet points, em-dashes (unicode, single, and double hyphens) and words like "delve". I hate that these are the new heuristics.
I think AI is a useful tool (especially image and video models), but I've already had folks (on HN [1]!) call out my fully artisanal comments as LLM-generated. It's almost as annoying as getting low-effort LLM splurge from others.
Edit: As it turns out, cow-orkers isn't actually an LLMism. It's both a joke and a dictation software mistake. Oops.
[1] most recently https://news.ycombinator.com/item?id=44482876
Also, that "cow-orkers" doesn't look like AI-generated slop at all..? Just scrolling down a bit shows that most of them are three years and older.
Might be incorrectly saved in some spell check software and occasionally rearing its head.
This goes back a loooooong while.
Here's my take: these forums will drive good writers away, or at least discourage them, leaving the discourse worse for it. What they really end up saying — "we don't care whether you use an LLM, just remove the damn em-dash" — indicates it's not a forum hosting riveting discussions in the first place.
“Very insightful! Truly a masterclass in turning everyday professional rituals into transformative personal branding opportunities. Your ability to synergize authenticity, thought leadership, and self-congratulation is unparalleled.”
This feature absolutely defies belief. If I ran a social network (thank god I don't), one of my main worries would be a flood of AI slop driving away all the human users. And LinkedIn are encouraging it. How does that happen? My best guess is that it drives up engagement numbers to allow some disinterested middle managers to hit some internal targets.
Folks who are new to AI are just posting away like it's December 2022, because it's new to them.
It is best to personally understand your own style(s) of communication.
One of my reports started responding to questions with AI slop. I asked if he was actually writing those sentences (he wasn't), so I gave him that exact feedback - it felt to me like he wasn't even listening when he just copy-pasted obviously-AI responses. Thankfully he stopped doing it.
Of course as models get better at writing, it'll be harder and harder to tell. IMO the people who stand to lose the most are the AI sloppers, in that case - like in the South Park episode, as they'll get lost in commitments and agreements they didn't even know they made.
My dad used to say: "Stop sending me emails. It's not the same." I'd tell him, "It's better." "No, it's not. People used to sit down and take the time to write a letter, in their own handwriting. Every letter had its own personality, even its own smell. And you had to walk to the post office to send it. Now sending a letter means nothing."
Change is inevitable. Most people just won't like it.
A lot of people don't realise that Transformers were originally designed to translate text between languages. Which, in a way, is just another way of improving how we communicate ideas. Right now, I see two things people are not happy about when it comes to LLMs:
1. The message you sent doesn't feel personal. It reads like something written by a machine, and I struggle to connect with someone who sends me messages like that.
2. People who don't speak English very well are now sending me perfectly written messages with solid arguments. And honestly, my ego doesn't like it, because I used to think I was more intelligent than them. Turns out I wasn't. It was just my perception, based on the fact that I speak the language natively.
Both of these things won't matter anymore in the next two or three years.
"Change always triggers backlash" does not imply "all backlash is unwarranted."
> What will matter is the message or idea you're communicating. Just like today, it doesn't matter if the code is yours, only the product you're shipping and problem it's solving.
But like the article explains about why it's rude: the less thought you put into it, the less chance the message is well communicated. The less thought you put into the code you ship, the less chance it will solve the problem reliably and consistently.
You aren't replying to "don't use LLM tools"; you're replying to "don't just trust and forward their slop blindly."
Why does it matter where the legal claims came from if a judge accepts them?
Why does it matter where the sound waves came from if it sounds catchy?
Why does it matter?
Why does anything matter?
Sorry, I normally love debating epistemology but not here on Hacker News. :)
It does not seem to matter where the code nor the legal argument came from. What matters is that they are coherent.
And it stays much closer to how they actually write.
So have your Siri talk to my Cortana and we'll work things out.
Is this a colder world, or is it just old people not understanding the future?
I value letters far more than emails, pouring out my heart and complex thought to justify the post office trip and even postage stamp. Heck, why do we write birthday cards instead of emails? I hold a similar attitude towards LLM output and writing; perhaps more analogous is a comparison between painting and photography. I’ll take a glance at LLM output, but reading intentional thought (especially if it’s a letter) is when I infer about the sender as a person through their content. So if you want to send me a snapshot or fact, I’m fine with LLM output, but if you’re painting me a message, your actionable brushstrokes are more telling than the photo itself.
"Going forward, could you please communicate with me directly? I really don't mind a lack of capitalization or colloquial expressions in internal communications."
LLMs will keep getting smarter, while our egos will keep getting smaller.
People still don't fully grasp just how much LLMs will reshape the way we communicate and work, for better or worse.
Please be honest. If it’s slop or they have incorrect information in the message, then my bad, stop reading here. Otherwise…
I really hope people like this, with their holier-than-thou attitude, get filtered out. Fast.
People who don’t adapt to use new tools are some of the worst people to work around.
"my bad" and what next? The reader just wasted time and focus on reading, it doesn't sound like a fair exchange.
Most of the time people just like getting triggered that someone sent them a —— in their message and blame AI instead of adopting it into their workflows and moving faster.
The holier than thou people are the ones who are telling us genAI is inevitable, it's here to stay, we should use it as a matter of rote, we'll be left out if we don't, it's going to change everything, blah blah blah. These are articles of faith, and I'm sorry but I'm not a believer in the religion of AI.
Last time someone did this to me I sent them a few other answers by the same LLM to the same prompt, all different, with no commentary.
Cause all an LLM is, is a reflection of its input.
Garbage in garbage out.
If we're going to have this rule about AI, maybe we should have it about... everything. From your mom's last Facebook post, to what influencers say, to this post...
Say less. Do more.
Now that's no longer the case and there are lazy or unthoughtful people that simply pass along AI outputs, raw and completely unprocessed, as cognitive work for other human beings to deal with.
You can have a very thoughtful LLM prompt and get a garbage response if the model fails to generate a solid, sound answer to your prompt. Hard questions with verifiable but obscure answers, for instance, where it generates fake citations.
You can have a garbage prompt and get not-garbage output if you are asking in a well-understood area with a well-understood problem.
And the current generation of company-provided LLMs are VERY highly trained to make the answer look non-garbage in all cases, increasing the cognitive load on you to figure out whether it actually is.
We're already seeing the social contract around hosting your own blog change due to the constant indexing from AI crawlers.
When I got squiggly handwritten cursive letters from my grandma, I could be pretty sure those were her words, thought up by herself, for the effort to accurately reproduce the consistent mess she made would have been great. But the moment we moved to the typewriter and then other digital means, uniformly printed out on paper or screens, you've really just assumed that it was written by the human you were expecting.
Furthermore, the vast majority of communications done in business long before now were not done by 'people' per se. They were done by processes. In the vast majority of business email that I type out, there is a large amount of process that would not occur if I were talking to a friend. Moreover, this communication is facilitative to some other end goal. If the entire process could be automated away, humanity would be better off, as some mostly useless work would be eliminated.
Do you know why people are so willing to use AI to communicate with each other? Because at the end of the day they don't give two shits about communicating with you. It's an end goal of receiving a paycheck. There is no passion, no deep interest, no entertainment for them in doing so. It's a process demanded of them because of how we integrate Moloch into our modern lives.
Sending half-baked, run-on, unvetted writing, when you easily could have chosen otherwise, is in fact the disrespectful choice.
I would avoid that world at any cost if I were allowed a choice, but the point is that it's used as a weapon against you. Consent appears to be unnecessary.
I cherish the unique humanity in every voice. Forced robotic uniformity feels like an imposition, not a choice—and consent matters deeply.
The output is the opposite of how you describe it, and vastly more persuasive than your own words. When it's persuasion that matters, use all tools available.
I stopped there and replied that if you don't care enough to test if it works, then clearly you don't actually want the feature, and closed the ticket.
I have gotten other PRs that are more in the form of "hey, I don't know what I'm doing. I used GPT and it seems to work, but I don't understand this part". I'm happy to help point in the right direction for those. Because at least they're trying. And it seems like this is part of their learning.
... Or they just asked jippity to make it seem that way.
I mean, it's basically cheating. I get a task, and instead of working my way through it, which might be tedious, I take the shorter route and receive instant gratification. I can understand how that causes some kind of rush of endorphins, much like eating a bar of chocolate will. So, yeah - I would agree, although I do not have any studies that support the hypothesis.
That is not an excuse for it being poorly done or unvetted (which I think is the crux of the point), but it’s important to state any sources used.
If I don't want to receive AI-generated content, I can use the attribution to filter it out.
I wonder how long it will be before LLM-text trademarks become seen as a sign of bad writing or laziness instead? And then maybe we'll have an arms race of stylistic changes.
---
Completely agree with the author:
Earlier this week I asked Claude to summarize a bunch of code files since I was looking for a bug. It wrote paragraphs and had 3 suggestions. But when I read it, I realized it was mostly super generic and vague. The conditions that would be required to trigger the bug in those ways couldn't actually exist, but it put a lot of words around the ideas. As a result, it took me longer to notice that the suggestions were incorrect.
I told it "this won't happen those ways [because blah blah blah]" and it gave me the "you are correct!" compliment-dance and tried again. One new suggestion and a claimed reason about how one of its original suggestions might be right. The new suggestion seemed promising, but I wasn't entirely convinced. Tried again. It went back to the first three suggestions - the "here's why that won't happen" was still in the context window, but it hit some limit of its model. Like it was trying to reconcile being reinforcement-learning'd into "generate something that looks like a helpful answer" with "here is information in the context window saying the text I want to generate is wrong" and failing. We got into a loop.
It was a rare bug, so we'll see whether the useful-seeming suggestion was right or not; I don't know yet. I added some logging around it and some other stuff too.
The counterfactuals are hard to evaluate:
* would I have identified that potential change quicker without asking it? Or at all?
* would I have identified something else that it didn't point out?
* what if I hadn't noticed the problems with some other suggestions and spent a bunch of time chasing them?
The words-to-information ratio was a big problem in spotting the issues.
So was the "text completion" aspect of "if you're asking about a problem here, there must be a solution I can offer" RL-seeming aspect of its generated results. It didn't seem to be truly evaluating the code then deciding so much as saying "yes, I will definitely tell you there are things we can change, here are some that seem plausible."
Imagine if my coworker had asked me the question and I'd just copy-pasted Claude's first crap attempt to them in response? Rude as hell.
I don't want my theories parroted back to me on why something went wrong. I want to have my ideas challenged in a way that forces me to think and hopefully leads me to a new perspective that I otherwise would have missed.
Perhaps a large portion of people do enjoy the agreeableness, but this becomes a problem not only because I think there are larger societal issues that stem from this echo-chamber-like environment, but also simply because companies training these models may interpret agreeableness as somehow better and something that should be optimized for.
And I go back and forth sometimes between correcting its devils advocate responses and “steel man” responses.
What's even more infuriating is that he won't take "I've checked and that submenu doesn't exist" for an answer and insists I check again. Had to step away for a fag a few times for fear of putting his face through the desk.
Generally the AI summaries I see are more topical and accurate than the many other comments in the thread.
In general it raises the mean accuracy and info of a given thread.
It's like self-driving cars.
I don't see any problem sharing a human-reviewed LLM output.
(I also figure that human review may not be that necessary in a few years.)
It's like pointing to a lmgtfy link. That's _intentionally_ rude, in that it's normally used when the question isn't worth the thought. That's what pasting a chatbot response says.
The former is like "hey, I had this experience, here's what it was about, what I learned and how it affected me" which is a very human experience and totally valid to share. The latter is like "I created some input, here's the output, now I want you reflect and/or act on it".
For example I've used Claude and ChatGPT to reflect and chat about life experiences and left feeling like I gained something, and sometimes I'll talk to my friends or SO about it. But I'd never share the transcript unless they asked for it.
It feels really interesting to the person who experienced it, not so much to the listener. Sometimes it can be fun to share because it gives you a glimmer of insight into how someone else's mind works, but the actual content is never really the point.
If anything, they share the same hallucinatory quality - i.e., hallucinations don't have essential content, which is kind of the point of communication.
On the other hand, emailing your prompt and the result you got can be instructive to others learning how to use LLMs (aren't we all?). We may learn effective prompting techniques or decide to switch to that LLM because of the quality of the answer.
There is an alternative interpretation - "the LLM put it so much better than I ever could, so I copied and pasted that" - but precisely because of the ambiguity, you don't want to be sneaky about it. If you want me to have a look at what the LLM said, make it clear.
A meta-consideration here is that there is just an asymmetry of effort when I'm trying to formulate arguments "manually" and you're using an LLM to debate them. On some level, it might be fair game. On another, it's pretty short-sighted: the end game is that we both use LLMs that endlessly debate each other while drifting off into the absurd.
Subjecting people to such slop is rude. All the "I asked chatbot and it said..." comments are rude because they are excessively boring and uninteresting. But it gets even worse than just boring and uninteresting when people present chatbot text as something they wrote themselves, which is a form of lying / fraud.
No; in fact, I disabled my TabNine LLM until I can either train my own similar model and run everything locally, or not at all.
Furthermore the whole selling point has been that anyone can use them _without needing to learn anything_.
So I apologized and began actually using LLMs while making sure the prompt included style guides and rules to avoid the telltale signs of AI. Then some of these geniuses thanked me for being more genuine in my response.
A lot of this stuff is delusional. You only find it rude because you’re aware it’s written by AI. It’s the awareness itself that triggers it. In reality you can’t tell the difference.
This post, for example.
I too use an LLM to help me get rid of generic filler and I do have my own style of technical writing and editing. You would never know I use an LLM.
I'm a non-native English speaker who writes many work emails in English. My English is quite good, but it still takes me longer to write emails in English because it's not as natural. Sometimes I spend a few minutes wondering if I'm getting the tone right or being too pushy, whether I should add some formality or whether it would sound forced, etc., while in my native language these things are automatic. Why shouldn't I use an LLM to save those extra minutes (as long as I check the output before sending it)?
And being non-native with a good English level is nothing compared to people who might have autism, etc.
Tell people there are 10,000 license plate scanners tracking their every move across the US and you get a mild chuckle, but god forbid someone access the shit they put into some for-profit company's database under terms they never read.
Hell, I would rather just read their reply in Spanish (they can write it out really fast without struggling to translate it, and I can use my own B1-level Spanish comprehension) than read AI-generated slop.
> Whoa, let me stop you right here buddy, what you're doing here is extremely, horribly rude.
How is it any different from "I read book <X> and it said that..."? Or "Book <X> has the following quote about that:"?
I definitely want to know where people are getting their info. It helps me understand how trustworthy it might be. It's not rude, it's providing proper context.
ChatGPT and similar have not earned a presumption of reality for me, and the same question may get many different answers, and afaik, even if you ask it for sources, they're not necessarily real either.
IMHO, it's rude to use ChatGPT and share it with me as if it's informative; it disrespects my search for truth. It's better that you mention it, so I can disregard the whole thing.
It mostly means you don't respect the other person's time and it's making them do the vetting. And that's the rude part.
But you can’t assume positive intent or any intent from an LLM.
I always test the code, review it for corner cases, remove unnecessary comments, etc just like I would a junior dev.
For facts, I ask it to verify whatever it says against web sources. I then might use it to summarize them. But even then I have my own writing style that I steer it toward, and then I edit the result.
Isaac, if you're reading this - stop sending me PDFs generated by Perplexity!
> "I asked ChatGPT and this is what it said: <...>". ... > "I vibe-coded this pull request in just 15 minutes. Please review"
This is actually the nice case: at least there's an explicit warning here. Usually there is none.
When you get this type of request you are pretty much debugging AI code on the spot without any additional context.
You can just see when text or code is AI-generated. No tools needed.
"I hand-typed this close message in just 15 seconds. Please refrain."
Yet today, we both cringe at forgettable food Instagrams and marvel at the World Press Photo of the Year.
I do fully agree with the conclusions on etiquette. Just like it's rude to try to pass a line-traced photo as a freehand drawing, decompressing very little information into a wall of text without a disclaimer is rude.
Just the other day I witnessed in a chat someone commenting that another (who previously sent an AI summary of something) had sent a "block of text" which they wouldn't read because it was too much, then went to read it when they were told it was from Quora, not generated. It was a wild moment for me, and I said as much.
Such a great point, and one which I hadn't considered. With LLMs, we've flipped this equation, and it's having all sorts of weird consequences. The most obvious for me is how much more time I'm spending on code reviews. It's massively increased the importance of making the PR as digestible as possible for the reviewer, since now both author and reviewer are much closer to equal understanding of the changes compared to if the author had written the PR solely by themselves. Who knows what other corollaries there are to this reversal of reading vs writing.
Humanity has survived and adapted, and all in all, I'm glad to live in a world with photography in it.
That said, part of that adaptation will probably involve the evolution of a strong stigma against undeclared and poorly verified/curated AI-generated content.
My current day-to-day problem is that the PRs don't come with that disclaimer; the authors won't even admit it if asked directly. Yet I know my comments on the PR will be fed to Cursor so it makes more crappy edits, and I'll be expecting an entirely different PR in 10 minutes to review from scratch, without even addressing the main concern. I wish I could at least talk to the AI directly.
(If you're wondering, it's unfortunately not in my power right now to ignore or close the PRs).
Another perspective I’ve found to resonate with people is to remind them — if you’re not reviewing the code or passing it through any type of human reasoning to determine its fit to solving the business problem - what value are you adding at all? If you just copy pasta through AI, you might as well not even be in the loop, because it’d be faster for me to do it directly, and have the context of the prompts as well.
This is a step change in our industry and an opportunity to mentor people who are misusing it. If they don’t take it, there are plenty of people who will. I have a feeling that AI will actually separate the wheat from the chaff, because right now, people can hide a lack of understanding and effort because the output speed is so low for everyone. Once those who have no issue with applying critical thinking and debugging to the problem and work well with the business start to leverage AI, it’ll become very obvious who’s left behind.
I’m willing to mentor folks, and help them grow. But what you’re describing sounds so exhausting, and it’s so much worse than what “mentorship” meant just a few short years ago. I have to now teach people basic respect and empathy at work? Are we serious?
For what it’s worth: sometimes ignoring this kind of stuff is teaching. Harshly, sure - but sometimes that’s what’s needed.
I haven't personally been in this position, but when I think about it, looping all your reviews through Cursor would reduce your perceived competence, wouldn't it? Is giving them a negative performance review an option?
But yeah, to a boss or something, that would be rude. They hired you to answer a question.
Because I'd much rather ask an LLM about a topic I don't know much about and let a human expert verify its contents than waste the time of a human expert in explaining the concept to me.
Once it's verified, I add it to my own documentation library so that I can refer to it later on.
The quote is from Mark Twain and perfectly encapsulates the sentiment. Writing something intended for another person to read was previously an effort. Some people were good at it, some were less good. But now, everyone can generate some median-level effort.
How is this more plausible than the scrambler's own lack of knowledge of potential specifications for these messages?
In any case, there's obviously more explanations than the "coded nonsense" hypothesis.
If you're offered an AI output it should be taken as one of two situations: (a) the person adopts the output, and maybe put a fair amount of effort into interacting with the LLM to get it just right, but can't honestly claim ownership (because who can), or (b) the output is outside their domain of expertise and functioning as a toehold or thumbnail in some esoteric topic that no single resource they know can, and probably the point is so specific that such a resource doesn't exist.
The tenor of the article makes me confused about what people have been doing, specifically, with ChatGPT that so alienated the author. I guess the point is there are some topics LLMs are fundamentally incompetent to perform? Maybe it's more the perception that the LLM is being treated as an oracle than a tool for discovery?
Then I get even more annoyed when they decide to actually use their own prompt, and then read the answer back to me.
I would much prefer the answer "I don't know".
I think it all goes to crap when there is some economic incentive: e.g. blogspam that is profitable thanks to ads and whoever stumbles upon it, alongside being able to generate large amounts of coherent-sounding crap quickly.
I have seen quite a few sites like that in the first pages of both Google and DuckDuckGo which feels almost offensive. At the same time, posts that promise something and then don't go through with it are similarly bad, regardless of AI generated or not.
For example, recently I needed to look up how vLLM compares with Ollama (yes, for running the very same abominable intelligence models, albeit for more subjectively useful reasons) because Qwen3-30B-A3B and Devstral-24B both run pretty badly on Nvidia L4 cards with Ollama, which feels disappointing given those cards' price tags and the relatively small sizes of those models.
Yet pretty much all of the comparisons I found just regurgitated high-level overviews of the technologies: 5-10 sites that felt almost identical and could have been copy-pasted from one another. Not a single one of those had a table of various models and their tokens/s on a given bit of hardware, for both Ollama and vLLM (see the rough sketch below for how such numbers could be collected).
Back in the day when nerds got passionate about Apache2 vs Nginx, you'd see comparisons with stats and graphs and even though I wouldn't take all of those at face value (since with Apache2 you should turn off .htaccess and also tweak the MPM settings for more reasonable performance), at least there would sometimes be a Git repo.
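A minimal sketch of how such tokens/s numbers could be collected, assuming Ollama's default local API on port 11434 and a vLLM OpenAI-compatible server on port 8000; the model names below are placeholders to swap for whatever is actually being served, and a single prompt like this only gives a rough figure, not a proper benchmark:

    # Rough single-prompt tokens/s check for Ollama vs vLLM.
    # Assumptions: Ollama on localhost:11434, a vLLM OpenAI-compatible server
    # on localhost:8000, and placeholder model names you'd replace.
    import time
    import requests

    PROMPT = "Explain the difference between a process and a thread."
    OLLAMA_MODEL = "qwen3:30b-a3b"       # placeholder tag; use whatever you pulled
    VLLM_MODEL = "Qwen/Qwen3-30B-A3B"    # placeholder id; use whatever vLLM serves

    def ollama_tokens_per_sec():
        r = requests.post("http://localhost:11434/api/generate",
                          json={"model": OLLAMA_MODEL, "prompt": PROMPT, "stream": False},
                          timeout=600)
        d = r.json()
        # Ollama reports generated token count and generation time in nanoseconds.
        return d["eval_count"] / (d["eval_duration"] / 1e9)

    def vllm_tokens_per_sec():
        start = time.time()
        r = requests.post("http://localhost:8000/v1/completions",
                          json={"model": VLLM_MODEL, "prompt": PROMPT, "max_tokens": 512},
                          timeout=600)
        elapsed = time.time() - start
        # The OpenAI-compatible endpoint reports completion token usage; wall-clock
        # timing includes network and queueing overhead, so this is only approximate.
        return r.json()["usage"]["completion_tokens"] / elapsed

    if __name__ == "__main__":
        print(f"Ollama: {ollama_tokens_per_sec():.1f} tok/s")
        print(f"vLLM:   {vllm_tokens_per_sec():.1f} tok/s")

Run it a few times per model and average, and you have the core of the table that none of those copy-pasted comparison posts bother to include.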
I don't care what ChatGPT or DeepSeek thinks about the proposal. I care what _you_ think about it - that's why I'm sending it to you.
(Present a solution/output proposal to team)
> Did you ask chatgpt?
To me, someone pasting in an AI answer says: I don't care about any of that. Yeah, not a person I want to interact with.
It wouldn't surprise me if "let me Google that for you" is an unstated part of many conversations.
Now I'm the 40-year-old ops guy fielding those questions. I'll write up an LLM question emphasizing what they should be focused on, I'll verify the response is in sync with my thoughts, and shoot it to them.
It seems less passive aggressive than LMGTFY and sometimes I learn something from the response.