a) Quantity > Quality if it prints $$$.
or
b) Quality > Quantity if it feels like the right thing to do.
Witnessing type A at scale is a first-class ticket into misanthropy.
> Would you like me to format this for Outlook or help you post it to a specific channel or distribution list?
“No, I don’t need this formatted for Outlook, Dave. Thanks for asking, though!”
I wonder what others there are.
I occasionally use bullet points, em-dashes (Unicode, single, and double hyphens), and words like "delve". I hate that these are the new heuristics.
I think AI is a useful tool (especially image and video models), but I've already had folks (on HN [1]!) call out my fully artisanal comments as LLM-generated. It's almost as annoying as getting low-effort LLM splurge from others.
Edit: As it turns out, cow-orkers isn't actually an LLMism. It's both a joke and a dictation software mistake. Oops.
[1] most recently https://news.ycombinator.com/item?id=44482876
Also, that "cow-orkers" doesn't look like AI-generated slop at all? Just scrolling down a bit shows that most of the usages are three years old or older.
I bet if you did the same through the API, you’d get the results you want.
Might be incorrectly saved in some spell-check software and occasionally rearing its head.
This goes back a loooooong while.
Here's my take: these forums will drive good writers away, or at least discourage them, leaving the discourse the worse for it. What they really end up saying — "we don't care whether you use an LLM, just remove the damn em-dash" — indicates it wasn't a forum hosting riveting discussions in the first place.
“Very insightful! Truly a masterclass in turning everyday professional rituals into transformative personal branding opportunities. Your ability to synergize authenticity, thought leadership, and self-congratulation is unparalleled.”
This feature absolutely defies belief. If I ran a social network (thank god I don't), one of my main worries would be a flood of AI slop driving away all the human users. And LinkedIn are encouraging it. How does that happen? My best guess is that it drives up engagement numbers to allow some disinterested middle managers to hit some internal targets.
LLM tech dates back to 2017; Google added it to internal Gmail back then. Not sure when LinkedIn added it, so you might be right, but the tech is much older than most people think.
Folks who are new to AI are just posting away like it's December 2022, because it's new to them.
It is best to personally understand your own style(s) of communication.
One of my reports started responding to questions with AI slop. I asked if he was actually writing those sentences (he wasn't), so I gave him that exact feedback - it felt to me like he wasn't even listening when he just copy-pasted clearly AI-generated responses. Thankfully he stopped doing it.
Of course as models get better at writing, it'll be harder and harder to tell. IMO the people who stand to lose the most are the AI sloppers, in that case - like in the South Park episode, as they'll get lost in commitments and agreements they didn't even know they made.
My dad used to say: "Stop sending me emails. It's not the same." I'd tell him, "It's better." "No, it's not. People used to sit down and take the time to write a letter, in their own handwriting. Every letter had its own personality, even its own smell. And you had to walk to the post office to send it. Now sending a letter means nothing."
Change is inevitable. Most people just won't like it.
A lot of people don't realise that Transformers were originally designed to translate text between languages. Which, in a way, is just another way of improving how we communicate ideas. Right now, I see two things people are not happy about when it comes to LLMs:
1. The message you sent doesn't feel personal. It reads like something written by a machine, and I struggle to connect with someone who sends me messages like that.
2. People who don't speak English very well are now sending me perfectly written messages with solid arguments. And honestly, my ego doesn't like it, because I used to think I was more intelligent than them. Turns out I wasn't. It was just my perception, based on the fact that I speak the language natively.
Both of these things won't matter anymore in the next two or three years.
"Change always triggers backlash" does not imply "all backlash is unwarranted."
> What will matter is the message or idea you're communicating. Just like today, it doesn't matter if the code is yours, only the product you're shipping and problem it's solving.
But like the article explains about why it's rude: the less thought you put into it, the less chance the message is well communicated. The less thought you put into the code you ship, the less chance it will solve the problem reliably and consistently.
You aren't replying to "don't use LLM tools"; you're replying to "don't just trust and forward their slop blindly."
Why does it matter where the legal claims came from if a judge accepts them?
Why does it matter where the sound waves came from if it sounds catchy?
Why does it matter?
Why does anything matter?
Sorry, I normally love debating epistemology but not here on Hacker News. :)
It does not seem to matter where the code or the legal argument came from. What matters is that they are coherent.
You haven't read enough incoherent laws, I see.
https://www.sevenslegal.com/criminal-attorney/strange-state-...
I'm sure you can make a coherent argument for "It is illegal to cry on the witness stand", but not a reasonable one for actual humans. You're in a formal setting being asked to recall potentially traumatic incidents; no decent person is going to punish an emotional reaction in that situation. Then there are laws simply made to serve corporate interests (the "zoot suit" law mentioned in that article, for instance; jaywalking is another famous one).
There's a reason an AI Judge is practically a tired trope in the cyberpunk genre. We don't want robots controlling human behavior.
1. code can be correct but non-performant, be it in time or space. A lot of my domain is fixing "correct" code so it's actually of value.
2. code can be correct, but unmaintainable. If you ever need to update that code, you are adding immense tech debt with code you do not understand.
3. code can be correct, but not fit standards. Non-standard code can be anywhere from harder to read, to subtly buggy with some gnarly effects farther down the line.
4. code can be correct, but insecure. I really hope cryptographers and netsec folks aren't using AI for anything more than generating keys.
5. code can be correct, but not correct in the larger scheme of the legacy code.
6. code can be correct, but legally vulnerable. A rare, but expensive, edge case that may come up as courts catch up to LLMs.
7. and lastly (but certainly not limited to), code can be correct. But people can be incorrect, change their whims and requirements, or otherwise add layers to navigate through making the product. This leads more back to #2, but it's important to remember that as engineers we are working with imperfect actors and non-optimal conditions. Our job isn't just to "make correct code", it's to navigate the business and keep everyone aligned on the mission from a technical perspective.
You largely won't know such conversations are happening.
And it stays much closer to how they are writing.
So have your Siri talk to my Cortana and we'll work things out.
Is this a colder world, or is it just old people not understanding the future?
I value letters far more than emails, pouring out my heart and complex thought to justify the post office trip and even postage stamp. Heck, why do we write birthday cards instead of emails? I hold a similar attitude towards LLM output and writing; perhaps more analogous is a comparison between painting and photography. I’ll take a glance at LLM output, but reading intentional thought (especially if it’s a letter) is when I infer about the sender as a person through their content. So if you want to send me a snapshot or fact, I’m fine with LLM output, but if you’re painting me a message, your actionable brushstrokes are more telling than the photo itself.
>Change is inevitable. Most people just won't like it.
People love saying this and never take the time to consider whether the change is good or bad. Change for change's sake is called chaos. I don't think chaos is inevitable.
>And honestly, my ego doesn't like it because I used to think I was more intelligent than them. Turns out I wasn't. It was just my perception, based on the fact that I speak the language natively.
I don't think I've ever heard that argument until now. And to be frank, that argument says more about the arguer than about the subject or LLMs.
Have you considered 3) LLMs don't have context and can output wrong information? If you're spending more time correcting the machine than communicating, we're just adding more bureaucracy to the mix.
"Going forward, could you please communicate with me directly? I really don't mind a lack of capitalization or colloquial expressions in internal communications."
1. The message you sent doesn't feel personal. It reads like something written by a machine, and I struggle to connect with someone who sends me messages like that.
2. People who don't speak English very well are now sending me perfectly written messages with solid arguments. And honestly, my ego doesn't like it, because I used to think I was more intelligent than them. Turns out I wasn't. It was just my perception, based on the fact that I speak the language natively.
Both of these things won't matter anymore in the next two or three years. LLMs will keep getting smarter, while our egos will keep getting smaller.
People still don't fully grasp just how much LLMs will reshape the way we communicate and work, for better or worse.
The future is here, and even if you don't like it, and even if it's worse, you'll take it anyway. Because it's the future. Because... some megalomaniacal dweeb somewhere said so?
When does this hype train get to the next station, so everyone can take a breath? All this "future" has us hyperventilating.
In this case, presenting arguments you yourself do not even understand is dishonest, for multiple reasons. And I thought we were past the "thesaurus era" of communication, where we pepper a comment with uncommon words to sound smarter.
I fully agree. However, the original comment was about helping people express an idea in a language they're not proficient in, which seems very different.
> And I thought we were past the "thesaurus era" of communication, where we pepper a comment with uncommon words to sound smarter.
I wish. Until we are, I can't blame anyone for using tools that level the playing field.
Yes, but I see it as a rare case. Also, consider the mindset of someone learning a language:
You probably often hear "I'm sorry about my grammar, I'm not very good at English", and their communication is better than that of half your native peers. They are putting a lot more effort into trying to communicate, while the natives take it for granted. That effort shows.
So in the context of an LLM: if they are using it to assist with their communication, they also tend to take more time to look at and properly tweak the output instead of posting it wholesale, or at least they strip out the sloppy query text that wasn't meant to be part of the output. That effort is why I'm more lenient in those situations.
I do agree about this push for inevitability. In small ways it is true. But it doesn't need to take over every aspect of humanity. We have calculators, but we still at the very least do basic mental math and don't resort to calculators for 5 + 5. It's been long established as rude to do more than quick glances at your phone when physically meeting people. We've leaned against posting Google search/wiki links as a response in forums.
Culture still shapes a lot of how we use the modern tools we have.
Please be honest. If it’s slop or they have incorrect information in the message, then my bad, stop reading here. Otherwise…
I really hope people like this, with their holier-than-thou attitude, get filtered out. Fast.
People who don’t adapt to use new tools are some of the worst people to work around.
"my bad" and what next? The reader just wasted time and focus on reading, it doesn't sound like a fair exchange.
Most of the time people just like getting triggered that someone sent them a —— in their message and blame AI instead of adopting it into their workflows and moving faster.
Those types of coworkers tend to be a drain not just on productivity but on entire team morale: someone who can't take responsibility or, in the worst cases, show any sort of empathy. And tools are a force multiplier. They amplify productivity, but that also means they amplify this anchor behavior as well.
I was replying to THAT person, and my message was that IF the person they're dealing with who uses AI happens to be giving them constant slop (not ME!!! not my message) THEN ignore what I have to say in that message THEREAFTER.
So if that person is dealing with others who are giving them slop, and not just being triggered that it reads like GPT..
The holier than thou people are the ones who are telling us genAI is inevitable, it's here to stay, we should use it as a matter of rote, we'll be left out if we don't, it's going to change everything, blah blah blah. These are articles of faith, and I'm sorry but I'm not a believer in the religion of AI.
The overall impact on the system makes it much less efficient, despite all those "saving [their] time" by abusing LLMs.
Consider 3 scenarios:
1. Misinformation. This is the one you mention, so I don't need to elaborate.
2. Lack of understanding. The message may be about something they do not fully understand. If they cannot understand their own communication, then it's no longer a two-way street. This is why AI-generated code in reviews is so infuriating.
3. Effort. Some people may use it to enhance their communication, but others use it as a shortcut. You shouldn't take a shortcut around actions like communicating with your colleagues. As a rising sentiment goes: "If it's not worth writing (yourself), it's not worth reading".
For your tool metaphor, it's like discovering superglue and then using it to stick everything together. Sometimes you see a nail and glue it to the wall instead of hammering it in. Tools can, have, and will be misused. I think it's best to try and correct that early on, before we have a lot of sticky nails.
Last time someone did this to me I sent them a few other answers by the same LLM to the same prompt, all different, with no commentary.
Cause all an LLM is, is a reflection of its input.
Garbage in garbage out.
If we're going to have this rule about AI, maybe we should have it about... everything. From your mom's last Facebook post, to what influencers say, to this post...
Say less. Do more.
Now that's no longer the case and there are lazy or unthoughtful people that simply pass along AI outputs, raw and completely unprocessed, as cognitive work for other human beings to deal with.
You can have a very thoughtful LLM prompt and get a garbage response if the model fails to generate a solid, sound answer to your prompt. Hard questions with verifiable but obscure answers, for instance, where it generates fake citations.
You can have a garbage prompt and get not-garbage output if you are asking in a well-understood area with a well-understood problem.
And the current generation of company-provided LLMs are VERY highly trained to make the answer look non-garbage in all cases, increasing the cognitive load on you to figure out which is which.
We're already seeing the social contract around hosting your own blog change due to the constant indexing from AI crawlers.
Though that is an optimistic steady state, I still think we're going to see a lot more of "my AI talking to your AI" to some unhealthy degree
When I got squiggly handwritten cursive letters from my grandma, I could be pretty sure those were her words, thought up by herself, because the effort to accurately reproduce the consistent mess she made would have been great. But the moment we moved to the typewriter, and then to other digital means uniformly printed out on paper or screens, you really just assumed it was written by the human you were expecting.
Furthermore, the vast majority of communications done in business long before now were not done by 'people' per se. They were done by processes. In the vast majority of business email that I type out, there is a large amount of process that would not occur if I were talking to a friend. Moreover, this communication is facilitative of some other end goal. If that entire process could be automated away, humanity would be better off, as some mostly useless work would be eliminated.
Do you know why people are so willing to use AI to communicate with each other? Because at the end of the day they don't give two shits about communicating with you. It's an end goal of receiving a paycheck. There is no passion, no deep interest, no entertainment for them in doing so. It's a process demanded of them because of how we integrate Moloch into our modern lives.
Sending half-baked, run-on, unvetted writing, when you easily could have chosen otherwise, is in fact the disrespectful choice.
I would avoid that world at any cost if I were allowed a choice, but the point is that it's used as a weapon against you. Consent appears to be unnecessary.
I cherish the unique humanity in every voice. Forced robotic uniformity feels like an imposition, not a choice—and consent matters deeply.
The output is the opposite of how you describe it, and vastly more persuasive than your own words. When it's persuasion that matters, use all tools available.
My voice is MY VOICE and if you don't like it I couldn't care any less cause I speak and think for myself always.
Run AI on everything anyone says to you if you never want to have the difficulty of disagreeable critical thought again. I can't stop you.
If you believe that, then there are quite a few things about the nature of your being you may be confused about.
Your voice is the assembly of the society and people around you. If you actually thought for yourself always, you'd never get anything done in your life, as you'd have hundreds of millions of years of thinking from first principles to catch up on.
There are no great AI artists (artists who are AIs) or great AI artworks. Yet there are still loads of people throughout history whose individualism led them to ideas and accomplishments that we celebrate. People have the ability to think critically which allows us to create new understanding from existing knowledge, even and especially when there are flaws or contradictions in that knowledge (which if you look closely enough there almost always are).
I stopped there and replied that if you don't care enough to test if it works, then clearly you don't actually want the feature, and closed the ticket.
I have gotten other PRs that are more in the form of "hey, I don't know what I'm doing. I used GPT and it seems to work, but I don't understand this part". I'm happy to help point in the right direction for those, because at least they're trying, and it seems like this is part of their learning.
... Or they just asked jippity to make it seem that way.
I mean, it's basically cheating. You get a task, and instead of working your way through it, which might be tedious, you take the shorter route and receive instant gratification. I can understand how that causes some kind of rush of endorphins, much like eating a bar of chocolate will. So, yeah - I would agree, although I do not have any studies that support the hypothesis.
That is not an excuse for it being poorly done or unvetted (which I think is the crux of the point), but it’s important to state any sources used.
If I don't want to receive AI-generated content, I can use the attribution to filter it out.
- agree
- but people expect text, not bullets
- cultural issue
I wonder how long it will be before the trademark signs of LLM text come to be seen as a mark of bad writing or laziness instead? And then maybe we'll have an arms race of stylistic changes.
---
Completely agree with the author:
Earlier this week I asked Claude to summarize a bunch of code files since I was looking for a bug. It wrote paragraphs and had 3 suggestions. But when I read it, I realized it was mostly super generic and vague. The conditions that would be required to trigger the bug in those ways couldn't actually exist, but it put a lot of words around the ideas. I took longer to notice that they were incorrect suggestions as a result.
I told it "this won't happen those ways [because blah blah blah]" and it gave me the "you are correct!" compliment-dance and tried again. One new suggestion and a claimed reason about how one of its original suggestions might be right. The new suggestion seemed promising, but I wasn't entirely convinced. Tried again. It went back to the first three suggestions - the "here's why that won't happen" was still in the context window, but it hit some limit of its model. Like it was trying to reconcile being reinforcement-learning'd into "generate something that looks like a helpful answer" with "here is information in the context window saying the text I want to generate is wrong" and failing. We got into a loop.
It was a rare bug, so we'll see if the useful-seeming suggestion was right or not; I don't know yet. Added some logging around it and some other stuff too.
The counterfactuals are hard to evaluate:
* would I have identified that potential change quicker without asking it? Or at all?
* would I have identified something else that it didn't point out?
* what if I hadn't noticed the problems with some other suggestions and spent a bunch of time chasing them?
The words:information ratio was a big problem in spotting the issues.
So was the RL-seeming "text completion" aspect of its generated results: "if you're asking about a problem here, there must be a solution I can offer." It didn't seem to be truly evaluating the code and then deciding, so much as saying "yes, I will definitely tell you there are things we can change; here are some that seem plausible."
Imagine if my coworker had asked me the question and I'd just copy-pasted Claude's first crap attempt to them in response? Rude as hell.
I don't want my theories parroted back to me on why something went wrong. I want to have ideas challenged in a way that forces me to think and hopefully lead me to a new perspective that I otherwise would have missed.
Perhaps a large portion of people do enjoy the agreeableness, but this becomes a problem, not only because I think there are larger societal issues that stem from this echo-chamber-like environment, but also simply because companies training these models may interpret agreeableness as somehow better and something that should be optimized for.
And I go back and forth sometimes between correcting its devils advocate responses and “steel man” responses.
What's even more infuriating is that he won't take "I've checked and that submenu doesn't exist" for an answer and insists I check again. Had to step away for a fag a few times for fear of putting his face through the desk.
Generally the AI summaries I see are more topical and accurate than the many other comments in the thread.
In general it raises the mean accuracy and info of a given thread.
It's like self-driving cars.
I don't see any problem sharing a human-reviewed LLM output.
(I also figure that human review may not be that necessary in a few years.)
It's like pointing to a lmgtfy link. That's _intentionally_ rude, in that it's normally used when the question isn't worth the thought. That's what pasting a chatbot response says.
The former is like "hey, I had this experience, here's what it was about, what I learned and how it affected me" which is a very human experience and totally valid to share. The latter is like "I created some input, here's the output, now I want you reflect and/or act on it".
For example I've used Claude and ChatGPT to reflect and chat about life experiences and left feeling like I gained something, and sometimes I'll talk to my friends or SO about it. But I'd never share the transcript unless they asked for it.
Sadly many people don't seem interested in even admitting the existence of the distinction.
It feels really interesting to the person who experienced it, not so much to the listener. Sometimes it can be fun to share because it gives you a glimmer of insight into how someone else's mind works, but the actual content is never really the point.
If anything they share the same hallucinatory quality - ie: hallucinations don't have essential content, which is kind of the point of communication.
With ChatGPT, it's the output of the pickled brains of millions of past internet users, staring at the prompt from your brain and free-associating. Not quite the same thing!
On the other hand, emailing your prompt and the result you got can be instructive to others learning how to use LLMs (aren't we all?). We may learn effective prompt techniques or decide to switch to that LLM because of the quality of the answer.
There is an alternative interpretation - "the LLM put it so much better than I ever could, so I copied and pasted that" - but precisely because of the ambiguity, you don't want to be sneaky about it. If you want me to have a look at what the LLM said, make it clear.
A meta-consideration here is that there is just an asymmetry of effort when I'm trying to formulate arguments "manually" and you're using an LLM to debate them. On some level, it might be fair game. On another, it's pretty short-sighted: the end game is that we both use LLMs that endlessly debate each other while drifting off into the absurd.
Edit: I'm 67 so ChatGPT is especially helpful in pointing out where my possible unconscious dinosaur attitudes may be offensive to Millennials and Gen Z.
Subjecting people to such slop is rude. All the "I asked the chatbot and it said..." comments are rude because they are excessively boring and uninteresting. But it gets even worse than boring and uninteresting when someone presents chatbot text as something they wrote themselves, which is a form of lying / fraud.
No, in fact I disabled my TabNine LLM until I can either train my own similar model and run everything locally, or not at all.
Furthermore the whole selling point has been that anyone can use them _without needing to learn anything_.
So I apologized and began actually using LLMs while making sure the prompt included style guides and rules to avoid the tell tale signs of AI. Then some of these geniuses thanked me for being more genuine in my response.
A lot of this stuff is delusional. You only find it rude because you’re aware it’s written by AI. It’s the awareness itself that triggers it. In reality you can’t tell the difference.
This post, for example.
I too use an LLM to help me get rid of generic filler and I do have my own style of technical writing and editing. You would never know I use an LLM.
And then "echoborgs": https://en.wikipedia.org/wiki/Echoborg
On the whole it's considered bad to mislead people. If my love letter to you is in fact a pre-written form, "my darling [insert name here]", and you suspect, but your suspicion is just baseless paranoia and a lucky guess, I suppose you're being delusional and I'm not being rude. But I'm still doing something wrong. Even if you don't suspect, and I call off the scam, I was still messing with you.
But the definition of being "misleading" is tricky, because we have personas and need them in order to communicate, which in any context at all is a kind of honest, sincere play-acting.
I'm a non-native English speaker who writes many work emails in English. My English is quite good, but still, it takes me longer to write emails in English because it's not as natural. Sometimes I spend a few minutes wondering if I'm getting the tone right or maybe being too pushy, whether I should add some formality or it would sound forced, etc., while in my native language these things are automatic. Why shouldn't I use an LLM to save those extra minutes (as long as I check the output before sending it)?
And being non-native with a good English level is nothing compared to people who might have autism, etc.
Tell people there are 10,000 license plate scanners tracking their every move across the US and you get a mild chuckle, but god forbid someone access the shit they put into some for-profit company's database under terms they never read.
Hell, I would rather just read their reply in Spanish, letting them write it out quickly without struggling to translate while I use my own B1-level Spanish comprehension, than read AI-generated slop.
The issue is that we both know 99% of output is not the result of this. AI is used to cut corners, not to cross your T's and dot your I's. It's similar to how having the answer bank for a textbook is a great tool for self-correcting and reinforcing correct learning. In reality, these banks aren't sold publicly because most students would use them to cheat.
And I'm not even saying this in a shameful way per se; high schoolers are under so much pressure, used to be given hours of homework on top of 7+ hours of instruction, and in some regards the content is barely applicable to long term goals past keeping their GPA up. The temptation to cheat is enormous at that stage.
----
Not so much for 30 year old me who wants to refresh themselves on calculus concepts for an interview. There also really shouldn't be any huge pressure to "cheat" your co-workers either (there sometimes is, though).
> Whoa, let me stop you right here buddy, what you're doing here is extremely, horribly rude.
How is it any different from "I read book <X> and it said that..."? Or "Book <X> has the following quote about that:"?
I definitely want to know where people are getting their info. It helps me understand how trustworthy it might be. It's not rude, it's providing proper context.
ChatGPT and similar have not earned a presumption of reliability from me; the same question may get many different answers, and AFAIK, even if you ask it for sources, they're not necessarily real either.
IMHO, it's rude to use ChatGPT and share it with me as if it's informative; it disrespects my search for truth. It's better that you mention it, so I can disregard the whole thing.
It mostly means you don't respect the other person's time and it's making them do the vetting. And that's the rude part.
But you can’t assume positive intent or any intent from an LLM.
I always test the code, review it for corner cases, remove unnecessary comments, etc just like I would a junior dev.
For facts, I ask it to verify whatever says based on web source. I then might use it to summarize it. But even then I have my own writing style I steer it toward and then edit it.
Isaac, if you're reading this - stop sending me PDFs generated by Perplexity!
> "I asked ChatGPT and this is what it said: <...>". ... > "I vibe-coded this pull request in just 15 minutes. Please review"
This is even nice. You have outlined here an actual warning. Usually there is none.
When you get this type of request you are pretty much debugging AI code on the spot without any additional context.
You can just see when text/code is AI generated or not. No matter text or code. No tools needed.
"I hand-typed this close message in just 15 seconds. Please refrain."
Yet today, we both cringe at forgettable food Instagrams and marvel at the World Press Photo of the Year.
I do fully agree with the conclusions on etiquette. Just like it's rude to try to pass a line-traced photo as a freehand drawing, decompressing very little information into a wall of text without a disclaimer is rude.
Just the other day I witnessed in a chat someone commenting that another (who previously sent an AI summary of something) had sent a "block of text" which they wouldn't read because it was too much, then went to read it when they were told it was from Quora, not generated. It was a wild moment for me, and I said as much.
Now let's really ask ourselves how this works out in reality. Cut corners. People using LLMs are not using them to enhance their conversation; they are using them to get it over with.
It also doesn't help that yes, AI generated text tends to be overly verbose. Saying a lot of nothing. There are times where that formality is needed, but not in some casual work conversations. Just get to the point.
Get a short answer by including "keep answer short" or similar in the prompt. It just works.
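For what it's worth, here's a minimal sketch of what baking that into an API call can look like, assuming the OpenAI Python client; the model name and wording are just placeholders, not anything from the thread:

    # Minimal sketch: put the brevity instruction in the system message so it
    # applies to every request. Assumes the OpenAI Python client and an API key
    # in the environment; the model name below is a placeholder.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model you have access to
        messages=[
            {"role": "system", "content": "Keep answers short: a few sentences, no preamble, no recap."},
            {"role": "user", "content": "Summarize the tradeoffs of two local LLM servers for a 24B model."},
        ],
    )
    print(response.choices[0].message.content)

The same instruction pasted at the start of a chat prompt works too; the system message just saves you from repeating it.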
Such a great point, and one which I hadn't considered. With LLMs, we've flipped this equation, and it's having all sorts of weird consequences. Most obvious for me is how much more time I'm spending on code reviews. It's massively increased the importance of making the PR as digestible as possible for the reviewer, as now both author and reviewer are much closer to equal understanding of the changes compared to if the author had written the PR solely by themselves. Who knows what other corollaries there are to this reversal of reading vs writing.
Humanity has survived and adapted, and all in all, I'm glad to live in a world with photography in it.
That said, part of that adaptation will probably involve the evolution of a strong stigma against undeclared and poorly verified/curated AI-generated content.
My current day-to-day problem is that the PRs don't come with that disclaimer; the authors won't even admit it if asked directly. Yet I know my comments on the PR will be fed to Cursor so it makes more crappy edits, and I'll be expecting an entirely different PR in 10 minutes to review from scratch, without even addressing the main concern. I wish I could at least talk to the AI directly.
(If you're wondering, it's unfortunately not in my power right now to ignore or close the PRs).
Another perspective I’ve found to resonate with people is to remind them — if you’re not reviewing the code or passing it through any type of human reasoning to determine its fit to solving the business problem - what value are you adding at all? If you just copy pasta through AI, you might as well not even be in the loop, because it’d be faster for me to do it directly, and have the context of the prompts as well.
This is a step change in our industry and an opportunity to mentor people who are misusing it. If they don’t take it, there are plenty of people who will. I have a feeling that AI will actually separate the wheat from the chaff, because right now, people can hide a lack of understanding and effort because the output speed is so low for everyone. Once those who have no issue with applying critical thinking and debugging to the problem and work well with the business start to leverage AI, it’ll become very obvious who’s left behind.
I’m willing to mentor folks, and help them grow. But what you’re describing sounds so exhausting, and it’s so much worse than what “mentorship” meant just a few short years ago. I have to now teach people basic respect and empathy at work? Are we serious?
For what it’s worth: sometimes ignoring this kind of stuff is teaching. Harshly, sure - but sometimes that’s what’s needed.
Given that we (or at least, much of this community) seem to disagree with this article, that does indeed seem to be the case. "It's just a tool" "it's elitist to reject AI generated output". The younger generations learn from these behaviors too.
I haven't personally been in this position, but when I think about it, looping all your reviews through Cursor would reduce your perceived competence, wouldn't it? Is giving them a negative performance review an option?
But yeah, to a boss or something, that would be rude. They hired you to answer a question.
Because I'd much rather ask an LLM about a topic I don't know much about and let a human expert verify its contents than waste the time of a human expert in explaining the concept to me.
Once it's verified, I add it to my own documentation library so that I can refer to it later on.
Quote is from Mark Twain and perfectly encapsulates the sentiment. Writing something intended for another person to read was previously an effort. Some people were good at it, some were less good. But now, everyone can generate some median-level effort.
How is this more plausible than the scrambler's own lack of knowledge of potential specifications for these messages?
In any case, there's obviously more explanations than the "coded nonsense" hypothesis.
If you're offered an AI output it should be taken as one of two situations: (a) the person adopts the output, and maybe put a fair amount of effort into interacting with the LLM to get it just right, but can't honestly claim ownership (because who can), or (b) the output is outside their domain of expertise and functioning as a toehold or thumbnail in some esoteric topic that no single resource they know can, and probably the point is so specific that such a resource doesn't exist.
The tenor of the article makes me confused about what, specifically, people have been doing with ChatGPT that so alienated the author. I guess the point is there are some topics LLMs are fundamentally incompetent to handle? Maybe it's more the perception that the LLM is being treated as an oracle rather than a tool for discovery?
Then I get even more annoyed when they decide to actually use their own prompt, and then read back to me the answer.
I would much prefer the answer "I don't know".
But the macho approach? They are bold, they are someone you want to follow. They do the thinking. Even if you walk off a cliff, you feel that person was a good leader. If you are assertive, you must be strong, after all. "Strong people" never get punished for failure, it's just "the cost of doing business"; time to move to the next business.
I think it all goes to crap when there is some economic incentive: e.g. blogspam that is profitable thanks to ads and anyone that stumbles upon it, alongside being able to generate large amounts of coherent sounding crap quickly.
I have seen quite a few sites like that in the first pages of both Google and DuckDuckGo which feels almost offensive. At the same time, posts that promise something and then don't go through with it are similarly bad, regardless of AI generated or not.
For example, recently I needed to look up how vLLM compares with Ollama (yes, for running the very same abominable intelligence models, albeit for more subjectively useful reasons) because Qwen3-30B-A3B and Devstral-24B both run pretty badly on Nvidia L4 cards with Ollama, which feels disappointing given their price tags and relatively small sizes of those models.
Yet pretty much all of the comparisons I found just regurgitated high level overviews of the technologies, like 5-10 sites that felt almost identical and could have been copy pasted from one another. Not a single one of those had a table of various models and their tokens/s on a given bit of hardware, for both Ollama and vLLM.
Back in the day when nerds got passionate about Apache2 vs Nginx, you'd see comparisons with stats and graphs and even though I wouldn't take all of those at face value (since with Apache2 you should turn off .htaccess and also tweak the MPM settings for more reasonable performance), at least there would sometimes be a Git repo.
I don't care what chatgpt or deepseek thinks about the proposal. I care what _you_ think about it - that's why I'm sending it to you.
(Present a solution/output proposal to team)
> Did you ask chatgpt?
AI responses seem to have very low information density by default, so for me the irritation is threefold—it requires very little mental effort for the sender (i.e., I often read responses that don't feel sufficiently considered); it is often poorly validated by the sender; and it is disrespectful to the reader's time.
Like some of the other commenters, I am also not in a position to change this at work, but I am disheartened by how some of my fellow engineers have taken to putting up low effort PRs with errors, as well as unreasonably long design docs. My entire company seems to have bought into the whole "AI-first company" thing without actually validating if the outputs are worth the squeeze. Maybe sometimes they are, but I get a sense that the path of least resistance tends toward accepting lower quality code and communication.
Sure, it can be frustrating that they don’t adapt to a user’s personal style. But for those of us who haven’t fully developed that stylistic sense (which is common among non-native speakers), it’s still a huge leap forward.
I spent a long time responding to each pro and con, assuming they got this list from somewhere, or from another company's promotional material. Every point was wrong in a different way, with no real understanding behind it. I was giving detailed responses to each point explaining how it was wrong. Initially I thought the list was obtained from someone in marketing who did not understand; after a while I thought maybe this was AI and asked… they told me they had just asked ChatGPT for the pros and cons of the product/program and were asking me to verify whether it was correct before communicating it to customers.
If they had just asked me the pros and cons I could have responded in a much shorter amount of time. ChatGPT basically DOSed me because the time taken to produce the text was nothing compared to the time it took me to respond.
If LLMs can be fixed to the point where they are as reliable as 2005-2010 Google, maybe you can start blindly pasting output or telling people to "just chatgpt it", and it won't be so useless for the victim anymore. But I'm not convinced the hallucination problem and inability to properly cite sources will be solved anytime soon, given how non-deterministic LLMs are. And it appears it's just creating a brand new SEO spam issue, with Gemini results at the top of the page based on the contents of spammy results.
To me, someone pasting in an AI answer says: I don't care about any of that. Yeah, not a person I want to interact with.
It wouldn't surprise me if "let me Google that for you" is an unstated part of many conversations.
Now I'm the 40-year-old ops guy fielding those questions. I'll write up an LLM question emphasizing what they should be focused on, verify the response is in sync with my thoughts, and shoot it to them.
It seems less passive aggressive than LMGTFY and sometimes I learn something from the response.