ChatGPT is free and available to everyone, and so are a dozen other LLMs. If the person making the comment wanted to know what ChatGPT had to say, they could just ask it themselves. I guess people feel like they’re being helpful, but I just don’t get it.
Though with that said, I’m happy when they at least say it’s from an LLM. At least then I know I can ignore it. Worse is replying as if it’s their own answer when really it’s just copy-pasted from an LLM. Those are more insidious.
The only workaround is to just take the text as-is and call it out when it's wrong or bad, AI-generated or otherwise, as we did before 2023.
I think typically, the reason people disclose their usage of LLMs is that they want to offload responsibility. To me it's important to see them taking responsibility for their words. You wouldn't blame Google for bad search results, would you? You can only blame the entity that you can actually influence.
Because if they'd actually read the output, then cross-checked it and developed some confidence in the opinion, they wouldn't put what they perceive as the most important part up front ("I used ChatGPT") - they'd put the conclusion.
My experience is that the vast majority of people do 0 research (AI assisted or not) before asking questions online. Questions that could have usually been answered in a few seconds if they had tried.
If someone prefaces a question by saying they've done their research but would like validation, then yes, it's in incredibly poor taste.
When you put it that way I guess it kind of is.
> If someone prefaces a question by saying they've done their research but would like validation, then yes, it's in incredibly poor taste.
100% agree with you there
The "let me Google that for you" was more trying to get people to look up trivial things on their own, rather than query some forum repeatedly.
they're more clueless than condescending
That's not like pasting in a screenshot or a copy/paste of an AI answer, it's being intentionally dismissive. You weren't actually doing the "work" for them, you were calling them lazy.
The way I usually see the AI paste being used is by people trying to refute something somebody said, about a subject they don't know anything about.
Which was just as irritating.
> What can be asserted without evidence can also be dismissed without evidence.
Becomes
> That which can be asserted without thought can be dismissed without thought.
Since no current AI thinks, but humans do, I’m just going to dismiss anything an AI says out of hand. You’re pushing the cost of parsing what it said onto me and off yourself, and nah, I ain’t accepting that.
Hell, Feynman said as much in 1985. https://www.youtube.com/watch?v=ipRvjS7q1DI
A consultant I’m working with had an employee do that to me. I immediately insisted that every hour billed under that person’s name be refunded.
Fortunately
1. The person was transparent about it, even posting a link to the chat session
2. They had to use a follow-on prompt to really engage the sycophancy
3. The forum admins stepped in to speak to this individual even before I was aware of it
I actually did what you suggested and fed everything back into another LLM, but did so with various prompts to test things out. The responses were... interesting; the positive prompt did return something quite good. A (paraphrased) quote from it:
"LLMs are a powerful rhetorical tool. Bringing one to an online discussion is like bringing a gun to a knife fight."
That being said, how you prompt will get you wildly different responses from the same (other) inputs. I was able to get it to play sycophant to my (not actually) hurt feelings.
People did this with code as well. DDG used to show you the first Stack Overflow post that was close to what you searched. Sometimes it was obviously wrong, and people would still copy and paste it wholesale.
the other two are still incomparably better in practice though.
A simple solution would be to mandate that while posting conversations with AI in PR comments is fine, all actions and suggested changes should be human-generated.
The human-generated actions can't be a lazy "Please look at the AI suggestion and incorporate as appropriate" or "What do you think about this AI suggestion?".
Acceptable comments could be:
- I agree with the AI for xyz reasons, please fix.
- I thought about the AI's suggestions, and here are the pros and cons. Based on that, I feel we should make xyz changes for abc reasons.
If these best practices are documented, and the reviewer does not follow them, the PR author can simply link to the best practices and kindly ask the reviewer to re-review.
I wrote before about just sending me the prompt[0], but if your prompt is literally my code then I don't need you at all.
If anyone gives me an opinion from an AI, they disrespect me and themselves to a point they are dead to me in an engineering capacity. Once someone outsources their brain they are unlikely to keep learning or evolving from that point, and are unlikely to have a future in this industry as they are so easily replaceable.
If this pisses you off, ask yourself why.
I would rather people go find the actual whitepaper or source in the footnotes and give me that, and/or give me their own opinion on it.
Not write "Wikipedia says..." and paste the entire article verbatim.
Why would it piss me off that you’re so closed minded about an incredible technology?
Like, sure, it's cool that that is possible, but if I do not do the work myself I will not get stronger.
Our brains are the same way.
I also do not use GPS, because there are literally studies with MRI scans showing that an entire section of our brain goes dark compared to London taxi drivers, who are required by law to navigate with their brains.
I also navigate life without a smartphone at all, and it has given me what feels like focus super powers compared to those around me, when in reality probably most people had that level of focus before smartphones were a thing.
All said, AI is super interesting for doing specialized work at a scale no human has time for, like identifying cancer by training on massive datasets.
All tools have uses and abuses.
How many IQ points do you gain per year of subjecting yourself to this?
People are using LLMs to generate code without doing this.
If anyone gives me an opinion from a book, they disrespect me and themselves to a point they are dead to me in an engineering capacity. Once someone outsources their brain they are unlikely to keep learning or evolving from that point, and are unlikely to have a future in this industry as they are so easily replaceable.
If this pisses you off, ask yourself why.
(You can replace AI with any resource and it sounds just as silly :P)
It's so strange that pro-AI people don't see this obvious fact and keep trying to compare AI with things that are actually correct.
I find models vastly more useful than most technical books in my own work because I know how to feed in the right context and then ask them the right questions about it.
There isn't a book on earth that could answer the question: "Which remaining parts of my codebase still use the .permission_allowed() method, and what edge cases do they have that would prevent them from being upgraded to the new .allowed() mechanism?"
No one really cares how you found all those .permission_allowed() calls to replace - was it grep, intense staring, or an AI model. All that matters is that you stand behind it and act as its author. The original post said it very well:
> ChatGPT isn’t on the team. It won’t be in the post-mortem when things break. It won’t get paged at 2 AM. It doesn’t understand the specific constraints, tech debt, or your business context. It doesn’t have skin in the game. You do.
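(Concretely, the mechanical half of that .permission_allowed() example needs nothing more than a grep sketch like the one below - the paths and exact flags are illustrative, not anyone's actual setup. The edge-case judgment is the part that stays with whoever puts their name on the PR.)

    # list every remaining call site of the old method, with file and line number
    grep -rn '\.permission_allowed(' .

    # or with ripgrep, if it's available
    rg -n '\.permission_allowed\(' .

What the grep can't tell you is whether each site can safely move to .allowed() - that judgment, and the accountability for it, is yours.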
You're so close to realising why the book counter argument doesn't make any sense!
Those people exist and they’re wrong.
More frequently, however, I find I’m judging the model less than its user. If I get an email that smells of AI, I ignore it. That’s partly because I have the luxury to do so. It’s largely because engaging has commonly proven fruitless.
You see a similar effect on HN. Plenty of people use AI to think through problems. But the comments that quote it directly are almost always trash.
—-
It’s interesting that we have to respect human “stupid” opinions but anything from AI is discarded immediately.
I’d advocate for respecting any opinion, and for considering it a good, or at least good-willed, opinion.
This does not apply to AI of course. In most cases, if a person did an AI PR/comment once, they will keep doing AI PRs/comments, so your explanation will be forgotten next time they clear context. Might as well not waste your time and dismiss it right away.
Same as white people thought black people were not worth listening to, a couple hundred years ago.
The fact that you're presenting this as a comically absurd comparison tells me that you know well that it's an absurd comparison.
It’s not the source that matters. It’s not the source that he’s complaining about. It’s the nature of the interaction with the source.
I’m not against watching video, but I won’t watch TikTok videos, because they are done in a way that is dangerously addictive. The nature of engagement with TikTok is the issue, not “I can’t learn from electrical devices.”
Each of us must beware of the side effects of using tools. Each kind of tool has its hazards.
Let’s try it with other stuff:
“Looking at solutions on stack overflow outsources your brain”
“Searching arxiv for literature on a subject outsources your brain”
“Reading a tutorial on something outsources your brain”
There’s nothing that makes ChatGPT et al appreciably different from the above, other than the tendency to hallucinate.
ChatGPT is a better search engine than search engines for me, since it gives links to cite what it’s talking about and I can check those, but it pays attention to precisely what I asked about and generally doesn’t include unrelated crap.
The only complaint I have is the hallucinations, but it just means I have to check its sources, which is exactly the case already for something as mundane as Wikipedia.
Ho hum. Maybe take some time to reevaluate your conclusions here.
I have recently started to use codex on the command line. Before I put the prompt in, I get an idea in my head of what should happen.
Then I give it the instructions, sometimes clarifying my own thoughts while doing it. These are high level instructions, not "change this file". Then it bumps away for minutes at a time, after which I diff the results and consider if it matches up to what I would expect. At that point lower level instructions if appropriate.
I consider whether it was a better solution or not, then ask questions around the edges that I think are wrong.
It turns my work from typing in code into pretty much code design and review. These are the hard tasks.
Unfortunately that logic does not apply to models.
If your interaction with the junior dev is not much different than interacting with an LLM, something is off.
Training a junior dev will make you a better dev. Teaching is learning. And a junior dev will ask questions that challenge your assumptions.
It's the opposite of "outsourcing."
So, working with CLAUDE doesn't count. Gotcha.
> If this pisses you off, ask yourself why.
It doesn't piss me off, but your comment is disingenuous at best.
At my previous company they called it 'sparring with <name of the software>'. You don't 'work' with Claude.
You use the software, you instruct it what to do. And it gives you an output that you can then (hopefully) utilize. It's not human.
You don't have to outsource your thinking to find value in AI tools; you just have to find the right tasks for them. The same as you would with any developer junior to you.
I'm not going to use AI to engineer some new complex feature of my system but you can bet I'm going to use it to help with refactoring or test writing or a second opinion on possible problems with a module.
> unlikely to have a future in this industry as they are so easily replaceable.
The reality is that you will be unlikely to compete with people who use these tools effectively. Same as the productivity difference between a developer with a good LSP and one without or a good IDE or a good search engine.
When I was a kid I had a text editor and a book and it worked. But now that better tools are around I'm certainly going to make use of them.
We work with junior engineers because we are investing in them. We will get a return on that investment. We also work with other humans because they are accountable for their actions. AI does not learn and grow anything like the satisfying way that our fellow humans do, and it cannot be held responsible for its actions.
As the OP said, AI is not on the team.
You have ignored the OP’s point, which is not that AI is a useless tool, but that merely being an AI jockey has no future. Of course we must learn to use tools effectively. No one is arguing with that.
You fanboys drive me nuts.
Yes, when someone builds a straw man you ignore it. There is a huge canyon between never using AI in engineering (OP's proposal) and only using AI for all your engineering (OP's complaint).
If you looked me or my work up, I think you would likely feel embarrassed by this statement. I have a number of world firsts under my belt that AI would have been unable to meaningfully help with.
It is also unlikely I would have ever developed the skill to do any of that aside from doing everything the hard way.
Do you do all your coding in ed or are you already using technology to offload brain power and memory requirements in your coding?
Also I use VIM. Any FOSS tools with predictable deterministic behavior I can fully control are fine.
So you're ok with using tools to offload thinking and memory as long as they are FOSS?
It took some iteration and hands-on testing to get that right across multiple operating systems, and to pass shellcheck, etc.
Even if an LLM -could- do that sort of thing as well as my team and I can, we would lose a lot of the arcane knowledge required to debug things, and spot sneaky bugs, and do code review, if we did not always do this stuff by hand.
It is kind of like how writing things down helps commit them to memory. Typing to a lesser extent does the same.
Regardless those scripts are like <1% of the repo and took a few hours to write by hand. The rest of the repo requires extensive knowledge of linux internals, compiler internals, full source bootstrapping, brand new features in Docker and the OCI specs, etc.
Absolutely 0 chance an LLM could have helped with bootstrapping a primitive c toolchain from 180 bytes of x86 machine code like this: https://codeberg.org/stagex/stagex/src/branch/main/packages/...
That took a lot of reasoning from humans to get right, in spite of the actual code being just a bunch of shell commands.
There are just no significant shortcuts for that stuff, and again if there were, taking them is likely to rob me of building enough cache in my brain to solve the edge cases.
Also yes, I only use FOSS tools with deterministic behavior I can modify, improve, and rely on to be there year after year, and thus any time spent mastering them is never wasted.
They believe that the entirety of human ingenuity should be theirs at no cost, and then they have the audacity to SELL their ill-gotten collation of that knowledge back to you? All the while persuading world governments that their technology is the new operating system of the 21st century.
Give me a dystopian break, honestly.
Stories full of nonsensical, clearly LLM-generated acceptance requirements containing implementation details which are completely unrelated to how the feature actually needs to work in our product. Fine, I didn't need them anyway.
PRs with those useless, uniformly-formatted LLM-generated descriptions which don't do what a PR description should do, with a half-arsed LLM attempt at summary of the code changes and links to the files in the PR description. It would have been nice if you had told me what your PR is for and what your intent as the author is, and maybe to call out things which were relevant to the implementation I might have "why?" questions about. But fine, I guess, being able to read, understand and evaluate the code is part of my job as a reviewer.
---- < the line
PRs littered with obvious LLM comments you didn't care enough to take out, where something minor and harmless but _completely pointless_ has been added (as in, if you'd read and understood what this code does, you'd have removed it), with an LLM comment left in above it AND at the end of the line. There it feels like I'm the first person to have tried to read and understand the code, and I feel like asking open-ended questions like "Why was this line added?" to get you to actually read and think about what's supposed to be your code, rather than leaving a review comment explaining why it's not needed, which just acts as a direct conduit from me to your LLM's "You're absolutely right!" response.
One example: Code reviews are inherently asymmetrical. You may have spent days building up context, experimenting, and refactoring to make a PR. Then the reviewer is expected to have meaningful insight in (generously) an hour? AI code reviews help bring balance; it may notice stuff a human wouldn't, and it's ok for the human reviewer to say "hey, chatgpt says this is an issue but I'm not sure - what do you think?"
We run all our PRs through automated (claude) reviews automatically, and it helps a LOT.
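(Not our exact pipeline, but the shape is roughly the sketch below. gh pr diff is a real GitHub CLI command; the claude -p invocation, the PR number, and the prompt wording are illustrative assumptions about whatever review tooling you have wired in. The output lands as an ordinary comment a human reviewer can agree with or push back on.)

    # first-pass automated review: feed the PR's diff to the model,
    # then let a human reviewer weigh the findings
    gh pr diff 1234 | claude -p 'Review this diff: flag likely bugs, risky edge cases, and missing tests. Cite file and line.'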
Another example: Lots of times we have several people debugging an issue and nobody has full context. Folks are looking at code, folks are running LLM prompts, folks are searching slack, etc. Sometimes the LLMs come up with good ideas but nobody is sure, because none of us have all the context we need. "Chatgpt says..." is a way of bringing it to everyone's attention.
I think this can be generalized to forum posts. "Chatgpt says" is similar to "Wikipedia says". It's not the end of the conversation, but it helps get everyone on the same page, especially when nobody is an expert.
Think of it as a dynamic opinion poll -- the probabilistic take on this thing is such and such.
As a bonus you can prime the respondent's persona.
// After posting, I see another comment at bottom opening with "Counterpoint:"... Different point though.
I know ChatGPT exists. I could have fucking copied-and-pasted my question myself. I'm not asking you to be the interface between me and it. I'm asking you, what you think, what your thoughts and opinions are.