I don't really care whether it's a person or the LLM getting it wrong. If you're sending me stuff, whether you checked it or not, and it's wrong or ambiguous anyway, I'm sending it back to you to fix.
You're nicer than some of us.
If it's an LLM getting it wrong, and it's not caught before it gets to you, then what value is the intermediary adding to the process?
But, as discussed in some other threads, the leverage provided by the LLM allows the miscreant to inundate you with slop by only pressing a few buttons.
And rejection is work. So they can produce more slop, requiring more rejections, faster than you can read the slop.
This is what's new. You reject it, they feed your rejection back into the LLM, and hand you something 5 minutes later with so many formatting changes that diff is unhelpful, and enough subtle substantive changes embedded in it that if you don't read the entire thing, you might have missed something important.
If you're ending up doing this back and forth with someone more than once, just outright refuse to work with them. Someone so unprofessional that they don't even validate their own work wouldn't fly in most workplaces I've worked in.
This
When I receive a PR, it's of course natural that an AI is involved.
The mortal sin is the rubber stamp.
If they haven’t read their own PR, I only have so many warnings in me. And yes, it is highly visible.
I assume they are working at a business to make money, not a school or a writing competition.
You can't know if it has been reviewed and checked for minimal sanity, or just chucked over the fence.
So you have to fully vet it.
And, if you have to fully vet it, then what value has the originator added? Might as well eliminate their position.
It's where we're headed.
You can just ask them if they reviewed it in detail.
Along the same lines as "A lie travels around the globe while the truth is putting on its shoes."
I’d add to that, long form AI output is really bad and basically unsuitable for anything.
Something like “I got GPT to make a few bullet points to structure the conversation” is probably acceptable in some cases if it’s short. The worst I can imagine is giving someone a “deep research” article to read as if that’s different from sending them to google.
If someone sends me incomplete work I will judge them for that; the history of the working relationship matters, and I didn't see it in the blog post.
Situational.
I don't know this blogger or what the plan involved; but for the sake of argument, let's say it was a business plan, and let's say in isolation it's really good, 99.9% chance of success with 10x returns kind of good.
Everyone in whatever problem space this is probably just got the same quality of advice from their own LLM prompting. That 99.9% is no longer "in isolation"; it's a correlated failure, where everyone else doing the same thing as you makes it less viable.
That's a good reason not to use a public tool, even when the output is good.
Correlated risk disguised as uncorrelated risk was a big part of the global financial crisis in the late 00s.
Look, it's now like email in 2004. You see spam; spam has found email. It doesn't mean you refuse to interact with anyone by email, or write Geocities posts mocking email users. You just acknowledge that the technology (email) can be used for efficiency and results, and that it can also be misused as a giant time-waster.
The author of the article here is basically saying "technology was used = work product is trash". The "spam" folks are seeing must be horrible to evoke this kind of condemnatory response.
Because of the difference in effort involved in generating it vs effort required to judge it.
Why are you entitled to "your" work being judged on its merits by a real human, when the work itself was not created by you, or any human? If you couldn't be bothered to write it, why should someone else be bothered to read it?
For example suppose that someone likes to work in Markdown using VSCode. To get the kind of Word document that everyone else expects, you just copy and paste into Word. AI isn't involved, but it will look exactly like AI to you.
And there are more complicated hybrids. For example my wife has a workflow where everything that she does, communications, and so on, winds up in Markdown in Obsidian. She adds information about who was at the meeting, including basic research into them done by an agent (company directory, title, LinkedIn, and so on - all good to know for someone working in sales). Her AI assistant then extracts out bullet points, cross references, and so on. She uses that to create summaries that she references whenever she goes back to that project. And if someone wants to know what has happened or is currently planned for that project, AI extracts that from the same repository.
There's lots of AI in this workflow. But the content and thought is mostly from her. (With facts from searches that an agent did.) The fact that she's automated a lot of her organizational scutwork to an AI doesn't make the output "AI slop".
There's been a lot of social contract undermining lately. Does anyone know of something that can be done to try and revert back? A social contract of "F you. I got mine" isn't very appealing to me, but that seems to be the current approach.
It is not weakness, but strength, to make yourself (reasonably!) vulnerable to being taken advantage of. It is not strength, but weakness, to let bad behavior happen around you. You don't have to do everything, but you have to do something, or nothing changes.
We gotta spend less time explaining away (and tacitly excusing) bad behavior as unfortunate game theory, and more time coming down hard on people who violate trust.
Ante trust gladly, but come down hard on defectors.
For example:
"Sorry, yes, I know the report is due tomorrow, but I don't have time to review it again because I wasted 2 hours on the first version."
or
"I found these three problems on the first page and stopped reading."
What else?
I have never seen this team before and I'll "never" see this team after the fact. They might be contracted externally, they might leave before the second review.
Let's say I can suss out people doing this. I don't have the option of giving them the benefit of the doubt, and they have the motivation to trick me.
I guess I've answered my own question a bit, such an environment isn't built to foster trust at all.
At least a "Generated by AI, reviewed and edited by xyz" tag would be some indicator of effort and accountability.
It may not be wrong to use AI to generate things whole cloth, but it definitely sidesteps something important and calls into question the "prompter's" contributions to the whole thing.
And if you think that at this point you could have done it yourself, then why don't you? The only important thing is that the document is fine; if it takes too much effort to verify it, then you need to trust your colleague - that was their job.
Signals of competence and diligence help build and reinforce trust.
Crafting a message for a known friend/coworker almost always comes through in how it is written and structured, because it weighs the arguments against the context of the business needs, communication norms, shared understanding of what's important, all the implicit contracts about how we work together, the long-term vision we shared over beers, the Teams messages that the CEO sent three days ago, etc.
In a pure design doc - like a wiring diagram + 3 code snippets, this is a non-issue, so just ignore what I said (but consider it possibly).
In a doc for communication, especially of ideas, this is paramount.
The issue isn't using AI tools to write these "RFC" style docs. The issue is that in the very likely event that the output does not contain any of those very important bits (because how could it??), then we are in a situation where 1) this person I trust has lost some of that trust by not acknowledging any of the above or addressing it or structuring it in a useful way or 2) that person didn't try.
This is why communication is a valuable skill. It's always been implicit that effort slowly adds those and many more features of a good doc. Now it's explicitly not doing that, but still feigning effort with lots of formatting, etc. It moves the "add the important intangibles" work from the writer to the reader, and, as with code review, that's laziness. We explicitly did not hire an AI, we explicitly hired a person, and that person should be filtering the world's noise through their valuable experience, and at least telling us that they did that. "I reviewed this and stand by it" is a very low bar to achieve, so I don't understand why there can be any pushback.
Is there a 3rd option here I'm missing?
EDIT: I can temper this a little bit. This is how I like to work. There might be a cadre of devs who are comfortable slinging 10-page unreviewed documents at each other. I'm fine with their existence; I just think it's better to carefully review text from a close coworker because they deserve that time, and so I expect the writer would do at least one review themselves out of courtesy. I don't think any of this is arduous. If my boss told me to spend more time reviewing than the author was willing to spend writing, then I would either get comfortable with reduced dev output from this new DDOS, or find a new job.
EDIT2: Actually, it occurred to me that everything I would say is well articulated here: https://rfd.shared.oxide.computer/rfd/0576 which made the HN rounds recently.
However, there's another aspect that irks me, and it's the idea that the prompt was much shorter than the document itself. Well, if that's the case, then the problem isn't so much the use of LLMs, but rather that you consider it obvious that your documents are mostly fluff that can be compressed to a bullet list. If the final document is much harder to verify than the information it contains, then you're wasting time and resources to obfuscate rather than clarify.
That's exactly what I'm saying, so we agree perfectly there.
> However there's another aspect that irks me, and it's the idea that the prompt was much shorter than the document itself.
That's not part of my argument, it's an assumption on your part.
> If the final document is much harder to verify than the information it contains ...
This is precisely the problem. LLMs generate far too much text for their information content, and that text often contains subtle errors (I don't need to rehash two whole HN threads here; the point is made). If a coworker sent me the bullets, I'd be happier. Because the cost of generating arguments is now effectively zero, it's imperative to use restraint to get back to a concise, high-SNR message. Precisely because otherwise it becomes a DDOS on the reader.
Personally, I'd love to see most of this stuff disappear from services that advertise it on par with human generated media like spotify and amazon (though I'll also admit to having a soft spot for the soul style AI covers of 50 cent and others).
Yes, Thaler v. Perlmutter.
I'm pretty sure, even though that's recent, that it fully comports with decades old law on patents, as well.
I can't find an older case, but Thaler v. Vidal is a recent patent case.
Your original complaint was that humans were saying "I wrote this", and those people are definitely going to be claiming copyright for it in court at some point... In fact, Thaler v. Perlmutter only makes that more likely as AI programs definitely cannot claim copyright themselves.
Hence my confusion. In principle I definitely agree with your original point though- people should produce content to express themselves, rather than becoming an expression of AI.
Not at all. Thaler wasn't asking that the AI hold the copyright. He wanted to hold the copyright of a work _authored_ by a machine.
But a machine cannot be an author, under law. And a machine cannot be an inventor, under law.
The distinction may seem subtle, but patent law and copyright law both make a distinction between the inventor/author, and the holder of the patent/copyright. For example, most software companies require that any patents by employees be assigned to them.
I found the earlier patent case I was thinking of, Beech Aircraft v EDO, but the appellate ruling in Thaler is quite readable.
https://media.cadc.uscourts.gov/opinions/docs/2025/03/23-523...
> Humans can still claim copyright if they put their name on a largely AI produced work.
That will certainly be a developing area of law, but it will probably have limited applicability, depending on how much creative input the human actually had into the work.
Let's say that someone asks DALL-E to create a picture of a cat juggling chainsaws. They then copyright it. Someone sees the picture, says "Hey, that's cool! Hey, DALL-E! Make me a picture of a cat juggling chainsaws!" and then they happen to get substantially the same image.
The entire purpose of copyright (from an author's perspective) (in the US, where there are no "moral rights") is to be able to sue infringers. Can the first guy sue the second guy?
It seems unlikely he would win, because copyright does not protect ideas, and the idea is all that the first guy supplied to DALL-E.
Maybe the first guy can win simply because the second image was created after DALL-E sucked in the first image in its next go-round of appropriating the entire web. But then that begs the question of what the first image is infringing, doesn't it? If DALL-E settles all authorship litigation and can proceed, then the second image should be as non-infringing as the first.
> Your original complaint was that humans were saying "I wrote this",
No, my original complaint is that too many people don't bother to figure out who wrote what.
> In principle I definitely agree with your original point though- people should produce content to express themselves, rather than becoming an expression of AI.
Wasn't my point.
In any case, here's an interesting take on the current state of affairs from the perspective of patents.
https://www.iplawgroup.com/staking-out-a-claim-for-inventors...
This is _exactly_ how I feel. Any time saved by precooking a "plan" (typically halfbaked ideas) with AI isn't really time saved, it is a transfer of work from the planner to whoever is going to implement the plan.
Later, at someone else's desk:
"Chat, summarize these 10 pages into 3 points."
Because the prompter is basically gaslighting reviewers into doing work for them. They put their marks of authorship on the AI slop when they've barely looked at it at all which convinces the reviewer to look. When the comments come back, they pump the feedback into the LLM, more slop falls out and around we go again. The prompter isn't really doing work at all—the reviewers are.
Each can be seen as using a tool to add false legitimacy. But ultimately they are just tools.
Edit: to clarify, people were judged by the clarity of their handwriting in the past and these tools made that impossible. Similarly, LLMs spackle over higher level language issues.
These things are not remotely comparable.
Example: Donald Trump did not write Art of the Deal
All these tools provide leverage to the author, but only one of these tools provides non-deterministic leverage.
It's not like typewriters -- in a written work the content is the entire point, not the handwriting. So unlike previous tools, this one is replacing you for the part that actually matters.
People use these tools for a variety of reasons (as diverse as people's experiences). One can use an LLM to help express a perspective or develop an opinion (very important for those who struggle to communicate), or one can fake a picture or voice for fraud, or a million other purposes. It's just a tool. How it gets used is about the people, not the tool.
I feel like more time is wasted trying to catch your coworkers using AI vs just engaging with the plan. If it's a bad plan say that and make sure your coworker is held accountable for presenting a bad plan. But it shouldn't matter if he gave 5 bullets to Chat gpt that expanded it to a full page with a detailed plan.
The coworker should just give me the five bullet points they put into ChatGPT. I can trivially dump it into ChatGPT or any other LLM myself to turn it into a "plan."
Asking for the prompt is also far more hostile than your coworker providing LLM-assisted word docs.
I had a coworker schedule a meeting to discuss the technical design of an upcoming feature. I didn't have much time, so I only checked the research doc moments before the meeting: it was 26 pages long with over 70 references, of which 30+ were Reddit links. This wasn't a huge architectural decision, so I was dumbfounded; it seemed he had barely edited the document to his own preferences. The actual meeting was maybe the most awkward meeting I've ever attended, as we were expected to weigh in on the options presented, but no one had opinions on the whole thing, not even the author. It was just too much of an AI document to even process.
In most of my work contexts, people want more formal documents, with clean headings and titles, and detailed risks, even if they're the same risks we've put on every project.
If it's fiction writing or otherwise an attempt at somewhat artful prose, having an LLM write for you isn't cool (both due to stolen valor and the lame, trite style all current LLMs output), but for relatively low-stakes white collar job tasks I think it's often fine or even an upgrade. Definitely not always, and even when it's "fine" the slopstyle can be grating, but overall it's not that bad. As the LLMs get smarter it'll be less and less of an issue.
That's the thing. It actually really matters whether the ideas presented are coming from a coworker, or the ideas are coming from LLM.
I've seen way too many scenarios where I'm asking a coworker if we should do X or Y, and all I get is a useless wall of spewed text, with complete disregard for the project and circumstances at hand. I need YOUR input, from YOUR head, right now. If I could just ask Copilot, I'd do that myself, thanks.
If they answer your question with irrelevant context, then that's the problem, not that it was AI
It's all about the utility provided. That's the only thing that matters in the end.
Some people seem to think work is an exchange of suffering for money, and omg some colleagues are not suffering as much as they're supposed to!
The plan(or any other document) has to be judged on its own merits. Always. It doesn't matter how it was written. It really doesn't.
Does that mean AI usage can never be problematic? Of course not! If a colleague feeds their tasks to a LLM and never does anything to verify quality, and frequently submits poor quality documents for colleagues to verify and correct, that's obviously bad. But think about it: a colleague who submits poor quality work is problematic regardless of if they wrote it themselves or if they had an AI do it.
A good document is a good document. And a bad one is a bad one. Doesn't matter if it was written using vim, Emacs or Gemini 3
Agree with the premise but this part is off. When I find a project online, I assume it will be abandoned within a year unless I see evidence of a substantive team and/or prior long-term time investments.
I look at the output and ask it to re-re-verify its results, but at the end of the day the LLM is doing the work and I am handing that off to others.
I've started having AI write those documents. Each one used to take me a full week to produce, now it's maybe one day, including editing. I don't feel bad about it. I'm ecstatic about it, actually; this shouldn't be part of my job, so reducing its footprint in my life is a blessing. Someday, someone will realize that such documents do not need to exist in the first place, but that's not the world we live in right now, and I can't change it. I'm just glad AI exists for this kind of pointless yeoman's work.
Almost an inverse Kafka universe; there are tools that can empower you to work the system in such a way that the effects of the externalities are very diffuse.
Still not good, but better than a typical Catch-22.
Because everyone uses a different 10%.
I write these documents too and I’ve watched people “read” them. They all do the same thing: flip to the conclusions and then if there is a need they will skim the section that’s relevant to their role.
The project manager cares only about the risks, costs, and time estimates.
The architect just wants to see the diagram and maybe check that the naming conventions have been followed.
Sysops just wants to know what they’re on the hook for after go-live.
None of them read the whole document, but the whole document ends up being read.
PS: I’ve found I have to take care of distributing documents myself. All organisations big and small are shockingly bad at disseminating information. Help them!
The whole llm paranoia is devolving into hysteria. Lots of finger pointing without proof, lots of shoddy evidence put forward, and points that miss the nuance.
My stance is this: I don't really care whether someone used an llm or wrote it themselves. My observation is that in both cases people were mostly wrong and required strict reviews and verification, with the exception of those who did Great Work.
There are still people who do Great Work, and even when they use llms the output is exceptional.
So my job hasn't changed much, I'm just reading more emojis.
If you find yourself becoming irrationally upset by something that you're encountering that's largely outside of your control, consider going to therapy and not forming a borderline obsession with purity on something that has always been a bit slippery (creative originality).
Sure, but LLMs allow people to be wronger faster now, so they could conceivably inundate the reviewer with a new set of changes requiring a new two hour review, by only pressing buttons for two minutes.
> If you find yourself becoming irrationally upset by something that you're encountering that's largely outside of your control, consider going to therapy and not forming a borderline obsession with purity on something that has always been a bit slippery (creative originality).
Maybe your take on it is slightly different because your job function is somewhat different?
I assume that many people complaining here about the LLM slop are more worried about functional correctness than creative originality.
> I assume that many people complaining here about the LLM slop are more worried about functional correctness than creative originality.
My point is, I've been in the game for coming up on 16 years, mostly in large corporate FAANG-adjacent environments. People have always been functionally incorrect and not to be trusted. It used to be a meme said with endearment, "don't trust my code, I'm a bug machine!" Zero trust. That's why we do code reviews.
> Sure, but LLMs allow people to be wronger faster now, so they could conceivably inundate the reviewer...
With respect, "conceivably" is doing a lot of work here. I don't see it happening. I see more slop code, sure. But that doesn't mean I _have_ to review it with the same scrutiny.
My experience thus far has been that this is solved quite simply: After a quick scan, "Please give this more thought before resubmitting. Consider reviewing yourself, take a pass at refining and verify functionality."
> Maybe your take on it is slightly different because your job function is somewhat different?
> I assume that many people complaining here about the LLM slop are more worried about functional correctness than creative originality.
Interestingly, I see the opposite in the online space. First of all, as an aside, I don't see many people complaining at all in real life (other than the common commiseration of getting slop PRs, which has replaced the common commiseration of getting normal PRs of sub-par quality).
I primarily see people coming to the defense of human creativity and becoming incensed by reading (or I should say, "viewing" more generally) something that an llm has touched.
It appears that mostly people have accepted that llms are a useful tool for producing code and that when used unethically (first pass llm -> production), of course they're no good.
There is a moral outrage and indignation that I've observed, however (on HN and elsewhere), when an LLM has been used for the creative arts.
The author does not mention whether the generated project plan actually looked good or plausible. If it is, where is the harm? Just that the manager had their feelings hurt?
1. If the output is solid, does it matter?
2. The author could simply have done the research, created the plan, and then given an LLM the bullet points of that research and told it to "make this into a presentable plan". The author does the heavy lifting and the actual creative work, and outsources the manual formatting to the LLM. My wife speaks English as a second language; she much prefers telling an LLM what she is trying to say and having it generate a business-friendly email, rather than writing it herself and letting in grammatical mistakes.
3. If I were to write a paper in my favorite text editor and then put it through pandoc to generate a word doc it would do the same thing.
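For what it's worth, that pandoc conversion is a one-liner (the file names here are just placeholders); pandoc infers the formats from the extensions:

    pandoc notes.md -o notes.docx

The resulting .docx arrives in one piece, with no Google-Docs-style edit history, even though a human wrote every word of it.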
The creation of a plan also implies that some work has gone into making sure it's a good one. That's one human (the author) asserting that it's solid. But now you're not even sure if that one vote exists.
Until AI is used to fake that, too.
1) For things made with LLMs:
1a) The fact that older versions aren't online forever. You literally might never be able to put the original prompt in and get the same result.
1b) A minor change in the input prompt can result in a huge output change, rendering the original prompt practically meaningless, especially if modifications were required to the output of the LLM.
2) For things made the old-fashioned way, most history is boring and not useful. The best git repos have carefully curated history, with cohesive change sets that are both readable, and usable when bisecting the commit history for regressions.
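(A quick sketch of what "usable when bisecting" means in practice; the tag name below is just a placeholder:)

    git bisect start
    git bisect bad                # current HEAD shows the regression
    git bisect good v1.4.0        # last release known to be good
    # git now checks out midpoints; mark each good/bad until it names the offending commit

That only works if each commit builds and makes sense on its own, which is exactly what carelessly generated history destroys.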
And I don’t care if it’s boring, it has to be available. Crime scene details or forgery details are mundane and boring too, but for the investigators they are essential.
Strong language, strong nope.
Demand to see shit I didn't even think was important when I was busy building stuff? Sucks to be you.
But even a rejection is work. So if they're generating more bs faster, they are generating more work for you. And, in some organizations, they will receive rewards for occasionally pressing buttons and inundating you with crap.
> a lot people are expressing distaste for tools when they should be expressing distaste for fools.
I'm pretty sure that the original article, and most of the derogatory comments here, are expressing distaste for fools rather than tools. Specifically, tool-using fools.
It used to be that a well-written document was a proof-of-work that the author thought things through (or at least spent some time thinking about it).
I'm all for AI--I use it all the time. But I think our current style of work needs to change to adapt to both the strengths and weaknesses of AI.
I think you hit the nail on the head here. The problem isn't so much that people can do bad work faster than ever now, its that we can no longer rely on the same heuristics for quickly assessing a given piece of work. I dont have a great answer. But I do still think it has something to do with trust and how we build relationships with each other.
When used right, ideas could be distilled, not extrapolated into slop. -- So maybe it's not ALL BAD?
I propose a new quotation system, the 3 quote marker to disclose text written or assisted by ai:
'''You are absolutely right'''
Maybe we need a different document structure--something that has verification/justification built in.
I'd like to see a conclusion up front ("We should invest $x billion on a new factory in Malaysia") followed by an interrogation dialogue with all the obvious questions answered: "Why Malaysia and not Indonesia?", "Why $x and not $y billion?", etc.
At that point, maybe I don't care if the whole thing was produced by AI. As long as I have the justification in front of me, I'm happy. And this format makes it easy to see what's missing. If there's a question I would have asked that's not in the document, then it's not ready.
This was before vibe coding, around the days of GPT 3.5. At the time I just thought it was a challenging topic and my colleague was probably preoccupied with other things so we parked the talk.
A few weeks later, while exploring ways to use GPT for technical tasks I suddenly remembered that slack chat and realised the person had been copy pasting my messages to gpt and back. I really felt bad at that moment, like… how can you do this to someone…? It’s not bad that you try tools to find information or whatever, but not disclosing that you’re effectively replacing your agency with that of a bot is just very suboptimal and probably disrespectful.
People who claim that they are disrupting with disintermediation, but actually simply replace the old intermediary with their own?
Those people get filthy rich.
People who _should_ be making things but are trying this intermediation technique themselves will most likely find that it's like other forms of lying. Go big or go home.
> My own take on AI etiquette is that AI output can only be relayed if it's either adopted as your own or there is explicit consent from the receiving party.
If someone just generates an incredibly detailed plan in one go, that destroys the process. Others now are wasting time looking at details in something that may not even be a good idea if you step back.
The successive refinement flow doesn't preclude consideration of input from AI.
I was later asked why is it taking so long to complete the task when the document had a step by step recipe. I had to explain why the AI was solving the wrong problem in the wrong place. The PMs did not understand and scheduled more meetings to solve the problem. All they knew is that tickets were not moving on the board.
I suddenly realized that nobody had any idea of what’s going on at all on a technical level. Their contribution was to fret about target dates and executive reports. It’s like a pyramid scheme of technical ignorance. The consequence is some ICs forced to do uncompensated overtime to actually make working software.
These are the unintended consequences of the AI hype that CEOs are evangelizing.
Why aren't people using LLMs to shorten rather than lengthen their plans? You know what you meant, so you can validate whether the shorter version still hits the points you care about. Whereas if I use an LLM to shorten your email, there is always a risk I've now missed your main point.
Cleaning up grammar, punctuation, spelling, etc. is a good thing worth doing, but adding padding is exclusively irritating.
This comment was generated by chatgpt (inspired by me).
messe•1mo ago
This isn't always a great indicator.
I can't stand Google Docs as an interface to write with, so I use VIM and then copy/paste the completed document into it.
NitpickLawyer•1mo ago
When you use these tools you get a knack for what they do in "vanilla" situations. If you're doing a quick prompt, no guidance, no context and no specifics, you'll get a type of answer that checks many of the "smells" above. Getting the same over and over again gets you to a point where you can "spot" this pretty effectively.
pessimizer•1mo ago
The rest of the blog is just random subjective morality wank with implications of larger implications, constructed by borrowing the central points of a series of popular articles in their entirety and adding recently popular clichés ("why should I bother reading it if you couldn't bother to write it?")
No other explanations about why this was a bad document, or this particular event at all, but lots of self-debate about how we should detect, deal with, and feel about bad documents. All documents written by LLM are assumed to be bad, and no discussion is attempted about degrees of LLM assistance.
If I used AI to write some long detailed plan, I'd end up going back and forth with it and having it remove, rewrite, rethink, and refactor multiple times. It would have an edit history, because I'd have to hold on to old drafts in case my suggested improvements turned out not to be improvements.
The weirdest thing about the article is that it's about the burden of "verification," but it thinks that what people should be verifying is that LLMs had no part in what they've received. The discussion I've had about "verification" when it comes to LLMs is the verification that the content is not buggy garbage filled with inhuman mistakes. I don't care if it's LLM-created or assisted, other than a lot of people aren't reading and debugging their LLM code, and LLMs are dumb. I'm not hunting for em-dashes.
-----
edit: my 2¢; if you use LLMs to write something, you basically found it. If you send it to me, I want to read your review of it i.e. where you think it might have problems and why you think it would help me. I also want to hear about your process for determining those things.
People are confusing problems with low-effort contributors with problems with LLMs. The problem with low-effort contributors is that what they did with the LLM was low-effort and isn't saving you any work. You can also spend 5 minutes with the LLM. If you get some good LLM output that you think is worth showing to me, and you think it would take significant effort for me to get it myself, give me the prompts. That's the work you did, and there's nothing wrong with being proud of it.
satisfice•1mo ago
If you order a meal at a restaurant and later discover that the chicken you ate was recycled from another diner’s table (waste not want not!) you would likely be outraged. It doesn’t matter if it tasted good.
As soon as you tell me you used AI to produce something, you force me review it carefully, unless your reputation for excellent review of your own is well established. Which it probably isn’t— because you are the kind of guy who uses AI to do his work.
jandrese•1mo ago
It would be interesting to see the history where the whole document is dumped in the file at once, but then edits and corrections are applied top to bottom to that document. Using AI isn't so much the problem as trusting it blindly.
like_any_other•1mo ago
This also happens if one first writes in an editor without spellchecking, then pastes into the Google Doc (or HN text box) that does have spellchecking.
Lerc•1mo ago
There was an article the other day where the writer said something along the lines of it suddenly occurring to them that others might read content they had access to. They described themselves as a security researcher. I couldn't imagine a security researcher having that suddenly occur to them; I would think it's a concept continually present in their understanding of what data is. I am not a security researcher and it's certainly something I'm fairly constantly aware of.
Similarly I'm not convinced the "shouldn't this plan be better" question is in good faith either. Perhaps it just betrays a fundamental misunderstanding of the operation being performed by a model, but my intuition is that they never expected it to be very good and are feigning surprise that it is not.
zephen•1mo ago
Right. Certainly not dispositive.
> use VIM and then copy/paste the completed document into it.
But he did mention tables. You'd think if they weren't just ASCII art, there'd be _some_ google docs history about fixing them up.
like_any_other•1mo ago
Don't forget about typing patterns, which could be used to deanonymize you across different platforms (anywhere that you type into a webpage that runs JavaScript):
https://www.bleepingcomputer.com/forums/t/759050/improve-ink...
necubi•1mo ago
(I have the same workflow, via Obsidian)
LanceH•1mo ago
The simple fact is that the reader has no business reading the edit history, and the ability to make this happen should probably be far more prominent in document applications like Word or Google Docs.
Veen•1mo ago
Plus, I want to deliver the completed document, not my edit history. Even on the occasions that I have written directly in Google Docs, I've copied the doc to obliterate the version history.