

I know you didn't write this

https://ammil.industries/i-know-you-didnt-write-this/
79•cjlm•1h ago

Comments

messe•1h ago
> Suspicions aroused, I clicked on the “Document History” button in the top right and saw a clean history of empty document – and then wham – fully-formed plan, as if it had just spilled out of someone’s brain, straight onto the screen, ready to share.

This isn't always a great indicator.

I can't stand Google Docs as an interface to write with, so I use Vim and then copy/paste the completed document into it.

GaryBluto•1h ago
It's bizarre to me that this didn't occur even slightly to the post author.
NitpickLawyer•1h ago
As with many other things (em dashes, emojis, bullet lists, it's-not-x-it's-y constructs, triple adjectives, etc.), seeing any one of them isn't a tell. Seeing all of them, or many of them, in a single piece of content is probably the tell.

When you use these tools you get a knack for what they do in "vanilla" situations. If you're doing a quick prompt, no guidance, no context and no specifics, you'll get a type of answer that checks many of the "smells" above. Getting the same over and over again gets you to a point where you can "spot" this pretty effectively.
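
A toy sketch of that counting logic (Python; the marker list and threshold here are invented for illustration, not a real detector):

    # Toy illustration only: markers and threshold are made up.
    # Any single "tell" means little; several distinct tells
    # appearing together in one document is the actual signal.
    TELLS = ["it's not", "delve", "furthermore", "\u2014", "\u2728"]

    def distinct_tells(text: str) -> int:
        lowered = text.lower()
        return sum(1 for marker in TELLS if marker in lowered)

    def looks_vanilla(text: str, threshold: int = 3) -> bool:
        return distinct_tells(text) >= threshold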

pessimizer•26m ago
The author did not do this. The author thought it was wonderful, read the entire thing, then on a lark (they "twigged" it) checked out the edit history. They took the lack of it as instant confirmation ("So it’s definitely AI.")

The rest of the blog is just random subjective morality wank with implications of larger implications, constructed by borrowing the central points of a series of popular articles in their entirety and adding recently popular clichés ("why should I bother reading it if you couldn't bother to write it?")

No other explanations about why this was a bad document, or this particular event at all, but lots of self-debate about how we should detect, deal with, and feel about bad documents. All documents written by LLM are assumed to be bad, and no discussion is attempted about degrees of LLM assistance.

If I used AI to write some long detailed plan, I'd end up going back and forth with it and having it remove, rewrite, rethink, and refactor multiple times. It would have an edit history, because I'd have to hold on to old drafts in case my suggested improvements turned out not to be improvements.

The weirdest thing about the article is that it's about the burden of "verification," but it thinks that what people should be verifying is that LLMs had no part in what they've received. The discussion I've had about "verification" when it comes to LLMs is the verification that the content is not buggy garbage filled with inhuman mistakes. I don't care if it's LLM-created or assisted, other than a lot of people aren't reading and debugging their LLM code, and LLMs are dumb. I'm not hunting for em-dashes.

-----

edit: my 2¢: if you use LLMs to write something, you basically found it. If you send it to me, I want to read your review of it, i.e. where you think it might have problems and why you think it would help me. I also want to hear about your process for determining those things.

People are confusing problems with low-effort contributors with problems with LLMs. The problem with low-effort contributors is that what they did with the LLM was low-effort and isn't saving you any work. You can also spend 5 minutes with the LLM. If you get some good LLM output that you think is worth showing to me, and you think it would take significant effort for me to get it myself, give me the prompts. That's the work you did, and there's nothing wrong with being proud of it.

jandrese•1h ago
Or the tell that the guy who usually writes fairly succinctly suddenly dumps five thousand words with all of the details that most people wouldn't bother to write down.

It would be interesting to see the history where the whole document is dumped in the file at once, but then edits and corrections are applied top to bottom to that document. Using AI isn't so much the problem as trusting it blindly.

like_any_other•1h ago
> the whole document is dumped in the file at once, but then edits and corrections are applied top to bottom to that document

This also happens if one first writes in an editor without spellchecking, then pastes into the Google Doc (or HN text box) that does have spellchecking.

plorkyeran•25m ago
Dumping the entire file into Google Docs and then applying edits and corrections top to bottom is exactly my normal workflow. I do my writing in vim, paste it into Google Docs, and then do a final editing pass while fixing the formatting.
Lerc•56m ago
I have seen a number of write-ups where I think the only logical explanation is that they are not conveying what literally happened, but spinning a narrative to express their point.

There was an article the other day where the writer said something along the lines of it suddenly occurring to them that others might read content they had access to. They described themselves as a security researcher. I couldn't imagine that suddenly occurring to a security researcher; I would think it is a concept continually present in their understanding of what data is. I am not a security researcher, and it's certainly something I'm fairly constantly aware of.

Similarly, I'm not convinced the "shouldn't this plan be better" question is in good faith either. Perhaps it just betrays a fundamental misunderstanding of the operation being performed by a model, but my intuition is that they never expected it to be very good and are feigning surprise that it is not.

pgwhalen•46m ago
It probably did, but they didn't feel the need to fully explain why they were confident it was AI generated, since that's not the point of the article.
exe34•1h ago
yep, emacs, version control that doesn't suck, all my notes in one place. I'll copy and paste what I need to share into whatever hellscape you want to live in, but my copy will remain safe.
clickety_clack•1h ago
I also interact with Google Docs as little as possible. I draft in Notes or Obsidian and copy the text in. I just hate the platform.
zephen•1h ago
> This isn't always a great indicator.

Right. Certainly not dispositive.

> use VIM and the copy/paste the completed document into it.

But he did mention tables. You'd think if they weren't just ASCII art, there'd be _some_ google docs history about fixing them up.

Izkata•1h ago
Different-sized headers too.
jadedtuna•1h ago
Fairly certain e.g. copy-pasting from Obsidian will copy over the tables as well.
fragmede•1h ago
every other time, paste with styling is the devil, but ever so occasionally, I do actually want that
el_benhameen•1h ago
Yep. I do this because I explicitly do not want a third party to see my thought process. If I wanted the reader to see my edits and second thoughts, I would have included them in the final document.
like_any_other•1h ago
> my thought process

Don't forget about typing patterns, which could be used to deanonymize you across different platforms (anywhere you type into a webpage that runs JavaScript):

https://www.bleepingcomputer.com/forums/t/759050/improve-ink...
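
The core idea is simple enough to sketch. A minimal, purely hypothetical version in Python (real keystroke-dynamics systems model per-key-pair timings and hold durations, and use proper classifiers):

    # Hypothetical sketch of keystroke-dynamics fingerprinting:
    # inter-key timing statistics alone can be distinctive per typist.
    from statistics import mean, stdev

    def timing_profile(events):
        """events: list of (key, timestamp_ms) captured while typing."""
        deltas = [t2 - t1 for (_, t1), (_, t2) in zip(events, events[1:])]
        return (mean(deltas), stdev(deltas))

    def same_typist(a, b, tolerance_ms=15.0):
        # Naive comparison, for illustration only.
        return (abs(a[0] - b[0]) < tolerance_ms
                and abs(a[1] - b[1]) < tolerance_ms)

    # Two sessions, possibly captured on different websites:
    session_a = [("t", 0), ("h", 95), ("e", 180), (" ", 310)]
    session_b = [("c", 0), ("a", 102), ("t", 188), (" ", 305)]
    print(same_typist(timing_profile(session_a), timing_profile(session_b)))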

guerrilla•1h ago
Me too.
elgertam•1h ago
I do something similar. I write markdown, then render it and copy-paste that in
necubi•1h ago
You can now paste markdown directly into Google docs (Edit -> Paste From Markdown)

(I have the same workflow, via Obsidian)

twothamendment•1h ago
Another copy/paste reason: I can't count the number of times I've written up something for work on my own Google account by mistake, then pasted it into a new doc on the work account so I could share it.
QuercusMax•1h ago
You really should use separate browser profiles...
yjftsjthsd-h•1h ago
Or separate machines. It's not impossible to maintain sufficient separation in software, but it's a lot easier to skip the whole mess.
superultra•1h ago
I don't need everyone seeing the dirty laundry of my first drafts and edits. I too work in a working doc and then, when it's completed, drop it all at once into the final Google Doc.
jchw•1h ago
On a similar but different note: I don't think I've ever uploaded any code written by LLMs to GitHub, but I do sometimes upload fully complete projects under a single "initial commit". Some people may legitimately hide the edit history on purpose just because they don't want to "show their work". It's not a particularly good habit, but I think a lot of us can relate.
LanceH•1h ago
A legit reason to hide your edit history is you might not remember what was in there. Say you have a moment of frustration and type out "this is an absolute garbage assignment by a braindead professor". Or you jot a quick note from the doctor because it happens to be open.

The simple fact is that the reader has no business reading the edit history, and the ability to clear it should probably be far more prominent in document applications like Word or Google Docs.

kianN•1h ago
I do the exact same thing, and this was my first thought. To be fair, I would probably not be able to format tables in a single copy/paste.
mystifyingpoi•1h ago
Same here. Confluence web editor has a thousand options but no option to comfortably edit text. I always write the entire document in Neovim and then format it later (or never, in case of yet another "please explain this thing only you know but we will ignore this page and call you anyway when it breaks").
nereye•1h ago
Also, in some countries (e.g. Germany) applications explicitly do not track that information (such as how long a document was edited) for legal reasons related to privacy laws.
Veen•36m ago
Yes, I write everything in Obsidian and use "Paste from Markdown" in Google Docs. It's a habit I picked up years ago when Docs was much less reliable and lost work.

Plus, I want to deliver the completed document, not my edit history. Even on the occasions that I have written directly in Google Docs, I've copied the doc to obliterate the version history.

el_benhameen•33m ago
Oh, another fun one: I once got an offer letter via Docs. The edit history included the original paste from another candidate’s offer letter, including their name and salary. Useful for benchmarking!
andy99•1h ago
I don't really find it better when someone adds a disclaimer. What am I supposed to do then? There's still an expected default behavior of reading it, and if I don't, I need to confront them and say "I don't care what you got an LLM to say, why not give me your view?" It's inappropriate under any circumstances imo
embedding-shape•1h ago
When I receive something that I either suspect an LLM wrote 90% of, or that the author is up front about, I always ask "Did you check all of this yourself to verify it before I pick it up?" Maybe half of the time I get a no, and then they return a day later with a new document and some fixes. The other half of the time people say yes; I start digging into it, find a bunch of weird stuff, and send it back to the person.

I don't really care if it's a person or the LLM getting it wrong: if you're sending me stuff (checked or unchecked) that's wrong or ambiguous anyway, I'm sending it back to you to fix.

zephen•1h ago
> I don't really care if it's a person or the LLM getting it wrong,

You're nicer than some of us.

If it's an LLM getting it wrong, and it's not caught before it gets to you, then what value is the intermediary adding to the process?

embedding-shape•1h ago
I meant it in a way that I'll blame the person regardless of who actually wrote the text. The LLM messed up and you failed to notice? I blame you. You messed up and failed to notice? I still blame you.
zephen•53m ago
I get that.

But, as discussed in some other threads, the leverage provided by the LLM allows the miscreant to inundate you with slop by only pressing a few buttons.

And rejection is work. So they can produce more slop, requiring more rejections, faster than you can read the slop.

This is what's new. You reject it, they feed your rejection back into the LLM, and hand you something 5 minutes later with so many formatting changes that diff is unhelpful, and enough subtle substantive changes embedded in it that if you don't read the entire thing, you might miss something important.
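
One partial countermeasure, as a rough sketch rather than a fix for the underlying flood: normalize both versions before diffing, so pure formatting churn drops out and only the substantive edits remain. In Python, something like:

    # Sketch: strip formatting noise so a diff shows substantive
    # changes only. Whitespace runs, wrapping, and blank lines are
    # collapsed; wording changes still surface.
    import difflib
    import re

    def normalize(text):
        flat = re.sub(r"\s+", " ", text).strip()
        # One sentence per line so the diff is sentence-granular.
        return [s for s in re.split(r"(?<=[.!?]) ", flat) if s]

    def substantive_diff(old, new):
        return "\n".join(difflib.unified_diff(
            normalize(old), normalize(new), lineterm=""))

    old = "The plan has three phases.\n\nPhase one   starts in March."
    new = "The plan has three phases. Phase one starts in April."
    print(substantive_diff(old, new))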

thatjoeoverthr•1h ago
“Any time saved by (their) AI prompting gets consumed by verification overhead, …”

This

When I receive a PR, of course it’s natural an AI is involved.

The mortal sin is the rubber stamp.

If they haven’t read their own PR, I only have so many warnings in me. And yes, it is highly visible.

SoftTalker•1h ago
This will all devolve to people submitting LLM work to people who can't tell (or don't care) that it's LLM work.
axus•1h ago
Why can't the plan be judged on its merits? Rigorous verification of the idea is a good thing that should happen anyways. The main potential problem I see is transmission of privileged information to a third party.

I assume they are working at a business to make money, not a school or a writing competition.

zephen•1h ago
The unstated elephant in the room is that you can't possibly know how much thought the originator has given to this.

You can't know if it has been reviewed and checked for minimal sanity, or just chucked over the fence.

So you have to fully vet it.

And, if you have to fully vet it, then what value has the originator added? Might as well eliminate their position.

xeckr•1h ago
>Might as well eliminate their position.

It's where we're headed.

dj_mc_merlin•1h ago
> The unstated elephant in the room is that you can't possibly know how much thought the originator has given to this.

You can just ask them if they reviewed it in detail.

recursive•1h ago
The problem comes from the asymmetry between the effort that goes into generating and the effort required to judge. You can have one person spinning out documents that keep a whole team busy and drag everyone down.

Along the same lines as "A lie travels around the globe while the truth is putting on its shoes."

dj_mc_merlin•1h ago
If the documents they're putting out are bad, then they're doing bad work, and that eventually comes with consequences from your coworkers and superiors. If they're doing good work, then great! Who cares if an LLM wrote most of it and they just edit it? That's not super different from the current relationship between senior and line workers.
andy99•1h ago
Because AI can generate meritless works far faster than anyone can judge their merits. Asking someone to read your AI thing is basically asking them to do the work for you. If you respect your colleagues' time, you should be sharing your best version of inputs, not raw material. Not only that, you should have thought about it and be able to defend it. If you throw some AI thing over the fence, you haven't thought about it either, so why would you expect your colleague to?

I'd add to that: long-form AI output is really bad and basically unsuitable for anything.

Something like "I got GPT to make a few bullet points to structure the conversation" is probably acceptable in some cases if it's short. The worst I can imagine is giving someone a "deep research" article to read as if that's different from sending them to Google.

axus•1h ago
Yes, I made the assumption that the person who "put the plan together" did their own due diligence and reviewed it before emailing, but maybe that is too charitable for an "AI plagiarist".

If someone sends me incomplete work, I will judge them for that; the history of the work relationship matters, and I didn't see it in the blog post.

tediousgraffit1•59m ago
This is a trust issue. If someone I trust hands me a big PR, I focus on the important details. If someone I don't trust hands me a big PR, I just reject it and ask them to break the problem down further. I don't waste my time on this kind of thing, regardless of whether it was hand-written or generated.
acedTrex•1h ago
Because judging something on its merits is intrinsically tied to judging the underlying amount of effort that was put into it.
ben_w•1h ago
> Why can't the plan be judged on its merits? Rigorous verification of the idea is a good thing that should happen anyways.

Situational.

I don't know this blogger or what the plan involved; but for the sake of argument, let's say it was a business plan, and let's say in isolation it's really good, 99.9% chance of success with 10x returns kind of good.

Everyone in whatever problem space this is probably just got the same quality of advice from their own LLM prompting. That 99.9% is no longer "in isolation"; it becomes a correlated failure, where all the other people doing the same thing as you make it less viable.

That's a good reason not to use a public tool, even when the output is good.

Correlated risk disguised as uncorrelated risk was a big part of the global financial crisis in the late 00s.

unyttigfjelltol•1h ago
So many technologists offended at the use of technology. Next they’ll insist on pen-on-paper for truly authentic work product, and after that, 3 days’ wilderness meditation on it, to prove you really internalized it.

Look, it's now like email in 2004. You see that spam has found email. It doesn't mean you refuse to interact with anyone by email, or write Geocities posts mocking email users. You just acknowledge that the technology (email) can be used for efficiency and results, and that it can also be misused as a giant time-waster.

The author of the article here is basically saying "technology was used = work product is trash". The "spam" these folks are seeing must be horrible to evoke this kind of condemnatory response.

bakugo•1h ago
> Why can't the plan be judged on its merits?

Because of the difference in effort involved in generating it vs effort required to judge it.

Why are you entitled to "your" work being judged on its merits by a real human, when the work itself was not created by you, or any human? If you couldn't be bothered to write it, why should someone else be bothered to read it?

btilly•1h ago
There is knowing, and then there is knowing.

For example suppose that someone likes to work in Markdown using VSCode. To get the kind of Word document that everyone else expects, you just copy and paste into Word. AI isn't involved, but it will look exactly like AI to you.

And there are more complicated hybrids. For example, my wife has a workflow where everything that she does, communications and so on, winds up in Markdown in Obsidian. She adds information about who was at the meeting, including basic research into them done by an agent (company directory, title, LinkedIn, and so on; all good to know for someone working in sales). Her AI assistant then extracts bullet points, cross-references, and so on. She uses those to create summaries that she references whenever she goes back to that project. And if someone wants to know what has happened or is currently planned for that project, AI extracts that from the same repository.

There's lots of AI in this workflow. But the content and thought are mostly from her (with facts from searches that an agent did). The fact that she's automated a lot of her organizational scutwork to an AI doesn't make the output "AI slop".

ArcHound•1h ago
> If you know in your heart of hearts that you didn’t put the work in, you’re undermining the social contract between you and your reader.

There's been a lot of social contract undermining lately. Does anyone know of anything that can be done to try to revert it? A social contract of "F you, I got mine" isn't very appealing to me, but that seems to be the current approach.

jdashg•1h ago
We literally have to be willing to get taken advantage of sometimes, and we have to come down hard on the "don't hate the player, hate the game" f-you-got-mine assholes.

It is not weakness, but strength, to make yourself (reasonably!) vulnerable to being taken advantage of. It is not strength, but weakness, to let bad behavior happen around you. You don't have to do everything, but you have to do something, or nothing changes.

We gotta spend less time explaining away (and tacitly excusing) bad behavior as unfortunate game theory, and more time coming down hard on people who violate trust.

Ante trust gladly, but come down hard on defectors.

zephen•57m ago
Upvoted because this is true, but we need to establish coping mechanisms for this.

For example:

"Sorry, yes, I know the report is due tomorrow, but I don't have time to review it again because I wasted 2 hours on the first version."

or

"I found these three problems on the first page and stopped reading."

What else?

ArcHound•30m ago
Consider this situation: security review before a project go-live.

I have never seen this team before and I'll "never" see this team after the fact. They might be contracted externally, they might leave before the second review.

Let's say I can suss out people doing this. I don't have the option of giving them the benefit of the doubt, and they have the motivation to trick me.

I guess I've answered my own question a bit, such an environment isn't built to foster trust at all.

jvanderbot•1h ago
If I discover you fed me AI output, directly from AI, it really makes me wonder what you are doing here. What did you add to this equation when I could have done it myself?

At least a "Generated by AI, reviewed and edited by xyz" tag would be some indicator of effort and accountability.

It may not be wrong to use AI to generate things whole cloth, but it definitely sidesteps something important and calls into question the "prompter's" contributions to the whole thing.

Sharlin•1h ago
When it comes to LLMs, the only thing I hate more than the "I don't know, the AI wrote it" people is the "I wrote this" crowd. No you didn't, you asked someone else to write it. If you couldn't claim copyright for it in an IP court, you did not write it. Period.
zdragnar•1h ago
Has this actually been tried? Plenty of people have released AI-generated (in part or nearly in whole) media as their own, especially in music and fiction.

Personally, I'd love to see most of this stuff disappear from services that advertise it on par with human-generated media, like Spotify and Amazon (though I'll also admit to having a soft spot for the soul-style AI covers of 50 Cent and others).

zephen•1h ago
> Has this actually been tried?

Yes, Thaler v. Perlmutter.

I'm pretty sure, even though that's recent, that it fully comports with decades old law on patents, as well.

I can't find an older case, but Thaler v. Vidal is a recent patent case.

lbrito•1h ago
>Regardless of their intent I realised something subtle had happened. Any time saved by (their) AI prompting gets consumed by verification overhead, the work just gets passed along to someone else – in this case me.

This is _exactly_ how I feel. Any time saved by precooking a "plan" (typically half-baked ideas) with AI isn't really time saved; it is a transfer of work from the planner to whoever is going to implement the plan.

eric-p7•1h ago
"Chat, expand these 3 points into 10 pages."

Later, at someone else's desk:

"Chat, summarize these 10 pages into 3 points."

teeray•1h ago
> So it’s definitely AI. I felt betrayed and a little foolish. But why?

Because the prompter is basically gaslighting reviewers into doing work for them. They put their marks of authorship on the AI slop when they've barely looked at it at all, which convinces the reviewer to look. When the comments come back, they pump the feedback into the LLM, more slop falls out, and around we go again. The prompter isn't really doing work at all; the reviewers are.

zephen•44m ago
Not sure why this is being downvoted. It accurately and succinctly describes a likely reason for a _feeling_.
mlhpdx•1h ago
Is writing with LLM assistance that different than writing with a typewriter 100+ years ago? Than using a computer and printer 30 years ago?

Each can be seen as using a tool to add false legitimacy. But ultimately they are just tools.

QuercusMax•1h ago
Those two things aren't even comparable. Both of those are using technology to physically imprint letters onto the page, but in both cases those are still your own ideas in your own words.
zephen•1h ago
Yes, it's different.

All these tools provide leverage to the author, but only one of these tools provides non-deterministic leverage.

umanwizard•1h ago
Yes, it is obviously fundamentally different.
ascendantlogic•1h ago
The content of the document matters too. I don't really care if someone was AI-assisted writing a project plan. As long as it's sane and clear I'm not gonna lose sleep over that. However for my performance review I definitely want my manager to put in the effort and actually tell me nuanced thoughts on my performance. I don't want AI output for that part.
SoftTalker•1h ago
Wait until you find out that most managers write feedback using copy/paste boilerplate with maybe a few tweaks to personalize it. And this was happening long before LLMs.
ascendantlogic•59m ago
Oh, I'm well aware. When I was an EM for a bit last year, a bunch of colleagues told me they used ChatGPT to write their reviews. It was gross, and I always hand-crafted small-batch artisanal reviews when I was in the manager's chair.
a1j9o94•1h ago
I know I'm an outlier on HN, but I really don't care if AI was used to write something I'm reading. I just care whether or not the ideas are good and clear. And if we're talking about work output, 99% of what people were putting out before AI wasn't particularly good. And in my genuine experience, AI's output is better than things people I've worked with would spend hours and days on.

I feel like more time is wasted trying to catch your coworkers using AI vs just engaging with the plan. If it's a bad plan, say that and make sure your coworker is held accountable for presenting a bad plan. But it shouldn't matter if he gave 5 bullets to ChatGPT that expanded it to a full page with a detailed plan.

skwirl•1h ago
>But it shouldn't matter if he gave 5 bullets to ChatGPT that expanded it to a full page with a detailed plan.

The coworker should just give me the five bullet points they put into ChatGPT. I can trivially dump it into ChatGPT or any other LLM myself to turn it into a "plan."

dj_mc_merlin•1h ago
If ChatGPT can make a good plan for you from 5 bullet points, why was there a ticket for making a plan in the first place? If it makes a bad plan then the coworker submitted a bad plan and there's already avenues for when coworkers do bad work.
poemxo•54m ago
How do you know the coworker didn't bully the LLM for 20 minutes to get the desired output? It isn't often trivial to one-shot a task unless it's very basic and you don't care about details.

Asking for the prompt is also far more hostile than your coworker providing LLM-assisted word docs.

mrisoli•36m ago
I feel the same way. If all one is doing is feeding stuff into an AI without doing any actual work themselves, just include the prompt and workflow that got the AI to spit this content out; it might be useful for others learning how to use these LLMs, and it shows the train of thought.

I had a coworker schedule a meeting to discuss the technical design of an upcoming feature. I didn't have much time, so I only checked the research doc moments before the meeting: it was 26 pages long with over 70 references, of which 30+ were Reddit links. This wasn't a huge architectural decision, so I was dumbfounded; it seemed he had barely edited the document to his own preferences. The actual meeting was maybe the most awkward meeting I've ever attended, as we were expected to weigh in on the options presented but no one had opinions on the whole thing, not even the author. It was just too much of an AI document to even process.

a1j9o94•19m ago
Honestly, if you have a working relationship and communication norms where that's expected, I agree: just send the 5 bullets.

In most of my work contexts, people want more formal documents with clean headings and titles, and detailed risks, even if they're the same risks we've put on every project.

meowface•1h ago
Ever since some non-native-English-speaking people within my company started using LLMs, I've found it much easier to interact and communicate with them in Jira tickets. The LLM conveys what they intend to say more clearly and comprehensively. It's obviously an LLM that's writing but I'm overall more productive and satisfied by talking to the LLM.

If it's fiction writing or otherwise an attempt at somewhat artful prose, having an LLM write for you isn't cool (both due to stolen valor and the lame, trite style all current LLMs output), but for relatively low-stakes white collar job tasks I think it's often fine or even an upgrade. Definitely not always, and even when it's "fine" the slopstyle can be grating, but overall it's not that bad. As the LLMs get smarter it'll be less and less of an issue.

mystifyingpoi•59m ago
> I just care whether or not the ideas are good and clear

That's the thing. It actually really matters whether the ideas presented are coming from a coworker or from an LLM.

I've seen way too many scenarios where I ask a coworker whether we should do X or Y, and all I get is a useless wall of spewed text, with complete disregard for the project and the circumstances at hand. I need YOUR input, from YOUR head, right now. If I could ask Copilot, I'd do that myself, thanks.

a1j9o94•16m ago
I would argue that's just your coworker giving you a bad answer. If you prompt a chatbot with the right business context, look at what it spits out, and layer in your judgement before you hit send, then it's fine if the AI typed it out.

If they answer your question with irrelevant context, then that's the problem, not that it was AI

amarant•48m ago
Agreed! I've reached the conclusion that a lot of people have completely misunderstood why we work.

It's all about the utility provided. That's the only thing that matters in the end.

Some people seem to think work is an exchange of suffering for money, and omg some colleagues are not suffering as much as they're supposed to!

The plan(or any other document) has to be judged on its own merits. Always. It doesn't matter how it was written. It really doesn't.

Does that mean AI usage can never be problematic? Of course not! If a colleague feeds their tasks to an LLM, never does anything to verify quality, and frequently submits poor-quality documents for colleagues to verify and correct, that's obviously bad. But think about it: a colleague who submits poor-quality work is problematic regardless of whether they wrote it themselves or had an AI do it.

A good document is a good document. And a bad one is a bad one. Doesn't matter if it was written using vim, Emacs or Gemini 3

juujian•1h ago
> But if you ship it and people use it, you’ve created an implicit promise: that you can maintain, debug, and extend what you’ve built. If AI assembled it and you can’t answer basic questions about how it works, you’ve misled users about what they can depend on.

Agree with the premise but this part is off. When I find a project online, I assume it will be abandoned within a year unless I see evidence of a substantive team and/or prior long-term time investments.

meowface•1h ago
I will admit to being an LLM workslopper. I don't ever send anything written by an LLM (because anyone who's seen enough LLM writing will recognize it's an LLM) without rewriting it by hand first - with exceptions for parts of READMEs - but for any other task it's pretty much 100% LLM.

I look at the output and ask it to re-re-verify its results, but at the end of the day the LLM is doing the work and I am handing that off to others.

karaterobot•1h ago
I'm sometimes asked to produce meaningless 30-page documents that nobody ever reads. I mean literally nobody, since I can see the history of who has accessed it. Me and a proof-reader, and occasionally someone will open it up to check that it exists. But nobody reads them, let alone reads them closely. Not the distant funder who added it as a line-item requirement to their grant (their job is adding line items to grants, not reading documents), nor the actual people involved in the project, who don't have time to read a meaningless document, and don't need to. It's of use to no one, it's just something that must be done because we live in a stupid world.

I've started having AI write those documents. Each one used to take me a full week to produce, now it's maybe one day, including editing. I don't feel bad about it. I'm ecstatic about it, actually; this shouldn't be part of my job, so reducing its footprint in my life is a blessing. Someday, someone will realize that such documents do not need to exist in the first place, but that's not the world we live in right now, and I can't change it. I'm just glad AI exists for this kind of pointless yeoman's work.

zephen•31m ago
It's like burning fuel to till the soil so you can plant corn to make ethanol.

Almost an inverse Kafka universe; there are tools that can empower you to work the system in such a way that the effects of the externalities are very diffuse.

Still not good, but better than a typical Catch-22.

macrael•1h ago
I think it quickly needs to become good manners to indicate when text was written by AI rather than a person. I read that text differently and I shouldn't have to spend my time guessing.
patrickmay•53m ago
A footnote with the prompt used would be even more polite. Then I can just read that and skip the generated text.
potsandpans•1h ago
It will probably be unpopular here, where people appear to have drawn the lines and formed unyielding positions, but...

The whole LLM paranoia is devolving into hysteria. Lots of finger-pointing without proof, lots of shoddy evidence put forward, and points that miss the nuance.

My stance is this: I don't really care whether someone used an LLM or wrote it themselves. My observation is that in both cases people were mostly wrong and required strict reviews and verification, with the exception of those who did Great Work.

There are still people who do Great Work, and even when they use llms the output is exceptional.

So my job hasn't changed much, I'm just reading more emojis.

If you find yourself becoming irrationally upset by something you're encountering that's largely outside of your control, consider going to therapy rather than forming a borderline obsession with purity about something that has always been a bit slippery (creative originality).

SoftTalker•1h ago
If I find an emoji in a work document I'm rejecting it without further review.
zephen•40m ago
> My observation is that in both cases people were mostly wrong and required strict reviews and verification, with the exception of those who did Great Work.

Sure, but LLMs allow people to be wronger faster now, so they could conceivably inundate the reviewer with a new set of changes requiring a new two hour review, by only pressing buttons for two minutes.

> If you find yourself becoming irrationally upset by something that you're encountering that's largely outside of your control, consider going to therapy and not forming a borderline obsession with purity on something that has always been a bit slippery (creative originality ).

Maybe your take on it is slightly different because your job function is somewhat different?

I assume that many people complaining here about the LLM slop are more worried about functional correctness than creative originality.

potsandpans•27m ago
If it's important to the argument, my title is "Principal Software Engineer MTS". I review code, ADRs, meeting summaries, design docs, PRDs etc...

> I assume that many people complaining here about the LLM slop are more worried about functional correctness than creative originality.

My point is, I've been in the game for coming up on 16 years, mostly in large corporate FAANG-adjacent environments. People have always been functionally incorrect and not to be trusted. It used to be a meme said with endearment, "don't trust my code, I'm a bug machine!" Zero trust. That's why we do code reviews.

> Sure, but LLMs allow people to be wronger faster now, so they could conceivably inundate the reviewer...

With respect, "conceivably" is doing a lot of work here. I don't see it happening. I see more slop code, sure. But that doesn't mean I _have_ to review it with the same scrutiny.

My experience thus far has been that this is solved quite simply: After a quick scan, "Please give this more thought before resubmitting. Consider reviewing yourself, take a pass at refining and verify functionality."

> Maybe your take on it is slightly different because your job function is somewhat different?

> I assume that many people complaining here about the LLM slop are more worried about functional correctness than creative originality.

Interestingly, I see the opposite in the online space. First of all, as an aside, I don't see many people complaining at all in real life (other than the common commiseration of getting slop PRs, which has replaced the common commiseration of getting normal PRs of sub-par quality).

I primarily see people coming to the defense of human creativity and becoming incensed by reading (or, I should say, "viewing" more generally) something that an LLM has touched.

It appears that people have mostly accepted that LLMs are a useful tool for producing code, and that when used unethically (first-pass LLM -> production), of course they're no good.

There is a moral outrage and indignation that I've observed, however (on HN and elsewhere), when an LLM has been used for the creative arts.

acedTrex•1h ago
There is nothing worse than this feeling: like, fantastic, now I have to go read through this slop with incredible care and attention to minutiae. I may as well not read the slop and go redo all the work/thought myself; it will be easier that way.
turnsout•1h ago
Just a hot take, but if you ask someone to complete a rote task that AI can do, you should not be surprised when they use AI to do it.

The author does not mention whether the generated project plan actually looked good or plausible. If it did, where is the harm? Just that the manager had their feelings hurt?

ecshafer•1h ago
Let's steelman this:

1. If the output is solid, does it matter?

2. The author could simply have done the research, created the plan, and then given an LLM the bullet points of that research and told it to "make this into a presentable plan". The author does the heavy work and actually does the creative work, and outsources the manual formatting to the LLM. My wife speaks English as a second language; she much prefers telling an LLM what she is trying to say and having it generate a business-friendly email rather than writing it herself and letting in grammatical mistakes.

3. If I were to write a paper in my favorite text editor and then put it through pandoc to generate a Word doc, it would do the same thing.

phyzome•57m ago
How can you tell the output is solid?

The creation of a plan also implies that some work has gone into making sure it's a good one. That's one human (the author) asserting that it's solid. But now you're not even sure if that one vote exists.

EGreg•1h ago
I think that we should have revision control for intermediate stages - for code, documents, even paintings. So we can at least have some idea of provenance, how it's made.

Until AI is used to fake that, too.
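
As a minimal sketch of what lightweight provenance could look like (hypothetical, not any existing standard): a hash-chained draft log, where each intermediate save commits to everything before it, so a document that appears fully formed in a single step is visible as exactly that:

    # Hypothetical sketch: each saved draft is chained to the previous
    # one by hash, so the shape of the history (many small steps vs.
    # one fully-formed dump) is itself tamper-evident.
    import hashlib
    import json
    import time

    def save_draft(log, content, note=""):
        prev = log[-1]["hash"] if log else ""
        log.append({
            "time": time.time(),
            "note": note,
            "prev": prev,
            "hash": hashlib.sha256((prev + content).encode()).hexdigest(),
        })

    log = []
    save_draft(log, "outline: three bullet points", "first pass")
    save_draft(log, "outline plus rough intro paragraph", "expanded intro")
    print(json.dumps(log, indent=2))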

zephen•46m ago
There are some issues with this.

1) For things made with LLMs:

1a) The fact that older versions aren't online forever. You literally might never be able to put the original prompt in and get the same result.

1b) A minor change in input prompt can result in a huge output change, rendering the original prompt practically meaningless, especially if modifications were required for the output of the LLM.

2) For things made the old-fashioned way, most history is boring and not useful. The best git repos have carefully curated history, with cohesive change sets that are both readable, and usable when bisecting the commit history for regressions.

tediousgraffit1•1h ago
Nowhere in here does it indicate that the generated plan was wrong or broken. I don't care if you use AI to write. I care if you write well. If the author trusted the other person, then it shouldn't matter. If the author didn't trust the other person, then they'd have to validate their output anyway. Granted, the tech allows people I don't trust to generate a lot more BS, a lot faster. But I just reject and move on with my life in that case. I am no AI booster, but a lot of people are expressing distaste for tools when they should be expressing distaste for fools.
zephen•1h ago
> Granted, the tech allows people I don't trust to generate a lot more BS, a lot faster. But I just reject and move on with my life in that case.

But even a rejection is work. So if they're generating more BS faster, they are generating more work for you. And, in some organizations, they will receive rewards for occasionally pressing buttons and inundating you with crap.

> a lot of people are expressing distaste for tools when they should be expressing distaste for fools.

I'm pretty sure that the original article, and most of the derogatory comments here, are expressing distaste for fools rather than tools. Specifically, tool-using fools.

GMoromisato•59m ago
It's just that AI gives fools more power.

It used to be that a well-written document was a proof-of-work that the author thought things through (or at least spent some time thinking about it).

I'm all for AI--I use it all the time. But I think our current style of work needs to change to adapt to both the strengths and weaknesses of AI.

rbbydotdev•1h ago
I would likely feel betrayed in a situation like this, but _sometimes_ there may be a sentence or two capable of being more succinctly expressed via AI. This has happened to me personally: I have written something, come to a "tip of the tongue" moment, then had AI help me express it.

When used right, ideas can be distilled, not extrapolated into slop. -- So maybe it's not ALL BAD?

I propose a new quotation system, a triple-quote marker, to disclose text written or assisted by AI:

'''You are absolutely right'''

samamou•1h ago
Google Docs document history can also be turned off: https://support.google.com/docs/answer/7378739?hl=en&co=GENI...
GMoromisato•1h ago
Before AI, if someone submitted a well-formatted, well-structured document, we could assume they spent a lot of time on it and probably got the substance right. It's like the document is a proof-of-work that means I can probably trust the results.

Maybe we need a different document structure--something that has verification/justification built in.

I'd like to see a conclusion up front ("We should invest $x billion on a new factory in Malaysia") followed by an interrogation dialogue with all the obvious questions answered: "Why Malaysia and not Indonesia?", "Why $x and not $y billion?", etc.

At that point, maybe I don't care if the whole thing was produced by AI. As long as I have the justification in front of me, I'm happy. And this format makes it easy to see what's missing. If there's a question I would have asked that's not in the document, then it's not ready.

isodev•1h ago
Once, I had a very frustrating Slack chat with a fellow developer. We were discussing edge cases for a new feature, and the experience from my perspective was that for each of my messages, I'd get an "in case of … how about …" style reply. The topic was focused on iOS vs. Android app lifecycle. Every now and then my colleague would suggest APIs or events that simply don't exist.

This was before vibe coding, around the days of GPT 3.5. At the time I just thought it was a challenging topic and my colleague was probably preoccupied with other things so we parked the talk.

A few weeks later, while exploring ways to use GPT for technical tasks, I suddenly remembered that Slack chat and realised the person had been copy-pasting my messages to GPT and back. I really felt bad at that moment, like… how can you do this to someone…? It's not bad that you try tools to find information or whatever, but not disclosing that you're effectively replacing your agency with that of a bot is just very suboptimal and probably disrespectful.

teaearlgraycold•48m ago
Anyone doing this should be fired. Both for the lack of trust they bring to the team but also because they’re just making themselves a middle man to an LLM. Why not cut out the middle man?
zephen•34m ago
People who make things don't make any money.

People who claim that they are disrupting with disintermediation, but actually simply replace the old intermediary with their own?

Those people get filthy rich.

People who _should_ be making things but are trying this intermediation technique themselves will most likely find that it's like other forms of lying. Go big or go home.

kipple•1h ago
Similar article: https://distantprovince.by/posts/its-rude-to-show-ai-output-...

> My own take on AI etiquette is that AI output can only be relayed if it's either adopted as your own or there is explicit consent from the receiving party.

kazinator•46m ago
Plans usually start as short lists of ideas without a lot of detail. As people discuss and agree on things, choices get decided upon. The branches of the "search tree" that are not taken get pruned away, and detail is added to the path taken.

If someone just generates an incredibly detailed plan in one go, that destroys the process. Others are now wasting time looking at details in something that may not even be a good idea if you step back.

The successive refinement flow doesn't preclude consideration of input from AI.

ln809•45m ago
Many responses are (correctly) identifying that edit history is not a reliable "tell", but this misses the broader point of the original article.
uragur27754•20m ago
I recently joined a project where the manager cheerfully told me that my new task was fully described and specified in a technical architecture doc. The manager left shortly after and considered my onboarding complete. It took the next couple of weeks to realize that the docs were AI-generated: surprisingly detailed and accurate, but largely irrelevant to the actual problem the client had.

I was later asked why it was taking so long to complete the task when the document had a step-by-step recipe. I had to explain why the AI was solving the wrong problem in the wrong place. The PMs did not understand and scheduled more meetings to solve the problem. All they knew was that tickets were not moving on the board.

I suddenly realized that nobody had any idea what was going on at a technical level. Their contribution was to fret about target dates and executive reports. It's like a pyramid scheme of technical ignorance. The consequence is some ICs being forced to do uncompensated overtime to actually make working software.

These are the unintended consequences of the AI hype that CEOs are evangelizing.