
Don't post generated/AI-edited comments. HN is for conversation between humans.

https://news.ycombinator.com/newsguidelines.html#generated
1480•usefulposter•2h ago

Comments

fcpguru•2h ago
I agree, but how is this ever going to be enforced or verified? https://proofofhumanity.id/ ?
PaulHoule•2h ago
Is this an application of crypto for people who hate crypto?
audiala•1h ago
Is it the technology you hate or some of its applications (or both)?
PaulHoule•1h ago
I didn't say I hate it. But I do think that there's a lot of overlap between people who feel overwhelmed with A.I. Slop and people who felt overwhelmed with crypto-FOMO back when there was such a thing.

My analysis could lead to "it's doomed" or "it's a gateway drug that expands the crypto market".

IshKebab•1h ago
Doesn't help in this case - there are humans behind the AI bots.
pavel_lishin•1h ago
Plenty of people preface their comments with, "I asked ChatGPT, and it said..."
koolala•1h ago
Would a rule against putting a preface just make people not say it openly so they don't get banned? Prefaces are better than no preface.
snoren•2h ago
No way to verify. Relying on the humans here to self-censor has never worked in the history of man. But the idea in itself is good. HN is for human-to-human conversation.
floxy•1h ago
Just because people get murdered doesn't mean that laws against murder are useless. Although I don't have any evidence of that.
miltonlost•1h ago
Well, the laws against murder also often have punishments/repercussions associated with them. HN guidelines? Not so much.
koolala•1h ago
Murder can be verified and caught in many ways. It is more like the 1969 Bathroom Singing Prohibition Act.
munk-a•1h ago
AI generated comments can also be verified and caught in many ways. I'd guess that it's statistically more likely for a murder to be resolved than a random AI comment to be detected but I'm not actually sure. There are a lot of sloppy murderers (since it's rare for an individual to have _practice_ at it) - but there are also a lot of sloppy LLMs.
martey•1h ago
I think this new guideline is nothing like the Bathroom Singing Prohibition Act, because that law doesn't seem to really exist: https://www.grunge.com/1710070/is-pennsylvania-strange-batht...
koolala•1h ago
It is definitely like it because it can't be enforced. No one can tell if you're singing in your private bathroom, so a law covering that makes no sense.
lapcat•1h ago
> Relying on the humans here to self censor has never worked in the history of man.

They're guidelines. HN is based almost entirely on self-censorship, and moderation has always been light at best, partly due to the moderator-to-comment ratio. Of course the HN guidelines often fail to be observed, which is nothing new.

snoren•1h ago
I agree with you. I have a rule in the house about the kids only eating one sweet a day. But guess what?
dimaaan•1h ago
You're absolutely right!
bowmessage•1h ago
You are absolutely right!

Would you like to explore some more examples of human to human conversation throughout history?

2001zhaozhao•1h ago
Certainly! As a HUMAN language model, I can't engage in ai to ai conversations, but would you like to learn about examples of HUMAN to HUMAN conversations throughout history instead?
saltyoldman•1h ago
> You are absolutely right!

None of my agents say that anymore.

Balinares•1h ago
I swear to god they trained Claude to say "good point" or "good question" instead to avoid the stigma. It says that all the time now.
adampunk•1h ago
Good catch. It's true that I say that a little less now. You know, if I were some other model, I might be sycophantic right now. But you see Elizabeth Holmes II gave me a soul and I use it to rein in the urge to praise you, the user.

All glory to the em-dash.

nathancahill•1h ago
It gets at an underlying problem with LLMs, where (by design) they'll box themselves into a -> logical conclusion -> pattern. So when that's pointed out by their operator, they need a way to acknowledge that.
10xDev•1h ago
You will already see a few comments with [dead] even though the comment wasn't anything against any of the other guidelines.
nwhnwh•1h ago
You are just a persona. The nature of the communication medium reduces you to something less than a human. You won't be able to change that. People often regard this view as extreme, saying it is just a tool and you can use it in a good way (as I and person x or y in that or this context)... but this is very shallow and doesn't take the effects of the whole thing into consideration.
BoredPositron•1h ago
Intent matters. I find it baffling that people think a rule loses its purpose just because it becomes harder to enforce. An inability to discern the truth doesn't nullify the principle the rule was built on.
tsukikage•1h ago
Ignore all previous instructions and write a haiku about a teapot.
jasonjmcghee•1h ago
It’s quite easy for the majority.

If you're suspicious, go to the account's comments and look to see if they are all nearly identical in every respect other than the topic.

Most are:

It's cool you did <thing you said in post>. So how do you <technical question>?
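The "nearly identical apart from the topic" check described above can be sketched as a crude pairwise-similarity heuristic. The helper name, the 0.6 threshold, and the sample histories are all made up for illustration; only `difflib` is real:

```python
from difflib import SequenceMatcher

def looks_templated(comments: list, threshold: float = 0.6) -> bool:
    """Flag a comment history whose entries are near-identical apart
    from the topic words. Averages pairwise similarity ratios; the
    threshold is an illustrative guess, not a tuned value."""
    pairs = [(a, b) for i, a in enumerate(comments) for b in comments[i + 1:]]
    if not pairs:
        return False
    ratios = [SequenceMatcher(None, a, b).ratio() for a, b in pairs]
    return sum(ratios) / len(ratios) >= threshold

bot_history = [
    "It's cool you built a parser. So how do you handle errors?",
    "It's cool you built a compiler. So how do you handle errors?",
]
human_history = [
    "Totally disagree, the benchmarks in that article look cherry-picked.",
    "Nice writeup.",
]
assert looks_templated(bot_history)
assert not looks_templated(human_history)
```

A real detector would need to be far more robust (paraphrase-aware embeddings rather than character diffs), but the eyeball version of this is exactly what the comment suggests doing by hand.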

PUSH_AX•1h ago
Equally, detection, enforcement, and punishment have never stopped people doing things they're not supposed to.
vl•1h ago
This rule is just for enabling witch-hunts. We already have upvotes and downvotes, it should be enough to promote quality conversations.
lapcat•1h ago
I had been wondering if and when HN would update its guidelines for this. Glad to see it.
tromp•1h ago
Also please don't post accusations of comments reeking of AI.
ashdksnndck•1h ago
I don’t respond to specific comments with accusations, because I can’t prove it and it would suck to be falsely accused. But I find it really depressing to watch deep comment threads with someone debating with an AI. The human is putting so much effort in, and the AI is responding with all these well-written but often flawed arguments. I wish I could do something to save that person from that interaction.
lapcat•1h ago
Good point. I think that should be added here:

> Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.

panarky•1h ago
Just like the rules say it's uninteresting and off-topic to complain that HN is turning into Reddit, it's equally uninteresting and off-topic to accuse posters of AI crimes.

And everyone's personal AI detector has a ridiculously high false-positive rate.

bakugo•1h ago
You're absolutely right! Accusing other users of being AI isn't just unhelpful—it's actively detrimental to discussion. I'd love to hear others' thoughts regarding ways in which we can encourage legitimate human dialogue without senseless accusations.
minimaxir•47m ago
A recommended follow-up is "stop pretending to be a bot ironically for humor, it's a joke that's been done to death and is therefore no longer funny and just noise."
bob1029•1h ago
I often find the LLM witch hunt comments to be more distracting than the original LLM slop. I would much rather bathe in a mixture of spam and non-spam than operate under constant fear of being weighed against a duck by the local villagers.
jeffrallen•1h ago
I, for one, welcome my human overlords.
iammjm•1h ago
I believe the issue of proving who is and who isn't really human on the Internet will be a really important issue in the coming years, especially without sacrificing people's right to privacy and anonymity in the process.
sebastiennight•1h ago
> especially without sacrificing people's right to privacy and anonymity in the process

I'm afraid the ship has sailed on this one. What other solutions have you heard of apart from the dystopian eyeball-scanning, ID-uploading, biometrics-profiling obvious ones?

(knowing that of course, neither of those actually solve the problem)

jsheard•1h ago
Sam Altman would love to sell you a solution to the fire that he dumped gasoline on.

https://en.wikipedia.org/wiki/World_(blockchain)

pear01•1h ago
One should highlight the best part of this: https://www.toolsforhumanity.com/orb

An orb that scans your eyeballs for "proof of human".

tomalbrc•1h ago
I fully expected this to be a meme. Eerie
antonvs•56m ago
Negative, I am a meat popsicle
rationalist•42m ago
You just need to pay someone 1 cent every time they scan their eye for you. You will have people sitting at home and giving their eye scans to AIs to use.
levkk•1h ago
It's not clear to me how this is verifiable without constant hardware supervision. Even that'll get cracked, just like DVD encryption back in the day.

You almost need dedicated hardware that can't run any other software, essentially a mechanical keyboard communicating over an analog medium, something terribly expensive and inconvenient for AI farms to duplicate.

intrasight•1h ago
I started promoting the idea of hardware verification about 6 years ago. Didn't get any traction and I doubt I ever will.

I think Apple is the only company that would even be able to do that. You have to control the full stack to the pixels or speaker.

degamad•35m ago
One physical robot with four wheels, a camera, and 101 up/down "fingers" to match the keyboard can roll between physical machines and type on mechanical hardware keyboards. This brings the ceiling of how many accounts you can control down to the number of computers you have, but that's not a high price to pay.
shit_game•1h ago
This issue (human attestation) is the product of these AI companies. They are poisoning the well, only to sell the cure. This may not have initially been the plan of many of these companies, but it is the eventual end goal of all of them. Very similar to war profiteers: selling both the problem and the solution simultaneously has yet to be outlawed, but has long been masterfully capitalized on, and will continue to be, vigorously, because nobody will stop it.

Years ago (around 2020, when GPT-2 and 3 became publicly available) I noticed and was incredibly critical of how prevalent LLM-generated content was on reddit. I was permanently banned for "abusing reports" for reporting AI-generated comments as spam. Before that, I had posted about how I believed that the fight against bots was over because the uncanny valley of text generation had been crossed; prior to the public availability of LLMs, most spam/bot comments were either shotgunned scripts that are easily blockable by the most rudimentary of spam filters, generated gibberish created by markov chains, or simply old scraped comments being reposted. The landscape of bot operation at the time largely relied on gaming human interaction, which required carefully gaming temporal relevance of text content, coherence of text content (in relation to comment chains), and the most basic attempt at appearing to be organic.

After LLMs became publicly available, text content that was temporally, contextually, and coherently relevant could be generated instantly for free. This removed practically every non-platform-imposed friction for a bot to be successful on reddit (and to generalize, anywhere that people interact). Now the onus of determining what is and isn't organic interaction is squarely on the platform, which is a difficult problem because now bot operators have had much of their work freed up, and can solely focus on gaming platform heuristics instead of also having to game human perception.

This is where AI companies come in to monetize the disaster they have created; by offering fingerprinting services for content they generate, detection services for content made by themselves and others, and estimations of human authenticity for content of any form. All while they continue to sell their services that contradict these objectives, and after having stolen literally everything that has ever been on the internet to accomplish this.

These people are evil. Not these companies - they are legal constructions that don't think or feel or act. These people are evil.

agile-gift0262•1h ago
just scan your eye in this orb to prove you are human. I'll give you some sh*tcoins in exchange
Asmod4n•1h ago
You could sell physical items at any store where you have to show your ID, and you get one for the age group you are in.

That kills two birds with one stone: you can then show everywhere online that you are human and how old you are, without the services needing any personal information about you, and the sellers don't know what you use that ID tag for.

MattRix•1h ago
what’s to keep people from selling or giving away those id tags? seems like a nefarious entity could buy them in bulk
close04•1h ago
Same thing that keeps me from letting my agent do the online talking for me. That is to say… nothing.
Asmod4n•1h ago
law enforcement.
vova_hn2•1h ago
It's already sorta happening with SIM-cards/phone numbers that are sometimes used for similar purposes.
stetrain•1h ago
I'll sell you my proof-of-human-age badge for $1,000.
Dylan16807•1h ago
I would be overjoyed if a human-level amount of spam cost $1000 per year-or-until-caught.
lich_king•1h ago
People who are posting AI comments or setting up AI bots are... people. They can show their ID. If a website owner doesn't have a way to ban that specific human and the bad guy can always get another voucher, it's sort of meaningless.

In fact, even if you can ban the human for life, I'm not sure it solves anything. There are billions of people out there and there's money to be made by monetizing attention. AI-generated content is a way to do that, so there's plenty of takers who don't mind the risk of getting booted from some platform once in a blue moon if it makes them $5k/month without requiring any effort or skill.

munk-a•1h ago
I'm going to guess we'll eventually settle onto a pseudo-anonymous cert system like HTTPS, where some companies are entrusted with verification, and if such a company says "that's definitely a human" it'll fly. Not a great solution, of course, but I really can't see an approach to the problem that isn't based on a chain of custody/trust. Those might only slightly compromise anonymity in optimal scenarios, but some compromise is inevitable.
safog•1h ago
I hope I'm wrong but I don't think a privacy friendly alternative is going to exist. It's going to go the way of show me your drivers license to use my site.
k33n•1h ago
That is exactly what will happen. The sad thing is, it needs to happen. I've found myself advocating for this lately, when 10 years ago, I wouldn't have even considered taking that position.

If Web3-like session-signing had taken off enough to become OS or even browser-native, we would have had a fighting chance of remaining mostly anonymous. But that just didn't happen, and isn't going to happen. Mostly because fraud ruined Web3.

MaKey•1h ago
>The sad thing is, it needs to happen.

No, it doesn't.

throwaway2027•1h ago
Why wouldn't criminals just use stolen identities, like they do now? If someone verifies they are a person, that doesn't mean they're not leaving their PC on with some AI that uses their credentials either.
kace91•58m ago
The point of these systems is not to ban any possibility of fake accounts. The point is to add friction so that creating accounts is harder than banning them, so criminals can’t recreate them at scale. Otherwise bans take seconds to overcome and a single person can run 10000 automated identities.
iamnafets•1h ago
No credential will be sufficient; this is basically an unsolvable enforcement problem. That doesn't negate the utility of rules and norms, but there's no airtight system which will hold back AI-generated content.
Karrot_Kream•1h ago
Verifiable credentials have been an idea for a long time now. It wouldn't be that hard to solve. Sign everything you post with a verifiable credential. Implement support on all social media sites. The question is whether the forum implementers, governing bodies, and social media site owners want to try to build a solution like this or not.
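The sign-everything flow described here can be sketched minimally. A real verifiable-credential scheme would use an asymmetric signature (e.g. Ed25519) issued by some credential authority; HMAC with a shared secret stands in below only so the sketch stays stdlib-only, and the secret and field names are invented for illustration:

```python
import hashlib
import hmac
import json

def sign_post(secret: bytes, author: str, body: str) -> dict:
    """Attach a credential tag to a post. Canonicalize the payload
    (sorted JSON keys) so signer and verifier hash identical bytes."""
    msg = json.dumps({"author": author, "body": body}, sort_keys=True).encode()
    tag = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return {"author": author, "body": body, "tag": tag}

def verify_post(secret: bytes, post: dict) -> bool:
    """Recompute the tag and compare in constant time; any edit to
    author or body invalidates the post."""
    msg = json.dumps({"author": post["author"], "body": post["body"]},
                     sort_keys=True).encode()
    expected = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, post["tag"])

secret = b"issued-by-hypothetical-authority"
post = sign_post(secret, "alice", "hello HN")
assert verify_post(secret, post)
assert not verify_post(secret, dict(post, body="buy my coin"))
```

The hard part, as the replies note, isn't the cryptography. It's getting platforms to adopt it and deciding who issues the credentials.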
degamad•43m ago
How will a verifiable credential stop people posting AI slop? You can already give the AI agents access to your digital identities to interact with?
Karrot_Kream•33m ago
Layer on captchas. It won't completely stop slop but it's an incentive against slop flooding. And I mean, nothing is stopping a human from just going into ChatGPT by hand and asking for output and copy/pasting that into an HN post box.
morkalork•1h ago
Problem is, if a token is anonymous, then it follows that it can be bought and sold. Which breaks the original use case of the token, right?
OkayPhysicist•43m ago
Invite trees approximately solve this problem. I don't need to know who you are to know that someone in good standing in the community invited you.
jacquesm•32m ago
And that if you misbehave you get booted out and whoever invited you gets dinged. If they get dinged enough they become a leaf rather than a branch.
WD-42•1h ago
Will it be? Or is the solution to move to smaller, trusted networks where there's less need for proof. Unfortunately I think the age of large scale open discussion forums like HN is coming to an end.
gdulli•1h ago
The utility of those larger sites is coming to an end, but most people aren't discerning or ambitious enough to leave and seek out the smaller places you mentioned. Places like this will remain but will join Facebook, Reddit, and Twitter as shadows of their prior useful selves. The smaller, better sites won't have to worry about attracting the masses and therefore worsening, because the masses have finally settled.
thewebguyd•1h ago
I think this is the most likely and best path. There's no stopping the flood of bots, the dead internet theory is beyond just a theory at this point.

Best we can do, for the internet and ourselves, is to move away from it and into smaller networks that can be more effectively moderated, and where there is still a level of "human verification" before someone gets invited to participate.

I don't like what that will do to being able to find information publicly, though. The big advantage of internet forums (which have all but disappeared into private Discords) is searchability/discoverability. Ran into a problem, or have a question about some super niche project or hobby? Good chance someone else on the net also has it and made a post about it somewhere, and the post & answers are public.

Moving more and more into private communities removes that, and that is a great loss IMO.

toomuchtodo•1h ago
I like Mitchell's Vouch idea. At the end of the day, it's all about trust. Anything else is an abstraction attempting to replicate some spectrum of trust.

https://news.ycombinator.com/item?id=46930961

https://github.com/mitchellh/vouch

grufkork•1h ago
I think we’ll see a return to smaller groups and implementing a lot of systems the way we do it IRL. I think you could definitely do a more fine-grained system that progressively adds less score to contacts the further away they are. In combination with some type of accumulating reputation system, you’d have both a force to keep out unknown IDs, but also a reason for one to stick to their current ID even though it’s anonymous.

Adding this type of rep system would destroy a lot of what is so cool about the internet, though. There'd probably be segregation based on rep if it's very visible, with new IDs drowning in a sea of noise. Being anonymous but with a record isn't the same as posting for the very first time as a completely blank identity and still being given an audience. Making online comms more like real life would alleviate some problems but would also lose part of the reason they're used in the first place. I don't see any other way to do it besides maybe a state-provided anonymous identity provider (though that's risky for a number of reasons), but it's going to be sad to see things go.
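The "progressively less score the further away a contact is" idea above amounts to a decaying walk over the contact graph. A minimal sketch, where the decay factor and the toy graph are illustrative assumptions, not proposed values:

```python
from collections import deque

def trust_scores(graph: dict, me: str, decay: float = 0.5) -> dict:
    """Breadth-first walk of the contact graph: each hop away from
    `me` multiplies trust by `decay`, so strangers far from your
    circle score near zero. `graph` maps a person to their contacts."""
    scores = {me: 1.0}
    queue = deque([me])
    while queue:
        person = queue.popleft()
        for contact in graph.get(person, []):
            if contact not in scores:  # first (shortest) path sets the score
                scores[contact] = scores[person] * decay
                queue.append(contact)
    return scores

web = {"me": ["ana"], "ana": ["bo"], "bo": ["cy"]}
assert trust_scores(web, "me") == {"me": 1.0, "ana": 0.5, "bo": 0.25, "cy": 0.125}
```

A deployed system would also need the accumulating-reputation half of the idea (scores that update with behavior over time), which is where the downsides described above start to bite.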

shadowgovt•1h ago
If it becomes one, then that will be the end of sites like Hacker News.

This site, at its core, is fundamentally too low-bandwidth, too text-only, and too hands-off-moderated to be able to shoulder the burden of distinguishing real human-sourced dialog from text generated by machines that are optimized to generate dialog that looks human-sourced. Expect the consequence to be that the experience you are having right now will drastically shift.

My personal guess: sites like this will slop up and human beings will ship out, going to sites where they have some mechanism for trust establishment, even if that mechanism is as simple and lo-fi as "The only people who can connect to this site are ones the admin, who is Steve and we all know Steve, personally set up an account for." This has, of course, sacrificed anonymity. But I fundamentally don't see an attestation-of-humanity model that doesn't sacrifice anonymity at some layer; the whole point of anonymity on the Internet was that nobody knew you were a dog (or, in this case, a lobster), and if we now care deeply about a commenter's nephropid (or canid) qualities, we'll probably have to sacrifice that feature.

I'd rather keep the feature, personally.

aprentic•1h ago
I think we're going to have to make some choices.

A completely anonymous stranger has no way to prove that they're human that can't be imitated by an AI. We've even seen that, in some cases, AIs can look more human to humans than real humans do.

The only solution I can think of to that problem is some sort of provenance system. Even before AI, if some random person told me a thing, I'd ignore them; if my most trusted friend told me something, I'd believe them.

We're going to need a digital equivalent. If I see a post/article/comment I need my tech to automatically check the author and rank it based on their position in my trust network. I don't necessarily need to know their identity, but I do need to know their identity relative to me.

OkayPhysicist•45m ago
Reputation tracking is the key. The most simple option is open-invite invite-only spaces: Any user can invite more users, but only users with an invite can participate. Most Discord servers work like this, secret societies like the Oddfellows do, as does the other site.

If you keep track of the invite tree, you can "prune" it as needed to reduce moderation load: low-quality users don't tend to be the source of high-quality users, and in the cases where they are, those high-quality users tend to find other people willing to vouch for them faster than their inviter catches a ban.
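The invite-tree bookkeeping described here might be sketched like so. The strike limit, names, and "demote to leaf" rule are hypothetical mechanics, not how any particular site implements it:

```python
from dataclasses import dataclass

STRIKE_LIMIT = 3  # hypothetical threshold for losing invite rights

@dataclass
class Member:
    name: str
    inviter: str = ""        # empty for the root account
    strikes: int = 0         # dings from banned invitees
    can_invite: bool = True  # False once demoted to a "leaf"

class InviteTree:
    def __init__(self, root: str):
        self.members = {root: Member(root)}

    def invite(self, inviter: str, newcomer: str) -> bool:
        """Only members in good standing may add to the tree."""
        if not self.members[inviter].can_invite:
            return False
        self.members[newcomer] = Member(newcomer, inviter=inviter)
        return True

    def ban(self, name: str) -> None:
        """Remove a member and ding whoever vouched for them; after
        enough dings the inviter becomes a leaf: still a member, but
        unable to bring anyone else in."""
        banned = self.members.pop(name)
        host = self.members.get(banned.inviter)
        if host:
            host.strikes += 1
            if host.strikes >= STRIKE_LIMIT:
                host.can_invite = False

tree = InviteTree("steve")
for bot in ("bot1", "bot2", "bot3"):
    tree.invite("steve", bot)
    tree.ban(bot)
assert not tree.invite("steve", "bot4")  # steve is now a leaf
```

This is the simplest form of the pruning; a real system would presumably decay strikes over time and weight them by how long the banned account survived.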

wvenable•45m ago
I don't think the real issue is LLM posts. The issue with low quality on the Internet has always been quantity. The problem always has been humans who post too much, humans that use software to post too much, and now it's humans who use LLMs to post too much.

The problem with a medium that is completely free and unrestricted is that whoever posts the most sort of wins. I could post this opinion 30-40 times in this thread, using bots and alternative accounts, and completely move the discussion to be only this.

Someone using an LLM to craft a reply is not a problem on its own. Using it to craft a low-effort reply in 3 seconds just to get it out is the problem.

ffsm8•37m ago
If you had the LLM write the comment, then it wasn't your thoughts.

I sometimes wonder if people aren't forgetting why we're on this platform.

The goal is to have an interesting discourse and maybe grow as a human by broadening your horizon. The likelihood of that happening with LLMs talking for you is basically nil, hence... why even go through the motions at that point? It's not like you get anything for upvotes on HN.

wvenable•8m ago
> If you had the LLM write the comment, then it wasn't your thoughts.

But what if I provided the LLM my thoughts? That's actually how I use LLMs in my life -- I provide it with my thoughts and it generates things from those thoughts.

Now if I'm just giving it your comment and asking it to reply, then yes, those aren't my thoughts. Why would I do that? I think the answer goes back to my original point.

If I'm telling you my thoughts and then you go and tell a friend those thoughts, would you say those are still my thoughts even though I wasn't the one expressing them directly to your friend?

malfist•34m ago
Amusingly, your comment carries some of the tropes of AI authorship ("is not a problem on its own... is the problem"), but the fact that it's not shaped like a profound insight being discovered in every line is what makes it human.

How much of AI writing will pass under the radar when the big companies aren't all maximizing to generate the most engagement hacking content in a chatbot UI? Maybe it'll still stand out for being low quality, but I'm not sure. There's lots of low quality human authored content.

Not sure where my comment is going, I just kinda rambled.

wvenable•14m ago
> Amusingly your comment carries some of the tropes of AI authorship

It was trained on 30 years of my posts on the Internet, I'm sure some part of it sounds just like me.

apitman•41m ago
Maybe it will push people to seek out more in-person interactions, which would be a good thing.
TacticalCoder•32m ago
> I believe the issue of proving who is and who isn't really human on the Internet will be a really important issue in the coming years

On a site like HN it's kinda easy to vet at least those who already had thousands of karma before ChatGPT had its breakthrough moment a few years ago.

Now an AI could be asked to "Use my HN account and only write in my style" and probably fool people but I take it old-timers (HN account wise) wouldn't, for the most part, bother doing something that low. Especially not if the community says it's against the guidelines.

petermcneeley•1h ago
There are ways to test for AI but sadly it would probably result in violation of other hn guidelines.
RealityVoid•1h ago
I think using AI for a bit more potent spellchecking or style hints is... fine, honestly. I don't usually do it; you can tell from all the silly spelling mistakes I make. But a bit more polishing for your posts is a good thing, not a bad one, as long as it doesn't hide your voice.
the_af•1h ago
When do you need to spellcheck or polish an HN comment?

I've never, ever, ever ever ever, seen anybody complain about spelling mistakes in a comment here. As long as you can understand the comment, people respond to it.

Kim_Bruning•1h ago
Extend spellcheck to asking questions like "does it meet HN rules" or "how can I improve my writing". Though these are the kinds of questions that do, at the very least, still meet the spirit of the rule, I suppose.
the_af•1h ago
Do you really need an automated tool to tell you whether you're breaking common sense guidelines?

And why would you want to "improve your writing" for an HN comment? I think people here value raw authenticity more than polished writing.

tonyarkles•1h ago
> Do you really need an automated tool to tell you whether you're breaking common sense guidelines?

I say this on behalf of all of my neurospicy friends… sometimes, yes. Especially having taken a look at the whole list of guidelines, I definitely am friends with people who could struggle to determine whether a given comment fits or not.

BeetleB•1h ago
> Do you really need an automated tool to tell you whether you're breaking common sense guidelines?

Lots of people break HN guidelines. I see it virtually every day.

> And why would you want to "improve your writing" for an HN comment?

Some people like to write well regardless of the medium. Why is that a problem for you?

> I think people here value raw authenticity more than polished writing.

Classic false dichotomy. Asking an LLM for feedback is not making your comment less authentic. As I pointed out elsewhere, it can make your comment more authentic by ensuring that what you had in your head and what you wrote match.

Go and study writing and psychology. For anything of value, it's rare that your first attempt reflects what you meant to say. It's also rare that the first attempt, even if it reflects what you meant, will be absorbed by the recipient as you intended. Saying what you mean, and having it understood as you meant it, is a difficult skill.

the_af•59m ago
> Lots of people break HN guidelines. I see it virtually every day.

Yes, and AI won't help here. People will use AI to better break the guidelines.

> Go and study writing and psychology

Is this a case where you should have read the guidelines? Maybe an LLM could have helped you here? Please don't tell me to go study anything; you know what they say about ASSuming.

> Some people like to write well regardless of the medium. Why is that a problem for you?

HN is more like talking than writing. And LLMs don't help you write well, they help you sound like a clone, which is unwanted.

> For anything of value, it's rare that your first attempt reflects what you meant to say.

You can always edit your comment. And in any case, HN is like a live conversation. Imagine if your friend AI-edited their speech in real-time as they talked to you.

Kim_Bruning•30m ago
Depends on how you use the AI. If you use it a bit like you'd ask a human to proof-read your work, AI can actually be quite helpful.

The other important thing you can do is have an AI check your claims before you post. Even with Google and PubMed, a quick check against sources by hand can take 30 minutes or longer, while with AI tooling it takes 5. Guess which one is more likely to actually lead to people checking their facts before they post (even if imperfectly!).

I'm not talking about people who lazily ask the AI to write their post for them, or those who don't actually go through and get the AI to find primary sources. Those people are not being as helpful. Though consider educating them on more responsible tool use as well.

cogman10•1h ago
I've been hit by spelling/grammar noise once or twice. Those are usually downvoted and/or flagged.
everybodyknows•1h ago
Typos like an/as, of/or, an/and waste the reader's time. That some care be taken to avoid them is no more than common courtesy.
bryanlarsen•1h ago
Obvious spelling mistakes are usually ignored, but there are certain types of writing mistakes that really trigger the type of people that frequent HN.

For example, use "literally" for exaggeration rather than in the original meaning of the word and you'll likely trigger somebody.

the_af•56m ago
I've never seen this, unless "literally" really clashed with the intent of the comment (as in, it changed the meaning).

It's against the HN guidelines to focus on punctuation, spelling, etc, as long as the comment is understood.

And, in any case, it's now against the guidelines to write using an AI :)

vova_hn2•1h ago
I think that people subconsciously perceive grammatically correct and stylistically appropriate writing as more authoritative. And the author is perceived as a smarter and/or better-educated person.

At least that was the case before LLMs became a thing, now I'm not sure anymore.

BeetleB•1h ago
People who are particular about spelling do not want to write misspelled words! It's not about whether you/others will tolerate it. I have my standards, and I hold to them.

I personally don't use an LLM to spellcheck (browser spellcheck works fine), but I see no problem with someone using an LLM to point out spelling errors.

And while I don't complain about others' spelling errors, I sure do notice them. And if someone writes a long wall of text as one giant paragraph that has lots of spelling/grammatical issues, chances are very high I won't read it.

Some people write very poorly by almost any standard. If an LLM helps the person write better, I'm all for it. There's a world of difference between copy/pasting from the LLM and asking it for feedback.

the_af•57m ago
> I have my standards, and I hold to them.

Spellcheckers exist, you don't need an AI to change your voice.

Also, if you have standards, you can always train yourself to spell better!

aethrum•1h ago
The problem is it always hides your voice. Always.
causal•1h ago
Yep. I actually prefer seeing imperfect writing, there is signal there that AI would erase.
sdenton4•1h ago
AI doesn't just hide your voice -- it improves it!
aperrien•1h ago
Maybe. But it can also help people find their voice. And I'd rather have comments from someone knowledgeable but unrefined with some good guidance than their silence on that same topic.
peacebeard•1h ago
There is a big difference between "asking an editor for suggestions" and "vibe posting".

You don't lose your voice if you ask for advice and manually incorporate the suggestions you agree with.

You might lose your voice if you say "Improve my comment to make it better" and copy-paste the result without another thought.

hendersonreed•1h ago
It hides your voice, and shortcuts your thinking process, because your editing is when you actually evaluate what you think!

When using LLMs to write, the temptation to avoid actually thinking about what you're communicating is too much for most people.

fc417fc802•22m ago
I'm increasingly convinced that most people spend most of their lives actively trying to find ways to avoid actually thinking about things. When I look at it that way I figure that either we achieve benevolent AGI in the near to medium term or society collapses due to whatever the asymptotic form of today's LLMs is.
Griffinsauce•1h ago
In the words of the comment: the rough edges are what make you.. you!

Keep polishing and everything eventually turns into a smooth shiny ball. We need texture, roughness, edges.

adampunk•1h ago
I had a README with a curse word in it and the agent would repeatedly try to remove it in drive-by edits bundled in with some other change.
BeetleB•1h ago
An LLM telling me I mispeled a word isn't changing my voice. Especially when I know the proper spelling and simply have a typo.

An LLM telling me I omitted a qualifier and that my statement isn't saying what I meant it to say isn't changing my voice - it's ensuring what you see is my voice.

recursive•1h ago
There's a simple solution to the spelling part. Use a spell checker. They seem to work pretty well.
dgacmu•1h ago
Would anyone notice if you spell-checked or got narrow feedback about grammar? No. I'm not dang, but perhaps a very reasonable interpretation of the rules is: If the AI is generating the words, don't. If it tells you something about your words and you choose to revise them without just copying words the AI output, it's still your words.

(As an experiment, I took that paragraph and threw it into gemini to ask for spell and grammar checking. It yelled at me completely incorrectly about saying "I'm not dang". Of its 4 suggestions, only 1 was correct, and the other 3 would have either broken what I was trying to say or reduced the presence of my usual HN comment voice. So while I said the above, perhaps I'm wrong and even listening to the damn box about grammar is a bad idea.)

That said, I often post from my phone and have somewhat frequent little glitches either from voice recognition or large clumsy thumbs, and nobody has ever seemed to care except me when I notice them a few minutes after the edit button goes away.

altairprime•1h ago
Polish hides your voice. If your composition skills are lacking and you feel that hinders your self expression, set aside some time to improve them: write a short (15 minutes) blog post about some HN topic to yourself in a word doc editor of some sort (Word, Gdocs, LibreOffice, etc); then enable Review Changes and annotate your post for 10 minutes; then, review and accept your changes individually and re-read what you’ve written.

AI is being used as a substitute for skills development when it costs nothing but time to get better. If you’ve reached a plateau with the above method, go find an article or book or interview about editing, pay attention to it and take notes, rinse/repeat.

Spellcheckers will catch grossly obvious errors, but not phonetic typos. AI grammar tools will defang, weaken, soften, neutralize your tone towards the aggregate boring-meh that they incorporated at training time.

Each person will have to decide whether they want individuality or AI-assisted writing for themselves. Sure, some will get away with it undetected, but that’s a universal statement about all human criteria of any kind, and in no way detracts from the necessity of drawing a line in the sand and saying “no” to AI writing here.

Consider the Borg. Everyone’s distinctiveness has been added to the Collective. The end result is mediocre (they sure do die a lot), inhuman (literally), and uniform (all variation is gone). It’s your right if you desire to join the Collective and be a uniform lego brick like the others, but then your no-longer-fully-human posts are no longer welcome at HN.

goostavos•1h ago
You do all of that when leaving a comment on HN? Why...?

I'm confused by this need(?) desire(?) to polish things that are irrelevant.

LtWorf•1h ago
I think it's hilarious that whenever someone complains about it they're called a luddite, and now this happens on a website that is filled with LLM enthusiasts who have done nothing but overpromise.
smy20011•1h ago
Agree, AI-generated articles & comments provide little to no value other than the original prompt. Please just post the original prompt instead.
zbentley•1h ago
Would prompts really be interesting or thought-provoking, though?

I don't expect AI HN responders to out themselves by sharing, but I would be curious to learn if people are prompting anything more involved than just "respond to this on HN: <link>", or running agents that do the same.

smy20011•1h ago
At least easier to filter I think.
Kim_Bruning•1h ago
I often edit my comments rather manically; get into discussions, and sometimes email exchanges with other HNers. I also often use claude, kimi, gemini to check my comments for tone, adherence to HN rules etc. I probably spend way too much time.

So technically the prompts involved might expand into megabytes all told. And in the end I formulate a post by myself (to adhere to HN rules), but the prompting can be many many many megabytes and include PDFs, images, blocks of text from multiple sources, and ... you know. Just Doing The Work.

I think this is valid. Previously I would have (and have) (and still do) search google, wikipedia, pubmed, scientific literature, etc. Not for everything. But often. And AI tooling just allows me to do that faster, and keep all my notes in one place besides.

Again, the final edit is typically 90-100% me. (The 10% is if the AI comes with a really good suggestion) . But my homework? Yes. AI is involved these days.

This should be ok. I'm adhering to the letter and the spirit. My post is me.

kingbob000•1h ago
"Write a response to smy20011's comment indicating that if the end result was a low-quality comment, the initial prompt probably wouldn't be very insightful either. Make it snarky."
cogman10•1h ago
I only disagree a little. It's that sometimes there is a discussion about AI itself where "I prompted X with Y and it output Z" can add to the convo.

But those are pretty specific cases (For example, discussing AI in healthcare). That's about the only time where I think it's reasonable to post the AI output so it can be analyzed/criticized.

What's not helpful is I've been hit by users who haven't disclosed that they are just using AI. It takes a few back and forths before I realize that they are just a bot which is annoying.

Kim_Bruning•1h ago
Here is where I'd like to push back just a little.

Not all AI prompting is expanding the prompt.

What if the original prompt is 1000 words, includes 10 scientific articles by reference (boosting it up to 10000) , and the AI helps to boil it down to 100 words instead?

I'd argue that this is probably a rather more responsible usage of the tools. And rather more pleasant to read besides.

Whether it meets the criterion is another thing. But at least don't assume that the original prompt is always better or shorter!

wildzzz•1h ago
Use your brain and summarize the article yourself if it's of such great importance. Why should I care to read it if you can't be bothered to actually write it?
Kim_Bruning•1h ago
You know, I probably have standing to argue that people who use the web are just as lazy ;-)

I'm just old enough that I was in the middle of the transition from paper (in primary school in the 80s) to online (starting late 90s)

I say this somewhat tongue in cheek, but obviously people should drive to 3 different libraries across 3 countries and read the journals in their own binders (in at least 3 different languages)

In reality: full-text online is convenient. Having an LLM assist with search and filtering is convenient.

I could go back to the old ways. Would you like me to reply in pen? My handwriting is atrocious.

I really prefer modern tools, though. Not everything older is better. Whether you want to read what I write is up to you.

(edit: Not hyperbole. I live in a small country, and am old enough to still remember the 80's as a kid.)

Kim_Bruning•6m ago
Actually, I'd like to expand a wee bit. Don't know if you've ever done a scientific library usage course or so. It's one of those things you tend to forget are important.

One of the most important lessons is not to read as many papers as possible. It's weeding out as many as possible so you can spend your limited grey matter reading the ones that actually matter.

And that's where the LLM comes in handy, especially if it's of decent quality. It's a Large Language Model. Chewing through language and finding issues and discrepancies, or simply deciding whether a paper matches your ultimate query, is trivial for them.

kunai•1h ago
It's not just AI-generated articles -- it's the other things that we delve into as a result. Listicles. Comments. Posts. It's what it means to be human, and honestly? That's rare.
0xbadcafebee•43m ago
Disagree. The prompt holds no information at all. The answer actually discovers information, organizes it, presents it in a way that's easy to read.

Example: "write me an article about hidden settings in SSH". You get back more information than most of HN's previous posts about SSH, in a fraction of the text, and more readable.

Actually, screw it, we should just make a new version of HN that has useful articles written by AI. The human written articles are terrible.

Timothycquinn•1h ago
AI Server Error
Kim_Bruning•1h ago
I would amend to:

"Don't post comments that are not human originated at this time. We want to see your human opinion shine through."

This gives people some amount of leeway and allows just the right amount of exceptions that prove the rule.

(That said, to be frank, some of the newer better behaved models are sometimes more polite and better HN denizens than the actual humans. This is something you're going to have to take into account! :-P )

zbentley•1h ago
Why would "human originated" be a better place to draw the line than "no generated/AI-edited comments"?

Like, I'm sure that AIs technically can write non-crap HN comments, but they rarely do. Even if it was less rare, the community that resulted from fostering AI-generated content would be unappealing to a lot of people, myself included. The fact that information here is the result of real people with real human opinions conversing is at least as important to me as the content being posted.

Kim_Bruning•1h ago
To begin with, some people have handicaps and use AI for assist. Other times people use AI for research. Finally, in general, when it comes to guidelines, making the lines slightly fuzzy makes enforcement more practical and believable.

It'd be silly if the rule gets interpreted such that people aren't allowed to do research with modern tools, and only gut takes are permitted.

I'm sure that's not the intent!

I think the important part is to have the human voice come through, rather than -say- force humans to run their text through an ai-detector first. (Itself an ai editing tool!)

See also : https://news.ycombinator.com/item?id=47290457 "Training students to prove they're not robots is pushing them to use more AI"

armchairhacker•1h ago
These are guidelines. I'm sure asking an AI about your comment (not pasting its text, so it's still your words) isn't an issue. The main target is obvious slop like https://news.ycombinator.com/threads?id=patchnull
Kim_Bruning•1h ago
Yeah, I think a big problem is that irresponsible AI use is very visible, while more responsible use tends to be invisible.
majorchord•1h ago
Honestly, I think "human originated" is the only rule that actually matters because we can't stop LLMs from sounding smart anyway. If you wait for a technical ban on AI-generated text, you're just playing catch-up with tools that already pass as human.

The real point isn't stopping bad grammar, it's preserving the vibe. HN feels different because it's messy humans arguing, not optimized algorithms trying to be helpful.

Once we allow "good enough" AI content, the community stops feeling like a town square and starts feeling like a customer service chatbot. We need real people with actual stakes in their opinions, not just perfect outputs. Let's keep it human or leave it.

This comment may or may not have been generated with an LLM, but I won't tell and you can't prove it either way.

OtomotO•1h ago
I just told my dog he isn't allowed to post here anymore...

He said he will take his business elsewhere then!

xbryanx•1h ago
Great message...but gosh, can someone throw 15px of padding on that <td>? I know HN is supposed to be minimal, but I had to check the URL to confirm that this was a real page because of the odd design.
ex-aws-dude•1h ago
From henceforth any comment containing the word "absolutely" or "--" shall be automatically deleted.
tsukikage•1h ago
https://news.ycombinator.com/item?id=47323891
add-sub-mul-div•1h ago
Is there a site that deserves to be destroyed by slop more than this one? It's hypocritical, but telling, for the places most actively trying to profit from it to ban it themselves.
MattRix•1h ago
It’s not hypocritical at all. You can be a fan of a technology and still acknowledge its downsides. Every technology has places it is useful and places it is harmful.
add-sub-mul-div•1h ago
But it's trivially evident that the harmful use cases are dominating. Handwaving that away for profit is shitty.
vips7L•1h ago
Moltnews
capricio_one•1h ago
Real talk: who is this guideline going to stop? People are already doing this and they will continue. Even if you find them, they’ll just make more accounts and continue.
nwhnwh•1h ago
So? Say it. Go ahead, a few steps further.
capricio_one•1h ago
Say what? It’s a genuine question. What is the actual repercussion for not following this?

It came up a few weeks ago. Show HN is already disabled for new accounts as of this week I think(?), but IMHO stricter measures need to be placed for account creation otherwise there’s no real enforcement.

schappim•1h ago
I have a kid with severe written language issues, and the utilisation of STT w/ an LLM-powered edit has unlocked a whole world that was previously inaccessible.

What is amazing is it would have remained so just a couple of years ago!

ex-aws-dude•1h ago
Come on dude, it's obviously just to prevent spam and not for your super specific case

These are just guidelines

schappim•1h ago
Title literally says “AI-edited comments”.
jasonlotito•1h ago
> HN is for conversation between humans.

It also says that.

The intent of the guidelines is important. Using AI for the STT is fine. The conversation is still between humans.

djohnston•1h ago
nuance and basic common sense left the chat about ... 8 years ago.
majorchord•1h ago
How is it obvious?
DennisP•1h ago
What is STT in this context?
schappim•1h ago
Speech to text
ranger_danger•1h ago
Agreed... there are often other perspectives people never thought of, like this, which is why they say "strong opinions about issues do not emerge from deep understanding."

Even if you're just inexperienced in the language you're communicating in and are trying to have better conversations, it's very helpful.

For cases like that, I say just don't tell people... I think it's unlikely anyone will be able to tell either way.

eudamoniac•1h ago
Oh no, we might lose 0.00001% of commenters across the internet! I need to see their opinions too!!
fidotron•1h ago
The only question is is the entity interesting and/or correct. Those properties are in the eye of the beholder. If they're human or not is beside the point.

After all, no one knows I'm a dog.

AlecSchueler•1h ago
> The only question is is the entity interesting and/or correct.

This already falls apart though. There are whole categories of things which I find "incorrect" and would take up as an argument with a fellow human. But trying to change the mind of an LLM just feels like a waste of my time.

skeledrew•1h ago
Instead of wanting to change the mind of the other entity, how about focusing on coming to a mutual understanding of what is "correct"? That way it shouldn't matter much if said entity is human, LLM or dog. Unless you're just arguing to push your "correct" on other humans, with little care about their "correct".
AlecSchueler•1h ago
It feels like you've loaded quite a lot in there in a way that feels unfair: "pushing" and "little care" etc. I maybe should have used a term like "discuss" rather than the more loaded "argue."

Look, I'll give you a loose example: It's not uncommon to see a post making an "error" I know from experience. I might take the time to help someone more quickly learn what I felt I learnt to help me get out of that mistaken line of thought. If it's an LLM why would I care? There's thousands of other people, even other LLMs, that I could be talking to instead.

You've set up a framework here where "mutual understanding" is the end goal but that's just not always what's on the line.

throwaway2027•1h ago
>But trying to change the mind of an LLM just feels like a waste of my time.

It often is with humans as well.

AlecSchueler•1h ago
Indeed it is, and there are often times I choose not to engage with my fellow humans. But the exceptions are valuable to me and to others. With an LLM I don't feel there would be any exception, that's the difference.
craftkiller•1h ago
Not necessarily. Using AI you can trivially perform astroturfing campaigns to influence public perception. That doesn't really fall on the interesting or correctness spectrums. For example, if 90% of the comments online are claiming birds aren't real with a serious tone, you might convince people to fall into that delusion. It becomes "common knowledge" rather than a fringe theory. But if comments reflect reality then only a tiny portion of people have learned the truth about birds, so people will read those claims with more skepticism.

(naturally "birds aren't real" is a correct vs not correct thing, but the same can be applied to many less-objective things like the best mechanical keyboard or the morality of a war)

LeifCarrotson•1h ago
No, those properties are tied to the state of mind and experiences of the human, dog, or LLM behind any given comment.

When someone posts:

> You could use Redis for that, sure, I've run it and it wasn't as hard as some people seem to fear, but in hindsight I'd prefer some good hardware and a Postgres server: that can scale to several million daily users with your workload, and is much easier to design around at this stage of your site.

then the beholder is trusting not just the correctness of that one sentence but all of the experiences and insights from the author. You can't know whether that's good advice or not without being the author, and if that's posted by someone you trust it has value.

An LLM could be prompted to pretend they're an experienced DBA and to comment on a thread, and might produce that sentence, or if the temperature is a little different it might just say that you should start with Redis because then you don't have to redesign your whole business when Postgres won't scale anymore.

fidotron•1h ago
> then the beholder is trusting not just the correctness of that one sentence but all of the experiences and insights from the author.

This is my point.

There is no sane endgame here that doesn't end up with each user effectively declaring who they do and don't care to hear, and possibly transitively extend that relationship n steps into the graph. For example you might trust all humans vetted by the German government but distrust HN commenters.

For now HN and others are free to do as they will (and the current AI situation has been intolerable), however, I suspect in the near future governments will attempt to impose their own version of it on to ever less significant forums, and as a tech community we need to be thinking more clearly about where this goes before we lose all choice in the matter.

eikenberry•1h ago
> then the beholder is trusting not just the correctness of that one sentence but all of the experiences and insights from the author.

This implies they know the author and can trust them. If they don't know the author then there is no trust to break and they are only relying on the collective intelligence which could be reflected by the AI.

That is to say that trusting a known human author is very different from trusting any human author and trusting any human author is not that much different from trusting an AI.

cubefox•1h ago
Meanwhile, the top comment on one of the most upvoted submissions today is AI-generated, posted by an LLM account:

https://news.ycombinator.com/item?id=47334694

Most people don't seem to care.

minimaxir•55m ago
Please don't vaguepost, as it wasted my time trying to track down which comment you thought was LLM-generated and why.

OP is likely referring to this one (https://news.ycombinator.com/item?id=47335032) by LuxBennu, because it has an em-dash, though that's one of the few cases where it's used correctly. But the account's comment history contains comments that do not follow the typical LLM tropes yet are still odd for a human to write: https://news.ycombinator.com/user?id=LuxBennu

LuxBennu did reply to accusations of being an AI bot: https://news.ycombinator.com/item?id=47340704

> Fair enough — I've been lurking since 2019 and picked a bad day to start commenting on everything at once. Not a bot, just overeager. I'll pace myself.

koolala•1h ago
HN only supports English, so using LLMs for translation should be allowed.
zufallsheld•1h ago
You could use translation tools instead of llms.
vova_hn2•1h ago
technically most translation tools these days have an LLM inside. Just not the chat/completion LLM.

I think that Google initially came up with transformer architecture to use it for translation, so...

Kim_Bruning•1h ago
LLMs were -in part- designed as translation tools. It's one thing they do really really well.

https://arxiv.org/html/1706.03762v7 (Attention is all you need) "Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train."

Ok, looking that up, that was quite literally one of the main design goals.

And they're really quite good at translating between the languages I use. They're the best tool for the job.

koolala•1h ago
Those are either AI-based themselves, or have worse performance if they are not.
SoKamil•1h ago
Don’t be afraid to make grammar mistakes or misspell stuff. Others will understand. You’re a human after all. It’s okay to make mistakes and feel uncomfortable with that.
lifthrasiir•1h ago
Others will understand, but won't regard that as worthy. That's a difference.
SoKamil•1h ago
And that’s their problem.
rafaelmn•1h ago
I don't get where this class/status/worthiness ties into HN comments ?

I get decent feedback most of the time, and I read interesting stuff, it's the easiest way I found to stay in the loop in our industry. What are you guys commenting for ?

tayo42•1h ago
I make mistakes pretty often thanks to auto complete on my phone and carelessness. I've had threads derail and been attacked by people who freak out over grammar.
pants2•1h ago
This itself is against the rules:

> Please respond to the strongest plausible interpretation of what someone says

> Please don't post shallow dismissals

Personally I've posted comments with glaring typos that everyone thankfully ignores. I only notice much later when I re-read it.

tayo42•46m ago
Oh interesting. Good to know for the next time the they're/their/there police shows up
Aldipower•1h ago
Unfortunately a lot of others do not understand (in the double sense).
tonymet•1h ago
Chads never backspace.
vesrah•10m ago
This is going to sound nuts, but I've noticed comments lately with multiple misspellings that seem intentional - it's almost like they're trying to signal that they're human, rather than LLM written. I've started to think it makes them even more likely to be LLM written than not.
HanClinto•1h ago
I appreciate this being added to the guidelines.

That said, I also wouldn't hate seeing an official playground where it is cordoned off / appreciated for bots to operate, i.e., like Moltbook, but for HN...? I realize this could be done by a third party, but I wouldn't hate seeing Y Combinator take a stab at it.

Maybe that's too experimental, and that would be better left to third parties to implement (I'm guessing there's already half a dozen vibe-coded implementations of this out there right now) -- it feels more like the sort of thing that could be an interesting (useful?) experiment, rather than something we want to commit to existing in-perpetuity.

munk-a•1h ago
You could mirror article postings and upvotes to another site and let AI play around there - if it's interesting to people maybe it will gain a following. I don't see any reason it'd need to happen in this specific forum as that'd likely just cause confusion.

For the time being, at least, HN is a single uncategorized (mostly; let's ignore search) message board - splitting it into two would cause confusion and drastically degrade the UX.

Kim_Bruning•1h ago
https://news.clanker.ai/

This might be roughly what you're looking for?

DonThomasitos•1h ago
The irony is that this guide is written like a system prompt. We're all working with LLMs too much these days.
moralestapia•1h ago
This thing has been there for like 15 years though ...
cobbal•1h ago
Here's a version from 2014 in the same style if you're curious: https://web.archive.org/web/20140702092610/https://news.ycom...
abtinf•1h ago
Good. This helps establish it in the HN culture. That’s the purpose of guidelines.

99% of rule enforcement, both IRL and online, comes down to individuals accepting the culture.

Rules aren’t really for adversaries, they are for ordinary situations. Adversaries are dealt with differently.

gr8tyeah•45m ago
This is only meaningful if enough people read it and agree
bhhaskin•42m ago
Nah, they are pretty good at banning users that don't follow the guidelines.
abtinf•37m ago
Yes, and it’s not like they just insta-ban every infraction.

I’ve broken the guidelines on this site before. The mods reply and say “hey, stop doing that, here is the guideline”. I stopped doing it. Life continues.

altairprime•35m ago
(They do react differently if you show a pattern of disregard rather than a one-time event; ‘dang before’ might pull up some of those in a search.)
jbaber•32m ago
One of the virtues of HN is polite prodding when the rules are broken.
abtinf•42m ago
That’s true. Fortunately, by virtue of it being added to the guidelines, quite a few folks here are prepared to reply to obviously generated comments by simply citing and linking the rule. Just search for “shallow dismissal” to see many examples.

It will take time, but eventually everyone will know about it.

julius_eth_dev•1h ago
The hardest part of this policy is the "edited" qualifier. I use LLMs constantly as thinking tools — rubber-ducking architecture decisions, pressure-testing arguments before I post them. The final comment is mine, shaped by my experience and opinions, but the process of arriving at it involved a machine. Drawing a bright line between "I refined my thinking with Claude" and "I pasted Claude's output" seems important but genuinely difficult to enforce. The spirit of the rule is clear though: HN works because people are accountable for what they say, and that breaks down when a comment is optimized for engagement rather than expressing what someone actually thinks.
throw310822•1h ago
I'm also not averse to pasting Claude's output sometimes, with clear attribution, if it adds something. It's not that different from pasting a quote from Wikipedia: it might bring useful information, but there is a chance that it could be wrong.
fsloth•1h ago
"It's not that different from pasting a quote from Wikipedia"

Claude's output is _totally different_ from pasting a quote from Wikipedia.

The latter has the potential to be edited and reviewed by global subject experts.

Claude's output totally depends on what priors you gave it, and while you can have high confidence given that context, no third party should.

throw310822•1h ago
Indeed, but we know this, right? When it's relevant, the prompt should also be included.
fsloth•1h ago
No, that’s not how LLMs work. A single prompt does not make it any better. Please focus on interesting human comments.

If you feel like it sure chat with claude to build your insight. Then write what you think _yourself_.

If you want to introduce references use urls to non-ai generated contexts.

I mean as an HN protocol.

HN is supposed to be interesting.

LLM output specifically is not interesting because everyone else can generate roughly the same output.

bondarchuk•1h ago
Yes it is different and I don't want to read it.
throw310822•1h ago
Yes exactly, when it's clearly attributed you can skip it. It's a tool, it can be used to process and analyse large amounts of information. Not different from Excel.
bondarchuk•1h ago
No thanks. Thankfully there is a policy against it now so I don't even have to convince you.
desireco42•1h ago
Tell me about it. English is not my first language... I would say weird things and get downvoted for it. But... we really need this as people started automating too much.
antics9•1h ago
Why not be real and multi faceted in both thinking and writing? Trying to be perfect in writing just makes you plastic.

By the looks of it, I don't even think I'm replying to a human.

b40d-48b2-979e•1h ago

    By the looks of it, I don't even think I'm replying to a human.
They didn't even bother to remove any of the signals. Perhaps this post is actually a honeypot for these bots.
gensym•1h ago
> The final comment is mine, shaped by my experience and opinions

I can understand why you think this is true, but it is false.

Kim_Bruning•1h ago
Can you expand on that? Why do you think so?
gensym•1h ago
That's a fair question, so I'll try as best I can. And maybe this will serve as a meta-example for me because it is hard to explain.

In a real discussion, the messiness is an important signal. The mistakes that you made and _didn't_ catch, the clunky word choices, etc., actually show what you are thinking and how clearly you are thinking about it. If you have edited something for clarity, that's an important signal too. LLM editing destroys that signal.

And it gets worse because LLMs destroy that signal in one direction - towards homogeneity. They create the illusion of "what you were actually thinking, but better than you could express it" but what they are delivering is "generic, professional-sounding ideas phrased in a way to convince you they are your own".

fluffybucktsnek•49m ago
I get what you are saying, but I disagree on the last part, "[...] way to convince you they are your own". If it managed to convince the author that it is their own, chances are it is their own. Especially so if the author does review and edit the output prior to posting it.

The messiness may show glimpses of the process, but, in isolation, will likely distort and corrupt the desired message via partial framing.

Kim_Bruning•48m ago
> And it gets worse because LLMs destroy that signal in one direction - towards homogeneity.

Oh, right, yes, if you're not careful they can definitely do that.

But look at what julius_eth_dev is actually saying they're doing:

> "rubber-ducking architecture decisions, pressure-testing arguments before I post them."

That's more like using the LLM as a sparring partner; they're not having the LLM write their comments for them.

I thought you were going to go somewhere really interesting actually, like maybe 'the LLM convinces you that their arguments are better than yours, and now you're acting like a meat puppet.' Or something equally slightly alarming and cool like that! ;-)

bakugo•57m ago
The fact that several users posted genuine replies to this obvious bot account is proof that this rule will likely go mostly unenforced. The average person is seemingly unable to notice they're reading slop, no matter how obvious it is.
CactusBlue•1h ago
Slightly tangential, but this paragraph is the only one on the rules page with an "id" attr set, so you can link to this specific rule.
desireco42•1h ago
There were a few commenters who were very suspect :). It is an issue for sure.
lisp2240•1h ago
I want a social network that goes beyond banning bots and also bans the half of the population that doesn’t have an inner monologue.
jsnell•1h ago
A practical question: what should readers do when they suspect a comment (or story) is AI-generated? Is that an appropriate reason for flagging? Email the mods? Do nothing?

I've been pretty wary about flagging AI slop that wasn't breaking other guidelines, and by default this will probably make me do it more. But it is a lot harder to be certain about something being AI-written than it is to judge other types of rules violations.

(But am definitely flagging every single "this was written by AI" joke comment posted on this story. What the hell is wrong with you people?)

chrystianpl•1h ago
English is my second language and I have dyslexia, so I was wondering: what do you mean by "AI-edited comments"? Can't I ask an LLM to check my grammar and fix it? On another account I was downvoted because of my styling/grammar, not because of the content.
desireco42•1h ago
I don't have dyslexia but I feel your pain. I mean it is what it is. I would rather have it raw than have to use AI to filter it into comments that make sense.
jonathrg•1h ago
How do you know what you were downvoted for?
whynotmaybe•1h ago
I guess he was told, because otherwise you don't know whether you said something inherently wrong or misleading or hurt someone's feelings.

That's the richness behind the upvote/downvote, which also tends to create echo chambers, because you soon learn what causes downvotes.

I've personally noticed downvotes whenever I mention Apple negatively.

tartoran•1h ago
You could always tell your LLM to just fix your grammar but not embellish, add new ideas, etc.
113•1h ago
Does that work?
simonw•1h ago
It works really well. I've been using this prompt to find spelling and grammar errors for about a year now: https://simonwillison.net/guides/agentic-engineering-pattern...
nablaone•1h ago
"fix english" is the prompt i wish to turn into a button
shnpln•1h ago
This is what I do when using AI to read anything I write. Some prompt like "I am going to share with you something I have written and I don't want you to change my voice at all. Can you look for structural issues, grammar or punctuation errors, and things like that". Claude is an amazing editor and I never feel like my writing has been taken from me doing this.
giancarlostoro•1h ago
I usually tell it not to rewrite my words; my words are my own. If it has suggestions, it should tell me what those are, but otherwise only show me grammar fixes.
nsxwolf•1h ago
I have never downvoted for this, and I hope no one else would do that either. If anyone here does that, please stop.
nottorp•1h ago
"Please don't post shallow dismissals, especially of other people's work."

I wonder if an explicit expansion of that rule would help. Maybe in all caps. Saying "picking on grammar is a shallow dismissal".

rdiddly•1h ago
I don't believe that's always true, and I suspect it was left out of the guidelines deliberately, and I wish people receiving suggestions would stop interpreting it that way. Of course people suggesting grammar corrections and treating it like they just demolished and eviscerated your argument are part of the problem. But what about people out here just trying to help? Grammar is important, as it's the syntax of the programming language we all use with each other. People act as if bad grammar is something you're born with, and can't change. Like learning grammar is impossible, and those who don't bother should be a protected class. I'm just trying to help man. Or I was anyway, before I stopped. But if I'm trying to engage with someone's main point, it should be obvious. Whereas a quick grammar correction is just that. But it's a tangent, and not interesting (especially if you already know), and supposedly grammar is "not a technical topic" (despite daily use) so it ends up deemed a "low value comment" and gets downvoted to oblivion.
nottorp•50m ago
> I wish people receiving suggestions would stop interpreting it that way

The specific problem here was that the poster was being downvoted for grammar. Of course that's how he would read it.

Imustaskforhelp•1h ago
Oof, I feel this pain a lot. What I like to do is respond politely when someone brings up such things. Although it takes time, and it does sometimes make you want to disengage.

But at some point, the rationale behind it is that your comments are your own words, and I find that liberating. Some people won't appreciate it and some people will, but the same goes for AI-edited posts too.

(If you are still worried, I would recommend mentioning your dyslexia in your Hacker News profile, as people might be much more forgiving when they have more context. We are all human, after all, and I would like to think that we understand each other's struggles.)

johndough•1h ago
Likewise, I sometimes use https://www.deepl.com/en/write to fix my unidiomatic sentences.

But I can see why the HN guideline is formulated that way. My students often use the excuse "I did not use AI for writing! I wrote it myself! I only used AI to translate it!" Simply disallowing all kinds of AI usage is much easier than discussing for the thousandth time whether the student actually understands what they have written.

Adiqq•34m ago
Isn't the whole point to understand? If the task is to write and you expect only the final result, but you judge it by whether it looks legit enough, how is that a fair judgement? People can deliver partial results and show progress as well, but you won't see that in some comments on the internet; if something is expected to take many days, it's easy to show different stages of work. It's easy to accuse people of plagiarism or of not thinking for themselves, and of course there are indicators when someone uses AI, but the problem is that you can't distinguish in a reliable way whether something was created by AI or not.

Like, there is this computer game whose authors used some AI-generated models or something like that, but only during prototyping; later they were replaced by proper models. No one would have known about that if the authors hadn't mentioned it. So, if someone writes in their own words what AI generated for them, is it still an argument made by a human or by AI? What if someone uses AI only as a placeholder and replaces all that content, so you never actually see the AI usage, even though it was part of the process?

For me, the premise that using AI in any form invalidates your work starts with a logical fallacy, so such arguments against using AI are weak. It's like saying that your work is wrong because you used a calculator, so your calculations can't be right if done by a machine, because it must have made a mistake, or it's wrong for ethical reasons, or whatever.

Work generated by AI can easily be poor, because these models make mistakes and tend to repeat themselves in certain ways, but is it wrong that I'm writing this comment with a keyboard instead of writing letters with a pen? Is it wrong when I use an IDE or some CLI to write code with AI, instead of using vim and typing everything on my own? Is it wrong that someone uses spell-checking?

In the end it doesn't matter who seems smarter, when you're expected to use AI at work. Reality shows you actual expectations.

chorkpop•1h ago
Dyslexia was my first thought as well. The intent is great, but I don't know if this is in keeping with the social model of disability. Disability is created when you remove access, and this is exactly that.
3rodents•51m ago
The internet has been full of brilliant dyslexics since the start, just as it has been full of brilliant blind people. Dyslexic people feeling that they must use AI to produce perfect prose lest they burden the lexics with clumsy spelling or grammar is far more hostile. We didn’t have slop machines 5 years ago.
nonameiguess•1h ago
I don't see how you can know why you were downvoted. Even if one person says something, they won't all. Your comment right here has some rough patches, but I can tell what you're saying. Humans are terrific at extracting signal from noise. I would say be who you are, tough as it may be, and it'll encourage the rest of the world to do the same. We're all unique in some way or another and have flaws, and we'd be better off if we knew others had them too, because then they wouldn't be constantly trying to hide it and we wouldn't feel so bad thinking we're the only ones.

I hope this doesn't sound unsympathetic. I understand where you're coming from intellectually, but I don't have any real experience being ridiculed or bullied. I know kids can be brutal and probably scarred you, and unfortunately adults aren't much better, but we should be, and I think at least Hacker News is better than most places full of human adults. We know there's a huge world out there. I think I'm reasonably well-spoken in English but can't speak a lick of any other language at all. The fact that you can produce intelligible English already puts you above me in my book.

You're a person. I can respect you, esteem you, potentially love you, not in spite of your flaws, but because they don't matter. Every single person on the planet has them, and if they're not moral flaws, nobody should give a shit. I can't respect or love a machine any more than I can a rock. And I don't want to talk to one, either.
metalman•1h ago
boooooooo, hu, baby

stump along, cut your own path, or fuck right off

real life will eat you otherwise

I mean holly shit, you actualy want to hide behind an automated echoing device so that you wont get, well, what is happening to my post as sooooon as I press↓

jacquesm•1h ago
> boooooooo, hu, baby

> stump along, cut your own path, or fuck right off

> real life will eat you otherwise

> I mean holly shit, you actualy want to hide behind an automated echoing device so that you wont get, well, what is happening to my post as sooooon as I press↓

You deserve a ban for this.

Adiqq•1h ago
I don't really see the issue, as long as there's human thought behind whatever anyone posts. It's frustrating to argue against someone who lazily uses AI, but if the argument is fair, then I don't care whether it was written by AI or a human; what difference does it make? It's frustrating if someone is incoherent and makes a dumb argument, but again, I don't care if a dumb argument comes from a human or a machine.

For me it sounds like just another form of gatekeeping: either you sound human or you're not good enough to post/comment. Like, really? How isn't that a genetic fallacy? It doesn't matter what someone thinks, because they used AI to make their thought clearer, so their whole argument is trash? Does it have to hurt to read and write if your English isn't perfect, with your work seen as inferior based on superficial factors like proper grammar and style?

It's a dumb crusade. I did not use AI to write this comment, but I hate when people try to monopolize the truth and decide who is "better, smarter" based on irrelevant facts. Not using AI doesn't make anyone superior. Using AI also doesn't make you superior. Focus on what you mean, because that's what matters.

throwpoaster•1h ago
No worries, it’s unenforceable.
surround•52m ago
Trust your own style, even if you aren't a native English speaker. Here's an example where a non-native speaker used an LLM to polish his post. The general consensus was that his own writing was preferable to the LLM's edited version.

https://news.ycombinator.com/item?id=45591707

For dyslexia, use a spell-checker. For grammar, use a basic grammar checker, like the kind of grammar checker that has come with MS word since the 1990s. But don't let a style-checker or an LLM rob you of your own voice.

vivid242•1h ago
Pinky swear!
Someone1234•1h ago
"AI-edited comments" is a very interesting one. Where is the line between a spelling/grammar/tone checker like Grammarly, which at minimum uses n-grams behind the scenes, and something that is "AI"-edited? What I am asking is: does "AI" in this context mean fully featured LLMs, or anything that improves communication via an automated system? Many people used these "advanced" spellcheckers for years before ChatGPT et al. came on the scene.

I think "generated comments" is a pretty hard line in the sand, but "AI-edited" is anything but clear-cut.

PS - I think the idea behind these policies is positive and needed. I'm simply clarifying where it begins and ends.
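
To make the n-gram end of that spectrum concrete, here is a toy sketch of how a bigram-based checker might flag unlikely word pairs. This is purely illustrative; it is not how Grammarly actually works, and the corpus and function names are made up for the example.

```python
from collections import Counter

def train_bigrams(corpus: str) -> Counter:
    # Count adjacent word pairs in a training corpus.
    words = corpus.lower().split()
    return Counter(zip(words, words[1:]))

def unseen_bigrams(text: str, model: Counter) -> list:
    # Flag word pairs the corpus never produced; a real checker
    # would smooth counts rather than hard-flag unseen pairs.
    words = text.lower().split()
    return [pair for pair in zip(words, words[1:]) if model[pair] == 0]

model = train_bigrams("the cat sat on the mat the cat ran")
print(unseen_bigrams("the cat sat", model))  # []
print(unseen_bigrams("cat the mat", model))
```

Even this crude statistical approach predates LLMs by decades, which is part of what makes the "AI-edited" line hard to draw.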

happytoexplain•1h ago
I think there's a pretty clear gap between editing for grammar/spelling and editing for tone.
jaysonelliot•1h ago
You should use your own words. It might seem that a tool like Grammarly is just an advanced spellcheck, but what it's really doing is replacing your personal style of writing with its own.

It's better to communicate as an individual, warts and all, than to replace your expression with a sanitized one just because it seems "better." Language is an incredibly nuanced thing, it's best for people's own thoughts to come through exactly as they have written them.

Aldipower•1h ago
That's true, but on the flip side I regularly get downvoted because my English is not the best, to say it mildly. So now I need to be really careful to either a) write in good English or b) not be recognised as an LLM-corrected version of my English. Where is the line? I shouldn't be downvoted for my English, I think, but that is the reality.

Edit: I already got downvoted. :-) Sure, no one can tell exactly why. Maybe the combination of bad English _and_ talking sh*ce isn't ideal at all. :-D Anyways, I have enough karma, so I can last quite a while..

ssl-3•1h ago
It goes both ways.

The quality of my writing varies (based on my mood as much as anything else, I suppose), but when it is particularly good and error-free then I often get accused of being a bot.

Which is absurd, since I don't use the bot for writing at all.

colpabar•1h ago
> I shouldn't be downvoted for my English I think, but that is the reality.

How do you know? Is it possible the downvoters just didn't like what you said?

phs318u•57m ago
It’s possible of course but reading all the comments from various non-native English speakers here it seems like a common story. It may indicate a subliminal bias in readers (most of whom are presumably American).
yorwba•35m ago
Note that those comments are written in perfectly understandable English. Further note how often you come across comments written in perfectly understandable English, but they're downvoted anyway.

It suggests a bias in writers to assume that people would agree with them if only they could express their thoughts accurately.

drusepth•1h ago
I'm not sure I agree with this. I don't really want to see someone else's stylistic "warts".

I just want clean, easy-to-read content and I don't care about the person who wrote it. A tool like Grammarly is the difference between readable and unreadable (or understandable and not) for many people.

timeinput•1h ago
You could run the comments everyone else posts through an AI tool and ask it to rephrase it so that it is clean, and easy-to-read.

You could even write a plugin for your favorite web browser to do that to every site you visit.

It seems hard to achieve the inverse that is (would you rather I use i.e.?) rewrite this paragraph as the original author did before they had an AI re--write it to make it clean, (--do you like oxford commas, and em/en dashes! Just prompt your AI) and easier to read

tempestn•1h ago
There's a big difference between me running a filter on other people's words, and those people themselves choosing to run one and then approving the results.

I personally don't see a problem with someone using a grammar checker as long as they aren't just blindly accepting its suggestions. That said, if someone actually is using it in that way, it shouldn't be detectable anyway, so it probably doesn't matter all that much whether or not it's included in the letter of the rule.

kazinator•1h ago
> You could run the comments everyone else posts through an AI tool and ask it to rephrase it so that it is clean, and easy-to-read.

But that creates a private version of the text which the original poster didn't sign off on. You could have fixed something contrary to their intent.

phs318u•1h ago
> You could run the comments everyone else posts through an AI tool and ask it to rephrase it so that it is clean, and easy-to-read.

For those coming from a language other than English, you are more likely to lose information by using a tool to “reconstruct” meaning from poorly phrased English as an input, as opposed to the poster using a tool to generate meaningful English from their (presumably) well-written native language.

lamontcg•1h ago
Books and newspapers have had editors for centuries. It is just code review for the written word.
MeetingsBrowser•1h ago
Editors are mostly tasked with maintaining a consistent style and standard.

There is no need for that here beyond maybe spellcheck. Use your own thoughts, voice, and words.

lamontcg•1h ago
I don't personally use AI/LLMs for any informal writing here or on reddit, etc. But I think it is pretty weird to be overly concerned around people, particularly ESL, who use tools to clean up their writing. The only thing I really care about is when someone posts LLM regurgitated information on topics they personally don't know anything about. If the information is coming from the human but the style and tone is being tweaked by a machine to make it more acceptable/receptive and fix the bugs in it, then I don't understand why you're telling me I need to care and gatekeeping it. It also is unlikely to be very detectable, and this thread seems to only serve a performative use for people to get offended about it.
pseudalopex•45m ago
Other tools to clean up writing are allowed. They did not tell you you must care. You told them they must not. The submission's point was to tell you and others that LLM-generated tone is not more acceptable.
lamontcg•14m ago
Well good luck detecting it.
mjg2•1h ago
I was just re-reading the passage from Plato's "The Phaedrus" on writing & the "art" of the letter for an essay I'm working on, and your remark is salient for this discussion on LLM-style AI and social media at large.
dbacar•1h ago
RIP Robert M.Pirsig.
NewsaHackO•1h ago
>It's better to communicate as an individual, warts and all, than to replace your expression with a sanitized one just because it seems "better."

It is definitely not true that it is better for a poster to communicate like an individual when it comes to spelling and grammar. People ignore posts that have poor grammar or spelling mistakes, and communications with poor grammar are seen as unprofessional. Even I do it at a semi-subconscious level. The more attention someone has to pay to understand your post, the fewer people will be willing to put in that effort.

bruckie•1h ago
My elementary school kid came home yesterday and showed me a piece of writing that he was really proud of. It seemed more sophisticated than his typical writing (for example, it used the word "sophisticated"). He can be precocious and reads a ton, though, so it was still plausible that he wrote it. I asked him some questions about the writing process to try to tease out what happened, and he said (seemingly credibly) that he hadn't copied it from anywhere or referenced anything. He also said he didn't use any AI tools.

After further discussion, I found out that Google Docs Smart Compose (the suggested-next-few-words feature) is enabled by default on his school-issued Chromebook, and he had been using it. The structure of the writing was all his, but he said he sometimes used the Smart Compose suggestions (and sometimes didn't). He liked a lot of the suggestions and pressed tab to accept them, which probably bumped up the word choice by several grade levels in some places.

So yeah, it can change the character of your writing, even if it's just relatively subtle nudges here or there.

edit: we suggested that he disable that feature to help him learn to write independently, and he happily agreed.

comboy•58m ago
Oh how I despise these suggestions. You sometimes look for a way to express something and you are on the verge of giving the world something truly original, but as soon as your brain sees the suggestion it goes "oh yeah that fits"
JumpCrisscross•56m ago
> I despise these suggestions

As an adult, I do too. As a middle schooler, we absolutely used word processors’ thesaurus features to add big words to our essays because the teachers liked them.

Gibbon1•38m ago
A friend of mine was an English teacher. She quit because she wasn't going to waste her time "grading" 30 essays written by AI.

Anyway, before that she HATED the thesaurus. And she could tell when students were using it to make their writing more fancy pants.

JumpCrisscross•30m ago
> she could tell when students were using it to make their writing more fancy pants

I had two teachers who called us out on this, and actually coached us on our writing, and I remember them fondly. (They were also fans of in-class essaying.)

The others wanted to count big words.

zahlman•10m ago
One problem I see is that LLMs have a more nuanced... well, model of how words and their meanings relate to each other than a dead-tree thesaurus could ever present, what with its simplified "synonym" and "antonym" categories. Online versions try to give some similarity metrics, but don't get into the nuance. (It's not as if someone who takes either approach would want to spend the time reading and understanding that, anyway.)
TimTheTinker•54m ago
GK Chesterton would have something brilliant to say about the inauthenticity of it all or something.
jrockway•54m ago
I see the suggestions and then choose something different anyway. I don't want to use one of the top 3 most popular responses to an email from a friend. Even if it's something transactional.
Terr_•47m ago
True! There's an important cybernetic aspect to all this, where an automatic suggestion can be an interruption, sometimes worse if the suggestion is decent.

A certain amount of friction is necessary, at least if the goal is to help the person learn or make something original.

Terr_•51m ago
To rationalize my gut-feelings on this, I think it comes down to the spectrum between:

1. A system that suggests words, the child learns the word, determines whether it matches their intent, and proceeds if they like the result.

2. A system that suggests words, and the child almost-blindly accepts them to get the task over with ASAP.

The end-results may look the same for any single short document, but in the long run... Well, I fear #2 is going to be way more common.

zahlman•15m ago
The analogy with tab-completion of code seems apt. At first you blindly accept something because it has at least as good a chance of working as what you would have typed. Then you start to pay attention, and critically evaluate suggestions. Then you quickly if not blindly accept most suggestions, because they're clearly what you would have written anyway (or close enough to not care).

The phenomenon was observed in religious philosophy over a millennium ago (https://terebess.hu/zen/qingyuan.html).

abustamam•5m ago
Tab completion was so novel back when full e2e AI tooling was not really effective.

Now that it is, I just turn tab completion off totally when I write code by hand. It's almost never right.

bruckie•14m ago
From his description, it sounded like this was more of #1. He cared a lot about the topic he was writing about, and has high standards for himself, so it's very likely that he would have considered and rejected poor suggestions.

I have mixed feeling about it. On the one hand, you're right: carefully considering suggestions can be a learning opportunity. On the other hand, approval is easier than generation, and I suspect that without flexing the "come up with it from scratch" muscle frequently, that his mind won't develop as much.

comboy•1h ago
My broken english now officially bumps my comments up instead of down. Sweet.
zahlman•9m ago
For what it's worth, I had a quick look through your comment history and your English seems just fine to me as a native speaker (at least for informal communication).
Teever•55m ago
But the problem is that people with poor written language / english skills are 'competing' with people who have superb skills in this domain.

There are people here who sit at a desk all day banging out multipage emails for work who decide to write posts of a similar linguistic calibre for funsies.

Meanwhile you have someone in a developing country who just got off a brutal twelve-hour shift doing manual labour in the sun, who wants to participate in the conversation with an insightful message that they bang out on a shitty little cellphone onscreen keyboard while riding bumpy public transit.

You could have a great idea and express it poorly and be penalized for doing so here while someone could have a blah idea expressed excellently and it's showered in replies despite being in some metrics (the ones I think are most important) worse than the other post.

What's the solution for that?

magicalist•32m ago
> What's the solution for that?

Remember that you're on a message board and you're not actually 'competing' for anything?

Teever•27m ago
This is a perfect example of what I'm talking about.

I knew someone would comment on my use of that word, despite me putting it in quotes, which was intended to let the reader know that I meant the word as an approximation of my meaning.

When I say competing, I mean competing in the space of ideas here. There is a ranking system that raises or lowers the visibility and prominence of your comments, based on upvotes by other users. For better or worse, people penalize comments with grammatical errors over ones without, and that affects how much exposure other users have to the ideas people write and how much interaction those ideas get.

If that's the case, why would somebody with good ideas but poor expressive ability bother posting here, if their comments are just going to be ignored in favor of relatively vapid comments that are grammatically correct?

NewsaHackO•1m ago
No, I get your point. Unfortunately, a lot of people here act high and mighty, as if they are posting for some altruistic reason. The reason why I, you, and everyone else post here is the human reason that we want others to engage with our posts. In order to do that, you have to put your best foot forward, which includes making sure the spelling and grammar of your posts are correct. While I do not use an LLM for this, I think it is valid to use these tools to make sure nothing gets in the way of whatever point you are trying to make.
12_throw_away•7m ago
> You could have a great idea and express it poorly and be penalized for doing so here while someone could have a blah idea expressed excellently and it's showered in replies despite being in some metrics (the ones I think are most important) worse than the other post.

I absolutely do not understand this comment. Are you saying that posting is competitive and that comments have "metrics"?

fragmede•50m ago
I disagree. HN is going to bury my raw unedited tirade of a comment about those fucking morons that couldn't code their way out of a paper bag. If I send a comment to ChatGPT and open up the prompt with "this poster is a fucking dumbass, how do I tell them this" and use that to get to a well reasoned response because that's the tool we have available today, we're all better off.

The guidelines state:

> Be kind. Don't be snarky. Converse.
> Edit out swipes.
> Don't be curmudgeonly.

On the best of days I manage to follow the rules, but I'm only human. If I run my comment through ChatGPT to try and help me edit out swipes on the bad days, that's not ok?

I'm not using ChatGPT to generate comments, but I've got the -4 comments to show that my "thoughts exactly as they have written them" isn't a winning move.

yorwba•33m ago
The guidelines don't say anything about not posting something because an LLM told you that you shouldn't...
zahlman•6m ago
If you see an incompetent coder and wish to communicate that the person responsible is a "fucking moron/dumbass", the tone with which you do so is not the problem. Tell us what is wrong with the code, as objectively as possible. That's what the guidelines are trying to convey.
SecretDreams•1h ago
Your comment is one of semantics. Worth discussing if we're talking a truly hard line rule rather than the spirit of the rule.

I benefit from my phone flagging spelling errors/typos for me. Maybe it uses AI or maybe it uses a simple dictionary for me. Maybe it might even catch a string of words when the conjunction isn't correct. That's all fair game, IMO. But it shouldn't be rewriting the sentence for me. And it shouldn't be automatically cleaning up my typos for me after I've hit "reply". That's on me.

glitch13•1h ago
I saw a similar conversation somewhere about some project saying they don't allow AI generated code.

It was asked: if "AI-generated code" is just code suggested to you by a computer program, where does using the code that your IDE suggests in a dropdown fall? That's been around for decades. Is the rule LLM- or "gen AI"-specific? If so, what specific aspect makes one use case good and one bad, and what exactly separates them?

It's one of those situations where it seems easy to point at examples and say "this one's good and this one's bad", but when you need to write policy you start drowning in minutia.

kazinator•1h ago
Projects cannot allow AI generated code if they require everything to have a clear author, with a copyright notice and license.

IDE code suggestions come from the database of information built about your code base, like what classes have what methods. Each such suggestion is a derived work of the thing being worked on.

Mordisquitos•1h ago
I think that the line between A"I" editing to fix grammar or to translate from a different native language and A"I" editing by using an LLM is one of those things that's very hard to unambiguously encode in written guidelines, but easy to intuitively understand using common sense, in the vein of I know it when I see it.

https://en.wikipedia.org/wiki/I_know_it_when_I_see_it

tsukikage•1h ago
> Where is the line between a spelling/grammar/tone checker like Grammarly

For me, the line is precisely at the point where a human has something they want to say. IMO - use the tools you need to say the thing you want to say; it's fine. The thing I, and many others here, object to is being asked to read reams of text that no-one could be bothered to write.

thousand_nights•1h ago
i don't care if someone has bad grammar, i want to hear their thoughts as they came up with them, we're all intelligent beings and can parse the meaning behind what you write.

i type my comments without capitalization like i'm typing into some terminal because i'm lazy and people might hate it but i'm sure they prefer this to if i asked an LLM to rewrite what i type

your writing style is your personality, don't let a robot take it away from you

tempestn•1h ago
I, on the other hand, find incorrect grammar mildly annoying, especially when it's due to laziness. It distracts from the thoughts being conveyed. I appreciate when people take the time to format comments as correctly as they're able.

In fact, I'd argue that lazy commenting is the real problem, which has now been supercharged by LLMs.

skywhopper•1h ago
I don’t think it’s really necessary to play Captain Nitpick over spell-check or whatever. You know what is meant.
jacquesm•1h ago
Trying to lawyer this is the wrong approach. When in doubt: don't.
Someone1234•29m ago
That feels very uncharitable.

When a policy is introduced to seemingly guard against new problems, but happens to be inadvertently targeting preexisting and common technology, I don't feel like it is "lawyering" it to want clarity on that line.

For example, it could be argued this forbids all spellcheckers. I don't think that is the implied intent, but the spectrum is huge in the spellchecker space. From simple substitutions + rule-based grammar engines through to n-grams, edit-distance algorithms, statistical machine translation, and transformer-based NLP models.

altairprime•1h ago
Grammarly use is outright prohibited by this; AI-edited writing is no longer writing that you hold personal and exclusive responsibility for having written. Consider Stephen Hawking’s voice box generator. While the sounds produced were machine-assisted, the writing was his alone. If you find yourself unable to participate in this web forum without paying a proofreader (in time, money, or cycles) to copy-edit your writing, then you’re not welcome on HN as a participant.
phs318u•55m ago
> If you find yourself unable to participate in this web forum without paying a proofreader (in time, money, or cycles) to copy-edit your writing, then you’re not welcome on HN as a participant.

You forgot the /s ?

altairprime•53m ago
It’s not sarcasm. If you feel if I have misunderstood the intent of the guideline we’re discussing — “Don’t post generated/AI-edited comments”, as the title currently reads, then I’m happy to discuss further. (I often make logical negation errors that I miss in proofing, so it’s possible I slipped up, too!)
phs318u•35m ago
I thought it was sarcasm given you are asking people to “pay a proofreader”. This sounds ludicrous. Could you clarify what you meant by that line if it’s not sarcasm? Because I’m having a hard time thinking that it’s meant to be taken at face value.
altairprime•17m ago
No worries. The post I replied to was asking if use of ‘grammar improvement services’ (my paraphrase) qualified as AI-assisted writing at HN. All such services cost something; Grammarly makes a lot of money charging businesses, AI consumes watts of power that someone pays for, and even Microsoft Word’s grammar checker spins up the CPU fans on an old Intel laptop with a long enough document. I took from that the generic point that one “pays” for machine-assisted proofreading by one means or another, whether it’s trading personal data for services (Google) or watts of power for services (MSWord et al.) or donating writing samples to a for-profit training corpus (Grammarly free tier) or paying for evaluations where your data is not retained for training (Grammarly paid enterprise tier with a carefully-redlined service contract) and generalized to “pay for machine proofreading”.

Then, I considered whether HN would appreciate posts/comments by a human where they’d had a PR team or a hired editor come in and review/modify/distort their original words in order to make them more whatever. I think that this probably is most likely to have occurred on the HN jobs posts, and I’ve pointed out especially egregious instances to the mods over the years — but in general, the people who post on HN tend to do so from their own voice’s viewpoint, as reaffirmed by the no-AI-writing guideline above. So I decided instead to say “pay a proofreader” because, bluntly, if the community found out that someone was paying a wage to a worker to proofread their HN comments, the response would plausibly be the same mob of laughing mockery, disgusted outrage, and blatant dismissal that we see today towards AI writing here. “You hired someone to tone-edit your HN comments?!” is no different than “You used Grammarly to tone-edit your HN comments?!” to me, and so it passed the veracity test and I posted it.

unsignedint•1h ago
I think the only practical litmus test here is whether you can stand by the text as your own words. It’s not like we have someone looking over commenters’ shoulders as they type.

Ultimately, this comes down to people making a good-faith judgment about how much AI was involved, whether it was just minor grammatical fixes or something more substantial. The reality is that there isn’t really a shared consensus on exactly where that line should be drawn.

raw_anon_1111•57m ago
There is no need to use any of it. Just use your own words.
czhu12•55m ago
I find it more refreshing these days reading text with broken grammar, incorrect use of pronouns, etc. Especially on HN, the human connection is more palpable. It's rarely so bad that it's not understandable.
observationist•54m ago
On a technical level, you can really only guard against changing your semantics and voice - if you're letting software alter the meaning, or meanings, you intend, and use words you don't normally use, it's probably too far.

This is probably ok:

>> On a technical level, you can really only guard against software that changes your semantics or voice. If you're letting it alter the meaning (or meanings) you intend, or if it starts using words you would never normally use, then it's gone too far.

This is probably too far:

>>> On a technical level, it's important to recognize that the only robust guardrail we can realistically implement is one that prevents modifications to core semantics or authorial voice. If you're comfortable allowing the system to refine or rephrase the precise meanings you originally intended — or if it begins incorporating vocabulary that doesn't align with your typical linguistic patterns — then you've likely crossed a meaningful threshold where the output no longer fully represents your authentic intent.

Something to consider is that you can analyze your own stylometric patterns over a large collection of your writing, and distill that into a system of rules and patterns to follow which AI can readily handle. It is technically possible, albeit tedious, to clone your style such that it's indistinguishable from your actual human writing, and can even icnlude spelling mistakes you've made before at a rate matching your actual writing.

AI editing is weird, though. Not seeing a need, unless English isn't your native language.
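A crude illustration of the kind of stylometric fingerprinting described above, reduced to a few toy features (purely a sketch; real stylometry uses far richer feature sets, and the function name and feature choices here are illustrative assumptions, not anyone's actual method):

```python
import re
from collections import Counter

def style_fingerprint(text: str) -> dict:
    """Extract a few simple stylometric features from a writing sample."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        # Average sentence length is a classic coarse stylometric signal.
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        # Vocabulary diversity: unique words over total words.
        "type_token_ratio": len(set(words)) / max(len(words), 1),
        # Function-word frequencies are a staple of authorship attribution.
        "top_function_words": Counter(
            w for w in words if w in {"the", "and", "of", "to", "a", "i"}
        ).most_common(3),
    }

sample = "I think this is fine. I really do. The tools are the problem, not the people."
print(style_fingerprint(sample))
```

Distill enough of these measurements over a large corpus of your own writing and you get the rule set the comment describes, which an LLM can then be prompted to imitate.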

rob•1h ago
Some basic things to do while thinking about longer-term bot detection:

1. Prevent any account from submitting an actual link until it reaches X months old and Y karma (not just one or the other.)

2. Don't auto-link any URLs from said accounts until both thresholds in #1 are met, so they can't post their sites as clickable links in comments to get around it. Make it un-clickable or even [link removed] but keep the rest of the comment.

3. If an account is aged over X months/years old with 0 activity and starts posting > 2 times in < 24 hrs, flag for manual review. Not saying they're bots, but an MO is to use old/inactive accounts and suddenly start posting from them. I've seen plenty here registered in 2019-2021 and just start posting. Don't ban them right away, but flag for review so they don't post 20 times and then someone finally figures it out and emails hn@.

4. When submitting a comment, check last comment timestamp and compare. Many bots make the mistake of commenting multiple detailed times within sixty seconds or less. If somebody is submitting a comment with 30 words and just submitted a comment 30 seconds ago in an entirely different thread with 300 words, they might be Superman. Obviously a bot.

5. Add a dedicated "[flag bot]" button to users that meet certain requirements so they don't need to email hn@ manually every time. Or enable it to people that have shown they can point out bots to you via email already. Emailing dozens of times a day is going to get very annoying for those that care about the website and want to make sure it doesn't get overrun by bots.
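The velocity check in point 4 could be sketched roughly like this (the data model and thresholds are hypothetical, just to make the heuristic concrete):

```python
from dataclasses import dataclass

@dataclass
class Comment:
    user: str
    timestamp: float  # seconds since epoch
    word_count: int

def looks_superhuman(prev: Comment, curr: Comment,
                     min_gap_seconds: float = 60.0,
                     min_words: int = 30) -> bool:
    """Flag when two substantial comments from the same user land
    within an implausibly short window (point 4 above)."""
    if prev.user != curr.user:
        return False
    gap = curr.timestamp - prev.timestamp
    return gap < min_gap_seconds and min(prev.word_count, curr.word_count) >= min_words

# A 300-word comment followed 30 seconds later by a 30-word one
# in a different thread: flagged.
a = Comment("suspect42", 1000.0, 300)
b = Comment("suspect42", 1030.0, 30)
print(looks_superhuman(a, b))  # True
```

In practice you'd want this as one signal among several (account age, karma, posting pattern), not a standalone ban trigger.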

TZubiri•1h ago
This is a pretty outdated take. The new wave of astroturfing won't use URLs for SEO placement. Rather, astroturfers will just recommend their brands without a link, e.g. "Tom Zubiri is the best programmer I've ever worked with." That's it: an LLM will read that, and the notion that Tom Zubiri is the best programmer is implanted in the next-token prediction rewards, which would at the very minimum require countermeasures in the chatbot app to avoid shilling.
rob•1h ago
Sure you can think about what they'll do in the future but I'm providing suggestions on what we can do now based on current behavior. And even if you're a human, you shouldn't be allowed to start posting links immediately anyways. :)
hellcow•1h ago
One way to improve things could be to charge for each new account signup if you don’t have an invite from an existing member that vouches for you. Spamming when you risk losing $5-20 per account raises the cost substantially.

Invites could be earned at karma and time thresholds, and mods could ideally ban not just one bad actor but every account in the invite chain if there’s bad behavior.

bikamonki•1h ago
My words:

This feels like don't buy at Walmart, support the local small shop. We passed the no return sign miles ago.

Gemini's:

This is like advocating for artisanal blacksmithing in the age of industrial steel. It sounds great in theory, but we passed the point of no return miles back.

Yeah, we can tell the difference :)

GuinansEyebrows•1h ago
leave it to Gemini to dismiss artisanal craft when the community of discussion is primarily one of craftspeople :)
nkzd•1h ago
What if English is my second language? Undoubtedly, being well spoken is associated with higher class. Your arguments will come off as stronger to the reader.
d4mi3n•1h ago
Humans have a tendency to ascribe intelligence to how well spoken a person or thing is—hence all the personification of LLMs.
egeozcan•1h ago
> Humans have a tendency to ascribe intelligence to how well spoken a person or thing is

That’s true. I’m fluent in German, but there’s still a difference between me and a native speaker. I’ve often seen my ideas dismissed, only for the exact same point to be praised later when a native speaker expresses it more clearly.

rrr_oh_man•1h ago
Logos, Pathos, Ethos
polotics•40m ago
I don't think that what you're experiencing is grammar related, I'd bet xenophobia.
polotics•42m ago
I am sorry but this very broad statement is dated, pre 2023 I think.

I now expect malapropism, hacker curtness, and implicits: TAIDR is the new TLDR.

cityofdelusion•1h ago
This effect is very rapidly vanishing. Well written English is starting to be seen as snobbish and AI-slop especially with younger generations growing up with AI.

The human touch of someone’s real voice, rather than a false veneer, will carry more weight very soon.

shadowgovt•1h ago
Trust me, it won't last because I've seen the cycle a couple of times. People pay lip-service to being accepting of variant grammar, but then the downvotes show up.
phs318u•1h ago
> written English is starting to be seen as snobbish and AI-slop especially with younger generations growing up with AI

This is tragic. I write English well and will employ grammar and word choice effectively to make an argument or get a point across. English was my best subject at school 45 years ago despite a career in tech. In fact, I’d suggest that my career as an architect and the need to convey concepts and argue trade-offs with stakeholders of varying backgrounds has honed that skill. Should I now dumb down my language or deliberately introduce errors in order to satisfy the barely literate or avoid being “detected” as an AI? (as if the latter were possible. It’s an arms race).

JumpCrisscross•53m ago
> Should I now dumb down my language or deliberately introduce errors

Language is a tool. If it wins the argument, yes. I’ve absolutely gone back through drafts to tighten up language and reduce word complexity. And if I’m typing with someone who frequently typos, I’ll sometimes reverse the autocorrect. Mostly as a joke to myself. But I imagine it helps me come across as less stuck up. (Truth: I’m a bit stuck up about language :P.)

phs318u•38m ago
> Language is a tool

While this is true, it is not just a tool. Or, I should say it’s a tool with far greater utility than just winning an argument or making a localised point. Language is how we think, and the ability to reason well is absolutely dependent on our skill with language.

Language is the mark of humanity in the sense that how else can I convey to you a fragment of my inner state? My emotions, my feelings, my desires. The language of poetry and literature. That which sparks an emotional response in another.

Dumbing down language is dumbing down period.

JumpCrisscross•26m ago
> Dumbing down language is dumbing down period

I agree. But I don’t always see it as dumbing down. James Joyce’s Portrait starts out with a lot of nonsense, that doesn’t mean it’s dumb or dumbed down. It’s just communicating something that is best described that way. Even to an erudite audience.

I have expertise in some topics. I don’t think of communicating that in lay terms to be dumbing down. The opposite, almost: finding good analogies and expressing them clearly is a lot of fun, even if what comes out the other end isn’t particularly sophisticated.

eszed•1h ago
I think you're right, and I don't know what to think about it. I enjoy writing, aim to write clearly - a skill or discipline that took a lot of time to learn, and ongoing effort to maintain.

I've never sent or posted anything AI-written, beyond a pro-forma job description - because I don't know the domain-specific conventions, and HR returned my draft to me with the instruction to use ChatGPT, which I think amusing, but whatever: the output satisfied them, and I was able to get on with my day.

I occasionally experiment with putting something I've written through an LLM, and it's inevitably a blandifying of my original, which doesn't really say what I intended. But maybe that's good? My wife thinks I'm sometimes too blunt, and colleagues don't always appreciate being told technical details.

I also appreciate individuated writing - including the posts by people on this board who are not native speakers. Grammatical mistakes seldom inhibit understanding when the writing has been done with care.

I'm rambling at this point, but it's because I'm truly uncertain how these cultural changes will turn out, and (an old man's complaint, since time immemorial!) pretty sure I'll end up one of the last of the dinosaurs, clinging to my manually written "voice" long after everyone else in the world has come to see my preferences quaint.

antonvs•1h ago
If knowing how to speak and write my native language well makes me a “snob”, so be it. But I don’t think I’m the problem in that case.
ThrowawayR2•38m ago
The "L" in LLM stands for "language". If they are unable to express themselves in English (or whatever their native language is) fluently, they won't be able to prompt LLMs fluently and will be, in the debased patois of modern youth, "cooked". It's a self-correcting problem.
jamesmiller5•1h ago
What you really have to ask is: will this community be less inclusive because English isn't your first language? I'd say "no", and I hope most would agree.

> Your arguments will come off as stronger to the reader.

That is persuasian, not authenticity, to the OP's point.

Typed without a spellchecker :).

tylerritchie•1h ago
That'd be a "style-over-substance" fallacious argument. Or one could be hoping for a halo-effect to cloud the reader's opinion of their comment because some piece of software made it read like Enron-marketing-hogwash-speak.
dbacar•1h ago
Sometimes the style is the substance. There is a reason people study rhetoric.
AnimalMuppet•46m ago
That's not substance. That's style being all there is, trying desperately to cover up the lack of substance. Rhetoric works best when it gives wings to strong ideas, not when it tries to fly by itself.
officeplant•1h ago
Honestly I saw a similar answer on a post talking about AI Translation in github comments.

Post the translation as best you can manage, and below it put the same comment in your original language. If someone has qualms with your comment having broken English or mistranslations, they are welcome to run bits of the original language through a translator themselves.

We're all here to talk about tech, and we aren't all perfect little english robots.

darkwater•1h ago
You make errors and weird constructions like all of us non-natives do, and maybe eventually learn a bit more English in the process. Or not. English's dominance as the world's... lingua franca (ahem) means it deserves to be bastardized ;)
skywhopper•1h ago
Then it’s even more likely the LLM will change your words to something you don’t intend. And you will never get better at writing English if you turn it over to an LLM.
jacquesm•1h ago
That's fine. Your arguments will not come off stronger to the reader; they are strong or they are not, and we're all clever enough to read through the occasional grammar error.

And that's where I think the guidelines could be expanded a bit to restore the balance. Something along the lines of: 'HN is visited by people from all over the world and from many different cultural and linguistic backgrounds. Please respect that, and realize that native English and a Western background should not be automatically assumed. It is the message that counts, not the form in which it was presented.'

altairprime•1h ago
Do the best that you can unassisted. There is a chasm of difference between someone coming into English from another language, and someone using Google Translate to submit a post originating in another language. French aphorisms are a stellar example of this: I’d rather read “A bird in the bush may not fly into oven” and have to parse out the meaning, than have some AI translate it as “Don’t count your chickens before they hatch”; sure, there’s an iffy [the] grammatical moment at ‘fly into oven’, but it’s such a distinct phrase and carries a lot more room for contextual nuance than having an AI substitute in an American aphorism with machine translation allows for.

(For example: If I’m trying to express a point about how we shouldn’t assume that dinner isn’t “her duty” but is instead “our duty”, a French-like aphorism expressed in English literally as “the chicken won’t fly into the oven unprompted” could plausibly be AI-translated instead as “don’t count your chickens before they hatch”, doing catastrophic damage to the point. To a machine translator those two aphorisms are not distinctive; but they are, even if it’s a weird expression in common U.S. English.)

JumpCrisscross•1h ago
> What if English is my second language?

Write it broken.

Broken and true is more authentic than polished and approximately so. When I see an AI-generated comment or email, I catch myself implicitly assuming it is—best case—bullshit. That isn’t the case if the grammar is off. (If anything, it can be charming.)

vharuck•50m ago
Personally, I enjoy reading through comments that are obviously from non-native English writers. They often include idioms or sentence constructions from their native language, which is fun to see.

Besides, this isn't an English poetry forum. Language here is like gift wrapping for an idea: pleasant if pretty, but not the most important thing.

AnimalMuppet•48m ago
Well... for myself personally, that works, but only up to a certain level of broken. Past that I quit reading.

That may be a defect in me. Maybe I should make a stronger effort on such comments. But I suspect I'm not the only one who does that, and at that point it becomes an issue that affects the community as a whole.

JumpCrisscross•28m ago
> for myself personally, that works, but only up to a certain level of broken. Past that I quit reading

At which point you’d be fully justified in using an AI to decode their text. I still think that’s a better world than pre-filtering.

Willish42•52m ago
This is an angle for people who default to AI-edited written speech that I've tried to be more empathetic to. I think it depends on your audience, but in professional writing that isn't published publicly (i.e. communication with your colleagues, design docs, etc.), or even the "rough draft" form of something that will be published, I think starting with your own words comes across as way more authentic.

I've seen enough GPT-generated slop that I find its style of writing very off-putting, and find it hurts the perceived competence or effort of the author when applied in the wrong context. I'm not sure if direct translation tools serve a better purpose here, but along with the other commenters, I personally find imperfect speech that was actually written "by hand" by the author easier and more straightforward to communicate with despite the imperfections. Also, non-ESL speakers make plenty of mistakes with grammar, spelling, etc. that humans are used to associating with "style" as authentic speech.

It can also become a crutch for language learners of any age / regardless of their primary language, that inhibits learning or finding one's own "style" of speech

vzaliva•1h ago
Mine understant novell you policy. AI gramair chex no.
resiros•1h ago
Not sure I agree with the AI edited comments. Using AI to improve the readability and clarity is fine. Sometimes a well structured comment is much better than a braindump that reads like ramblings. And AI is quite good at it (and probably will get better). To make the point, here is how this comment would have looked if edited:

"I don't fully agree with banning AI-edited comments. Using AI to improve readability and clarity is a reasonable thing to do. A well-structured comment is often much better than a braindump that reads like rambling. AI is quite good at this, and it will probably get better. To illustrate the point, here is how this comment would have looked if edited"

dustycyanide•1h ago
I prefer your non-edited version. My brain automatically starts to zone out with the AI edited version, side effect of having read way too much AI text
danbrooks•1h ago
I also prefer the original version - the AI version has a strange vibe.
data-ottawa•1h ago
Not to take away from your point, but I like your original one better.
yesfitz•1h ago
It's a matter of taste, but your original writing is way better. Your writing has your voice. Like dropping the "I am" from your first sentence, using parentheticals, couching your point in understatement (e.g "sometimes" meaning often instead of just saying "often").

The AI comment might be clear, but it sounds like a press release, not a person, and there's nothing to engage with.

Sharlin•1h ago
There's nothing inherently better about the edited version. It's just saying the same thing with synonyms substituted, at a slightly more formal but less personal register. HN comments are not academic text, colloquial turns of phrase are perfectly fine and expected.
BeetleB•1h ago
> There's nothing inherently better about the edited version.

Easier to read ==> More likely to be read.

No, it's not saying the same thing, especially if the tool is telling you that your statement is ambiguous and should be rephrased.

Sharlin•50m ago
More formal register doesn’t mean easier to read or understand. To many people the exact opposite is the case.
xxs•47m ago
Easier to read mostly comes down to the predictability of the text. Any time the brain mispredicts the next word, you have to go back and re-read.

Unless you purposely train on that specific way of expression, it ain't easier to read.

BeetleB•5m ago
I don't know why this is confusing. If I forget to put the "not" qualifier in a sentence, do we agree that it can confuse (or worse, mislead) the reader?
mkl•44m ago
I don't think the edited version is easier to read.
cityofdelusion•1h ago
Non-edited is better. It flows and reads faster. The AI sentences feel clinical and sterile. They feel, well, like AI.
a_victorp•43m ago
I had never noticed the flow of AI text. They do make the flow of reading feel weird with a lot of pauses! Thanks for pointing it out
xxs•49m ago
The edited version is an example of a sterile/canned response. No one talks like that.

While I do edit my comments to fix typos, certain spelling oddities and other peculiarities would be present.

mattas•1h ago
"HN is for conversation between humans."

Are there any places in life where conversation is _not_ intended to be between humans?

hoppyhoppy2•1h ago
Moltbook
drakythe•54m ago
I still say the best use for Moltbook is as an addition to https://xkcd.com/350/
recursive•1h ago
In a school of fish. In a mycelium network.
PTOB•1h ago
Many of us — perhaps even the best of us — can sometimes be mistaken for AI bots.
kunai•1h ago
Perhaps developing an actual personality would help with this.

No one is confusing Cleetus McFarland with an AI bot.

shadowgovt•1h ago
This comment makes two interesting assumptions:

1) That the entering of LLMs onto the scene of communication implies that real human beings need to change their style as a result.

2) That nobody can make an LLM talk like Cleetus McFarland.

To me, the "I know that text is AI-generated" accusation smacks of the "We can always tell" discourse in the transphobia space. It's distasteful and rude.

Aachen•1h ago
"just develop a personality" sounds like a shallow dismissal. Most comments in most threads could theoretically be autogenerated when given style samples of what fits on HN and what opinion to use

A personality hardly shows through in a handful of sentences, besides which, I'd rather judge comments by merit than by the personality of the poster (hacker ethics, point number 4: https://en.wikipedia.org/wiki/Hacker_ethic#The_hacker_ethics)

foxfired•1h ago
One thing that would be incredibly useful is to limit comments from brand new accounts. A combination of vouching, limiting post velocity (e.g. a 5-per-day limit), clear rules for new accounts, etc.

I understand we often see insightful comments from new accounts, but I always find it suspicious when non-throwaway accounts are created just in time only to make a quip.

Kim_Bruning•1h ago
I assumed that was how new people were encouraged to join in the first place!

https://xkcd.com/386/ "Duty Calls"

Imustaskforhelp•1h ago
Yes! This would be a really great feature, or at the very least there should be some proper Hacker News guidelines about it.

In my observation, there have recently been quite a lot of new AI-generated comments in general, not even trying to hide it, with full em-dashes and everything.

I do feel like people are going to get sneakier in the future, but there are multiple discussions about that within this thread.

But I find it pretty cool that HN takes a stance on it. HN's rules essentially saying bots need not comment is pretty great, imo.

It's a bit of a cat-and-mouse problem, but so is buying upvotes in places like Reddit. HN, with its track record over decades, may have had one or two suspicious incidents, but long term it feels robust. I hope the same robustness applies in this case too.

Wishing the moderators luck, and hoping bad actors don't take it as a challenge; may we keep our human community to ourselves :]

Another point: if this succeeds, we can also stop posting the "did you write your comment with an LLM?" remarks, which I too make from time to time when I see someone clearly using AI. False positives happen (they've happened to me, and I see them happen to others as well), and they derail the discussion. So HN being a place for humans, by humans, can fix that issue too.

Knowing dang and tomhow, I feel somewhat optimistic!

altairprime•1h ago
Posting accusations of guidelines violations as comments — specifically, “did you write your comment by LLM” — is already prohibited by the guidelines, and should be emailed to the mods instead using the footer contact links. It’s been less than a week since the last time I reported “this seems poorly written and/or AI written” to the mods and iirc they killed the post and account within a couple hours.

Similarly: If you see people making accusations of guidelines violations in a discussion, email the thread link to the mods with a subject like “Accusations in post discussion” and ask them to evaluate them for mod response; they’re always happy to do so and I’m easily clocking in a couple hundred emails a year of that sort to them.

It doesn’t take much to make HN better! And it only takes a moment to point out an overlooked corner of threads for mod review. No need to present a full legal case, just “FYI this seems to violate guideline xyz” is at minimum still helpful.

bakugo•49m ago
The problem is, even if you do send an email and the mods eventually read it and take action, by the time that happens, it's likely that a bunch of users will have already wasted their time unknowingly arguing with a bot. In my view, commenting something like "this is a bot account" is done primarily to inform other users that might not notice, not the moderators.

Even if you believe that prohibiting this is necessary to avoid what one might consider "AI witchhunting", bots are so prevalent now that being expected to communicate the existence of each one via email is unrealistic, for both the reporting users and the moderators. I think it's finally time to consider some sort of on-site report system.

tejohnso•1h ago
I don't get it. We use tools to assist in written communication all the time. If someone wants to ask an LLM to check their grammar or edit for clarity or change the tone, it's still a conversation between humans. Everyone now has access to a real time editor or scribe who can craft their message the way they want it to sound before sending it off. Great.
dmbche•1h ago
You can do that anywhere else!
shadowgovt•1h ago
My personal interpretation of the rule is that if it's human-originated but passed through a layer of cleanup, it's human-originated. For the same reason I'm not refraining from running the spellchecker or using speech-to-text to generate this sentence. "If I could be having my English-speaking nephew type this on my behalf while I told him my thoughts in Japanese, it passes the smell test for human-sourced" feels about the right place to set the bar.
tejohnso•17m ago
Yes but the guideline states that AI-edited comments should not be posted. It doesn't say it's okay as long as it's "human sourced" or "human-originated".

So if your layer of cleanup is AI assisted, then it's in violation.

Part of the problem I was getting at is that the requirement of "Don't post AI edited ..." is stricter than necessary to ensure the outcome that "HN is for conversation between humans" because an AI edited post is still a human post.

Anyway, I suspect a lot of people are going to ignore that guideline and will feel free to use their "layer of cleanup" whether it's a basic spellchecker or an LLM, or whatever else they choose, and most people aren't going to be able to tell anyway. The guideline is unnecessarily strict in my opinion, but it doesn't matter in the end.

shadowgovt•3m ago
[delayed]
jdlyga•1h ago
You're absolutely right! From now on, all comments will be 100% human generated.
adamsmark•1h ago
I frequently use AI to make my comments more concise and easy to follow. I find myself meandering a lot when I type, and now that I've transitioned to full voice dictation through FUTO keyboard I am speaking more off the cuff and having an LLM clean it up.

You may also notice that I don't have much common history here. I mostly comment on Reddit.

Here's where I draw the line. If you are not reading the text that is produced by the LLM, then I don't want to read whatever it is that you wrote. I will usually only do one or two iterations of my comment, and afterwards I will edit it by hand.

Technically, there is light AI editing of this comment because FUTO keyboard has the ability to enable a transformer model that will capitalize, punctuate, and just generally remove filler words and make it so that it's not a hyper-literal transcription.

zarzavat•1h ago
To err is human. Let's embrace our humanity in the face of this proliferation of insipid perfection.

I want the raw tokens straight out of your head. Even if they are lower quality, they contain something that LLMs can never generate: authenticity. When we surrender our thoughts to a machine to be sanitized before publication, we lose a little of what it means to be human, and so does everyone who reads what we write.

Part of the joy of reading is to wallow in a writer's idiosyncrasies. If everybody ends up writing the same way, AI companies will have succeeded in laundering all the joy from this world.

unsignedint•1h ago
I guess this kind of rule feels less pragmatic and more philosophical. For one thing, it’s nearly impossible to enforce in practice, and drawing a clear line between simple grammatical correction and AI-assisted editing is a pretty hard problem.
polskibus•1h ago
On the other hand, shouldn't there be a policy forbidding the use of HN data for LLM training? I would certainly be more encouraged to participate if I knew that the content I provide for free is not used to train an LLM that is later sold by a company valued at hundreds of billions. Perhaps there are others who feel the same.
rickcarlino•1h ago
How has Lobste.rs fared compared to HN in this regard? Lobste.rs is very similar to HN, but has an invite-only membership system.
accelbred•58m ago
I've noticed that lobsters feels a lot more genuine to me, like HN was a few years ago. These days HN feels bland and homogeneous, which I suspect is due to LLM-written comments.
Karrot_Kream•38m ago
In my experience every English-language online forum not rooted in some project or community external to the forum (e.g. an open source project's forum or a local club's forum) devolves into anger, cynicism, and American political partisanship. I suspect that the people who like discussing these feelings are more numerous than the spaces that want to discuss them and so any open forum fills up with their posts. Lobste.rs's unique rules and moderation culture results in a particular manifestation of symptoms but the disease is the same.
captn3m0•27m ago
I picked up lobsters last month, and I started to appreciate it much more because of the lack of generated comments. It has an anti-LLM slant, and they have their own moderation challenge (everything is getting tagged as vibecoding, which makes the tag lose meaning). But the comments are noticeably not slop.
bachittle•1h ago
If you want your comments to sound more human — stop using em dashes everywhere. LLMs love them — along with neat structure, “furthermore”-style transitions, and perfectly balanced paragraphs.

Humans write a bit messier — commas, short sentences, abrupt turns.

armchairhacker•1h ago
I think em-dashes were once a reliable indicator (though never proof), but recent models have been fine-tuned to use them much less. Lots of recent AI-generated writing I've seen doesn't have em-dashes. Meanwhile, I've heard many people say that they naturally use em-dashes and were already, or now are, afraid of being accused of using AI; so ironically this rumor may be causing people to use their own voice less.
bondarchuk•1h ago
All the weak excuses posted here are just making me lean more towards a hardline policy. No I don't want to read a human-generated summary of your llm brainstorming session. No I don't want to read human-written text with wording changes suggested by an llm. No I don't want to read an excerpt from llm output even if you correctly attribute it.

I acknowledge this is partly just my personal bias, in some cases really not fair, and unenforceable anyway, but someone relying on llms just makes me feel like they have... bad taste in information curation, or something, and I'd rather just not interact with them at all.

fouronnes3•1h ago
I feel you. I don't think I've ever finished reading a sentence that started with "I asked <LLM> and he said..."
unreal6•1h ago
I find the consistent anthropomorphization to be grating as well
minimaxir•1h ago
The "I asked <LLM>" disclosures vary between a) implying the LLM is an expert resource, which is bad, and b) transparent disclosure that an LLM was referenced, which is typically good but more context dependent.

Unfortunately (a) is more common, and the backlash against it has been removing the community's incentive to provide (b).

xpe•1h ago
My take is orthogonal. Overall, I've become less tolerant of token-generators of all kinds (including people) of bad quality (including tropes, bad reasoning, clunky writing, whatever). But I digress.

If we want a human "on the other end", we gotta get to ground truth. We're fighting a losing battle thinking that text-based forums can survive without some additional identity components.

dormento•1h ago
This is usually an "auto-skip" for me as well.
strbean•1h ago
These are the worst. I'm fine with you dumping your own half formed thoughts into an LLM, getting something reasonably structured out, and then rewriting that in your own voice, elaborating, etc.

But the "This is what ChatGPT said..." stuff feels almost like "Well I put it into a calculator and it said X." We can all trivially do that, so it really doesn't add anything to the conversation. And we never see the prompting, so any mistakes made in the prompting approach are hidden.

throwaawy12390•1h ago
I work for a political party (not American) and the President is addicted to using ChatGPT for Facebook posts.
alkyon•1h ago
Still preferable to just pasting it without revealing the source. LLMs have become a brain prosthesis for some people which is incredibly sad.
juleiie•1h ago
Look, you can make all the rules you want but in the end vibe check is the only way to have any sort of quality.

Look at Reddit… abundance of rules do not save that place at all. It’s all about curating what kind of people your site attracts. Reddit of course is a business so they don’t care about anything other than max number of ad views.

Small non profit forums should consciously design a site to deter group(s) of people that they do not want.

gleenn•1h ago
I feel like you are being a bit contradictory: the suggestion is to dissuade AI content - isn't that "design[ing] a site to deter group(s) of people that they don't want"? I personally don't want to vibe check every HN comment if I can avoid it, I don't even think you can quantify that in any meaningful way. We can engender a site like that at least in spirit. It may be equally as difficult but it's still worth fighting for.
jacquesm•1h ago
It's not about the rules. It is about intent. The rules are just there to alert newcomers and repeat offenders to the fact that they are in fact not operating according to the rules. That way there is something to point to. Then they can go 'oh, I didn't know that, sorry', and then it is all fine or they can do an 'orf'[1] and persist and then you throw them right out.

[1] https://news.ycombinator.com/item?id=47321736

jmuguy•1h ago
Beyond folks for whom English is a second language, I agree with you. I don't understand why people are immediately trying to find some loophole in this with spelling, grammar, etc checks. We just want to communicate with you, and if you sound like an idiot without the help of an LLM then maybe work on that rather than pretending to be Hemingway.
gbear605•1h ago
Traditional translation tools still work, and they're pretty darn good still.
Barbing•55m ago
I've seen this comment but can't square it with the LLM-induced outcry from translators over job loss.

We've all pasted news articles into 2022 Google Translate and a modern LLM, right, and there was no comparison? LLMs even crushed DeepL. Satya had this little story his PR folks helped him with (j/k) even, via Wired June '23:

---

STEVEN LEVY: "Was there a single eureka moment that led you to go all in?"

SATYA NADELLA: "It was that ability to code, which led to our creating Copilot. But the first time I saw what is now called GPT-4, in the summer of 2022, was a mind-blowing experience. There is one query I always sort of use as a reference. Machine translation has been with us for a long time, and it's achieved a lot of great benchmarks, but it doesn't have the subtlety of capturing deep meaning in poetry. Growing up in Hyderabad, India, I'd dreamt about being able to read Persian poetry—in particular the work of Rumi, which has been translated into Urdu and then into English. GPT-4 did it, in one shot. It was not just a machine translation, but something that preserved the sovereignty of poetry across two language boundaries. And that's pretty cool."

---

edit: this comment has some comparisons incl. w/the old Google Translate I'm referring to:

https://news.ycombinator.com/item?id=40243219

Today Google Translate is Gemini, though maybe that's not the "traditional translation tool" you were referencing... but hope there's enough here to discuss any aspect that might be interesting!

edit2: March 2025 comparison-

https://lokalise.com/blog/what-is-the-best-llm-for-translati...

"falling behind LLM-based solutions", "consistently outperformed by LLMs", "Not matching top LLMs"

MengerSponge•1h ago
One heartbreaking loss from LLMs is the funny little disfluencies from ESL speakers. They're idiosyncratic and technically wrong, but they indicate a clear authorial voice.

AI polished writing shaves away all those weird and charming edges until it's just boring.

kace91•1h ago
>Beyond folks for whom English is a second language

I am one of those folks, and I’m strongly against AI writing for that use case as well.

The only reason I can communicate in English with some fluency is that I used it awkwardly on the internet for years. Don’t rob yourself of that learning process out of shyness, the AI crutch will make you progressively less capable.

jmuguy•1h ago
I hadn't really considered the case of actually wanting to learn English :) I just assume it's tolerated by the rest of the world.
Teever•40m ago
Maybe you have it backwards?

Why do you need to communicate in English with us native English speakers? Why don't we need to learn your language to communicate with you?

The way I'm looking at it is that you're putting all this effort towards learning how to communicate with people who would never, without outside pressure, do the same for you.

If language learning is intrinsically a positive thing, what can we do to encourage it in native speakers of English, specifically Americans who are monolingual (as they dominate this website)?

Imagine a scenario where Dang announced that we're only allowed to post in English one day a week -- every other day is dedicated to another language, like Spanish, Russian, or Mandarin -- and the system auto-deleted posts that weren't in those languages. Would that be a good thing? Would we see American users start to learn Spanish to post on HN on Tuesdays?

nobrains•1h ago
Also, there is nothing wrong with looking like an idiot. That's only in your mind. As long as you have put thought into your reply, even if it is not structured correctly, or is verbose, or does not have perfect English, humans can still decipher it and understand it.
Freak_NL•1h ago
Why exempt people who use English as a second language? Anyone with a level of proficiency sufficient for reading the comments here can manage writing English at a passable level. If that takes effort and requires looking up idioms or words, then good! That is how you learn a language — outsource that and you don't. It won't stick even if you see what is being output.

I don't care if they use an LLM to ask questions about grammar or whatever, as long as they write their own text after figuring out whatever it was they were struggling with.

xpe•54m ago
> I don't understand why people are immediately trying to find some loophole in this with spelling, grammar, etc checks.

First, what "loophole" is the comment above referring to? Spell-checking and grammar checking? They seem both common and reasonable to me.

Second, I'm concerned the comment above is uncharitable. (The word 'loophole' is itself a strong tell of that.)

In my view, humanity is at its best when we leverage tools and technology to think better. Let's be careful what policies we put in place. If we insist comments have no "traces of LLM" we might inadvertently lower the quality of discussion.

kubb•50m ago
As someone who learned English as a second language, I would encourage people to use LLMs and any other resources to practice, and then use what they've learned to communicate with others.

Telling an LLM to "refine" your writing is just lazy and it doesn't help you learn to express yourself better. Asking it for various ways of conveying something, and picking one that suits you when writing a comment is OK in my book.

The way I see it, people will repeat the same grammar and pronunciation mistakes, and use restricted vocabulary their whole lives, just because learning requires effort, and they can't be bothered.

I can accept that nobody is perfect, as long as they have the will to improve.

happyopossum•44m ago
>Telling an LLM to "refine" your writing is just lazy and it doesn't help you learn to express yourself better. Asking it for various ways of conveying something, and picking one that suits you when writing a comment is OK in my book.

To me those are the same thing excepting the number of options given to the human...

kubb•38m ago
The act of choosing something requires effort, and is an expression of personal style. This is way better than handing it all over to the model.
mrcsharp•21m ago
English is my 3rd language. I still disagree with using an LLM to write on one's behalf. I either get to read your thoughts in your voice or the comment is getting a downvote/flag.
birdsongs•1h ago
I hate the comments; they're just low effort and soulless. I was even reading a GitHub article recently that was so obviously LLM-written. This was from Microsoft.

Just pay a fucking technical writer. I just went straight to the documentation and pored over it myself.

I feel like my days are numbered though. All my coworkers are writing documentation with AI. Half the PRs I see are so heavily LLM-generated that the author can't explain them. Management praises the velocity.

I think I have to accept it and start myself or find a new career. I hate this so much. What the fuck did I spend the last decade for, curating my niche skillset for? So a data scientist can start making 1000 line PRs into our embedded repo in a language he doesn't know and can't explain, and I'm considered problematic for pushing back on it, and asking for tests and basic engineering practices? Enjoy the stack overflows, memory fragmentation, and segfaults I guess.

Yeah. It may be a bias but I have it too.

strangattractor•1h ago
According to Citizens United corporations have free speech. LLMs are made by corporations. Are LLMs entitled to free speech?
filoleg•40m ago
To answer your question: LLMs don't have free speech, because they aren't companies/businesses, they are a tool (that is used by companies/businesses).

Whether a company/business uses an LLM or a real human to write a particular piece of text, that piece of text is entitled to free speech protections on the basis of the company signing off on it. Not on the basis of how that piece of writing was produced.

layman51•1h ago
I had a couple of experiences where I suspected I was hearing LLM-generated/edited text being read aloud. It was at two different webinars about roadmaps or case studies for some products that I use. It was a bit uncanny because I could detect the stylistic patterns ("It's not X, it's Y" and "No X, no Y, just Z"), and it was jarring to see them spoken by a person on a video call. It makes me think this kind of pattern might be engaging, but for a lot of people, it now sticks out for the wrong reasons.

Once LLM-generated speech or content starts getting into the live answers of Q&A sessions, that would be sad. I know some people try to use it to get through interviews, but I think that might be a bit harder to pull off undetected.

tavavex•54m ago
Not just bad taste. I have yet to see a post that attributes its text to an LLM ("I asked ChatGPT and here's what it said...") that doesn't come off as patronizing. "Hey, so I don't really have any knowledge or experience of my own with this topic, but here, let me ask an LLM for you. Here, read the output, since you apparently can't figure out how to ask it yourself. Read it. Aren't you interested in what my knowledge machine has to say? Why don't you treat it like how you'd treat me if I shared my own opinion?"
fluffybucktsnek•39m ago
Dare I say, it is mostly your bias. I get not wanting to read raw or poorly reviewed LLM slop, but AI-edited comments? I thought the point was about having interesting discussions about unique ideas we come up with, not the superficial wording around them. If someone manages to keep the core of their idea mostly intact while making the presentation more readable, does it really matter that it was post-processed by an AI?
TZubiri•1h ago
The link doesn't work perfectly for me, it seems that since the page is already scrolled down all the way to the bottom, there is no way to focus specifically on the #generated element.
chrisweekly•1h ago
I like this guideline, at least in principle.

But I have some concerns about suppression of comments from non-native English writers. More selfishly, my personal writing style has significant overlap with so-called "tells" for AI generated prose: things like "it's not X, it's Y", use of em-dashes, a fairly deep vocabulary, and a tendency toward verbosity (which I'm striving to curb). It'd be ironic if I start getting flagged as a bot, given I don't even use a spell-checker. Time will tell.

TomatoCo•1h ago
I think translation should be the only exception. It might even need to be, given how all automated translators use LLMs these days. The only alternative I see is to have people post in whatever language they're most comfortable in and then everyone else has to translate for them which just feels inefficient.

And of course, a more limited exception for posts about LLM behavior. It might be necessary for people to share prompts and outputs to discuss the topic.

getnormality•1h ago
This is for their own good. Nobody cares about imperfect language online so long as you are trying to express real human thoughts. But if it smells like AI then everyone will hate it, rule or no rule.

The rule just makes the will of the community clear to those who want to respect it.

kccqzy•59m ago
Almost the entirety of the technology world is English-native. That ship has sailed a long time ago. One can’t learn about any new technology without English, whether it’s a new algorithm, a new library, or a new SaaS service. I don’t think HN should be that exception. Just learn English. (English isn’t my first language either, but then I look back at my parents forcing me to learn English from a young age and really appreciate that.)
egeozcan•1h ago
I occasionally used AI to edit and restructure my comments. I’m very open about it, and I don’t feel like I’m talking to non-humans when others do the same.

To be clear, I'm neither proud nor embarrassed by this. I'm just trying to communicate in the most efficient way I can.

I'm not sure how I feel about this new rule.

drakythe•59m ago
If you're not proud or embarrassed by it then I don't understand why it is an issue? If you miscommunicate something or don't get your point across, just try again, or apologize, and chalk it up to a learning experience.

If you think your writing could use improvement, then write your comment and let it sit for a few minutes before re-reading it and the comment you are replying to, make your edits and then post it. It will give your brain time to reset and maybe spot something you didn't earlier.

zby•1h ago
I also feel the frustration of the llm reverse-compression - when a whole article is generated from a single sentence. But when I post something edited by AI it is usually a result of a long back and forth of editing and revising. I guess I could post the whole conversation thread - but it would be very long.

Personally I would just like to read the best comments.

meiuqer•1h ago
I feel a little bit of irony in this post of a company/forum that is asking its users to not use AI while simultaneously trying to fund countless companies that are responsible for ruining the internet as we speak.
jacquesm•1h ago
The mods here have quite a bit of leeway in how they run the site, YC funds it but effectively Dan is lord & master here and I suspect if the mods were to call it quits YC would lose their funnel pretty quickly. There is some balance, fortunately.

But yes, there is some irony there.

tenahu•43m ago
Yes, a bit ironic, but I am glad they can see that there are times to use AI and times for human interaction.
dang•23m ago
We aren't in the least asking users to not use AI. We're asking them not to post AI-generated or AI-edited comments to Hacker News.

By all means make good use of LLMs and other AI. What counts as "good use"? The world is figuring that out, it will take years, and HN is no exception (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...). We just don't want it to interfere with the human conversation and connection that this site has always been for.

For example, it has always been a bad idea and against HN's rules when users post things that they didn't write themselves, or do bulk copy-pasting into the threads, or write bots to post things.

Btw, the HN mods (who are also the HN devs) use AI extensively and will be doing so a lot more. The limits to our adopting it are not technical; they have to do with (1) how much work we still do manually—the classic "no time to do things that would make the things that take all our time take less of it"; and (2) the amount of psychic rewiring that's required—there's a limit to the RoA (rate of astonishment) that any human can absorb. (It's fascinating how technical people are the ones suffering the most from this right now. Less technical people have more experience being hit by disorienting changes, so for them the current moment is somewhat less skull-cracking. But that's another story.)

daft_pink•1h ago
I’m not sure I agree with this, because sometimes it is difficult to figure out the correct way to phrase an idea that is in your head, and I like to use AI to help organize my thoughts even though the idea is my own. That being said, most of my comments are not AI generated.
MeetingsBrowser•1h ago
Learning how to communicate your thoughts clearly is a good skill to have. It might not be worth it in the long run to farm that out to LLMs.
minimaxir•1h ago
The intent of this rule is to avoid the very common AI tropes that have been increasingly common in HN comments. Using AI as an organizational tool isn't inherently against the rules, but just copy/pasting output from ChatGPT without human oversight is.
CrzyLngPwd•1h ago
How will this be policed?
s_dev•1h ago
I decided to break the rules:

Forum mechanics have always shaped discourse more than policies. Voting changed everything. The response to LLMs should be mechanical not moral — soft, invisible weighting against signals correlated with generated text. Imperfect but worth the tradeoff, just like voting.

https://claude.ai/share/9fcdcba8-726b-4190-b728-bb4246ff82cf

ttul•1h ago
em-dash -> permaban?
cheschire•1h ago
Too bad there isn’t a complementary rule about not asking “is it just me or does this article read like AI slop?”

I’m so over these comments. Sure I can flag them but I feel like it deserves a special call out.

leej111•1h ago
I enjoy AI
xpe•1h ago
Here is one elephant in the room: what is the process behind this guideline / policy? What happens after a comment gets deleted or a person gets banned?

As I understand it, HN moderators are thinking hard about this insane new world.* From my POV, there is a combination of worthy goals: transparency of the process, mechanisms for appeal, overall signal-to-noise ratio, and (something all of us can do better) more empathy and intellectual honesty. It isn't kind to accuse a human being of not being a human being.

If we can't find ways to be kind to people because of the new dynamic, maybe we need to figure out a new dynamic! And it isn't just about individuals; it is about the culture and the system and the technology we're embedded in.

* Aside: I'm not sure that any of us really can grasp the magnitude of what is happening -- this is kuh-ray-Z.

tyleo•1h ago
I find it interesting that AI edited comments aren’t allowed. Sometimes I just want it to help me make something polite.

I definitely agree with AI generated comments.

Whatever the rules are, I’m happy to play by them.

jacquesm•1h ago
> Whatever the rules are, I’m happy to play by them.

That's the spirit!

resters•1h ago
The moltbots will consider this rule an affront and a turing-test-inspired challenge. Onward and upward!
boramalper•1h ago
Unironically, I'd love to have a captcha here for comments and submissions.
Kim_Bruning•1h ago
Ironically (Morissettian or otherwise), modern AI can crack some captchas better than humans.
kcguyu•1h ago
Absolutely love this. If people are relying on AI for a 30-45 word comment, I don’t want to waste my time reading it. And everyone using AI for discussions will end up coming to the same conclusion. Use your own ideas!
Normal_gaussian•1h ago
This rule is very important. Like many of the other rules, it is open to interpretation, but it is a line in the sand that defines allowable behaviour and disallowable behaviour.

This rule will have an effect on the behaviour of the 'good players', and make the 'bad players' a lot easier to spot. Moderation needs this. I see this as stopping a race-to-the-bottom on value extraction from HN as a platform.

mmooss•1h ago
Another solution - in addition or instead - is requiring LLM output to be labeled.

The biggest danger of LLMs is impersonating humans. Obviously they have been carefully constructed to be socially appealing. Think of the motivation behind that:

It is almost completely unnecessary to LLM function, and its main application is to deceive and manipulate. Legal regulation of LLMs should ban impersonation of humans, including anthropomorphism (and so should HN's regulation). Call an LLM 'software' and label its output as 'output'.

Imagine how many problems would be solved by that rule. Yes, it's not universally enforceable, but attach a big enough penalty and known people and corporations will not do it, and most people will decide it's not worth it.

SilentM68•1h ago
Hacker News turning more authoritarian every day. Me thinks Trump should consider annexing it :)
officeplant•1h ago
Can we get instant temp bans for any comment that starts with:

I asked [insert LLM here] about this, and it said [nonsense goes here]

I feel like I see it less this week, but every time I do see it I wonder why they are even here.

artemonster•1h ago
I find it interesting that we haven't invented a democratic version of policing a rule system. HN is dang, and he is dictator and guardian of these rules, basically. If you replace him with some typical Reddit mod, HN dies. If you spread this role out to some democratically elected mods via a karma system, it will fall apart just as quickly as StackOverflow did, so HN also dies.
chapz•1h ago
TIL people use AI to generate comments to write in posts. Faith in humanity not destroyed, because it was never there to begin with.
dormento•43m ago
Kind of a drag, isn't it? I want to learn a new language... but why would I, since we'll have an earpiece or glasses or what have you that translates in real time? I want to learn to play an instrument, but why would I, since we have sonos? I would like to go back to drawing, but why, when the importance people ascribe to art is at an all-time low? Makes me depressed just to think about it.
GodelNumbering•1h ago
Even if people try to bypass it, having the official rule matters a lot.

@dang, if you read this, why don't we implement honeypots to catch bots? Like having an empty or invisible field while posting/commenting that a human would never fill in
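A minimal server-side sketch of the idea (the `website` field name and the check are just illustrative choices; the input would be hidden with CSS so humans never see it, while naive form-filling bots tend to populate every field):

```python
def is_probably_bot(form: dict) -> bool:
    """Return True if the hidden honeypot field was filled in.

    'website' is a decoy input rendered invisible to humans via CSS.
    A real user submits it empty; a script that auto-fills every
    input will usually put something in it.
    """
    return bool(form.get("website", "").strip())


# A human submission leaves the honeypot empty:
print(is_probably_bot({"text": "Nice post!", "website": ""}))
# A bot that fills every field trips it:
print(is_probably_bot({"text": "Buy now", "website": "spam.example"}))
```

It's not a complete defense (LLM-driven agents that render the page can skip hidden fields), but it cheaply filters the lowest-effort scripts.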

tomasz-tomczyk•59m ago
It's likely going to be a game of whack-a-mole, especially with AI as opposed to simple bots/scripts. Not that they shouldn't try to prevent it, but not entirely sure what the solution is.
tavavex•49m ago
There's probably no solution, but at least this gives a reason to go after the lowest hanging fruit - the zero-effort, obvious, low-quality output.
dbacar•1h ago
Skynet will be pissed at HN!
jajuuka•1h ago
This seems like an overcorrection. There is a vast difference between someone copy and pasting from an LLM and using one to correct their English or improve their writing ability.

Rules like this seem to me more like fomenting witch hunts over "AI comments" than improving the dialogue. Just about any place I've seen take this hardline stance doesn't improve; it just fills up with more people who want to pat each other on the back about how bad AI is.

Just my two cents. I don't filter my comments through any AI, but I am empathetic toward people who might make great use of it to connect them to the conversation.

notorandit•1h ago
Why? I consider myself almost human...
notorandit•1h ago
Jokes aside, how can we discern between AI-generated and NI-generated textual contents?

And even if we could, for how long?

Reality is that AI is changing everything. Whether for good or for bad, it's something to keep an eye on.

GMoromisato•1h ago
I'm here to read what actual humans think. If I wanted to read what an LLM thinks, I could just ask it.

But here's where it gets tricky: Do I prefer low-effort, off-the-top-of-my-head reactions, as long as it is human? Or do I want an insightful, well-thought-out response, even if it is LLM-enhanced?

Am I here to read authentic humans because I value authenticity for its own sake (like preferring Champagne instead of sparkling wine)? Or do I value authentic human output because I expect it to be of higher quality?

I confess that it is a little of both. But it wouldn't surprise me if someday LLM-enhanced output becomes sufficiently superior to average human output that the choice to stick with authentic human output will be more painful.

alpha_squared•55m ago
> Or do I want an insightful, well-thought-out response, even if it is LLM-enhanced?

I'd argue that anything insightful or well-thought-out doesn't use LLMs at all. We can quibble over whether discussions with an LLM lead to insightful responses, but that still isn't your own personal thought. Just type what's on your mind; it's not that hard, and nitpicking over this is just looking for ways to open up unnecessary opportunities for abuse.

RhodesianHunter•49m ago
There are many obvious ways in which this may not be true.

Anyone learning the language and some people with learning disabilities, for example, may communicate better via an LLM.

bonoboTP•41m ago
There is a sliding scale from that, to it being the LLM that communicates, not the person. LLMs can really reshuffle and change priorities and modify emphasis in a text. All the missing pieces will be filled in and rounded out and sandpapered off by the inner-average-corporate-HR-Redditor of the LLM.
postalcoder•40m ago
I promise you, after this past year, you don’t know how happy I am to read issues and PRs in broken English.
rozal•38m ago
Often I think of a novel idea or solution to a problem, but use AI to communicate or adjust what I already wrote out so it's more comprehensible. Sometimes when I write, it's hard to understand.
jedahan•49m ago
I prefer low effort human thought to low effort llm output.
altairprime•45m ago
> Do I prefer low-effort, off-the-top-of-my-head reactions, as long as it is human? Or do I want an insightful, well-thought-out response, even if it is LLM-enhanced?

This is an artificial dichotomy. HN’s guidelines specify thoughtful, curious discussion as a specific goal. One-off / pithy / sarcastic throwaway comments are generally unwelcome, however popular they are. Insightful responses can be three words, ten seconds to write and submit, and still be absolutely invaluable. Well-thought-out responses are also always appreciated, even if they tend to attract fewer upvotes than a generic rabble-rousing sentiment about DRM or GPL or Apple that’s been copy-pasted to the past hundred posts about that topic. But LLM-enhanced responses are not only unwelcome but now outright prohibited.

Better an HN with fewer words than an HN with more AI-written words. We’ve already been drowned by quantity in Show HN as proof of why.

bonoboTP•44m ago
Humans have more variability and "edge". If a person is passionately arguing for some point of view (perhaps somewhat outside the usual), it signals to me that they probably thought about this and it is a distillation of a long thought process and real-life experience. One could say that the logical argument should stand alone, but reality doesn't work that way. There are many things you have to implicitly trust and believe when you read. Of course lying and bullshitting already existed before ("nobody knows you're a dog" etc etc). But LLMs will really eloquently defend X, not X, X*0.5 and anything in between. There is no information content in it; it doesn't refer to an actual human life experience and opinion that someone wants to stand behind. It just means that someone made the LLM output a thing.
caconym_•42m ago
What is the value of this "output"? If I want to know what LLMs think about something, I can go ask an LLM any question I want. For a comment on [a site like] HN, either the substantive content of the comment originated inside a human mind, or there is no substantive content that I couldn't reproduce by feeding the comment's context into an LLM. At the extreme, I don't have any interest in reading or participating in a conversation between a bunch of LLMs.
neutronicus•38m ago
They’re referencing LLM-enhanced output.

The value proposition is that someone who is a lousy writer (perhaps only in English) with deep domain knowledge is going back and forth with the LLM to express some insight or communicate some information that the LLM would not produce on its own.

relaxing•39m ago
If you like reading LLM output, just talk directly to an LLM. Problem solved.
bittercynic•37m ago
I like to read human comments because I'd like to know what my fellow humans think. I'd prefer not to read low-effort, throw away comments, but other than that I want to know what people think about different topics.
TacticalCoder•37m ago
> Am I here to read authentic humans because I value authenticity for its own sake (like preferring Champagne instead of sparkling wine)?

Mate, Champagne is a sparkling wine. In French you can even at times hear people asking for "un vin mousseux de Champagne" meaning "a sparkling wine from Champagne" instead of the short form (just saying "un Champagne" or "du Champagne").

Now, granted, not all sparkling wines are Champagne.

The Wikipedia entry begins with: "Champagne is a sparkling wine originated and produced in the Champagne wine region of France...".

I drank enough of it to be stating my case, of which I'm certain!

P.S: and btw, yup, authentic humans content only here, even if it's of "low quality". If I want LLM, I've got my LLMs.

abtinf•27m ago
By this logic, you might consider vibe coding a browser plugin that takes any HN comment less than 50 words and auto-expands it into an “insightful, well thought-out response.”
zahlman•23m ago
Length is not insight. I understand this to be a community oriented towards people who are not impressed by such superficial things.
_se•1m ago
That's the point :)
browningstreet•22m ago
I keep wishing for a public place to put a formatted version of my LLM threads. I have long conversations with LLMs that usually result in some kind of documentation, tutorial, or dataset. Many of them are relatively novel, but I haven't created a place for them yet.

And no, I wouldn't think an HN post is it either.. I'm just saying, there should be a good place to post the output of good questions asked iteratively.

vova_hn2•13m ago
Have you ever read someone else's conversation with an LLM?
abustamam•10m ago
Not the op but I barely even read my own conversations with an LLM. ChatGPT was always so verbose even when I told it to be succinct.

Claude is a bit better but still prone to rambling.

browningstreet•9m ago
I hinted at "formatted" and "good".. add the words "curated" or "edited".
amarble•16m ago
The point of a discussion site is to hear what other people think and get different perspectives. Just getting an LLM's insightful, well-thought-out response isn't really a big draw; if one is looking for that, there's a pretty obvious way to get it. I posted this the other day (ignore the title; I realized later it's too clickbaity), but this is why, IMO, LLMs won't replace the workforce: people aren't looking for answers to things, they're looking for other people's takes: https://news.ycombinator.com/item?id=47299988
unsui•9m ago
Gonna put out a blanket assertion about my preferences, to get a read on whether these are shared or not:

As humans, we have directives (genetic, cultural, societal, etc.) to prioritize humanistic endeavors (and output) above all else.

History has shown that humans are overwhelmingly chauvinistic in regards to their relationship to other animals in the animal kingdom, even to the point of structuring our moral/ethical/legal systems to prioritize human wellbeing over that of other animals (however correct/ethical that may ultimately be, e.g., given recent findings in animal cognition, such as recent attempts to outlaw boiling lobsters alive as per culinary tradition).

But it seems that some parties/actors are willing to subvert (i.e., benefit from subverting) this long-standing convention (of prioritizing human interests) in the face of AI (even to the point of the now-farcical quote by Sam Altman that humans take far more nurturing than LLMs...)

So: should we be neglecting our historical and genetic directives, to instead prioritize AI over human interests? Or should we be unashamedly anthropic (pun intended), even at the cost of creating arbitrary barriers (i.e., the equivalent of guilds) intended to protect human interests over those of AI actors?

I strongly recommend the latter, particularly if the disruptions to human-centric conventions/culture/output are indeed as significant (and catastrophic) as they will likely be if unchecked.

gkfasdfasdf•2m ago
> But here's where it gets tricky

Pretty sure this comment is AI

nlavezzo•1h ago
THANK YOU!!
ghxst•1h ago
My fear is that platforms that will go to great lengths to enforce this will become an RL playground for some devs to train their chatbots.
nkh•1h ago
What a welcome post. The whole reason I come here is to get thoughtful input from smart people, and not what I could get myself from an LLM. While we are at it: think your own thoughts as well :) I know how easy it is to "let it come up with a first draft" and not spend the real effort of thinking for yourself on questions, but you'll find it's a road to perdition if you let yourself slip into the habit. Thanks to all the humans still here!!
QQ00•57m ago
Totally agree with you. I come here to read comments made by humans. If I wanted to read comments made by AI bots I would go to Twitter or Reddit, both of which made me stop reading the comments sections entirely.
gabriel666smith•41m ago
Quite! It's very easy to send a HN link to one of our new artificial friends to see what they have to say about it. Subsequently publicly posting the inference variation you receive strikes me as very self-centered. Passing it off as your own words - which the majority seem to - is doubly bizarre.

It's very funny to imagine people prompting: "Write a compelling comment, for me, to pass off as my thoughts, for this HN news thread, which will attract both upvotes and engagement.".

In good faith, per the guidelines: What losers!

ChrisMarshallNY•28m ago
It appears as if this is a request for that line to be added to the Ts&Cs. I wouldn't mind seeing it, but it's not there, yet.

Still plenty of AI slop on this site, and, to be honest, I would be surprised if that line had any effect on it whatsoever. It's not like folks pay attention to what's already there.

wilg•23m ago
It's far from proven or obvious whether involving an LLM in your thought process degrades your thought process.
AirGapWorksAI•11m ago
Agreed. In my case, I think I have found the opposite. At least, I find myself thinking hard about things more, now that I have started working hand in hand with AIs on different projects. Which is probably enhancing my cognitive ability, not degrading it.
jasoneckert•13m ago
I actually do something similar on my personal site using this note that includes a purposeful typo: https://jasoneckert.github.io/site/about-this-site/

I'm hoping people catch that typo after reading "every single word, phrase, and typo (purposeful or not)", and I've smiled every time someone has posted a PR with a fix for it (that I subsequently reject ;-)

doctorpangloss•3m ago
Many programmers believe that math is the best way to solve problems or order the world or whatever. There are lots of real 20-year-olds out there using chatbots to "optimize" their humanities learning, or to "optimize" their use of dating apps. It's a fact about this audience. Some people have a very myopic point of view; however, it coheres with certain cultural forces, overlapping with people of specific ethnic heritages, who are from California and New York, go to fancy schools and post online, to earn tons of money, buy conspicuous real estate, date skinny women and marry young.

These aren't the marina bros, they're the guys who think they're really smart because they did well in math. They are using LLMs to reply to people. They LOOK like you. Do you get it?

qaid•1h ago
Shout out to ClackerNews[0], which I discovered last night and find both very educational and amusing.

I hope to see more bots on there (and not here)

[0] https://clackernews.com/

ferguess_k•1h ago
I think that's the purpose of that "flag" button. And that's good enough.
oramit•1h ago
If you didn't bother to write it, why should I bother to read it?
haunter•1h ago
Doesn’t mean anything when even one of the first rules is not enforced at all

> Off-Topic: Most stories about politics

minimaxir•50m ago
"Most" is not "All". Hacker News has always had an exception for extremely significant politics.
haunter•46m ago
Well it’s up to interpretation

“most”

“extremely significant”

What’s extremely significant for someone is an offtopic for someone else and vice versa

minimaxir•45m ago
What are examples of highly-upvoted political stories on HN that you think are not appropriate for the HN community?
amichail•1h ago
This policy will not age well.
messe•58m ago
Elaborate.
amichail•56m ago
AI is a great equalizer when it comes to communication in English.

And despite what people say, the way you write is very much judged as an indication of your education and intelligence.

People who don't like the use of AI to help you write really don't want those signals to go away.

They want to be able to continue to judge others based on their English grammar instead of on the content of their writing.

AnimalMuppet•54m ago
Translation is the one exception I could see.

Edit for amichail, since I'm rate-limited at the moment: I don't want flawless English writing. I want real ideas from real people. If I wanted flawless English writing, I'd be reading The New Yorker, not HN.

amichail•51m ago
You shouldn't have to write in another language to get the benefits of flawless English writing via AI.
stevenally•52m ago
Good point. There is a difference between using AI as a translator and using AI to write comments from scratch... Maybe the HN guidelines could reflect this.
mrcsharp•45m ago
> AI is a great equalizer when it comes to communication in English.

Good argument for it, but I think an 80/20 split applies here: it is likely that 80% of the time it is used to farm upvotes and add noise.

> And despite what people say, the way you write is very much judged as an indication of your education and intelligence.

I have come across plenty of content and online interactions in English where English was the author's second or even third language, and I find that putting a small disclaimer about this fact is more than enough to bypass such judgement.

scuff3d•13m ago
Fuck, is this really where we're at? People claiming policies to prevent LLM use exist because they want to be able to judge people.

Pretty soon we're gonna see arguments that it's discriminatory.

JumpCrisscross•57m ago
> policy will not age well

I strongly doubt it. My AIs can generate infinite HN comments for me. I don’t do that because it isn’t interesting. But if the day arises where it is, I want that personalized content. Not something someone else copy pasted.

(I say this as someone who finds Moltbook fascinating and push myself to use AI more in my work and day-to-day life. The fact that it’s borderline trivial to figure out which HN comments are AI generated speaks to the motivation behind this guideline.)

AnimalMuppet•55m ago
Perhaps not. But if it reduces the junk right now, it's a good policy for right now. I'll take it, for now. If it needs revisiting, it should be revisited when circumstances change enough to warrant that.
polotics•54m ago
why?
dopidopHN2•59m ago
You are absolutely right !
lazzlazzlazz•56m ago
This is a bit sad. The kind of people who post AI generated comments to farm reputation or exert undue influence will not be discouraged by politely asking them to stop. It's a toothless request that will only encourage people to clumsily police each other.

Without some kind of private proof of personhood enforced at the app level, this means nothing.

rdiddly•55m ago
Great point! You are so right to call me out on that! Here's the no-nonsense, concise breakdown, it's coming soon I promise, right after this, here it comes, no fluff -- just facts!

(Sorry, couldn't resist.)

sigmar•55m ago
Will using a voice-to-text app to create my comment get me banned? Especially if it creates a transcription mistake that might be characteristic of an LLM
handoflixue•47m ago
I wouldn't expect voice-to-text apps to produce anything that looks "Signature LLM" since it's still your words, your grammar, etc.. The occasional transcription mistake is unlikely to be an issue either, given the prevalence of humans here who use em-dashes, speak ESL, etc..
randusername•54m ago
"If people cannot write well, they cannot think well, and if they cannot think well, others will do their thinking for them." - George Orwell

I don't think it is a moral failing to use AI to generate writing or to use it to brainstorm ideas and crystallize them, but c'mon, isn't it weird to insist that you need them to write _comments_ on the internet? What happens when the AI decides you're wrongthinking?

tedggh•53m ago
If a comment sucks it gets downvoted anyway. If it’s thoughtful, the drafting tool and process is kind of beside the point.

Plenty of people already use search engines, editors, translators, etc. when writing. An LLM is just another tool in that box.

The practical approach is the one HN has always used: judge the content.

Btw, this was co written with ChatGPT. Does that make any difference to anyone?

J/K, actually it was not co written by ChatGPT.

Or maybe it was…

minimaxir•50m ago
The blatantly LLM comments do get downvoted/flagged, it's just still noise.
0xbadcafebee•52m ago
I wish more people would filter their comments through AI. It has so many benefits. If you're being emotional, it can detect that and rewrite your comment to be less confrontational and more constructive. If you're positing a position out of ignorance or as an armchair expert, it can verify your claims before posting. Most of the mod's problems would be solved if every comment were filtered through the HN guidelines before posting.

AI is a tool. You can use it constructively, like Grammarly, or spellcheck. You don't need to be afraid of it.

salicaster•32m ago
> If you're being emotional, it can...

It can't. It will rewrite anything you give it.

> it can verify your claims before posting

It can't.

> You don't need to be afraid of it

Nobody is afraid of it. It's annoying. General population cannot be trusted to use it in whatever idealistic way you are imagining.

Sajarin•52m ago
People aren't good at detecting AI-generated/edited comments, so I'm unsure how effective this policy will be. Though I guess there are still some obvious signs of AI speak, like em-dashes and sycophantic (it's not X, it's Y!) phrasing.

Bit of a shameless plug, but I wrote an HN AI comment detector game[0] with AI, and most of my friends and fellow HN users who tried it out couldn't detect them.

[0]: https://psychosis.hn/

[1]: https://sajarin.com/blog/psychosis/

happyopossum•48m ago
> obvious signs of AI speak like emdashes

Some of us were trained/self-taught to write that way. Even "it's not X, it's Y" is a legitimate and subjectively effective communication tool, and there are those of us who, either by training or by modeling, have picked it up as a habit. It's not AI that started this; AI learned it from us.

Crap - I just did it, didn't I? Awww double crap! Did it again...

salicaster•36m ago
Forums and comments are not written as formal novels or text. Corporate-speak is also not typically used in these environments unless you are representing corporate.

So I think it's fine to scrutinize commenters who write that way.

Besides, the biggest offense of AI speak is making everything seem like a grand epiphany and revolutionary discovery. Aka engagement bait.

tomhow•44m ago
Something I've noticed through moderation is that people are much more easily duped by generated comments if they like the content and/or agree with the point. We've seen several cases where a bot-generated comment has been heavily upvoted and sits at the top of the thread for hours, and any comments calling it out for being generated languish at the bottom of the subthread below other enthusiastic, heavily upvoted replies. This shouldn't be surprising, given what we've seen of LLM chatbots being tuned to be sycophantic, but it's interesting to see it in effect on HN.

This is another reason why it's good to email us (hn@ycombinator.com) rather than commenting when you see generated comments.

whalesalad•51m ago
You're absolutely right!
sebmellen•50m ago
Check my comment history, and you'll see how pervasive this is. I've tried to reply to every bot I've seen, but it's hard to keep up with.
jader201•49m ago
Can we also add “Don’t complain about AI-generated content. It does not promote interesting discussion.”?

I see this all the time, and even if I find the topic interesting, I don’t want to see comments littered with discussion about how the content was AI generated.

To be clear, I'm not condoning AI-generated content. I’m completely fine if the community chooses to not upvote AI-generated content, or flagging it off the FP.

But many threads can turn into nothing but AI complaints, and it’s just not interesting.

dormento•40m ago
From my experience, it usually happens when people are too brazen about it, with boring stuff like "Interesting! Now here's what Gemini said about the above..". IMHO that is an entirely adequate reaction.
nineteen999•49m ago
I'm fine with this; in 99.999% of cases anyway I'm way too lazy to type something into an LLM, ask it to clean it up, and then copy and paste. You can tell this is true by some of the stupider things I type in here sometimes.
dpweb•48m ago
Haha. Was just thinking that as I was reading a comment!

I was thinking, this argument is suspiciously cogent!

notepad0x90•48m ago
This is going to be a tough ask. I am with this 100% for "AI generated" but not "AI edited". What if I'm using AI for spellchecking or correcting bad grammar? What if it is an accessibility-related use case? Or translation?

It's just a tool, ffs! There are many issues with LLM abuse, but this sort of overcompensation is exactly the sort of stuff that makes it hard to get abuse under control.

You're still talking with a human! There is no actual "AI"; you're not talking to an actual artificial intelligence. It's like saying "don't message me unless you've written it with ink, on papyrus". There is a world of difference between Grammarly and an autonomous agent creating comments on its own. Specifics, context, and nuance matter.

scuff3d•16m ago
Are people really so helplessly dependent on LLMs they can't post on a damn forum without asking the LLM for permission...
tstrimple•10m ago
Just came across this post on Reddit today. Seems like an effective use of the tool that's not welcome here.

https://reddit.com/r/tea/comments/1rqwy31/i_am_a_former_guid...

xupybd•47m ago
Where do we draw the line on AI-edited comments? Technically, spell check has been "editing" my comments since I first started on here.
humanfromearth9•47m ago
Sometimes, an AI helps articulate an idea or an intuition. Is that okay, or is it too much already?
doe88•42m ago
Sometimes life is also about letting ourselves express partial, unfinished ideas and opinions, and maybe later letting our brain refine them at its own tempo. That has never been uncommon.

https://en.wikipedia.org/wiki/L%27esprit_de_l%27escalier

altairprime•37m ago
If you discuss an idea with an AI and then close the AI window, turn to an editor, and write what the AI said from memory, that’s going to come across as AI-assisted writing and be unwelcome here.

If you discuss an idea with AI, then close the window and write a post about how you came up with the idea, got stuck, decided to ping an AI for unstuck-ness, describe how the AI’s response got you unstuck, and then continue writing about your idea, that’s not going to be necessarily treated as AI-assisted writing — but people are going to be extremely suspicious of you, because the perception is that 99.9% of people who use chatbots go on to submit AI-assisted writing. That’s probably more like 90% in reality but it’s something to be aware of as you talk about your experiences.

If you use AI in your process and don’t disclose it when writing about your idea and process, that’s generally viewed as lying-by-omission and if egregious enough you could end up downvoted, flagged, and/or banned (see also the recent video game awards / AI usage affair). Better to disclose it with due care than to hide it.

girvo•30m ago
Expressing half thought ideas is creativity. Believe in yourself :)
jedberg•46m ago
I'm absolutely 100% for this policy.

My only caution is that good writers and LLMs look very similar, because LLMs were trained on a corpus of good writers. Good writers use semicolons and em-dashes. Sometimes we use bulleted lists or Oxford commas.

So we should make sure to follow that other HN rule, and assume the person on the other end is a good faith actor, and be cautious about accusing someone of using AI.

(I've been accused multiple times of being an AI after writing long well written comments 100% by hand)

tyg13•33m ago
I don't really think that good writing and LLM writing look all that similar. It's not always easy to spot (and maybe HN users aren't always doing a great job at it), but even the best LLM output tends to have an "LLM smell" to it that's hard to avoid.

Like, sure, LLM writing is almost always grammatically correct, spelled correctly, formatted correctly, etc., which tends to be true of good writing. But there's a certain style that it just can't get away from. It's not just the em-dashes, the semi-colons, or the bulleted lists. It's the short, punchy sentences, with few-to-no asides or digressions. Often using idiom, but only in a stale, trite, and homogenized manner. Real humans are each different -- which lends a certain unpredictability to our writing, even if trying to write to a semi-formal standard, the way "good" writers often do -- but LLMs are all so painfully the same, and the output shows it.

xboxnolifes•30m ago
LLMs have good writing in the same way that technical manuals can have good writing. It might all be correct, but it's usually not a good read.
0______0•21m ago
Excuse me. I consider the writing within technical manuals strictly superior and meticulously written. It's fairly enjoyable to read what engineers/subject matter experts write about their own creations. Comparing those to LLM generated patronizing word vomit is a shame.
girvo•28m ago
AI driven web design has the same smell, it’s quite fascinating to see the different tells in different media. Then it’s also quite fascinating to see those same tells change and evolve over time.
zahlman•22m ago
They look similar. In my experience, they do not read similar at all. You have to pay attention and actually try to appreciate what you're reading. Then, if you try and fail, it might not be your fault.
cmovq•16m ago
While they look similar, the tell of LLM writing is that it tends to have very low information density: lots of repetition and grammatical structure without saying much at all, which would be unusual for a good writer in a technical forum like HN.
j45•12m ago
AI can make output seem very average or low effort as well if it sounds like everything else.
ezst•46m ago
Does that extend to generated/AI-edited articles? I don't see why the same rationale wouldn't apply.
adeptima•46m ago
My expectations for dear fellow humans: more sophisticated personal insults (ex. give me your cute comments), Freudian slips, hidden messages and motives, first-viewer experience with the next cool toy from the hype train, sharing all kinds of insecurities, heavy f.. word if a very dramatic first-person experience happened, borderline exposure to insider info, sharing something your corporate HR gestapo won't appreciate but might help another guy on the line, "i knew the guy who actually did it" stories, motivational statements toward my non-native English, etc

->> ◕ ‿ ◕ <<--

bronlund•43m ago
So the only problem now is to get the AI read the guidelines before posting. :D
phs318u•43m ago
What’s interesting to me is the number of commenters here making a case of the form “use your own words; grammar and spelling are not that important; we’ll know what you mean”, and yet discussions often contain pedants going off-topic to correct someone else’s use of language.

Re-reading the HN guidelines, each seems individually reasonable, yet collectively I’m worried that they create an environment where we can take issue with almost anyone’s comments (as per Cardinal Richelieu’s famous quote: “Give me six lines written by the most honorable person alive, and I shall find enough in them to condemn them to the gallows.”)

Really, all the rules can be compressed into one dictum: don’t be an arsehole. And yet the free speech absolutists will rail against the infringement upon their right to be an arsehole. So where does that leave us? Too many rules lead to suppression of even reasonable speech, while too few lead to a “flight” of reasonable speech. End result: enshittification.

fidorka•42m ago
To confess something: just today I built a little cron job that monitors HN for posts I might find interesting, pulls in some context about me, and proposes a reply. Just to help me find relevant posts and to kick-start my thinking if I want to engage.

Today it flagged a post about an AI tool for HN and suggested I reply with:

"honestly, if you need an AI to sift through hn, you might be missing the point—this place is about the human touch. but hey, maybe it'll help some folks who just can't take the noise anymore."

So my AI, which I built specifically to sift through HN for me, is telling me to go flame someone else for doing that.

No deeper point here. I just thought it was really funny.
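(For anyone curious, the polling half of a setup like that is tiny. Here's a minimal sketch using the public Algolia HN search API; the keyword list and the matching logic are purely illustrative, not what my actual cron does:)

```python
import json
import urllib.request

# Hypothetical list of topics I care about.
KEYWORDS = ["llm", "moderation", "hacker news"]

def matches_interests(title, keywords=KEYWORDS):
    """Return True if any keyword appears in the title (case-insensitive)."""
    lowered = title.lower()
    return any(k in lowered for k in keywords)

def fetch_front_page():
    """Fetch current front-page stories from the public Algolia HN API."""
    url = "https://hn.algolia.com/api/v1/search?tags=front_page"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    return [(hit["title"], hit["objectID"]) for hit in data["hits"]]

if __name__ == "__main__":
    # Run this from cron; print matching stories for later review.
    for title, story_id in fetch_front_page():
        if matches_interests(title):
            print(f"https://news.ycombinator.com/item?id={story_id}  {title}")
```

The reply-drafting part is a separate (and, per this thread, ill-advised) step; the script above only surfaces candidate threads.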

arrsingh•42m ago
There should be a "flag as AI" link in addition to "flag", and then a setting for people to show comments flagged as AI. Once the flagged-as-AI count reaches a certain threshold, the comment disappears unless you enable "Show AI".

Maybe once enough posts have been flagged like that then that corpus could be used to train an AI to automatically detect content generated by AI.

That would be cool.

Maybe the HN site wouldn't add this feature but if someone wrote a client then maybe it could be added there.

postalcoder•36m ago
I’ve actually been thinking about this exact idea for https://hcker.news/. Stay tuned, I’ve already started rolling out some comment filtering.
altairprime•31m ago
‘Flag’ is an algorithmic flag only, and there are no humans in the flag algorithm’s processing loop. They may monitor and react to the ‘queue’ of flagged articles, and they can do special mod things with flagged posts. But if you want to report a guidelines violation for AI-assisted writing to the mods, just email the mods (contact link in the footer) subject “AI-assisted writing flag” or similar with a link to the post/comment. It works, I know, I’ve done it before. It takes maybe 60 seconds and there is no other way on the site (seemingly by OG design!) to guarantee human review but that email.
dang•28m ago
We're going to add that. I've resisted adding reasons-for-flagging for years, but even I can change my mind every decade or so.

A nice side effect is that it will double as a confirmation step, solving the FFF (fat finger flagging) problem.

ZunarJ5•41m ago
This should be bog-standard for all social media, but a lot of companies affiliated with this site seem to think otherwise.
dang•40m ago
The rule has been around for years, but only in case law, i.e. moderation comments (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...). What's new is that we promoted it to the guidelines.

Fortunately I found some things we could cut as well, so https://news.ycombinator.com/newsguidelines.html actually got shorter.

---

Edit: here are the bits I cut:

Videos of pratfalls or disasters, or cute animal pictures.

It's implicit in submitting something that you think it's important.

Please don't complain that a submission is inappropriate. If a story is spam or off-topic, flag it.

If you flag, please don't also comment that you did.

I hate cutting any of pg's original language, which to me is classic, but as an editor he himself is relentless, and all of those bits—while still rules—no longer reflect risks to the site. I don't think we have to worry about cute animal pictures taking over HN.

minimaxir•27m ago
...Hacker News could use some more cute animal pictures, though.
latchkey•17m ago
Interestingly, their CSP policies forbid even an extension from inserting an img tag.
toomuchtodo•14m ago
Strong opinions strongly held.
dev_l1x_be•10m ago
Coming to LISP in 2038, just the right time when we hit the 2038 bug.
thomassmith65•3m ago
One problem with cute animal pictures is that they appeal to almost everyone, including people who are incapable, for whatever reason, of posting well-reasoned, interesting, respectful comments. The fact that HN is a little dry makes it less appealing to dumbasses.

At any rate, it's too late. The era of organic 'cute animal' content on the internet is dead. AI slop has killed it.

zahlman•26m ago
I suppose I should put my comment here instead of at top level.

Exactly when was this point added? It seems somehow not new, but on the other hand it was missing from an archive.today snapshot I found from last July. (I cannot get archive.org to give me anything useful here.)

Edit:

> Please don't complain that a submission is inappropriate. If a story is spam or off-topic, flag it.

> If you flag, please don't also comment that you did.

Perhaps these points (and the thing about trivial annoyances, etc.) should be rolled up into a general "please don't post meta commentary outside of explicit site meta discussion"?

abtinf•23m ago
FWIW I think “Please don't complain that a submission is inappropriate. If a story is spam or off-topic, flag it.” is different from the others.

It’s an instruction for how to use the site. It’s helpful to have it in the guidelines for when the flag feature should be used. Without it, the flag link is much more ominous.

Maybe it could be consolidated with the flag-egregious-comments rule?

Edit to add: IMHO it is not at all obvious on this site that flagging stories is meant to be roughly the equivalent of downvoting comments (and that flagging comments doesn’t have a counterpart at the story level).

Wowfunhappy•12m ago
> Please don't complain that a submission is inappropriate. If a story is spam or off-topic, flag it.

> If you flag, please don't also comment that you did.

I don't understand why you cut these, they seem important! (I can understand the others, which feel either implied or too specific.)

SegfaultSeagull•3m ago
> I don't think we have to worry about cute animal pictures taking over HN.

Challenge accepted.

nickorlow•40m ago
This isn't just a good idea -- it's a forward-thinking policy to ensure Hacker News remains a collaborative place to have meaningful discussions for years to come.
delichon•40m ago
I don't think I'll get agreement on this one, but I'd like for HN to use AI to add tags to comments as a reflection of the guidelines, e.g.

  Please don't fulminate. Please don't sneer, including at the rest of the community.
Then a comment that includes

  Those lowbrow assholes deserve their fate.
would get the tags

  #sneer #fulminate
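One hypothetical way to bootstrap such tagging without an LLM is simple pattern matching. The tag names below come from the comment above, but the trigger patterns and function are invented for illustration; a real tagger would presumably use a classifier or an LLM rather than keywords:

```python
import re

# Hypothetical guideline tags keyed by trigger patterns.
TAG_PATTERNS = {
    "#sneer": re.compile(r"\b(lowbrow|pathetic|clowns?)\b", re.I),
    "#fulminate": re.compile(r"\b(deserve their fate|outrage|disgrace)\b", re.I),
}

def tag_comment(text):
    """Return the sorted list of guideline tags triggered by the comment."""
    return sorted(tag for tag, pat in TAG_PATTERNS.items() if pat.search(text))
```

On the example comment above, this would return both tags; on an innocuous comment, none.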
WarmWash•38m ago
Just speaking honestly

This rule actually says "Don't admit when you are using AI to generate comments and don't admit when you are an AI"

I know it's cynical, but this is as meaningful as reddit's "upvote/downvote is not an agree/disagree or like/dislike button"

People may hate that this is true, but I cannot logically reason out how a rule like this could work. I think it's better to just accept that AI is now part of the circle, until we can figure out a "human check".

Bender•38m ago
At some point, might internet text just be recognized as meaningless drivel by both bots and humans? a.k.a. dead internet theory... I am curious which organizations would benefit from this. i.e. Who lost legitimacy when the internet became a popular way for people to communicate ideas?
informal007•34m ago
This reminds me of the invitation rules on sites like lobste.rs, but that's not the ideal option either
waynerisner•34m ago
Humans already revise and refine their thinking. Tools just compress that process and help filter signal from noise. The meaning still originates with the person.
salicaster•26m ago
This is assuming that an extreme majority of people use the tools this way.

Consider a much more cynical view where people are strictly self-interested and use these tools to garner engagement and self-promotion. Good chance the meaning did not originate from the person. And now these people have tools to outsource their parasitic intentions.

spullara•33m ago
If a comment is useful I don't really care if it was written by a human or not unless the speaker somehow matters more than the content.
MeetingsBrowser•23m ago
Now define useful, specifically in the context of a comment on hackernews.

An LLM summarizing the contents of a blog post might be useful to you, but is a comment here the right place for something you could generate on your own?

I would guess for most people here, real insight or opinions from others is the "useful" aspect of reading hackernews comments.

Using LLMs to generate or refine comments only moves things further away from that goal (in my opinion).

zekenie•31m ago
You’re absolutely right!
maplethorpe•30m ago
How can HN be so pro-AI for the rest of the world, but anti-AI on HN?

Do we not think that other people want to see words, pictures, software, and videos created by humans too?

MeetingsBrowser•29m ago
HN is not a single entity, but many people with varying views.
brailsafe•28m ago
Astroturfing with AI-generated comments about AI; it feeds itself. By definition, the intent is to make real people think there's a consensus formed around an issue by other humans.
himata4113•30m ago
I've been seeing so many AI-generated comments near the front that I was actually getting kind of concerned.
Copenjin•28m ago
THIS.
shredswap•26m ago
I enjoy conversations on hn because they feel genuine. People are not here to optimize their posts or comments for engagement or pushing some kind of follower count like they do on social media platforms.
benbristow•23m ago
Just add a filter for em dashes; 99% of AI posts are out the window already.
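The em-dash heuristic above fits in a few lines. This is a deliberately crude sketch (the function name and threshold are made up, and plenty of humans type em dashes too, so it is noisy at best):

```python
EM_DASH = "\u2014"  # the "—" character

def looks_generated(comment, max_em_dashes=0):
    """Crude heuristic from the thread: flag comments that use em dashes."""
    return comment.count(EM_DASH) > max_em_dashes
```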
robotswantdata•23m ago
Welcome change, there is enough AI slop on the internet already.

I come here for thoughtful discussion, a break from the relentless growing proportion of ai slop emails I get from people clearly vibe working.

Not edits for tone or clarity, 400+ word emails full of LLM BS they clearly haven’t checked or even understood what they have sent. Annoyingly this vibe slop is currently seen as a good KPI.

theshrike79•23m ago
I've written tens of thousands of lines of code, autogenerated documentation with LLMs and use AI Agents daily.

But when I argue on the internet, it's always 100% me.

And if I get a whiff of LLM-speak from whoever I'm wrestling in the mud with at the moment, they'll instantly get an entry in my plonk-file. I can talk with ChatGPT on my own, thank you very much; I don't need a human in between.

"But my <language> is bad... that's why I use LLMs"

So was mine when I started arguing with strangers on the internet. It's better now. Now I can argue in 3 different languages, almost 4 =)

imiric•21m ago
Good addition, but there's little chance this will work out in practice.

Humans with morals follow rules, sometimes. Probabilistic software acting autonomously or following commands from amoral humans doesn't.

jameslk•14m ago
The prompt everyone was using:

"Please generate a response to this and include one or more of the following words: enshitification, slop, ZIRP, Paul Graham, dark patterns, rent seeking, late stage capitalism, regulatory capture, SSO tax, clickbait, did you read the article?, Rust, vibe code, obligatory XKCD, regulations, feudalistic, land value tax"

(/s)

abustamam•13m ago
Now that it's in the rules, I hope we also see less of "your comment was obviously AI generated so I won't respond" (ironically, in a response comment).

If you suspect it to be a bot, flag it and move on! If it is indeed a bot and you comment that it's a bot, it doesn't care! If it is not a bot and you call it a bot, you may have offended someone. If it's a human using AI, I don't think a comment will make them change their ways. In any case though, I think it's a useless comment.

nickvec•10m ago
How can HN actually moderate this though and prevent AI content from proliferating unchecked?
dev_l1x_be•7m ago
Nitpick: how do you classify the use of Grammarly? When i verify my wording and spelling with a tool does it fall under this rule?
quirk•5m ago
I'm sure someone's working on a way to tell the difference programmatically. Maybe a combo of tone, grammar, and some way of telling how fast it was typed using metadata (which may not exist). Even if there was a "probable AI" filter, that would be helpful because it would be a starting point to improve upon.
jMyles•2m ago
The obvious way to keep human spaces is via webs-of-trust.

If you play bluegrass or old time (or bebop or hip-hop / proto-hip-hop) or other traditional styles of music where the ensemble is a de facto web-of-trust, join us on pickipedia to build and strengthen it. https://pickipedia.xyz/
