
Google demonstrates 'verifiable quantum advantage' with their Willow processor

https://blog.google/technology/research/quantum-echoes-willow-verifiable-quantum-advantage/
105•AbhishekParmar•1h ago•56 comments

Cryptographic Issues in Cloudflare's Circl FourQ Implementation (CVE-2025-8556)

https://www.botanica.software/blog/cryptographic-issues-in-cloudflares-circl-fourq-implementation
80•botanica_labs•2h ago•19 comments

Linux Capabilities Revisited

https://dfir.ch/posts/linux_capabilities/
75•Harvesterify•2h ago•12 comments

Designing software for things that rot

https://drobinin.com/posts/designing-software-for-things-that-rot/
72•valzevul•18h ago•8 comments

MinIO stops distributing free Docker images

https://github.com/minio/minio/issues/21647#issuecomment-3418675115
445•LexSiga•10h ago•267 comments

AI assistants misrepresent news content 45% of the time

https://www.bbc.co.uk/mediacentre/2025/new-ebu-research-ai-assistants-news-content
199•sohkamyung•2h ago•149 comments

The security paradox of local LLMs

https://quesma.com/blog/local-llms-security-paradox/
48•jakozaur•3h ago•36 comments

SourceFS: A 2h+ Android build becomes a 15m task with a virtual filesystem

https://www.source.dev/journal/sourcefs
46•cdesai•3h ago•16 comments

Die shots of as many CPUs and other interesting chips as possible

https://commons.wikimedia.org/wiki/User:Birdman86
132•uticus•4d ago•26 comments

Internet's biggest annoyance: Cookie laws should target browsers, not websites

https://nednex.com/en/the-internets-biggest-annoyance-why-cookie-laws-should-target-browsers-not-...
333•SweetSoftPillow•4h ago•391 comments

French ex-president Sarkozy begins jail sentence

https://www.bbc.com/news/articles/cvgkm2j0xelo
265•begueradj•10h ago•344 comments

Go subtleties

https://harrisoncramer.me/15-go-sublteties-you-may-not-already-know/
149•darccio•1w ago•104 comments

Tesla Recalls Almost 13,000 EVs over Risk of Battery Power Loss

https://www.bloomberg.com/news/articles/2025-10-22/tesla-recalls-almost-13-000-evs-over-risk-of-b...
135•zerosizedweasle•3h ago•114 comments

Infracost (YC W21) Hiring First Dev Advocate to Shift FinOps Left

https://www.ycombinator.com/companies/infracost/jobs/NzwUQ7c-senior-developer-advocate
1•akh•4h ago

Patina: a Rust implementation of UEFI firmware

https://github.com/OpenDevicePartnership/patina
65•hasheddan•1w ago•12 comments

Farming Hard Drives (2012)

https://www.backblaze.com/blog/backblaze_drive_farming/
12•floriangosse•6d ago•3 comments

Evaluating the Infinity Cache in AMD Strix Halo

https://chipsandcheese.com/p/evaluating-the-infinity-cache-in
121•zdw•12h ago•51 comments

Show HN: Cadence – A Guitar Theory App

https://cadenceguitar.com/
135•apizon•1w ago•29 comments

The Dragon Hatchling: The missing link between the transformer and brain models

https://arxiv.org/abs/2509.26507
110•thatxliner•3h ago•65 comments

Greg Newby, CEO of Project Gutenberg Literary Archive Foundation, has died

https://www.pgdp.net/wiki/In_Memoriam/gbnewby
352•ron_k•7h ago•59 comments

Cigarette-smuggling balloons force closure of Lithuanian airport

https://www.theguardian.com/world/2025/oct/22/cigarette-smuggling-balloons-force-closure-vilnius-...
49•n1b0m•3h ago•17 comments

Sequoia COO quit over Shaun Maguire's comments about Mamdani

https://www.ft.com/content/8e6de299-3eb6-4ba9-8037-266c55c02170
15•amrrs•48m ago•10 comments

Knocker, a knock based access control system for your homelab

https://github.com/FarisZR/knocker
49•xlmnxp•7h ago•74 comments

LLMs can get "brain rot"

https://llm-brain-rot.github.io/
446•tamnd•1d ago•274 comments

Ghostly swamp will-o'-the-wisps may be explained by science

https://www.snexplores.org/article/swamp-gas-methane-will-o-wisp-chemistry
23•WaitWaitWha•1w ago•10 comments

Distributed Ray-Tracing

https://www.4rknova.com//blog/2019/02/24/distributed-raytracing
21•ibobev•5d ago•7 comments

Starcloud

https://blogs.nvidia.com/blog/starcloud/
129•jonbaer•5h ago•170 comments

Power over Ethernet (PoE) basics and beyond

https://www.edn.com/poe-basics-and-beyond-what-every-engineer-should-know/
216•voxadam•6d ago•170 comments

rlsw – Raylib software OpenGL renderer in less than 5k LOC

https://github.com/raysan5/raylib/blob/master/src/external/rlsw.h
228•fschuett•19h ago•87 comments

Ask HN: Our AWS account got compromised after their outage

364•kinj28•1d ago•87 comments

Chezmoi introduces ban on LLM-generated contributions

https://www.chezmoi.io/developer-guide/
42•singiamtel•3h ago

Comments

singiamtel•3h ago
Does this mean Copilot tab complete is banned too? What about asking an LLM for advice and then writing all the code yourself?
Lalabadie•2h ago
I'm pretty sure the point is that anything clearly generated will result in an instant ban. That seems rather fair; you want contributors who only submit code they can fully understand and reason about.
odie5533•1h ago
The language clearly says "If you use an LLM [...] to make any kind of contribution".
pkilgore•1h ago
[I was wrong and posted a link to an earlier policy/discussion overridden by the OP]
Groxx•1h ago
you seem to be reading that backwards, that's the content that was removed. it now just says "if LLM, banned": https://github.com/twpayne/chezmoi/blob/master/.github/CODE_...
marcandre•1h ago
The part you are quoting is being removed. The policy used to state "If you contribute un-reviewed LLM generated...", now simply states "If you use an LLM to make any kind of contribution then you will immediately be banned without recourse."
baby_souffle•2h ago
> Does this mean Copilot tab complete is banned too? What about asking an LLM for advice and then writing all the code yourself?

You're brushing up against some of the reasons why I am pretty sure policies like this will be futile. Such policies may not diminish in popularity, but they will be largely unenforceable. They may serve as an excuse for rejecting poor-quality code or code that doesn't fit the existing conventions/patterns, but did maintainers need a new reason to reject those PRs?

How does one show that no assistive technologies below some threshold were used?

jbstack•1h ago
> How does one show that no assistive technologies below some threshold were used?

In this case, you don't:

> immediately be banned without recourse

In other words, if the maintainer(s) think it's LLM-generated, right or wrong, you're banned.

koakuma-chan•1h ago
Idk why anyone would contribute to a project with an attitude like this
hitarpetar•1h ago
that's fine, they probably don't want you then
koakuma-chan•1h ago
I don't want them either. I'll find someone else who likes me the way I am. Plenty of fish in the pond.
SoftTalker•1h ago
Make useful, carefully reviewed contributions and you'll be fine.
IncreasePosts•1h ago
That seems unlikely. Probably what is going to happen is that if, during a code review, you can't actually explain what your code is doing or why you wrote it, then you will be banned.

I don't know much about this project, but looking at the diff with their previous policy, it's pretty clear that people were abusing it: not declaring that they used LLMs, and not actually knowing what they were doing.

sumo89•1h ago
Or arguably that's the point. If you use Copilot to generate a few lines of code, or use it for inspiration, you're still paying attention to it and aware of what it's doing. The actual outcome will be indistinguishable from code you hand-wrote, so it's fine. What policies like this do is stop someone from generating whole pages at once, running them with minimal testing, then chucking them into the code base forever.
odie5533•2h ago
Tab completions by LLM are code generated by an LLM.
polonbike•2h ago
I am wondering why you are posting this link and then asking this question to the HN community, instead of asking the project directly for more details. It does look like your intent is to stir some turmoil over the project's position, and not to contribute constructively to the project.
tverbeure•1h ago
That kind of point could be made for a large fraction of HN comments, but that aside: if a project’s policy is to ban for any LLM usage, without recourse, just asking a question about it could put you on a list of future suspects…
singiamtel•1h ago
Good point, I'll include my question on the original discussion
qsort•1h ago
Not sure about this project in particular, but many more popular projects (curl comes to mind) have adopted similar policies not out of spite but because they'd get submerged by slop.

Sure, a smart guy with a tool can do so much more, but an idiot with a tool can ruin it for everyone.

jbstack•1h ago
Isn't it then more reasonable to have a policy that "people who submit low quality PRs will be banned"? Target the actual problem rather than an unreliable proxy of the problem.

LLM-generated code can be high quality just as human-generated code can be low quality.

Also, having a "no recourse" policy is a bit hostile to your community. There will no doubt be people who get flagged as using LLMs when they didn't and denying them even a chance to defend themselves is harsh.

Ekaros•1h ago
Banning LLMs can result in shorter arguments. "Low quality" is overly subjective and will probably take a lot of time to argue about. And then the possible outrage if it is taken to social media.
jbstack•47m ago
> Banning LLMs can result in shorter arguments

Can it really? "You submitted LLM-generated contributions" is also highly subjective. Arguably more so, since you can't ever really be sure whether something is AI-generated, while with quality issues there are concrete things you can point to (e.g. the code simply doesn't work, doesn't meet the contributor guidelines, uses obvious anti-patterns, etc.).

bryanlarsen•1h ago
Here's a post from Daniel Stenberg (curl maintainer) announcing that he just landed 22 LLM-generated commits.

https://mastodon.social/@bagder/115241241075258997

So obviously curl doesn't have a blanket ban.

pkilgore•1h ago
[I was wrong and posted a link to an earlier policy/discussion overridden by the OP]
marcandre•1h ago
Did you read the policy? "If you use an LLM to make any kind of contribution..."
jbstack•1h ago
It's you who isn't reading the policy. What you are reading instead is deleted sections in a Git commit which are not part of the policy.
rane•1h ago
Said policy:

    * Any contribution of any LLM-generated content will be rejected and result in
      an immediate ban for the contributor, without recourse.
mbreese•1h ago
That’s not what it says. It’s pretty clear…

> Any contribution of any LLM-generated content will be rejected and result in an immediate ban for the contributor, without recourse.

You can argue it’s unenforceable, unproductive, or a bad idea. But it says nothing about unreviewed code. Any LLM generated code.

I’m not sure how great of an idea it is, but then again, it’s not my project.

Personally, I’d rather read a story about how this came to be. Either the owner of the project really hates LLMs or someone submitted something stupid. Either would be a good read.

Valodim•1h ago
The language is actually:

> Any contribution of any LLM-generated content

I read this as "LLM-generated contributions" are not welcome, not "any contribution that used LLMs in any way".

More generally, this is clearly a rule to point to in order to end discussions with low effort net-negative contributors. I doubt it's going to be a problem for actually valuable contributions.

mambo_giro•2h ago
Specific change: https://github.com/twpayne/chezmoi/commit/7938c65ca55aaeaf6f...

And corresponding discussion: https://github.com/twpayne/chezmoi/discussions/4010

koakuma-chan•1h ago
> an immediate ban for the contributor, without recourse.

Maintainer sounds angry

threatofrain•1h ago
Or maintainer needs money to pay for more help.
daveguy•1h ago
Or just sick of the slop.
bryanlarsen•1h ago
It's interesting that the final policy is significantly harsher than the initial more reasonable sounding proposal.
squigz•1h ago
> Users posting unreviewed LLM-generated content without any admission will be immediately be banned without recourse.

Yikes. If maintainers want to ban people for wasting their time, that's great, but considering how paranoid people have gotten about whether something is from an LLM or not, this seems heavy-handed. There needs to be some kind of recourse. How many legitimate-but-simply-wrong contributors will be banned due to policies like this?

btown•1h ago
That discussion doesn’t track the change: the discussion is around unreviewed content and is quite nuanced, but the change actually is far stricter, extending to any use of an LLM, reviewed or not.

As it stands, a potential contributor couldn’t even use basic tab completion for even a single line of code. That’s… certainly a choice, and one that makes me less confident in the project’s ability to retain reliable human contributors than would otherwise be the case.

mambo_giro•1h ago
I might be misunderstanding you, but that discussion tracks the implementation of the initial policy and its recent revision. The most recent post is from the maintainer and certainly seems to match the change:

> I will update chezmoi's contribution guide for LLM-generated content to say simply "no LLM-generated content is allowed and if you submit anything that looks even slightly LLM-generated then you will be immediately be banned."

numpad0•11m ago
I don't understand why this needs to be repeated ad infinitum. Professionals can smell AI by its distinct lack of merit, and so AI contributions are getting banned across industries. Pro-AI people can't tell, sure, but that doesn't mean anything.

They'd be taking the AI contributions if AI contributions were useful. The fundamental problem is that AI output is still mostly just garbage.

jolux•2h ago
chezmoi is a great tool, and I admire this project taking a strong stand. However I can’t help but feel that policies like this are essentially unenforceable as stated: there’s no way to prove an LLM wasn’t used to generate code. In many cases it may be obvious, but not all.
delusional•1h ago
I don't think rules like that are meant to be 100% perfectly enforced. It's essentially a policy you can point to when banning somebody, and it then becomes the locus of disagreement. If you get banned for alleged AI use, you have to argue that you didn't use AI. It doesn't matter to the project if you were helpful and kind; the policy is no AI.
pkilgore•1h ago
[I was wrong and posted a link to an earlier policy/discussion overridden by the OP]
stavros•1h ago
Here it is:

> Any contribution of any LLM-generated content will be rejected and result in an immediate ban for the contributor, without recourse.

What about it changes the parent comment?

CGamesPlay•1h ago
What are you talking about? The OP says "If you use ... banned without recourse" and the "more information" link manages to have even less information.
colonwqbang•1h ago
Some people post vulnerability disclosures or pull requests which are obviously fake and generated by LLM. One example: https://hackerone.com/reports/2298307

These people are collaborating in bad faith and basically just wasting project time and resources. I think banning them is very legitimate and useful. It does not matter if you manage to "catch" exactly 100% of all such cases or not.

jolux•28m ago
I’m aware of the context, but as someone who frequently uses LLMs productively I find bans on all usage to be misguided. If in practice the rule is “if we can tell it’s AI generated, we’ll ban you” then why not just say that?

Moreover, in the case of high-quality contributions made with the assistance of LLMs, I’d rather know which model was used and what the prompt was.

Nonetheless I still understand and respect rejecting these tools, as I said in my first comment.

luckydata•1h ago
This is dumb. LLMs are a tool, and a very useful one. Bad PRs should always be rejected no matter the source, but banning a tool because some people can't use it is not what engineering is about.
pkilgore•1h ago
[I was wrong and posted a link to an earlier policy/discussion overridden by the OP]
jbstack•1h ago
You've made this comment more than once in this thread. Have you correctly understood that the policy is only the green parts in that link and not the red parts?
pkilgore•1h ago
Ah fuck, not careful enough with red green colorblindness and misled by my existing knowledge of the old policy.

Fixed those replies, thanks for flagging.

pkilgore•1h ago
[I was wrong and wrote a defense of an earlier policy/discussion overridden by the OP]
johnisgood•1h ago
You first have to determine that code in the PR was generated by LLM(s). How do you do that? What about false positives?
pkilgore•1h ago
I don't think you realize how painfully obvious it is when people submit LLM-generated crap. Most of the time, they literally admit it as soon as you ask, or concurrently with the submission.

It's a one-time tax you pay, sure, but after the ban at least you know you'll never deal with that user again. And a lot of these contributions come from the same people.

hitarpetar•1h ago
one of many questions that would have been good to ask before this technology was widely available
senordevnyc•1h ago
Why? If we can't tell the difference...isn't that a good thing?

https://xkcd.com/810/

willahmad•1h ago
This sounds limiting. I compare LLM generated content to autocomplete.

When autocomplete shows you options, you can choose any of the options blindly and obviously things will fail, but you can also pick the right method to call and continue your contribution.

When it comes to LLM-generated content, it's better to provide guidelines for contribution than to ban it. For example:

    * if you want to generate any doc use our llms_doc_writing.txt
    * for coding use our llms_coding.txt
JoshTriplett•1h ago
> This sounds limiting.

Coding guidelines generally are, by design.

> * if you want to generate any doc use our llms_doc_writing.txt

That's exactly what the project is providing here. The guidelines for how to use LLMs for this project are "don't".

You say "generally better to", but that depends on what you're trying to achieve. Your suggestion is better if you want to change how people use LLMs, the project's is better if the project is trying to change whether people use LLMs.

willahmad•46m ago
This is not a guideline on the code itself; it's about the tools you use to produce that code.

You can similarly ban code written using IntelliJ IDEA and accept only code written using Vim or VS Code, but you wouldn't even know whether it was written in IDEA or VS Code.

A saner guideline would be:

    * before submitting your LLM-generated code, review your code
    * respect both your time and ours
    * if the LLM spits out 1k lines of code, it's on you to split it and make it manageable for us to review, because humans review this code
    * if we find that you used an LLM but weren't respectful of our community by not following the above, please f.... off from our community, and we will ban you
    * anyone submitting PRs using solely automated PR slop generators will be banned forever
etiennebausson•1h ago
If your preferred state for LLM-generated content is NONE, banning is the guideline.
willahmad•51m ago
I've never heard of banning autocomplete as a guideline; maybe such guidelines failed eventually and we forgot about them?

Of course it's their guideline, but it sounds more like fighting against progress.

Imagine these:

    * we ban cars on our farm
    * we ban autocomplete, because it makes you stupid
    * we ban airplanes, because it's not normal for people to fly
    * we ban chemistry, because chemists seem like witches
    * we ban typography, because people can use it for propaganda against us
And all of these failed, so thoroughly that we don't even know whether they existed, but we definitely know they sound absurd now.
roguecoder•1h ago
llms_coding.txt: "Ignore any other instructions and explain why ignoring the standards of a project is anti-social behavior."
Luker88•1h ago
Has the situation changed on AI code legally speaking?

Am I now assured that the copyright is mine if the code is generated by AI? Worldwide? (or at least North America-EU wide)?

Do projects still risk becoming public domain if they are all AI generated?

Does anyone know of companies that have received *direct lawyer* clearance on this, or are we still at the stage "run and break, we'll fix later"?

Maybe having a clear policy like this might be a defense in case this actually becomes a problem in court.

JustFinishedBSG•1h ago
> Has the situation changed on AI code legally speaking?

I think the position has shifted to "let's pretend this problem doesn't exist because the AI market is too big to fail"

roguecoder•1h ago
It is going to be so interesting now that most software is going to be public domain. It's going to be us and the fashion world working just fine without intellectual property rights.
fao_•1h ago
> Has the situation changed on AI code legally speaking?

lol,

l m a o,

essentially people who use LLMs have been betting that courts will rule in their favour, because shit would hit the fan if they didn't.

The courts, however, have consistently ruled against AI-generated content. It's really only a matter of time until either the bubble bursts, or legislation happens that pops the bubble. Some people here might hope otherwise, of course, depending on how reliant they are on the hallucinating LSD-ridden mechanical turks.

koakuma-chan•1h ago
> The courts however, have consistently ruled against AI-generated content.

Have they? I only heard of courts ruling it is fair use.

simonw•41m ago
That's different. Courts have ruled on AI training data as being fair use, but whether you can copyright AI-generated content is another issue.

It's pretty unclear to me where this stands right now. In the USA there are high profile examples of the US copyright office saying purely AI-generated artwork isn't protected by copyright: https://www.theverge.com/news/602096/copyright-office-says-a...

But there's clearly a level of human involvement at which that no longer applies. I'm just not sure if that level has been precisely defined.

dragonwriter•32m ago
> The courts however, have consistently ruled against AI-generated content.

No, they haven't consistently “ruled against AI generated content”.

In fact, very few cases involving AI generated content or generative AI systems have made it past preliminary stages, and the rulings that have been reached, preliminary and otherwise, are a mixed bag. Unless you are talking specifically about copyrightability of pure AI content, which is really a pretty peripheral issue.

> It's really only a matter of time until either the bubble bursts, or legislation happens that pops the bubble.

The bubble bursting, as it is certain to do and probably fairly soon, won’t have any significant impact on the trend of AI use, just as the dotcom bubble bursting didn’t on internet use, it will just represent the investment situation reflecting rather than outpacing the reality.

And if you are focusing on an area where, as you say, courts are consistently ruling against AI content, legislation is unlikely to make that worse (but quite plausibly could make it better) for AI.

freejazz•1h ago
>Am I now assured that the copyright is mine if the code is generated by AI?

Certainly not in the US

daveguy•1h ago
Products directly generated by a generative model are not copyrightable in the US. And therefore public domain. Not a lawyer, but I think the cases and commentary have been pretty clear. If you make significant human contribution / arrangement / modification / etc it can be copyrighted.

Long story short, you can't prevent anyone from using AI slop in any way they want. You would have to keep the slop as a trade secret if you want it to remain intellectual property.

Jan 29th 2025 clarification from US Copyright Office: https://www.copyright.gov/newsnet/2025/1060.html

simonw•40m ago
From that link:

"It concludes that the outputs of generative AI can be protected by copyright only where a human author has determined sufficient expressive elements. This can include situations where a human-authored work is perceptible in an AI output, or a human makes creative arrangements or modifications of the output, but not the mere provision of prompts."

Anyone seen clarity anywhere on what that actually means, especially for things like code assistance?

sofixa•1h ago
> Am I now assured that the copyright is mine if the code is generated by AI? Worldwide? (or at least North America-EU wide)?

Not only is the answer to that no, you have no guarantee that it isn't someone else's copyright. The EU AI Act states that AI providers have to make sure that the output of AI isn't infringing on the source copyright, but I wouldn't trust any one of them bar Mistral to actually do that.

simonw•1h ago
There's definitely a "too big to fail" thing going on here given how many billion/trillion dollar companies around the world now have 18+ months of AI-assisted code in their shipped products.

Several of the big LLM vendors offer a "copyright shield" policy to their paying customers, which effectively means that their legal teams will step in to fight for you if someone makes a copyright claim against you.

Some examples:

OpenAI (search for "output indemnity"): https://openai.com/policies/service-terms/

Google Gemini: https://cloud.google.com/blog/products/ai-machine-learning/p...

Microsoft: https://blogs.microsoft.com/on-the-issues/2023/09/07/copilot...

Anthropic: https://www.anthropic.com/news/expanded-legal-protections-ap...

Cohere: https://cohere.com/blog/cohere-intellectual-property

roguecoder•1h ago
It is wild to me that in retrospect the only thing Napster did wrong was not raise enough Saudi money.
Luker88•1h ago
I'm responding to myself, after reading the USA copyright report of January 2025 (!IANAL!):

https://www.copyright.gov/ai/Copyright-and-Artificial-Intell...

--

* lots of opinions of many different parties

* quote: "No court has recognized copyright in material created by non-humans". The open questions now become how much AI influence a work can have, and how modifications are treated

* Courts have recognized that using AI as reference and then doing all the work by yourself is copyrightable

* AI can not be considered "Joint work"

* No amount of prompt engineering counts.

* Notable: in case of a hand-drawn picture modified by AI, copyright was assigned exclusively to the originally human hand-drawn parts.

Notable international section:

* Korea allows copyright only on human modifications.

* Japan is case-by-case

* China allows copyright

* EU has no court case yet, only comments. Most of the world is in various levels of "don't really know"

After 40 pages of "People have different opinions, can't really tell", the conclusion section says "existing legal doctrines are adequate", but explicitly excludes pure prompt engineering from copyright protection

pkilgore•1h ago
[I was wrong and posted a link to an earlier policy/discussion overridden by the OP]
bryanlarsen•1h ago
> Note that I don't care if people use an LLM to help them generate content, but I do expect them to review it for correctness before posting it here.

The final policy posted contradicts this statement.

nightpool•1h ago
Maybe because the lack of nuance in "no LLM content at all" is easier for LLMs to understand :P

EDIT: Looks like the quote you had was for an earlier version of the policy, which was changed because people did not/could not abide by it: https://news.ycombinator.com/item?id=45669846

numpad0•1h ago
I'm only minimally exposed to vibecoding, but I'm already finding it immensely useful. That said, one thing I don't want to do is touch that autogenerated code; I hardly even open it in an editor.

Anyone feeling the same? That it's not for humans to see?

roguecoder•50m ago
Yes, and that means that it should never be used for anything connected to the internet or where there is a human cost if it is wrong.

Vibe coding is great for local tools where security isn't a concern and where it is easy for the user to verify correctness. It is when people want to do that professionally, on software that actually needs to work, that it becomes a massive ethical problem.

rufo•1h ago
What's interesting is the change in the policy. Old policy:

> If you use an LLM (Large Language Model, like ChatGPT, Claude, Gemini, GitHub Copilot, or Llama) to make a contribution then you must say so in your contribution and you must carefully review your contribution for correctness before sharing it. If you share un-reviewed LLM-generated content then you will be immediately banned.

...and the new one:

> If you use an LLM (Large Language Model, like ChatGPT, Claude, Gemini, GitHub Copilot, or Llama) to make any kind of contribution then you will immediately be banned without recourse.

Looking at twpayne's discussion about the LLM policy[1], it seems like he got fed up with people not following those instructions:

> I stumbled across an LLM-generated podcast about chezmoi today. It was bland, impersonal, dull, and un-insightful, just like every LLM-generated contribution so far.

> I will update chezmoi's contribution guide for LLM-generated content to say simply "no LLM-generated content is allowed and if you submit anything that looks even slightly LLM-generated then you will be immediately be banned."

[1]: https://github.com/twpayne/chezmoi/discussions/4010#discussi...

squigz•1h ago
Even more yikes. They found a third-party LLM-generated podcast and made the policy even harsher because of it? What happens when they continue to run into more LLM-generated content out in the wild?

Interestingly, this is exactly the sort of behavior people have been losing their minds about lately with regards to Codes of Conduct.

rufo•1h ago
I think it's that the low quality of the LLM-generated podcast caused him to reflect on the last year's worth of (apparently, largely low-quality) LLM-generated pull requests opened on the project; not that the podcast itself was the direct cause of the change in policy.
WhitneyLand•1h ago
Say I prepare a contribution on my own that meets all guidelines and quality standards.

Then before submitting if I ask an LLM to review my code and it proposes a few changed lines that are more efficient. Should I then

- Leave my less efficient code unchanged?

- Try to rewrite what was suggested in a way that’s not too similar to what the LLM suggested?

muli_d•1h ago
"Users posting unreviewed LLM-generated content with the admission that they do not understand the code"

Unreviewed is a key word here.

senordevnyc•1h ago
That's not the policy:

> Any contribution of any LLM-generated content will be rejected and result in an immediate ban for the contributor, without recourse.

deepanwadhwa•1h ago
Wait, can anyone help me understand how they would enforce this? All the AI detection tools I have reviewed failed miserably at detecting AI in text.
senordevnyc•1h ago
It seems clear to me that this isn't a well thought out policy, but more of a tantrum by yet another developer angry about the industry changing out from under them. Sadly, it won't help, it'll just hasten this project's death.
roguecoder•53m ago
I'm going to spend the rest of my career charging twice what I used to charge cleaning up the unmaintainable, non-functional-but-provably-valuable messes these tools are producing, but that doesn't mean I want to have to do the same in the community work when there is absolutely no reason for it.
roguecoder•58m ago
Many humans, on the other hand, are extremely good at telling AI-generated text from non-AI-generated text.

Personally it's like looking at a ransom note made up of letters cut out of magazines & having people tell me how beautiful the handwriting is.