
AI tooling must be disclosed for contributions

https://github.com/ghostty-org/ghostty/pull/8289
197•freetonik•1h ago

Comments

electric_muse•1h ago
I just submitted my first big open source contribution to the OpenAI agents SDK for JS. Every word except the issue I opened was done by AI.

On the flip side, I'm preparing to open source a project I made for a serializable state machine with runtime hooks. But that's blood, sweat, and tears labor. AI is writing a lot of the unit tests and the code, but it's entirely by my architectural design.

There’s a continuum here. It’s not binary. How can we communicate what role AI played?

And does it really matter anymore?

(Disclaimer: autocorrect corrected my spelling mistakes. Sent from iPhone.)

kbar13•1h ago
If you read his note, I think he gives good insight as to why he wants PRs to signal AI involvement.

That being said, I feel like this is an intermediate step. It's really hard to review AI-slop PRs because it's so easy for those who don't know how to use AI to create a multi-hundred- or thousand-line diff. But when AI is used well, it really saves time and often produces high-quality work.

spaceywilly•1h ago
As long as they make it easy to add a “made with AI” tag to the PR, it seems like there’s really no downside. I personally can’t imagine why someone would want to hide the fact they used AI. A contractor would not try to hide that they used an excavator to dig a hole instead of a shovel.
victorbjorklund•49m ago
I guess if you write 1000 lines and the only AI involvement was tab-completing a variable name, you might not wanna say the code is written by AI.
ineedasername•46m ago
>I personally can’t imagine why someone would want to hide the fact they used AI.

Because of the perception that anything touched by AI must be uncreative slop made without effort. In the case of this article, why else are they asking for disclosure if not to filter and dismiss such contributions?

kg•1h ago
The OP seems to be coming from the perspective of "my time as a PR reviewer is limited and valuable, so I don't want to spend it coaching an AI agent or a thin human interface to an AI agent". From that perspective, it makes perfect sense to want to know how much a human is actually in the loop for a given PR. If the PR is good enough to not need much review then whether AI wrote it is less important.

An angle not mentioned in the OP is copyright - depending on your jurisdiction, AI-generated text can't be copyrighted, which could call into question whether you can enforce your open source license anymore if the majority of the codebase was AI-generated with little human intervention.

victorbjorklund•51m ago
As long as some of the code is written by humans it should be enforceable. If we assume AI code has no copyright (not sure it has been tested in courts yet), then only the parts written by the AI would lose protection. So if AI writes 100 lines of code in Ghostty then I guess yes, someone can "steal" those lines (but no other code in Ghostty). Why would anyone do that? 100 random lines of AI code in isolation aren't really worth anything...
ToucanLoucan•1h ago
> And does it really matter anymore?

Well, if you had read what was linked, you would find these...

> I think the major issue is inexperienced human drivers of AI that aren't able to adequately review their generated code. As a result, they're pull requesting code that I'm sure they would be ashamed of if they knew how bad it was.

> The disclosure is to help maintainers assess how much attention to give a PR. While we aren't obligated to in any way, I try to assist inexperienced contributors and coach them to the finish line, because getting a PR accepted is an achievement to be proud of. But if it's just an AI on the other side, I don't need to put in this effort, and it's rude to trick me into doing so.

> I'm a fan of AI assistance and use AI tooling myself. But, we need to be responsible about what we're using it for and respectful to the humans on the other side that may have to review or maintain this code.

I don't know specifically what PRs this person is seeing. I do know there's been a rumble around the open source community that inexperienced devs are trying to get PRs accepted on open source projects because they look good on a resume. This predated AI, in fact, with it being a commonly cited way to get attention in a competitive recruiting market.

As always, folks trying to find work have my sympathies. But ultimately these folks are demanding time and work from others, for free, to improve their career prospects, while putting in the absolute bare minimum of effort one could conceivably put in (having Copilot rewrite some part of an open source project and shoving it into a PR with an explanation of what it did), and I don't blame maintainers for being annoyed at the number of low-quality submissions.

I have never once criticized a developer for being inexperienced. It is what it is, we all started somewhere. However if a dev generated shit code and shoved it into my project and demanded a headpat for it so he could get work elsewhere, I'd tell him to get bent too.

beckthompson•1h ago
I think it's simple: just don't hide it. I've had multiple contributors try to hide the fact that they used AI (e.g. removing Claude as a code author; they didn't know how to do it and closed the PR when it first happened). I don't really care if someone uses AI, but most of the people who do also don't test their changes, which just gives me more work. If someone:

1.) Didn't try to hide the fact that they used AI

2.) Tested their changes

I would not care at all. The main issue is that this is usually not the case: most people submitting PRs that are 90% AI don't bother testing (usually they don't even run the automated tests).

Jaxan•19m ago
> How can we communicate what role AI played?

What about just telling exactly what role AI played? You can say it generated the tests for you for instance.

Waterluvian•1h ago
I’m not a big AI fan but I do see it as just another tool in your toolbox. I wouldn’t really care how someone got to the end result that is a PR.

But I also think that if a maintainer asks you to jump before submitting a PR, you politely ask, “how high?”

quotemstr•1h ago
As a project maintainer, you shouldn't make unenforceable rules that you and everyone else know people will flout. Doing so makes you seem impotent and diminishes the respect people have for rules in general.

You might argue that by making rules, even futile ones, you at least establish expectations and take a moral stance. Well, you can make a statement without dressing it up as a rule. But you don't get to be sanctimonious that way I guess.

voxl•54m ago
Except you can enforce this rule some of the time. People discover, or at least suspect, that AI was used all the time, and people admit to it after some pressure.

Not every time, but sometimes. The threat of being caught isn't meaningless. You can decide not to play in someone else's walled garden if you want, but the least you can do is respect their rules; that's the bare minimum of human decency.

quotemstr•46m ago
It. doesn't. matter.

The only legitimate reason to make a rule is to produce some outcome. If your rule does not result in that outcome, of what use is the rule?

Will this rule result in people disclosing "AI" (whatever that means) contributions? Will it mitigate some kind of risk to the project? Will it lighten maintainer load?

No. It can't. People are going to use the tools anyway. You can't tell. You can't stop them. The only outcome you'll get out of a rule like this is making people incrementally less honest.

recursive•42m ago
Sometimes you can tell.
blaufuchs•40m ago
> Will it lighten maintainer load?

Yes, that is the stated purpose. Did you read the linked GitHub comment? The author lays out their points pretty well; you sound unreasonably upset about this. Are you submitting a lot of AI slop PRs or something?

P.S. Talking. Like. This. Is. Really. Ineffective. It. Makes. Me. Just. Want. To. Disregard. Your. Point. Out. Of. Hand.

devmor•34m ago
There are plenty of argumentative and opinionated reasons to say it matters, but there is one that can't really be denied - reviewers (and project maintainers, even if they aren't reviewers) are people whose time deserves to be respected.

If this rule discourages low quality PRs or allows reviewers to save time by prioritizing some non-AI-generated PRs, then it certainly seems useful in my opinion.

natrius•26m ago
Unenforceable rules are bad, but if you tweak the rule to always require some sort of authorship statement (e.g. "I wrote this by hand" or "I wrote this with Claude"), then the honor system will mostly achieve the desired goal of calibrating code review effort.
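To make that concrete, here is a minimal sketch of what the honor-system check might look like in CI. The script name, the PR_BODY variable, and the accepted phrases are all hypothetical, not anything the Ghostty project actually runs:

    # check_disclosure.py: hypothetical CI gate, not the project's real tooling.
    # Fails the check unless the PR description contains an authorship statement.
    import os
    import sys

    ACCEPTED_PHRASES = (
        "i wrote this by hand",
        "i wrote this with",  # e.g. "I wrote this with Claude"
    )

    body = os.environ.get("PR_BODY", "").lower()

    if not any(phrase in body for phrase in ACCEPTED_PHRASES):
        print("PR must include an authorship statement, e.g. "
              "'I wrote this by hand' or 'I wrote this with Claude'.")
        sys.exit(1)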
KritVutGu•10m ago
> As a project maintainer, you shouldn't make unenforceable rules

Total bullshit. It's totally fine to declare intent.

You are already incapable of verifying / enforcing that a contributor is legally permitted to submit a piece of code as their own creation (Signed-off-by), and do so under the project's license. You won't embark on looking for prior art, for the "actual origin" of the code, whatever. You just make them promise, and then take their word for it.

wahnfrieden•59m ago
You should care. If someone submits a huge PR, you’re going to waste time asking questions and comprehending their intentions if the answer is that they don’t know either. If you know it’s generated and they haven’t reviewed it themselves, you can decide to shove it back into an LLM for next steps rather than expect the contributor to be able to do anything with your review feedback.

Unreviewed generated PRs can still be helpful starting points for further LLM work if they achieve desired results. But close reading with consideration of authorial intent, giving detailed comments, and asking questions from someone who didn't write or read the code is a waste of your time.

That's why we need to know if a contribution was generated or not.

KritVutGu•15m ago
You are absolutely right. AI is just a tool to DDoS maintainers.

Any contributor who was shown to post provably untested patches used to lose credibility. And now we're talking about accommodating people who don't even understand how the patch is supposed to work?

cvoss•50m ago
It does matter how and where a PR comes from, because reviewers are fallible and finite, so trust inevitably enters the equation. You must ask "Do I trust where this came from?" And to answer that, you need to know where it came from.

If trust didn't matter, there wouldn't have been a need for the Linux Kernel team to ban the University of Minnesota for attempting to intentionally smuggle bugs through the PR process as part of an unauthorized social experiment. As it stands, if you / your PRs can't be trusted, they should not even be admitted to the review process.

koolba•29m ago
> You must ask "Do I trust where this came from?" And to answer that, you need to know where it came from.

No you don’t. You can’t outsource trust determinations. Especially to the people you claim not to trust!

You make the judgement call by looking at the code and your known history of the contributor.

Nobody cares if contributors use an LLM or a magnetic needle to generate code. They care if bad code gets introduced or bad patches waste reviewers’ time.

falcor84•23m ago
Trust is absolutely a thing. Maintaining an open source project is an unreasonably demanding and thankless job, and it would be even more so if you had to treat every single PR as if it's a high likelihood supply-chain attack.
KritVutGu•21m ago
This is it exactly.

Slop generators being available to everyone makes everyone less trustworthy, from a maintainer's POV. Thus the circle of trust, for any given maintainer, shrinks starkly.

People do not become maintainers because they want to battle malicious, or even criminally negligent, crap. They expect benign and knowledgeable contributors, or at least benign ones willing to do their homework.

Being a maintainer is already hugely thankless. It's hard work (harder than writing code), and it comes with a lot less recognition. Not to mention all the newcomers whom maintainers (a) usually eagerly educate, but who then (b) disappear.

Screw up the social contract for maintainers even more, and they'll go extinct. (Edit: if a maintainer gets a whiff of some contributor working against them, rather than with them, they'll either ban the contributor forever, or just quit the project.)

Any sane project should categorically ban AI-assisted contributions, and extend their Signed-off-by definition, after a cut-off date, to carry an explicit statement by the contributor that the code is free of AI output. If this rules out "agentic IDEs", that's a win.

renrutal•43m ago
I wouldn't call it "just another tool". AI introduces a new kind of tool where the ownership of the resulting code is not straightforward.

If, in some dystopian future, a court you're subject to decides that Claude was trained on Oracle's code, and that all Claude users are possibly in breach of copyright, it's easier to nuke all disclosed AI contributions from orbit.

raincole•43m ago
When one side has much more "scalability" than the other, the other side has a very strong motivation to match up.

- People use AI to write cover letters. If the companies don't filter them out automatically, they're screwed.

- Companies use AI to interview candidates. No one wants to spend their personal time talking to a robot. So the candidates start using AI to take interviews for them.

etc.

If you don't at least tell yourself that you don't allow AI PRs (even just as a white lie), you'll one day use AI to review PRs.

oceanplexian•26m ago
Both sides will use AI and it will ultimately increase economic productivity.

Imagine living before the invention of the printing press and lamenting that it should be banned because it makes it "too easy" to distribute information and will enable "low quality" publications to have more reach. Actually, this exact thing happened, but the end result was that it massively disrupted the world and economy in extremely positive ways.

bootsmann•20m ago
> Both sides will use AI and it will ultimately increase economic productivity.

Citation needed. I don't think the printing press and GPT are in any way comparable.

nosignono•34m ago
> I wouldn’t really care how someone got to the end result that is a PR.

I can generate 1,000 PRs today against an open source project using AI. I think you do care; you are only thinking about the happy path where someone uses a little AI to draft a well-constructed PR.

There are a lot of ways AI can be used to quickly overwhelm a project maintainer.

Waterluvian•32m ago
In that case a more correct rule for that issue (and probably one that can be automatically enforced) is a max number of PRs or open issues per account.
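A sketch of what such a cap could look like, using GitHub's search API; the threshold, function name, and example repo are illustrative assumptions, not an existing bot:

    # Hypothetical per-account rate limit: count a user's open PRs
    # against a repo via GitHub's search API.
    import requests

    MAX_OPEN_PRS = 3  # illustrative cap, not any project's real policy

    def over_pr_limit(owner: str, repo: str, user: str) -> bool:
        resp = requests.get(
            "https://api.github.com/search/issues",
            params={"q": f"repo:{owner}/{repo} type:pr state:open author:{user}"},
            headers={"Accept": "application/vnd.github+json"},
            timeout=10,
        )
        resp.raise_for_status()
        # total_count is the number of matching open PRs by this author
        return resp.json()["total_count"] > MAX_OPEN_PRS

    # e.g. over_pr_limit("ghostty-org", "ghostty", "some-account")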
oceanplexian•20m ago
> I can generate 1,000 PRs today against an open source project using AI.

Then perhaps the way you contribute, review, and accept code is fundamentally wrong and needs to change with the times.

It may be that technologies like GitHub PRs and other VCS patterns are literally obsolete. We've done this before through many cycles of technology, and these are the questions we need to ask ourselves as engineers, not stick our heads in the sand and pretend it's 2019.

whatevertrevor•8m ago
I don't think throwing out the concept of code reviews and version control is the correct response to a purported rise in low-effort high-volume patches. If anything it's even more required.
Razengan•32m ago
> if a maintainer asks you to jump before submitting a PR, you politely ask, “how high?”

or say "fork you."

dsjoerg•23m ago
You haven't addressed the primary stated rationale from the linked content: "I try to assist inexperienced contributors and coach them to the finish line, because getting a PR accepted is an achievement to be proud of. But if it's just an AI on the other side, I don't need to put in this effort, and it's rude to trick me into doing so."
rattlesnakedave•1h ago
In my personal projects I also require all contributors to disclose whether they've used an editor with any autocomplete features enabled.
freedomben•56m ago
Heh, that's a great way to make a point, but right now AI is nothing like a traditional editor autocomplete. Yes, you can use it that way, but it's by no means limited to that. If you treat AI as a fancy autocomplete, that's a fine personal philosophy, but there are plenty of people who aren't using it that way.
miloignis•24m ago
Notably, tab completion is an explicitly called-out exception to this policy, as detailed in the changed docs.
estimator7292•59m ago
Do I also have to disclose using tab completion? My IDE uses machine learning for completion suggestions.

Do I need to disclose that I wrote a script to generate some annoying boilerplate? Or that my IDE automatically templates for loops?

AlexandrB•47m ago
It's a spectrum, isn't it? I wouldn't want to waste my time reviewing a bunch of repetitive code generated from some script or do something like review every generated template instantiation in a C++ code base. I would want to review the script/template definition/etc., but what's the equivalent for AI? Should the review just be the prompt(s)?

Edit: Also, it's always good to provide maximal context to reviewers. For example, when I use code from StackOverflow I link the relevant answer in a comment so the reviewer doesn't have to re-tread the same ground I covered looking for that solution. It also gives reviewers some clues about my understanding of the problem. How is AI different in this regard?

recursive•41m ago
If you're not sure, it's probably safer to just mention it.
flexagoon•34m ago
No, it explicitly says that you don't need to disclose tab completion.
KritVutGu•6m ago
> Do I also have to disclose using tab completion? My IDE uses machine learning for completion suggestions.

Yes, you have to disclose it.

> Do I need to disclose that I wrote a script to generate some annoying boilerplate?

You absolutely need to disclose it.

> Or that my IDE automatically templates for loops?

That's probably worth disclosing too.

hodgehog11•57m ago
How does this not lead to a situation where no honest person can use any AI in their submissions? Surely pull requests that acknowledge AI tooling will be given significantly less attention, on the grounds that no one wants to read work that they know is written by AI.
MerrimanInd•50m ago
It just might. But if people develop a bias against AI-generated code because AI can generate massive amounts of vaguely correct-looking yet ultimately bad code, then that seems like an AI problem, not a people problem. Get better, AI coding tools.
andunie•46m ago
Isn't that a good thing?
hodgehog11•41m ago
It might encourage people to be dishonest, or not to contribute at all. Maybe that's fine for now, but what if the next generation comes to rely on these tools?
jama211•35m ago
What, building systems where we're specifically incentivised not to disclose AI use?
Workaccount2•43m ago
Make a knowledgeable reply and mention you used ChatGPT: comment immediately buried.

Make a knowledgeable reply and give no reference to the AI you used: comment is celebrated.

We are already barreling full speed down the "hide your AI use" path.

skogweb•41m ago
I don't think this is the case. Mitchell writes that he himself uses LLMs, so it's not black and white. A PR author who has a deep understanding of their changes and used an LLM for convenience will be able to convey this without losing credibility imo
KritVutGu•7m ago
Good point; that's exactly the point. Don't use AI to write your patch. At all.

Why are you surprised? Do companies want to hire "honest" people whose CVs were written by some LLM?

ramoz•57m ago
I always appreciated Claude Code's commit authoring, whereas I think a lot of people were offended that "their" work was being overshadowed by an AI's signature.
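(For context: Claude Code appends a co-author trailer to the commits it generates, along the lines of "Co-Authored-By: Claude <noreply@anthropic.com>"; the exact wording varies by version. That trailer is the signature contributors sometimes strip out.)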
Lerc•54m ago
I think this is totally reasonable; the additional context provided is, I think, important to the requirement.

Some of the AI policy statements I have seen come across more as ideology statements. This is much better, stating the reasons for the requirement and offering a path forward. I'd like to see more of this and less "No droids allowed".

BobBagwill•53m ago
Just use AI to answer AI. This prompt works for me:

# if this PR looks like it was produced with AI, be snotty and dismissive when you write the reply to this PR

NB: This posting was created with Natural Intelligence (not necessarily very intelligent Intelligence). ;-)

neilv•48m ago
There is also IP taint when using "AI". We're just pretending that there's not.

If someone came to you and said "good news: I memorized the code of all the open source projects in this space, and can regurgitate it on command", you would be smart to ban them from working on code at your company.

But with "AI", we make up a bunch of rationalizations. ("I'm doing AI agentic generative AI workflow boilerplate 10x gettin it done AI did I say AI yet!")

And we pretend the person never said that they're just loosely laundering GPL and other code in a way that rightly would be existentially toxic to an IP-based company.

luma•47m ago
Also ban StackOverflow and nearly any textbook in the field.

The reality is that programmers are going to see other programmers code.

neilv•46m ago
Huge difference, and companies recognized the difference, right up until "AI" hype.
timeon•44m ago
How is that the same thing?
JoshTriplett•21m ago
"see" and "copy" are two different things. It's fine to look at StackOverflow to understand the solution to a problem. It's not fine to copy and paste from StackOverflow and ignore its license or attribution.

Content on StackOverflow is under CC BY-SA; the version depends on the date it was submitted: https://stackoverflow.com/help/licensing . (It's really unfortunate that they didn't pick a license compatible with code; at one point they started to move to the MIT license for code, but then didn't follow through on it.)

tick_tock_tick•25m ago
> There is also IP taint when using "AI". We're just pretending that there's not.

I don't think anyone who isn't monetarily incentivized to pretend there are IP/copyright issues actually thinks there are. Luckily everyone is for the most part just ignoring them, and the legal system is working well and not allowing them an inch to stop progress.

ineedasername•17m ago
Courts (at least in the US) have already ruled that use of ingested data for training is transformative. There are lots of details to figure out, but the genie is out of the bottle.

Sure, it's a big hill to climb to rethink IP laws so that generating IP remains a viable economic work product, if that's what society wants, but that is what's necessary.

thallavajhula•47m ago
I'm loving today. HN's front page is filled with good sources: no nonsense sensationalism or AI-doom preaching, just more realistic experiences.

I've completely turned off AI assist on my personal computer and only use it sparingly on my work computer. It is so bad at compound work. AI assist is great at atomic work. The rest should be handled by humans, using AI wisely. It all boils down to human intelligence. AI is only as smart as the human handling it. That's the bottom line.

devmor•37m ago
I'm right there with you, and having a similar experience at my day job. We are doing a bit of a "hack week" right now where we allow everyone in the org to experiment in groups with AI tools, especially those that don't regularly use them as part of their work - and we've seen mostly great applications of analytical approaches, guardrails and grounded generation.

It might just be my point of view, but I feel like there's been a sudden paradigm shift back to solid ML from the deluge of chatbot hype nonsense.

tick_tock_tick•30m ago
> AI is only as smart as the human handling it.

I think I'm slowly coming around to this viewpoint too. I really just couldn't understand how so many people were having such widely different experiences. AI isn't magic; how could I have expected all the people I've worked with who struggle to explain stuff to team members (who have near-perfect context) to manage to get anything valuable across to an AI?

I was originally pretty optimistic that AI would allow most engineers to operate at a higher level, but it really seems like instead it's going to massively exacerbate the difference between an OK engineer and a great engineer. Not really sure how I feel about that yet, but at least I understand now why some people think the stuff is useless.

katbyte•17m ago
It's like the difference between someone who can search the internet or a codebase well vs someone who can't.

Using search engines is a skill

jerf•14m ago
I've been struggling to apply AI on any large scale at work. I was beginning to wonder if it was me.

But then my wife sort of handed me a project that previously I would have just said no to: a particular Android app for the family. I have instances of all the various Android technologies under my belt, that is, I've used GUI toolkits, I've used general-purpose programming languages, I've used databases, etc., but with the possible exception of SQLite (and even that is accessed through an ORM), I don't know any of the specific technologies involved with Android now. I have never used Kotlin; I've got enough experience that I can pretty much piece it together when I'm reading it, but I can't write it. Never used the Android UI toolkit, services, permissions, media APIs, ORMs, build system, etc.

I know from many previous experiences that A: I could definitely learn how to do this but B: it would be a many-week project and in the end I wouldn't really be able to leverage any of the Android knowledge I would get for much else.

So I figured this was a good chance to take this stuff for a spin in a really hard way.

I'm about eight hours in and nearly done enough for the family; I need about another 2 hours to hit that mark, maybe 4 to really polish it. Probably another 8-12 hours and I'd have it brushed up to a rough commercial product level for a simple, single-purpose app. It's really impressive.

And I'm now convinced it's not just that I'm too old a fogey to pick it up, which is, you know, a bit of a relief.

It's just that it works really well in some domains, and not so much in others. My current work project is working through decades of organically grown cruft owned by 5 different teams, most of which don't even have a person on them who understands the cruft in question, and trying to pull it all together into one system where it belongs. I've been able to use AI here and there for some stuff that is still pretty impressive, like translating some code into pseudocode for my reference, and AI-powered autocomplete is definitely impressive when it correctly guesses the next 10 lines I was going to type effectively letter-for-letter. But I haven't gotten that large-scale win where I just type a tiny prompt in and see outsized results from it.

I think that's because I'm working in a domain where the code I'm writing is already roughly the size of the prompt I'd have to give, at least in terms of the "payload" of the work I'm trying to do, because of the level of detail and maturity of the code base. There's no single sentence I can type that an AI can essentially decompress into 250 lines of code the way that Gemini in Android Studio could decompress "I would like to store user settings with a UI to set the user's name, and then display it on the home page" which involved bringing in hundreds of lines of new code for just that.

I think I recommend this approach to anyone who wants to give this approach a fair shake - try it in a language and environment you know nothing about and so aren't tempted to keep taking the wheel. The AI is almost the only tool I have in that environment, certainly the only one for writing code, so I'm forced to really exercise the AI.

btucker•11m ago
I've been starting to think of it like this:

Great Engineer + AI = Great Engineer++ (where a great engineer isn't just a great coder; they're also a great communicator and collaborator, and they love to learn)

Good Engineer + AI = Good Engineer

OK Engineer + AI = Mediocre Engineer

btown•9m ago
One of my mental models is that the notion of "effective engineer" used to mean "effective software developer" whether or not they were good at system design.

Now, an "effective engineer" can be a less battle-tested software developer, but they must be good at system design.

(And by system design, I don't just mean architecture diagrams: it's a personal culture of constantly questioning and innovating around "let's think critically to see what might go wrong when all these assumptions collide, and if one of them ends up being incorrect." Because AI will only suggest those things for cut-and-dry situations where a bug is apparent from a few files' context, and no ambitious idea is fully that cut-and-dry.)

The set of effective engineers is thus shifting - and it's not at all a valid assumption that every formerly good developer will see their productivity skyrocket.

bgwalter•35m ago
I still do not understand how one can integrate "AI" code into a project with a license at all. "AI" code is not copyrightable, "AI" cannot sign a contributor agreement.

So if the code is integrated, the license of the project lies about parts of the code.

paulddraper•20m ago
> I still do not understand

Your question makes sense. See U.S. Copyright Office publication:

> If a work's traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it.

> For example, when an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the “traditional elements of authorship” are determined and executed by the technology—not the human user...

> For example, if a user instructs a text-generating technology to “write a poem about copyright law in the style of William Shakespeare,” she can expect the system to generate text that is recognizable as a poem, mentions copyright, and resembles Shakespeare's style. But the technology will decide the rhyming pattern, the words in each line, and the structure of the text.

> When an AI technology determines the expressive elements of its output, the generated material is not the product of human authorship. As a result, that material is not protected by copyright and must be disclaimed in a registration application.

> In other cases, however, a work containing AI-generated material will also contain sufficient human authorship to support a copyright claim. For example, a human may select or arrange AI-generated material in a sufficiently creative way that “the resulting work as a whole constitutes an original work of authorship.”

> Or an artist may modify material originally generated by AI technology to such a degree that the modifications meet the standard for copyright protection. In these cases, copyright will only protect the human-authored aspects of the work, which are “independent of” and do “not affect” the copyright status of the AI-generated material itself.

> This policy does not mean that technological tools cannot be part of the creative process. Authors have long used such tools to create their works or to recast, transform, or adapt their expressive authorship. For example, a visual artist who uses Adobe Photoshop to edit an image remains the author of the modified image, and a musical artist may use effects such as guitar pedals when creating a sound recording. In each case, what matters is the extent to which the human had creative control over the work's expression and “actually formed” the traditional elements of authorship.

> https://www.federalregister.gov/documents/2023/03/16/2023-05...

In any but a pathological case, a real code contribution to a real project has sufficient human authorship to be copyrightable.

> the license of the project lies about parts of the code

That was a concern pre-AI too! E.g. copy-paste from StackOverflow. Projects require contributors to sign CLAs, which doesn't guarantee compliance but strengthens the legal position. Usually something like:

"You represent that your contribution is either your original creation or you have sufficient rights to submit it."

paulddraper•35m ago
Aren't a large majority of programmers using Copilot/Cursor/AI autocompletion?

This seems very noisy/unhelpful.

ovaistariq•32m ago
I don't see much benefit from the disclosure alone. Ultimately, this is code that needs to be reviewed. There is going to be more and more AI-assisted code generation, to the point where these tools see the same level of adoption as autocomplete. Why not solve this through tooling? I have used tools like Greptile, Cursor's BugBot, and Claude Code to great effect.
Jaxan•23m ago
Sure, it needs to be reviewed. But the author does more than just review: they help the person submitting the PR to improve it. If the other side is an AI, knowing that can save them some time.
wmf•20m ago
If the code is obviously low quality and AI-generated then it doesn't need to be fully reviewed actually. You can just reject the PR.
philjohn•31m ago
I like the pattern of including each prompt used to make a given PR. Yes, I know that LLMs aren't deterministic, but it also gives context on the steps required to get to the end state.
mock-possum•27m ago
I'm using SpecStory in VS Code + Cursor for this. It keeps a nice little md doc of all your LLM interactions, and you can check that into source control if you like, so it's included in pull requests and can be referenced during code review.
mglvsky•15m ago
A little offtopic: does anyone remember mitchellh's setup for working with AI tools? I remember someone posted it in one of the AI love/hate threads here, and it's not on his blog[1]

1: https://mitchellh.com/writing

epolanski•15m ago
This isn't an AI problem, this is a human one.

Blaming the tool, and not the person misusing it to try to get their name on a big OSS project, is like blaming the new automatic oven in the kitchen, and not the chef, for a raw pizza arriving at the table.

stillpointlab•13m ago
I think ghostty is a popular enough project that it attracts a lot of attention, and that means it certainly attracts a larger-than-normal number of interlopers. There are all kinds of bothersome people in this world, but some of the most bothersome you will find are well-meaning people who are trying to be helpful.

I would guess that many (if not most) of the people attempting to contribute AI generated code are legitimately trying to help.

People who are genuinely trying to be helpful can often become deeply offended if you reject their help, especially if you admonish them. They will feel like the reprimand is unwarranted, considering the public shaming to be an injury to their reputation and pride. This is most especially the case when they feel they have followed the rules.

For this reason, if one is to accept help, the rules must be clearly laid out from the beginning. If the ghostty team wants to call out "slop", then it must make clear that contributing "slop" may result in a reprimand. Then the bothersome would-be-helpful contributors cannot claim injury.

This appears to me to be good governance.

king_geedorah•12m ago
Re: "What about my autocomplete?" which has shown up twice in this thread so far.

> As a small exception, trivial tab-completion doesn't need to be disclosed, so long as it is limited to single keywords or short phrases.

RTFA (RTFPR in this case)
