
Ghostty's AI Policy

https://github.com/ghostty-org/ghostty/blob/main/AI_POLICY.md
118•mefengl•2h ago

Comments

mefengl•2h ago
If you prefer not to use GitHub: https://gothub.lunar.icu/ghostty-org/ghostty/blob/main/AI_PO...
christoph-heiss•1h ago
Not sure why you are getting downvoted, given that the original site is such a jarringly user-hostile mess.
embedding-shape•1h ago
Without using a random 3rd party, and without the "jarring user-hostile mess":

https://raw.githubusercontent.com/ghostty-org/ghostty/refs/h...

flexagoon•36m ago
This option is pretty unreadable on mobile though
embedding-shape•32m ago
Is it? Just tried it in Safari, Firefox and Chrome on an iPhone 12 Mini and I can read all the text. Obviously it isn't formatted, as it's raw markdown, just like the parent's recommended 3rd-party platform, but nothing is cut off or missing for me.

Actually, trying to load that previous platform on my phone makes it worse for readability; it seems there's ~10% less width and less efficient use of vertical space. With both being unformatted markdown, I think the raw GitHub URL renders better on mobile, at least on small phones like my Mini.

user34283•1h ago
Whatever your opinion on the GitHub UI may be, at least the text formatting of the markdown is working, which can't be said for that alternative site.
postepowanieadm•1h ago
That's really nice - and a fast UI!
kleiba•40m ago
It gets even better when you click on "raw", IMO... which is what you also get when clicking on "raw" on Github.
cxrpx•1h ago
With limited training data, that LLM-generated code must be atrocious.
jakozaur•1h ago
See x thread for rationale: https://x.com/mitchellh/status/2014433315261124760?s=46&t=FU...

“ Ultimately, I want to see full session transcripts, but we don't have enough tool support for that broadly.”

I have a side project, git-prompt-story, to attach Claude Code sessions to GitHub commits via git notes. Though it's not that simple to do automatically (e.g. I need to redact credentials).
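The basic git-notes mechanism can be sketched like this (the `prompts` notes ref and the `session.json` filename are illustrative choices, not git-prompt-story's actual layout):

```shell
# Sketch: attach an agent session transcript to a commit via git notes.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "feature work"

# Pretend this is the (already redacted) agent session transcript.
echo '{"prompt": "add feature X"}' > session.json

# Attach it to the commit as a note under refs/notes/prompts.
git -c user.name=demo -c user.email=demo@example.com \
    notes --ref=prompts add -F session.json HEAD

# Read it back later with `git notes --ref=prompts show HEAD`.
git notes --ref=prompts show HEAD
```

One caveat with this approach: notes refs don't come along with a normal `git push`, so they have to be shared explicitly (e.g. `git push origin refs/notes/prompts`).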

radarsat1•1h ago
I've thought about saving my prompts along with project development and even done it by hand a few times, but eventually I realized I don't really get much value from doing so. Are there good reasons to do it?
fragmede•1h ago
It's not for you. It's so others can see how you arrived at the code that was generated. They can learn better prompting for themselves from it, and also see how you think. They can see which cases got considered, or not. All sorts of good stuff that would be helpful for reviewing giant PRs.
Ronsenshi•32m ago
Sounds depressing. First you deal with massive PRs and now also these agent prompts. Soon enough there won't be any coding at all, it seems. Just doomscrolling through massive prompt files and diffs in hopes of understanding what is going on.
simonw•55m ago
For me it's increasingly the work. I spend more time in Claude Code going back and forth with the agent than I do in my text editor hacking on the code by hand. Those transcripts ARE the work I've been doing. I want to save them in the same way that I archive my notes and issues and other ephemera around my projects.

My latest attempt at this is https://github.com/simonw/claude-code-transcripts which produces output like this: https://gisthost.github.io/?c75bf4d827ea4ee3c325625d24c6cd86...

awesan•27m ago
If the AI generated most of the code based on these prompts, it's definitely valuable to review the prompts before even looking at the code. Especially in the case where contributions come from a wide range of devs at different experience levels.

At a minimum it will help you be skeptical of specific parts of the diff so you can look at those more closely in your review. And it can inform test scenarios, etc.

optimalsolver•50m ago
>I want to see full session transcripts, but we don't have enough tool support for that broadly

I think AI could help with that.

arjunbajaj•1h ago
I can see this becoming a pretty generally accepted AI usage policy. Very balanced.

Covers most of the points I'm sure many of us have experienced here while developing with AI. Most importantly, AI generated code does not substitute human thinking, testing, and clean up/rewrite.

On that last point: whenever I've had Codex generate a substantial feature, I've usually had to rewrite a lot of the code to make it more compact, even when it's correct. Adding indirection where it doesn't make sense is a big mistake I've noticed LLMs make.

imiric•42m ago
I agree with you on the policy being balanced.

However:

> AI generated code does not substitute human thinking, testing, and clean up/rewrite.

Isn't that the end goal of these tools and companies producing them?

According to the marketing[1], the tools are already "smarter than people in many ways". If that is the case, what are these "ways", and why should we trust a human to do a better job at them? If these "ways" keep expanding, which most proponents of this technology believe will happen, then the end state is that the tools are smarter than people at everything, and we shouldn't trust humans to do anything.

Now, clearly, we're not there yet, but where the line is drawn today is extremely fuzzy, and mostly based on opinion. The wildly different narratives around this tech certainly don't help.

[1]: https://blog.samaltman.com/the-gentle-singularity

Terretta•28m ago
Intern generated code does not substitute for tech lead thinking, testing, and clean up/rewrite.
alansaber•1h ago
"Pull requests created by AI must have been fully verified with human use." should always be a bare minimum requirement.
vegabook•1h ago
Ultimately what's happening here is AI is undermining trust in remote contributions, and in new code. If you don't know somebody personally, and know how they work, the trust barrier is getting higher. I personally am already ultra vigilant for any github repo that is not already well established, and am even concerned about existing projects' code quality into the future. Not against AI per se (which I use), but it's just going to get harder to fight the slop.
epolanski•59m ago
Honestly, I don't care how people come up with the code they create, but I hold them responsible for what they try to merge.

I work in a team of 5 great professionals, and there hasn't been a single instance since Copilot launched in 2022 of anybody, in any single modification, not taking full responsibility for what's been committed.

I know we all use it, to different extents and in different ways, but the quality of what's produced hasn't dipped a single bit; I'd even argue it has improved, because LLMs can find answers more easily in complex codebases. We started putting `_vendor` directories with our main external dependencies as git subtrees, and it's super useful to find information about those directly in their source code and tests.
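For anyone unfamiliar with the subtree approach, a minimal sketch of this vendoring setup looks like the following (the `_vendor/somelib` path and repo names are made up for illustration, not the poster's actual dependencies):

```shell
# Sketch: vendor a dependency in-tree as a squashed git subtree.
set -e
work=$(mktemp -d) && cd "$work"
gitc() { git -c user.name=demo -c user.email=demo@example.com "$@"; }

# A stand-in for the upstream dependency.
gitc init -q -b main somelib && cd somelib
echo "some library source" > lib.txt
gitc add lib.txt && gitc commit -q -m "somelib v1"
cd ..

# The main project vendors it under _vendor/ with history squashed,
# so its sources (and tests) are greppable in-tree for the LLM.
gitc init -q -b main app && cd app
gitc commit -q --allow-empty -m "initial"
gitc subtree add --prefix=_vendor/somelib ../somelib main --squash

ls _vendor/somelib   # the dependency's files, now part of the repo
```

Updating the vendored copy later is `git subtree pull --prefix=_vendor/somelib <repo> <ref> --squash`; note that `git subtree` ships in git's contrib directory and may be absent from minimal installs.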

It's really that simple. If your teammates are producing slop, that's a human and professional problem, and these people should be fired. If you use the tool correctly, it can help you a lot in finding information and connecting dots.

Any person with a brain can clearly see the huge benefit of these tools, but also the great danger of not reviewing their output line by line and forfeiting the constant work of resolving design tensions.

Of course, open source is a different beast. The people committing may not be professionals and have no real stakes, so they have little to lose by producing slop, whereas maintainers are already stretched thin in time and attention.

embedding-shape•55m ago
> It's really as simple. If you or your teammates are producing slop, that's a human and professional problem and these people should be fired.

Agree. Slop isn't "the tool is so easy to use I can't review the code I'm producing"; slop is the symptom of "I don't care how it's done, as long as it looks correct", and that's been a problem before LLMs too. The difference is how quickly you reach the "slop" state now, not that you have to gate your codebase and reject shit code.

As always, most problems in "software programming" aren't about software or programming but everything around it, including communication and workflows. If your workflow allows people to not be responsible for what they produce, and if it allows shitty code to get into production, then that's on you and your team, not on the tools that individuals use.

altmanaltman•52m ago
I mean this policy only applies to outside contributors and not the maintainers.

> Ghostty is written with plenty of AI assistance, and many maintainers embrace AI tools as a productive tool in their workflow. As a project, we welcome AI as a tool!

> Our reason for the strict AI policy is not due to an anti-AI stance, but instead due to the number of highly unqualified people using AI. It's the people, not the tools, that are the problem.

Basically don't write slop and if you want to contribute as an outsider, ensure your contribution actually is valid and works.

kanzure•58m ago
Another project simply paused external contributions entirely: https://news.ycombinator.com/item?id=46642012

Another idea is to simply promote the donation of AI credits instead of output tokens. It would be better to donate credits, not outputs, because people already working on the project would be better at prompting and steering AI outputs.

lagniappe•50m ago
>people already working on the project would be better at prompting and steering AI outputs.

In an ideal world, sure, but I've seen the entire gamut, from amateurs producing surprisingly good work to experts whose prompt history looks like a comedy of errors and gotchas. There's some "skill" I can't quite put my finger on when it comes to the way you must speak to an LLM vs another dev. There's more monkey's-paw involved in the LLM process, in the sense that you get what you want, but do you want what you'll get?

CrociDB•51m ago
I recently had to do a similar policy for my TUI feed reader, after getting some AI slop spammy PRs: https://github.com/CrociDB/bulletty?tab=contributing-ov-file...

The fact that some people will straight up lie after submitting you a PR with lots of _that type_ of comment in the middle of the code is baffling!

nutjob2•49m ago
A factor that people have not considered is that the copyright status of AI-generated text is not settled law, and precedent or new law may retroactively change the copyright status of a whole project.

Maybe a bit unlikely, but still an issue no one is really considering.

There has been a single ruling (I think) that AI-generated code is uncopyrightable. There has been at least one affirmative fair-use ruling. Both of these are from the lower courts. I'm still of the opinion that generative AI is not fair use, because it's clearly substitutive.

direwolf20•44m ago
This only matters if you get sued for copyright violation, though.
christoph-heiss•15m ago
No? Licenses still apply even if you _don't_ get sued?
consp•12m ago
At what time in the future does this not become an issue?
Version467•48m ago
The biggest surprise to me with all this low-quality contribution spam is how little shame people apparently have. I have a handful of open source contributions. All of them are for small-ish projects, and the complexity of my contributions is in the same ballpark as what I work on day-to-day. And even though I am relatively confident in my competence as a developer, these contributions are probably the most thoroughly tested and reviewed pieces of code I have ever written. I just really, really don't want to bother someone who graciously offers their time to work on open source stuff with low-quality "help".

Other people apparently don't have this feeling at all. Maybe I shouldn't have been surprised by this, but I've definitely been caught off guard by it.

Etheryte•44m ago
I worked for a major open-source company for half a decade. Everyone thinks their contribution is a gift and you should be grateful. To quote Bo Burnham, "you think your dick is a gift, I promise it's not".
kleiba•41m ago
"Other people" might also just be junior devs - I have seen time and again how (over-)confident newbies can be in their code. (I remember one case where a student suspected a bug in the JVM when some Java code of his caused an error.)

It's not necessarily maliciousness or laziness, it could simply be enthusiasm paired with lack of experience.

xxs•9m ago
I have found bugs in the native JVM; it usually takes some effort, though. Printing the assembly is the easiest way. (I don't consider bugs in the java.lang/util/io/etc. code an interesting case.)

Memory leaks and issues with the memory allocator are a months-long process to pin on the JVM...

In the early days (Bug Parade times), bugs were a lot more common; nowadays, I'd say it'd be extreme naivete to consider the JVM the culprit from the get-go.

DrewADesign•39m ago
To have that shame, you need to know better. If you don't know any better, having access to a model that can produce code, plus a cursory understanding of the language syntax, probably feels like knowing how to write good code. Dunning-Kruger strikes again.

I'll bet there are also people trying to farm accounts with plausible histories for things like anonymous supply-chain attacks.

arbitrandomuser•38m ago
When it comes to enabling opportunities, I don't think it's a matter of shame for them anymore. A lot of people (especially in regions where living is tough and competition is fierce) will do anything, by hook or by crook, to get ahead of the competition. And if GitHub contributions are a metric for getting hired or getting noticed, then you are going to see them spammed.
flexagoon•38m ago
Keep in mind that many people also contribute to big open source projects just because they believe it will look good on their CV/GitHub and help them get a job. They don't care about helping anyone; they just want to write "contributed to Ghostty" in their application.
nchmy•19m ago
I think this falls under the "have no shame" comment that they made
Ronsenshi•38m ago
It's good to regularly see such policies and the discussions around them, to remind me how staggeringly shameless some people can be and how many such people are out there. Interacting mostly with my peers, friends, and acquaintances, I tend to forget that they don't represent the average population, and after some time I start to assume all people are reasonable and act in good faith.
blell•25m ago
It's nothing but cultural expectations. We need to firewall the West off the rest of the world. Not joking.
6LLvveMx2koXfwn•25m ago
Shamelessness is very definitely in vogue at the moment. It will pass; let's hope for more than ruins.
monegator•18m ago
> The biggest surprise to me with all this low-quality contribution spam is how little shame people apparently have.

Ever had a client second-guess you by replying with a screenshot from GPT?

Ever asked anything in a public group only to have a complete moron reply with a screenshot from GPT or - at least a bit of effort there - a copy/paste of the wall of text?

No, people have no shame. They have a need for a little bit of (borrowed) self-importance and validation.

Which is why I applaud every code of conduct that has public ridicule as punishment for wasting everybody's time.

monooso•12m ago
Not OP, but I don't consider these the same thing.

The client in your example isn't a (presumably) professional developer, submitting code to a public repository, inviting the scrutiny of fellow professionals and potential future clients or employers.

Aeolun•9m ago
Random people don’t do this. Your boss however…
Sharlin•7m ago
Problem is, people seriously believe that whatever GPT tells them must be true, because... I don't even know. Just because it sounds self-confident and authoritative? Because computers are supposed to not make mistakes? Because talking computers in science fiction don't make mistakes like that? The fact that LLMs ended up having this particular failure mode, out of all possible failure modes, is incredibly unfortunate and detrimental to society.
Sharlin•15m ago
You just have to go take a look at what people write on social media, using their real name and photo, to conclude that no, some people have no shame at all.
weinzierl•7m ago
"The biggest surprise to me with all this low-quality contribution spam is how little shame people apparently have."

And this is one half of why I think

"Bad AI drivers will be [..] ridiculed in public."

isn't a good clause. The other is that ridiculing others, no matter what, is just not decent behavior. Putting it as a rule in your policy document only makes it worse.

ionwake•2m ago
TBH I'm not sure if this is a "growing up in a good area" vibe, but over the last decade or so I have had to slowly learn that the people around me have no sense of shame. This wasn't their fault, but mine. Society has changed, and if you don't adapt you'll end up confused and abused.

I am not saying one has to lose their shame, but at best, understand it.

cranium•39m ago
A well-crafted policy that, I think, will be adopted by many OSS projects.

You need that kind of sharp rule to compete against unhinged (or drunken) AI drivers, and that's unfortunate. But at the same time, letting people DoS maintainers' time at essentially no cost is not an option either.

Lucasoato•31m ago
> Bad AI drivers will be banned and ridiculed in public. You've been warned. We love to help junior developers learn and grow, but if you're interested in that then don't use AI, and we'll help you. I'm sorry that bad AI drivers have ruined this for you.

Finally an AI policy I can agree with :) Jokes aside, it might sound a bit too aggressive, but it's also true that some people really have no shame in overloading you with AI-generated shit. You need to protect your attention as much as you can; it's becoming the new currency.

weinzierl•2m ago
I don't think ridicule is an effective threat for people with no shame to begin with.
rikschennink•27m ago
> No AI-generated media is allowed (art, images, videos, audio, etc.). Text and code are the only acceptable AI-generated content, per the other rules in this policy.

I find this distinction between media and text/code so interesting. To me it sounds like "text and code" are free from the controversy surrounding AI-generated media.

But judging from how AI companies grabbed all the art, images, videos, and audio they could get their hands on to train their LLMs it's naive to think that they didn't do the same with text and code.

embedding-shape•11m ago
> To me it sounds like "text and code" are free from the controversy surrounding AI-generated media.

It really isn't; don't you recall the "protests" against Microsoft using repositories hosted on GitHub to train their own coding models? Lots of articles and sentiment everywhere at the time.

Seems to have died down, though, probably because most developers use LLMs in some capacity at this point. Some just use them as a search-engine replacement, others to compose snippets they copy-paste, and others don't type code at all anymore, just instructions, then review the output.

I'm guessing Ghostty feels that if they banned generated text/code, they'd block almost all potential contributors. Not sure I agree with that personally, but I'm guessing that's their perspective.

antirez•8m ago
TLDR don't be an asshole and produce good stuff. But I have the feeling that this is not the right direction for the future. Distrust the process: only trust the results.

Moreover, this policy is strictly unenforceable, because good AI use is indistinguishable from good manual coding. And sometimes even the reverse. I don't believe in coding policies where maintainers need to spot whether AI was used or not. I believe in experienced maintainers who are able to tell whether a change looks sensible or not.

b3kart•3m ago
This doesn't work in the age of AI, where producing crappy results is much cheaper than verifying them. While that is the case, metadata will be important for deciding whether you should even bother verifying the results.
