
Ghostty's AI Policy

https://github.com/ghostty-org/ghostty/blob/main/AI_POLICY.md
94•mefengl•2h ago

Comments

mefengl•2h ago
If you prefer not to use GitHub: https://gothub.lunar.icu/ghostty-org/ghostty/blob/main/AI_PO...
christoph-heiss•1h ago
Not sure why you are getting downvoted, given that the original site is such a jarringly user-hostile mess.
embedding-shape•1h ago
Without using a random 3rd party, and without the "jarring user-hostile mess":

https://raw.githubusercontent.com/ghostty-org/ghostty/refs/h...

flexagoon•18m ago
This option is pretty unreadable on mobile though
embedding-shape•13m ago
Is it? Just tried it in Safari, Firefox and Chrome on an iPhone 12 Mini and I can read all the text. Obviously it isn't formatted, as it's raw markdown, just like what the parent's recommended 3rd-party platform does, but nothing is cut off or missing for me.

Actually, trying to load that previous platform on my phone makes it worse for readability; it seems there is ~10% less width and less efficient use of vertical space. Together with both being unformatted markdown, I think the raw GitHub URL renders better on mobile, at least on small ones like my Mini.

user34283•45m ago
Whatever your opinion on the GitHub UI may be, at least the text formatting of the markdown is working, which can't be said for that alternative site.
postepowanieadm•1h ago
That's really nice - and a fast UI!
kleiba•21m ago
It gets even better when you click on "raw", IMO... which is what you also get when clicking on "raw" on Github.
cxrpx•1h ago
With limited training data, that LLM-generated code must be atrocious.
jakozaur•1h ago
See x thread for rationale: https://x.com/mitchellh/status/2014433315261124760?s=46&t=FU...

“ Ultimately, I want to see full session transcripts, but we don't have enough tool support for that broadly.”

I have a side project, git-prompt-story, to attach Claude Code sessions to commits via GitHub git notes. Though it is not that simple to do automatically (e.g. I need to redact credentials).
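The git-notes idea described above can be sketched with plain git; the `prompts` notes ref and the transcript contents here are illustrative choices, not necessarily what git-prompt-story actually does:

```shell
# Sketch: attach an AI session transcript to a commit via git notes.
# Uses a throwaway repo; the "prompts" notes ref is an illustrative choice.
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q -b main .
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "add feature"

# Write the (already credential-redacted) transcript and attach it as a note
# under a dedicated notes ref, keeping the default notes namespace clean.
printf 'prompt: implement feature X\nresponse: (transcript body)\n' > transcript.txt
git -c user.email=a@b -c user.name=a notes --ref=prompts add -F transcript.txt HEAD

# The note travels with the commit and can be read back later
git notes --ref=prompts show HEAD
```

Notes refs have to be pushed and fetched explicitly (e.g. `git push origin refs/notes/prompts`), which is part of why tool support for this is still thin.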

radarsat1•1h ago
I've thought about saving my prompts along with project development and have even done it by hand a few times, but eventually I realized I don't really get much value from doing so. Are there good reasons to do it?
fragmede•53m ago
It's not for you. It's so others can see how you arrived at the code that was generated. They can learn better prompting for themselves from it, and also how you think. They can see which cases got considered, or not. All sorts of good stuff that would be helpful for reviewing giant PRs.
Ronsenshi•13m ago
Sounds depressing. First you deal with massive PRs and now also these agent prompts. Soon enough there won't be any coding at all, it seems. Just doomscrolling through massive prompt files and diffs in hopes of understanding what is going on.
simonw•36m ago
For me it's increasingly the work. I spend more time in Claude Code going back and forth with the agent than I do in my text editor hacking on the code by hand. Those transcripts ARE the work I've been doing. I want to save them in the same way that I archive my notes and issues and other ephemera around my projects.

My latest attempt at this is https://github.com/simonw/claude-code-transcripts which produces output like this: https://gisthost.github.io/?c75bf4d827ea4ee3c325625d24c6cd86...

awesan•8m ago
If the AI generated most of the code based on these prompts, it's definitely valuable to review the prompts before even looking at the code. Especially in the case where contributions come from a wide range of devs at different experience levels.

At a minimum it will help you to be skeptical at specific parts of the diff so you can look at those more closely in your review. But it can inform test scenarios etc.

optimalsolver•31m ago
>I want to see full session transcripts, but we don't have enough tool support for that broadly

I think AI could help with that.

arjunbajaj•57m ago
I can see this becoming a pretty generally accepted AI usage policy. Very balanced.

Covers most of the points I'm sure many of us have experienced here while developing with AI. Most importantly, AI generated code does not substitute human thinking, testing, and clean up/rewrite.

On that last point, whenever I've gotten Codex to generate a substantial feature, I've usually had to rewrite a lot of the code to make it more compact even when it is correct. Adding indirection where it does not make sense is a big issue I've noticed with LLMs.

imiric•24m ago
I agree with you on the policy being balanced.

However:

> AI generated code does not substitute human thinking, testing, and clean up/rewrite.

Isn't that the end goal of these tools and companies producing them?

According to the marketing[1], the tools are already "smarter than people in many ways". If that is the case, what are these "ways", and why should we trust a human to do a better job at them? If these "ways" keep expanding, which most proponents of this technology believe will happen, then the end state is that the tools are smarter than people at everything, and we shouldn't trust humans to do anything.

Now, clearly, we're not there yet, but where the line is drawn today is extremely fuzzy, and mostly based on opinion. The wildly different narratives around this tech certainly don't help.

[1]: https://blog.samaltman.com/the-gentle-singularity

Terretta•9m ago
Intern generated code does not substitute for tech lead thinking, testing, and clean up/rewrite.
alansaber•49m ago
"Pull requests created by AI must have been fully verified with human use." should always be a bare minimum requirement.
vegabook•47m ago
Ultimately what's happening here is AI is undermining trust in remote contributions, and in new code. If you don't know somebody personally, and know how they work, the trust barrier is getting higher. I personally am already ultra vigilant for any github repo that is not already well established, and am even concerned about existing projects' code quality into the future. Not against AI per se (which I use), but it's just going to get harder to fight the slop.
epolanski•41m ago
Honestly I don't care how people come up with the code they create, but I hold them responsible for what they try to merge.

I work in a team of 5 great professionals, and there hasn't been a single instance since Copilot launched in 2022 where anybody, in any single modification, did not take full responsibility for what's been committed.

I know we all use it, to different extents and in different ways, but the quality of what's produced hasn't dipped a single bit; I'd even argue it has improved because LLMs can find answers more easily in complex codebases. We started putting `_vendor` directories with our main external dependencies as git subtrees, and it's super useful to find information about those directly in their source code and tests.
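The `_vendor` subtree setup described above boils down to `git subtree add`. A minimal sketch, using a local stand-in repository in place of a real dependency's URL (the names `dep` and `app` are placeholders):

```shell
# Sketch: vendor an external dependency under _vendor/ as a git subtree.
# "dep" stands in for the dependency's real repository/URL.
set -e
tmp=$(mktemp -d); cd "$tmp"

# Stand-in for the upstream dependency repo
git init -q -b main dep
echo "lib source" > dep/lib.txt
git -C dep add lib.txt
git -C dep -c user.email=a@b -c user.name=a commit -q -m "add lib"

# The application repo that vendors it
git init -q -b main app
cd app
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "init"

# Pull the dependency in as a squashed subtree under _vendor/;
# its files become regular tracked files in this repo's tree.
git -c user.email=a@b -c user.name=a subtree add \
    --prefix=_vendor/dep "$tmp/dep" main --squash

ls _vendor/dep
```

Later, `git subtree pull --prefix=_vendor/dep <repo> main --squash` brings in upstream updates; `--squash` keeps the dependency's full history out of your log.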

It's really that simple. If your teammates are producing slop, that's a human and professional problem and these people should be fired. If you use the tool correctly, it can help you a lot in finding information and connecting dots.

Any person with a brain can clearly see the huge benefit of these tools, but also the great danger of not reviewing their output line by line and forfeiting the constant work of resolving design tensions.

Of course, open source is a different beast. The people committing may not be professionals and may have no real stakes, so they have little to lose by producing slop, whereas maintainers are already stretched in their time and attention.

embedding-shape•37m ago
> It's really as simple. If you or your teammates are producing slop, that's a human and professional problem and these people should be fired.

Agree, slop isn't "the tool is so easy to use I can't review the code I'm producing"; slop is the symptom of "I don't care how it's done, as long as it looks correct", and that's been a problem since before LLMs too. The difference is how quickly you reach the "slop" state now, not whether you have to gate your codebase and reject shit code.

As always, most problems in "software programming" aren't about software or programming but everything around it, including communication and workflows. If your workflow allows people to not be responsible for what they produce, and if it allows shitty code to get into production, then that's on you and your team, not on the tools that the individuals use.

altmanaltman•33m ago
I mean this policy only applies to outside contributors and not the maintainers.

> Ghostty is written with plenty of AI assistance, and many maintainers embrace AI tools as a productive tool in their workflow. As a project, we welcome AI as a tool!

> Our reason for the strict AI policy is not due to an anti-AI stance, but instead due to the number of highly unqualified people using AI. It's the people, not the tools, that are the problem.

Basically don't write slop and if you want to contribute as an outsider, ensure your contribution actually is valid and works.

kanzure•39m ago
Another project simply paused external contributions entirely: https://news.ycombinator.com/item?id=46642012

Another idea is to simply promote the donation of AI credits instead of output tokens. It would be better to donate credits, not outputs, because people already working on the project would be better at prompting and steering AI outputs.

lagniappe•32m ago
>people already working on the project would be better at prompting and steering AI outputs.

In an ideal world, sure, but I've seen the entire gamut, from amateurs making surprising work to experts whose prompt history looks like a comedy of errors and gotchas. There's some "skill" I can't quite put my finger on when it comes to the way you must speak to an LLM vs. another dev. There's more monkey's paw involved in the LLM process, in the sense that you get what you want, but do you want what you'll get?

CrociDB•33m ago
I recently had to do a similar policy for my TUI feed reader, after getting some AI slop spammy PRs: https://github.com/CrociDB/bulletty?tab=contributing-ov-file...

The fact that some people will straight up lie after submitting you a PR with lots of _that type_ of comment in the middle of the code is baffling!

nutjob2•30m ago
A factor that people have not considered is that the copyright status of AI-generated text is not settled law, and precedent or new law may retroactively change the copyright status of a whole project.

Maybe a bit unlikely, but still an issue no one is really considering.

There has been a single ruling (I think) that AI-generated code is uncopyrightable. There has been at least one affirmative fair use ruling. Both of these are from the lower courts. I'm still of the opinion that generative AI is not fair use because it's clearly substitutive.

direwolf20•26m ago
This only matters if you get sued for copyright violation, though.
Version467•30m ago
The biggest surprise to me with all this low-quality contribution spam is how little shame people apparently have. I have a handful of open source contributions. All of them are for small-ish projects, and the complexity of my contributions is in the same ballpark as what I work on day-to-day. And even though I am relatively confident in my competency as a developer, these contributions are probably the most thoroughly tested and reviewed pieces of code I have ever written. I just really, really don't want to bother someone who graciously offers their time to work on open source stuff with low-quality "help".

Other people apparently don't have this feeling at all. Maybe I shouldn't have been surprised by this, but I've definitely been caught off guard by it.

Etheryte•26m ago
I worked for a major open-source company for half a decade. Everyone thinks their contribution is a gift and you should be grateful. To quote Bo Burnham, "you think your dick is a gift, I promise it's not".
kleiba•23m ago
"Other people" might also just be junior devs - I have seen time and again how (over-)confident newbies can be in their code. (I remember one case where a student suspected a bug in the JVM when some Java code of his caused an error.)

It's not necessarily maliciousness or laziness, it could simply be enthusiasm paired with lack of experience.

DrewADesign•21m ago
To have that shame, you need to know better. If you don't know any better, having access to a model that can make code and a cursory understanding of the language syntax probably feels like knowing how to write good code. Dunning-Kruger strikes again.

I’ll bet there are probably also people trying to farm accounts with plausible histories for things like anonymous supply chain attacks.

arbitrandomuser•20m ago
When it comes to enabling opportunities, I don't think it's a matter of shame for them anymore. A lot of people (especially in regions where living is tough and competition is fierce) will do anything by hook or by crook to get ahead of the competition. And if GitHub contributions are a metric for getting hired or getting noticed, then they are going to get spammed.
flexagoon•19m ago
Keep in mind that many people also contribute to big open source projects just because they believe it will look good on their CV/GitHub and help them get a job. They don't care about helping anyone; they just want to write "contributed to Ghostty" in their application.
Ronsenshi•19m ago
It's good to regularly see such policies and the discussions around them to remind me how staggeringly shameless some people can be and how many such people are out there. Interacting mostly with my peers, friends, and acquaintances, I tend to forget that they don't represent the average population, and after some time I start to assume all people are reasonable and act in good faith.
blell•7m ago
It's nothing but cultural expectations. We need to firewall the West off the rest of the world. Not joking.
6LLvveMx2koXfwn•6m ago
Shamelessness is very definitely in vogue at the moment. It will pass, let's hope for more than ruins.
cranium•21m ago
A well-crafted policy that, I think, will be adopted by many OSS projects.

You need sharp rules like these to compete against unhinged (or drunken) AI drivers, and that's unfortunate. But at the same time, letting people DoS maintainers' time at essentially no cost is not an option either.

Lucasoato•13m ago
> Bad AI drivers will be banned and ridiculed in public. You've been warned. We love to help junior developers learn and grow, but if you're interested in that then don't use AI, and we'll help you. I'm sorry that bad AI drivers have ruined this for you.

Finally an AI policy I can agree with :) Jokes aside, it might sound a bit too aggressive, but it's also true that some people really have no shame in overloading you with AI-generated shit. You need to protect your attention as much as you can; it's becoming the new currency.

rikschennink•8m ago
> No AI-generated media is allowed (art, images, videos, audio, etc.). Text and code are the only acceptable AI-generated content, per the other rules in this policy.

I find this distinction between media and text/code so interesting. To me it sounds like "text and code" are free from the controversy surrounding AI-generated media.

But judging from how AI companies grabbed all the art, images, videos, and audio they could get their hands on to train their LLMs it's naive to think that they didn't do the same with text and code.
