https://github.com/ghostty-org/ghostty/issues?q=is%3Aissue%2...
Have you tried the suggestions in https://ghostty.org/docs/help/terminfo#ssh? I don't know what issue you may be experiencing but this solved my issue with using htop in an ssh session.
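For anyone else hitting this: the usual root cause is that the remote host has no terminfo entry for xterm-ghostty, so full-screen programs like htop misbehave. If I'm remembering the linked docs right, the fix is roughly a one-liner that copies the entry over ssh (your-server is a placeholder):

    infocmp -x xterm-ghostty | ssh your-server -- tic -x -

Failing that, a cruder workaround is overriding TERM for just that session, e.g. TERM=xterm-256color ssh your-server, at the cost of losing Ghostty-specific capabilities.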
[0] https://news.ycombinator.com/item?id=45359239

There are interesting things to discuss here about LLM tooling and approaches to coding. But of course we'd rather complain about cmd-f ;)
Word to the wise: Ghostty’s default scrollback buffer is only ~10MB, but it can easily be changed with a config option.
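For reference, a sketch of what that looks like, assuming (per the Ghostty config docs) the key is scrollback-limit and is measured in bytes:

    # ~/.config/ghostty/config
    # Raise scrollback from the ~10MB default to ~100MB.
    scrollback-limit = 100000000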
https://ghostty.org/docs/install/release-notes/1-2-0#roadmap
That said, it certainly made me appreciate the complexity of such a "basic" feature once I started thinking about how to make it work while tailing a stream of text.
This right here is the single biggest win for coding agents. I see and directionally agree with all the concerns people have about maintainability and sprawl in AI-mediated projects. I don't care, though, because the moment I can get a project up on its legs, to where I can interact with some substantial part of its functionality and refine it, I'm off to the races. It's getting to that golden moment that constitutes 80% of what's costly about programming for me.
This is the part where I simply don't understand the objections people have to coding agents. It seems so self-evidently valuable --- even if you do nothing else with an agent, even if you literally throw all the code away.
PS
Put a weight on that bacon!
Because I have a coworker who is pushing slop at unsustainable levels, and proclaiming to management how much more productive he is. It’s now even more of a risk to my career to speak up about how awful his PRs are to review (and I’m not the only one on the team who wishes to speak up).
The internet is rife with people who claim to be living in the future where they are now a 10x dev. Making these claims costs almost nothing, but it is negatively affecting my day-to-day and that of many others.
I’m not necessarily blaming these internet voices (I don’t blame a bear for killing a hiker), but the damage they’re doing is still real.
Because agents are good on this one specific axis (which I agree with and use, fwiw), there's no reason to object to them as a whole?
My argument is:
The juice isn't worth the squeeze. The small win (among others) is not worth the amount of slop devs now have to deal with.
The ones who spend more time developing their agentic coding as a skillset have gotten much better results.
On our team, people are also more willing to respond to feedback, because nitpicks and requests to restructure/rearchitect are evaluated on merit instead of on how time-consuming or boring they would have been to take on.
Also, update your resume and get some applications out so you’re not just a victim.
Done right, you should get mostly reasonable code out of the "execution-focused peer".
Should we really advocate for using AI to both create and then destroy huge amounts of data that will never be used?
Edit: Do I advocate for this? 1000%. This isn't crypto burning electricity to make a ledger. This objectively will make the life of the craftsmanship focused engineer easier. Sloppy execution oriented engineers are not a new phenomenon, just magnified with the fire hose that an agentic AI can be.
That's what's valuable to you. For me the zero to one part is the most rewarding and fun part, because that's when the possibilities are near endless, and you get to create something truly original and new. I feel I'd lose a lot of that if I let an AI model prime me into one direction.
Also: productivity is for machines, not for people.
This isn't selling your soul; it is possible to let AI scaffold some tedious garbage while also dreaming up cool stuff the old-fashioned way.
No, not really: https://news.ycombinator.com/item?id=45232159
> This isn’t selling your soul;
There is a plethora of ethical reasons to reject AI even if it were useful.
But, if it is the interview from 2 years ago, it revolved more around autocomplete and language servers. Agentic tooling was still nascent so a lot of what we were seeing back then was basically tab models and chat models.
As the popular quote goes, "When the Facts Change, I Change My Mind. What Do You Do, Sir?"
The facts and circumstances have changed considerably in recent years, and I have too!
They even used the quote as the title of the accompanying blog post.
As I say, I didn't mean this as a gotcha or anything; I totally agree with the change and have done similarly. I've always disabled autocomplete, tooltips, suggestions, etc., but now I am actively using Cursor daily.
Yeah, this is from 2021 (!!!) and is directly related to LSPs. ChatGPT didn't even launch until Nov 2022. So I think the quote doesn't really work in the context of today; it's literally from an era where I was looking at faster horses when cars were right around the corner, and I had not a damn clue. Hah.
Off topic: I still dislike [most] LSPs and don't use them.
An external agent where I can say "rename this field from X to Y" or "rewrite this into a dedicated class and update all callers" and so on works way better. Obviously, you have to be careful reviewing it, since it's not working at the same level of guarantee an LSP is, but it's worth it.
> "Put a weight on that bacon!" ?
AI is an absolute boon for "getting off the ground" by offloading a lot of the boilerplate and scaffolding that one tends to lose enthusiasm for after having to do it for the 99th time.
> AI is excellent at being my muse.
I'm guessing we have a different definition of muse. I admit I'm speaking more about writing than coding here, but for me a muse is the veritable fount of creation: the source of ideas.
Feel free to crank the "temperature" on your LLM until the literal and figurative oceans boil off into space, at the end of the day you're still getting the ultimate statistical distillation.
- People who like the act and craftsmanship of coding itself. AI can encourage slop from other engineers and it trivializes the work. AI is a negative.
- People who like general engineering. AI is positive for reducing the amount of (mundane) code to write, but still requires significant high-level architectural guidance. It’s a tool.
- People who like product. AI can be useful for prototyping but won't be able to make a good product on its own. It's a tool.
- People who just want to build an MVP. AI is honestly amazing at making something that at least works. It might be bad code, but you are testing product fit. Koolaid mode.
That’s why everyone has a totally different viewpoint.
I don't think it's an unfair statement that LLM-generated code typically is not very good - you can work with it and set up enough guard rails and guidance and whatnot that it can start to produce decent code, but out of the box, speed is definitely the selling point. They're basically junior interns.
If you consider an engineer's job to be writing code, sure, you could read OP's post as a shot, but I tend to switch between the personas they're listing pretty regularly in my job, and I think the read's about right.
To the OP's point, if the thing you like doing is actually crafting and writing the code, the LLMs have substantially less value - they're doing the thing you like doing and they're not putting the care into it you normally would. It's like giving a painter an inkjet printer - sure, it's faster, but that's not really the point here. Typically, when building the part of the system that's doing the heavy lifting, I'm writing that myself. That's where the dragons live, that's what's gotta be right, and it's usually not worth the effort to incorporate the LLMs.
If you're trying to build something that will provide long-term value to other people, the LLMs can reduce some of the boilerplate stuff (convert this spec into a struct, create matching endpoints for these other four objects, etc) - the "I build one, it builds the rest" model tends to actually work pretty well and can be a real force multiplier (alternatively, you can wind up in a state where the LLM has absolutely no idea what you're doing and its proposals are totally unhinged, or worse, where it's introducing bugs because it doesn't quite understand which objects are which).
If you've got your product manager hat on, being able to quickly prototype designs and interactions can make a huge, huge difference in what kind of feedback you get from your users - "hey try this out and let me know what you think" as opposed to "would you use this imaginary thing if I built it?" The point is to poke at the toy, not build something durable.
Same with the MVP/technical prototyping - usually the question you're trying to answer is "would this work at all", and letting the LLM crap out the shittiest version of the thing that could possibly work is often sufficient to find out.
The thing is, I think these are all things good engineers _do_. We're not always painting the Sistine Chapel, we also have to build the rest of the building, run the plumbing, design the thing, and try to get buy-in from the relevant parties. LLMs are a tool like any other - they're not the one you pull out when you're painting Adam, but an awful lot of our work doesn't need to be done to that standard.
No, I think my response was fair, if worded sharply. I stand by it.
The second is that a hand-done prototype still teaches you something about the tech stack and the implementation - yes, the primary purpose is to get it running quickly so you can feel how it works, but there's usually some learning you get on the technical side, and often I've found my prototypes inform the underlying technical direction. With vibe coded prototypes, you don't get this - not only is the code basically unusable, but you really are back to starting from scratch if you decide to move forward - you've tested the idea, but you haven't really tested the tech or design.
I still think they're useful - I'm a big proponent of "prototype early," and we've been able to throw together some surprisingly large systems almost instantly with the LLMs - but I think you've gotta shift your understanding of the process. Non-LLM prototypes tend to be around step 4 or 5 of a hypothetical 10-step production process; LLM prototypes are closer to step 2. That's fine, but you need to set expectations around how much is left to do past the prototype, because it's more than it was before.
It sounds like the blank page problem is a big issue for you, so tools that remove it are a big productivity boost.
Not everyone has the same problems, though. Software development is a very personal endeavor.
Just to be clear, I am not saying that people in category A or category B are better/worse programmers. Just that everyone’s workflow is different so everyone’s experience with tools is also different.
The key is to be empathetic and trust people when they say a tool does or doesn’t work for them. Both sides of the LLM argument tend to assume everyone is like them.
Also this article shows responsible use of AI when programming; I don't think it fits the original definition of vibe coding that caused hysterics.
He called it "vibing" in the headline, which matches my suspicion that the term "vibe" is evolving to mean anything that uses generative AI; see also Microsoft's "vibe working": https://www.microsoft.com/en-us/microsoft-365/blog/2025/09/2...
I'm not baiting for general engagement; I don't really care about that. I'm baiting for people who are extremists on either side of the "vibe" spectrum, such that it'd trigger them to read this, because either way I think it could be good for them.
If you're an extreme pro-vibe person, I wanted to give an example of what I feel is a positive usage of AI. There's a lot of extreme vibe hype boys who are... sloppy.
And if you're an extreme anti-vibe person, I wanted to give an example that clearly refutes many criticisms. (Not all, of course, e.g. there's no discussion here one way or another about say... resource usage).
So, sorry!
Yep. It's vibe engineering, which simonw coined here: https://simonwillison.net/2025/Oct/7/vibe-engineering/
And to clarify, I don't mean output as "this feature works, awesome", but "this feature works, it's maintainable and the code looks as beautiful as I can make it"
It’s not that I don’t know how to implement something, it’s that the agent can do so much of the tedious searching and trial and error that accompanies this ui framework code.
Notice that Mitchell maintains understanding of all the code through the session. It’s because he already understands what he needs to do. This is a far cry from the definition of “vibe coding” I think a lot of people are riding on. There’s no shortcut to becoming an expert.
Loving Ghostty!
This definitely relaxes my ai-hype anxiety
This is pretty much how I use AI. I don't have it in my editor; I always use it in a browser window. I bounce ideas off it, use it like a better search engine, and even when I don't use the exact code it produces, I feel there's some value.
- language
- product
- level of experience / seniority
I'll often make these few hail mary attempts to fix a bug. If the agent can figure it out, I can study it and learn myself. If it doesn't, it costs me very little. If the agent figures it out and I don't understand it, I back it out. I'm not shipping code I don't understand. While it's failing, I'm also tabbed out searching the issue and trying to figure it out myself.
"Awesome characterization ("slop zone"), pragmatic strategy (let it try; research in parallel) and essential principle ("I'm not shipping code I don't understand.")
IMHO this post is gold, for real-world project details and commentary from an expert doing their thing.
It looks like Mitchell is using an agentic framework called Amp (I'd never heard of it). Has anybody else here used or tried it? Curious how it stacks up against Claude Code.
Claude Code, Codex CLI and Gemini CLI are all (loosely) locked to their own models.
Welp, that explains it. I haven't changed terminal in a while anyway...
https://github.com/ghostty-org/ghostty/pull/8289