Whatever your feelings are about OOP and memory management, the reality is that if you choose GTK, you're forced into interfacing in some way with the GObject type system. You can't avoid it.
Well, you can avoid it, and we did avoid it. And it leads to a mess trying to tie the lifetimes of your non-reference-counted objects to the reference-counted ones. There was an entire class of bug that kept popping up in the Ghostty GTK application that could basically be summed up as: the Zig memory or the GTK memory has been freed, but not both.

Being pragmatic about the design of an ecosystem and working with it could be seen as good programming, versus the alternative of fighting that ecosystem and thus creating entirely new classes of bugs.
And this was what the GP was responding to. Their point was that the pragmatism of working with that ecosystem showed good judgement.
"There are only two kinds of programming languages: those people always bitch about and those nobody uses"
I feel there are no absolutely "good" systems, only the ones with added compromises over time. Battle scars on a beautiful body.
This is why a surprising number of GTK applications (more than I realized) are coded in Vala. I lowkey wish they had just adopted D instead of building up Vala. D is basically compiled C# to me, in its own way.
D needs a usable ecosystem. The point is that since Andrei Alexandrescu's book came out, the incomplete pivots have allowed the competition to catch up on most of the features where D had an edge 15 years ago.
Even Andrei eventually got back to C++, now working at NVidia.
Native AOT or .NET Native, plus all the Midori related improvements, and everything else C# has access to, was one example, there are others from other ecosystems.
My takeaway from all this is a little bit different though: I feel this whole situation ultimately lends some credence to the Scheme/Racket philosophy, where you can effectively make yourself a domain-specific language without ever leaving the base language. In an alternate universe, we don't choose between adopting D or inventing Vala. We can just stay in Racket.
In the past this has also been my assessment of GTK. It led me to decide to take the other path, to never directly use GTK. I appreciate the value in having a unified user interface between applications, but I have always thought the features that GTK provides were not worth the penalties paid. I have worked around the edges of GTK in open source apps that are GTK based. That led me to think that GTK and the GObject system are opinionated in ways that are not terribly compatible with my own opinions.
I don't hate that GTK exists; it is my choice not to use it and I am fine with that. However, I have also encountered people who seem to think it is not my choice not to use it. There are a million and one other GUI toolkits out there, of which GTK is one of the most polished. I can't shake the feeling that if GTK were less dominant, some of the resources that go to polishing GTK might have been spent polishing a different framework with a nicer underlying architecture.
Of course what I consider nicer might not be what others consider nicer. Of those who use GTK, how many use it begrudgingly, and how many feel like it is the best tool for the job?
This might be amusing for me to say but... I also feel this way. I disagree a lot with the Gnome ecosystem's point of view. Funny!
Using GTK for Linux was a pragmatic choice. A goal of Ghostty is to be "platform-native" (defined here because there's no such thing on Linux: https://ghostty.org/docs/about#native). GTK is by various definitions the most popular, widespread GUI toolkit on Linux that makes your app fit into _most_ ecosystems. So, GTK it is.
I hope `libghostty` will give rise to other apprts (maintained by 3rd parties; it's hard enough for me to maintain macOS and GTK) so that you aren't forced into it. See https://ghostty.org/docs/about#libghostty for more. For example, Wraith is a Wayland-native Ghostty frontend (no GTK): https://github.com/gabydd/wraith Awesome.
I'm not at all surprised to see that this mindset extends to GTK.
And I like linux because there are plenty of people who also do not care about this and want to just use their computer.
The idea that more functionality makes software less usable makes no sense to me. No, it's more usable.
It's like when people argue that iOS not allowing anything other than Safari is a good thing.
Okay... how? Because if you like safari, nothing changes for you. You just keep using it. You wouldn't even be able to tell.
Similarly, if you like KDE you can just... use it. You don't actually have to change anything. There's no gun to your head. If you're the type of person who just takes software and uses it, then great - you don't need Gnome for that. KDE is actually better at that, IMO.
Gnome devs notwithstanding, you have access to the source to change it if you want, or put up a PR?
I don't know which is true for this particular case, but I'd hazard a guess that it is a much bigger task than it needs to be.
I have seen Gnome devs talk of removing features because people were using them the wrong way. Not that people weren't using the features at all, just not for the purpose for which they were written. Experiences like that make me think that pull requests wouldn't get you very far either.
Unless, of course, all of those are shit, so a more design-competent person can drive by with an improvement, which will be rejected because it's too shiny
But that's beside the point; the point was the criticism of "send a PR", even though that resolves nothing due to the mismatch.
I can do that on GNOME and macOS. And yes, with 21:9 it is very nice. With a vertical monitor or a 3:2 (Surface), not so much.
Both "monkey patch" the shell.
If you really need to move your clock with a bunch of clicks, there's also KDE which is less opinionated...
For many years I followed this whole outrage but wasn't really sure whom to believe because I didn't have any stakes in the matter. Then the GNOME/GTK devs removed support for key themes[0,1] and I still have no idea why. The arguments they brought forward make absolutely no sense to me.
Now I get where people were coming from.
[0]: https://old.reddit.com/r/emacs/comments/c22ff1/gtk_4_support...
Edit: s/protocol/interface
Consider input methods. That clearly belongs in the window manager process, not each application process. The application should simply ask for a text box, and then receive events when the text in the box changes. The fact that I'm typing Japanese into a fancy UI shouldn't make any difference to the application, other than what text it receives from the widget events.
And that should all be possible in a small, static executable, that is speaking the Wayland protocol, and has zero dependencies other than networking syscalls.
For example, your text input box's label might be displayed to the left of the box, to the top, or it might be one of those floating labels that are displayed inside the box but move to the top border as soon as you focus the box. Depending on that, you would choose the spacing to neighboring text boxes and buttons differently, and maybe also arrange things differently altogether, because of different space requirements.
In other words: Your application would need to know what exactly the Wayland "UI layer" is going to display in order to send it precise instructions. But then your abstraction is very leaky.
That's five words with unimaginable complexity.
(yes I'm aware of libdecor, but it sucks to have to pull in yet another complex dependency into clients)
Standard UI components on Windows are in user32.dll, and that's guaranteed to exist on each Windows installation all the way back to Windows NT.
And to just get an empty decorated window all I need to do is call CreateWindowEx() (which will always look consistent), all the rendering inside that window can then be done through one of the 3D APIs.
Same on macOS, I don't need any optional dependencies to create a decorated empty window which I can then render into with Metal.
For that same scenario (3D rendering into an empty window), Wayland only guarantees me a bare undecorated rectangle I can draw into with GL or Vulkan (and I wouldn't be surprised if even that is an optional feature I need to query first lol). And even if both GTK and Qt were guaranteed to be installed on each Linux desktop system (so that I could pick one of the two frameworks to give me a decorated window), that would be overkill if I just want an empty window for 3D rendering - and even then I don't know if a window created through Qt would look consistent on a GNOME desktop or vice versa.
Of course that topic has been discussed to death without anybody who could fix that problem once and for all (by making the server-side decoration extension mandatory for desktop Linux systems) even recognizing that this is an issue that needs fixing.
Tbh, I don't even understand why desktop Linux needs more than a single Wayland implementation. So much wasted redundant work which could be put into actual improvements.
KDE, GTK or any other concrete UI toolkit register 'drivers/plugins' with this interface library to do the actual work.
Minimal 'framework-agnostic' UI apps can then use this standard shim library and always look fully native, while applications which require the full KDE or GTK feature set can still use those frameworks directly.
Call it Common Desktop Environment or something idk...
In theory yeah, but that consistency is out the window anyway when some apps I'm using are GTK, others Qt, and yet others Electron with the JS framework flavour of the week.
A standard shim library which would map to the user's preferred framework (e.g. GTK or Qt) would at least be visually consistent between GTK and Qt - and not just for the window chrome - even if it only offers a common subset of GTK vs Qt vs ... features.
[0]: https://celluloid-player.github.io/
[1]: https://iina.io/
Such an idea only works if it is less work for application developers, not more. And I think the right place for that is the operating system layer, not the application layer.
macOS is a good example of what you're proposing. The developer toolkit has a lot of nice frameworks (libraries) to help build apps. But that ties you to Apple. You can't expect consistency across OSes unless someone does the work to provide an intermediate layer, and then you lose each OS's nicest capabilities.
It may not make business sense, but the most elegant solution is to have a core that is your feature set, and then flesh out each dependency according to the platform.
We haven't committed to our CSS classes yet as a stable API but we plan to do that in the next release cycle. Still, they haven't changed much in a year. :)
GTK has gotten so fat that it's more like a 2nd-rate Electron. Not that Electron would have done Ghostty any good...
I have a bad feeling that the only decent & consistent UI option under Linux is to target Win32 API, then run it under Wine.
We rewrote the Ghostty *G* *T* *K* application
The article about struggling... with GTK.
> CSS classes
To spell it out, I think it is all excessive.
https://docs.gtk.org/gtk3/css-overview.html
It even has a DOM inspector window:
https://askubuntu.com/questions/597259/how-do-i-open-gtk-ins...
I haven't used Qt, nor C++, in many moons, but my understanding was that `moc` [1] filled a gap of run-time dynamism that C++ didn't natively support when it was invented. Has C++ since evolved to make that obsolete? Seems like moc is still a thing.
It started out as a toolkit for application development and leaned heavily into the needs of the C developer who was writing an application with a GUI. It was really a breath of fresh air to us crusties who started out with Xaw and Motif. That's the GTK I want to remember.
What it is now is (IMO) mostly a product of the economics of free software development. There's not a lot of bread out there to build a great, free, developer experience for Linux apps. Paid GTK development is just in service of improving the desktop platform that the big vendors ship. This leads to more abstraction breaks between the toolkit, the desktop, and the theme, because nobody cares as long as all the core desktop apps work. "Third party" app developers, who used to be the only audience for GTK, are a distant second place. The third party DX is only good if you follow a cookie-cutter app template.
I switched my long-term personal projects from GTK2 to Dear ImGui, which IMO is the only UI toolkit going that actually prioritizes developer experience. Porting from GTK2 to GTK3 would have been almost as much work since I depended on Clutter (which was at one point a key element of the platform, but got dropped/deprecated -- maybe its corporate sponsor folded? not sure).
I disagree. Qt is also quite popular and much better at adapting to the environment it runs in.
The runner up Qt is much more tightly tied to C++ and Python to its detriment. You really need to meet devs where they’re at instead of insisting that they adopt a particular language to use your UI toolkit.
Aside from that, at the end of the day, if you’re building a full fat complex desktop app an old style imperative UI toolkit is probably one of the more practical choices you can make. They have an exhaustive set of battle tested, accessible widgets built in and their pitfalls are well known. Newer approaches expect you to write or import everything and start requiring increased contortions from the developer past a certain point of complexity.
Too bad it's so poorly documented. It seems to me like advanced technology left over from a dead civilization, that's being handled by cave people.
So we are left with Electron…
So Electron does not have downsides? Like far worse platform integration.
Qt the company is annoying because they make it super hard to use the community edition, and per-developer licensing comes with all kinds of BS one has to deal with when developing for Qt.
For Electron I can get 20 devs hired tomorrow, or I myself can start a project right away and be up and running in minutes.
That said, Electron has made the number of Linux apps go up, which is a win. Still highly prefer native macOS apps on Mac and GTK/Qt apps on Linux, but I know that's a losing battle.
The only exception I can imagine is maybe a framework that uses the Unreal model, where it’s free to use under USD$1m/year in revenue and beyond that requires only a modest fee.
Qt is a hodgepodge of many different components which have different licenses. This is made worse by Qt being an application development framework and not just a GUI toolkit. It includes much more than necessary, and a lot of apps will use those other features and become tightly coupled with Qt. Not that it's easy to switch toolkits, but it can be extra hard to move away from Qt.
No one is pointing guns forcing people to write Electron crap.
I'm curious what the main opposing opinion you've hit there is? Is it accessibility and non-roman-alphabet input? Those are the things that kinda stand out to me as "gtk does this pretty well, and most developers who hand-roll stuff wouldn't think about it".
> 2. All other memory issues revolved around C API boundaries.
Is this something Rust, or any other language, has the ability to prevent any more than any other language? Or once you introduce a C API boundary is it back to tools like Valgrind?
Maybe you could even say Zig is at an advantage there, because tools like Valgrind are considered part of the game, whereas in Rust they are way less commonly used (pure-Rust codebases don't need anything of the sort, unless you are using `unsafe`).
In addition, it's often considered permissible for C APIs to exhibit undefined behavior if their API contracts are violated. This means that a bug in Rust code that calls into a C API incorrectly can (indirectly) cause undefined behavior. For this reason, all FFI calls are marked unsafe in Rust.
The typical approach to using C libraries from Rust is to create a "safe wrapper" around the unsafe FFI calls that uses Rust's type system and lifetimes to enforce the safety invariants. Of course it's possible to mess up this wrapper and have accidental undefined behavior, but you're much less likely to do so through a safe wrapper than if you use the unsafe FFI calls directly (or use C or Zig) for a couple reasons:
- Writing the safe wrapper forces you to sit down and think about safety invariants instead of glossing over it.
- Once you're done, the compiler will check the safety invariants for you every time you use the API -- no chance of making a mistake or forgetting to read a comment.
[0]: This could be avoided/mitigated with some kind of lightweight in-process sandboxing (e.g. Intel MPK + seccomp) to prevent C libraries from accessing memory that they don't own or performing syscalls they shouldn't. There's some academic research on this (and I experimented with it myself for a masters thesis project), but it generally requires some (minimal) performance overhead and code changes at language boundaries.
Calling it a "safe" wrapper, when that safety is entirely dependent on (a) the correctness of the hand-written wrapper, (b) the safety of the underlying FFI code, has always been a huge stretch of terminology. It's more like a veil of safety, so we can shield our modest eyes from impure code that flaunts its unsafeness.
Rust has no magical ways of turning unsafe code safe, nor is it in any way special in being able to create statically-verifiable abstractions around an unsafe core.
Don't get me wrong, Rust is memory safe when used as a cohesive system, and I would encourage its use as such where memory safety is desired. But the idea of "safely" wrapping unsafe FFIs reminds me of the idea of packaging underwater mortgages into mortgage-backed securities and selling them as a safe investment.
Fully agreed on that -- the unsafe keyword means it's unsafe.
> nor is it in any way special in being able to create statically-verifiable abstractions around an unsafe core.
This isn't really true. Rust's type system has:
- Affine types
- Lifetimes
- Borrowed and exclusive references
- Thread-safety
- Explicit delineation between safe and unsafe code -- with no UB in safe code, and a (cultural) design principle that unsafe code is not allowed to make unchecked assumptions about the behavior of safe code
Most mainstream languages do not have any one of these features, let alone all of them. Thus, Rust's ability to "create statically-verifiable abstractions around an unsafe core" is in practice far more powerful than in C, C++, or Zig. You certainly have the ability to create some such abstractions in other languages, but you cannot statically verify nearly as many properties. (And of course there are languages like SPARK that can verify even more statically than Rust.)
In my experience, as a systems programmer who heavily uses Rust to interact with FFI, hardware MMIO and DMA, interrupt handling, networking code, etc., Rust's ability to safely abstract unsafe primitives is easily the most practically useful aspect of the language. It's not a silver bullet that turns incorrect code into correct code, but it is incredibly good at verifying the correctness of a large application built from small low-level primitives. If you're interfacing with buggy C code, Rust certainly isn't going to help you much -- but it does makes a huge difference in preventing "you're holding it wrong" bugs when interacting with a C API.
An unsafe block declares an axiom: you're asserting to the compiler that you've verified (statically with the type system or dynamically with runtime assertions) all preconditions necessary for some primitive to be sound (free of UB). The compiler can then use that axiom to prove the soundness of all code that interacts with that primitive. When I write a driver to perform a DMA transfer, I only have to think through all the concurrency, alignment, lifetime, moveability, caching, and cancellation requirements once, and the compiler will check them for me every time I (or anyone else) use that driver in an application.
It's much, much easier to say that a reference is alive here, and only here can we dereference it, than to have to verify that in every other piece of business code elsewhere.
The one Rust would've prevented was a simple undefined memory access: https://github.com/ghostty-org/ghostty/pull/7982 (At least, I'm pretty sure Rust would've caught this). In practice, it meant that we were copying garbage memory on the first rendered frame, but that memory wasn't used or sent anywhere, so in practice it was mostly safe. Still, it's not correct!
Thanks for validating my assumption that once you introduce a big blob o' C all bets are off and you're back to Valgrind (or similar tooling).
> One argument is that the richer, more proven ecosystem of wrapper libraries may have prevented it
Yeah but where's the fun in that? ;)
Rust's bindings fully embrace GTK's refcounting, so there's no mismatch in memory management.
It is built on top of gtk4-rs, and fairly usable: https://github.com/hackclub/burrow/blob/main/burrow-gtk/src/...
I'm sure the gtk-rs bindings are pretty good, but I do wonder if anyone ran Valgrind on them. When it comes to C interop, Rust feels weirdly less safe just because of the complexity.
> gtk-rs you are pulling in 100+ crate dependencies and who knows what lurks in those?
gtk-rs is a GNOME project. A lot of it is equivalent to .h files, but each file is counted as a separate crate. The level of trust or verification required isn't that different, especially if pulling a bunch of .so files from the same org is uncontroversial.
Cargo keeps eliciting reactions to big numbers of "dependencies", because it gives you itemized lists of everything being used, including build deps. You just don't see as much inner detail when you have equivalent libs pre-built and pre-installed.
Crates are not the same unit as a typical "dependency" in the C ecosystem. Many "dependencies" are split into multiple crates, even when it's one codebase in one repo maintained by one person. Crates are Rust's compilation unit, so kinda like .o files, but not quite comparable either.
A Cargo Workspace would be conceptually closer to a typical small C project/dependency, but Cargo doesn't support publishing Workspaces as a unit, so every subdirectory becomes a separate crate.
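For illustration, a hypothetical workspace's top-level Cargo.toml might look like this (names invented); each member shows up downstream as its own "dependency" even though it's one repo with one maintainer:

```toml
# One repository, one codebase -- but three entries in every
# consumer's dependency list, because crates.io has no way to
# publish the workspace as a single unit.
[workspace]
members = ["mylib", "mylib-sys", "mylib-macros"]
```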
As Windows proves, it is more than enough to write production GUI components, and the industry leading 3D games API, since the days of Visual Basic 5.
In fact that is how most new Rust components on Windows by Microsoft have been written, as COM implementations.
EDIT:
I looked it up because I was curious, and a Drop trait is exactly what they do: https://github.com/gtk-rs/gtk-rs-core/blob/b7559d3026ce06838... and as far as I can tell this is manually written, not automatically generated from gir.
So the safety does rely on the human, not the machine.
The Zig version seems to be a fix for one crash in a destructor of a particular window type. It doesn't look like a systemic solution preventing weak refs crashes in general.
Maybe they wouldn’t even be able to write gtk bindings in rust
Good to hear that Mitchell and the team are still hacking away at it! Thanks for the great software!
- has tons of OOP concepts: classes, virtual methods, properties, signals, etc
- a C API to work with all of those concepts, define your own objects, properties, and so on
- manages the lifetimes of any engine objects (you can attach userdata to any of them)
- a whole tree of reference counted objects
it's a huge headache trying to figure out how to tie it into Zig idioms in a way that is an optimal API (specifically, dealing with lifetimes). we've come pretty far, but I am wondering if you have any additional insights or code snippets I should look at. working on this problem produced this library, which I am not proud of: https://github.com/gdzig/oopz
here's a snippet that kind of demonstrates the state of the API at the moment: https://github.com/gdzig/gdzig/blob/master/example/src/Signa...
also.. now I want to attempt to write a Ghostty frontend as a Godot extension
For example this is something you can do with typescript.
function(args: Arguments) { ... }
type Arguments = { a: number, b: number } | { a: number, b: string, c: number }
the Arguments value { a: 1, b: 1, c: 1 } is not representable.

The niche optimization only happens if T is never null; otherwise it's a tagged union.
That's not what you're replying to is about.
With GDExtension, the core builtin types like `Vector3` are passed by value. Avoid unnecessarily wrapping them in Variant, a specialized tagged union, where you can. You can see the documentation here; you have direct access to the float fields: https://gdzig.github.io/gdzig/#gdzig.builtin.vector3.Vector3
Engine objects are passed around as opaque pointers into engine managed memory. The memory for your custom types is managed by you. You allocate the engine object and essentially attach userdata to it, tagged with a unique token provided to your extension. You can see the header functions that manage this: https://github.com/godotengine/godot/blob/e67074d0abe9b69f3d...
But, this is how the lifetimes for the objects gets slightly hairy (for us, the people creating the language bindings). Our goal with the Zig bindings is to make the allocations and lifetimes extremely obvious, a la Zig's philosophy of "No hidden memory allocations". It is proving somewhat challenging, but I think we can get there.
There's still a lot of surprising or unintuitive allocations that can happen when calling into Godot, but we hope to expose those. My current idea is to accept a `GodotAllocator` on those functions (and do nothing with it; just use it to signal the allocation). You can read the code for the `GodotAllocator` implementation: https://github.com/gdzig/gdzig/blob/master/gdzig/heap.zig#L8...
If we succeed, I think Zig can become the best language to write highly performant Godot extensions in.
They had to copy that bad idea from Unity, where methods are named in a specific way and then extracted via reflection.
Either provide specific interfaces that components have to implement, use attributes, or make use of generics with type constraints.
Maybe for Unity that made sense as they started with scripting languages, and then bolted on Mono on the side, but it never made sense to me.
I think you are talking about dispatch of virtual methods, which is still a thing, but the performance cost can be somewhat mitigated.
the names of the methods are interned strings (called `StringName`). a naive implementation will allocate the `StringName`, but you can avoid the allocation with a static lifetime string. we expose a helper for comptime strings in Zig[0].
then, extension classes need to provide callback(s)[1] on registration that look up and call the virtual methods. as far as I know, the lookup happens once, and the engine stores the function pointer, but I haven't confirmed this yet. it would be unfortunate if not.
at least right now in GDZig, though this might change, we use Zig comptime to generate a unique function pointer for every virtual function on a class[2]. this implementation was by the original `godot-zig` author (we are a fork). in theory we could inline the underlying function here with `@call(.always_inline)` and avoid an extra layer of indirection, among other optimizations. it is an area I am still figuring out the best approach for
virtual methods are only ever necessary when interfacing with the editor, GDScript, or other extensions. don't pay the cost of a virtual method when you can just use a plain method call without the overhead.
[0]: https://gdzig.github.io/gdzig/#gdzig.builtin.string_name.Str...
[1]: https://github.com/godotengine/godot/blob/e67074d0abe9b69f3d...
[2]: https://github.com/gdzig/gdzig/blob/5abe02aa046162d31ed5c52f...
Thanks for the overview.
In the end it wasn't that messy, but probably confusing for anyone used to writing dogmatic GTK applications.
Yes! This is huge, I previously gave up on Ghostty because the title bar wasted so much space on my laptop screen: https://bsky.app/profile/reillywood.bsky.social/post/3lebapf...
I found the PR in case anyone else is curious what the new functionality looks like: https://github.com/ghostty-org/ghostty/pull/8166
“App XYZ is mobile ready…” but not on Android.
Same vibe
You probably don’t have this telemetry, but have you thought about how having Windows support helps Zig adoption?
Anecdotally, at my day job, we started to use the Zig build system to target Windows because that’s where the customers are.
Thus I have some warm fuzzies when something new is cross-platform, but not on Windows. (But hey I heard WSL supports Wayland?) Even more warm fuzzies if Windows users come to whine about availability. Happy to have lived to see this turning of tables come.
I’m not a good person, I guess.
> I find it uncommon to see anyone who chooses to use a terminal regularly using the MacOS default terminal
If you are going to say that about Windows, the same could be said about macOS. In fact, I only know of one Mac user within my circle that has used a non-default terminal.
Anecdotally, at day job, every terminal I see on windows is riced out. The Windows terminal app video from MS probably helped with that lol.
Ironically, they've all made my DX worse, by highlighting how terribly the nvidia drivers actually work on both my old Regolith i3wm/compositor-less setup and my new sway/wayland one.
Like it's ridiculously terrible.
I've tried every magical env flag that Claude can think of, and 4 of the various 550/560/575/580 driver versions--they all suck at screensharing, or resume from sleep, or just out-right buginess/crashes/segfaults in the nvidia drivers.
It must have always been this bad, but I just didn't notice, b/c I never actually used my GPU? lol
This is...both extremely obvious, and also not something that I've ever done before, thought of doing, or seen anyone else do.
Every single instance of Valgrind usage I've encountered or initiated has been triggered by a specific bug or performance regression.
Proactive use of the Valgrind suite (Memcheck and Helgrind at least) as part of the development process would probably result in massively better stability of most tools - and would also make it far easier to find bugs (as you could find them when they were introduced rather than hundreds of commits later).
Yeah, except that it's slow and expensive, and therefore can't be part of the "short cycle" (edit code, compile, test, iterate), only the "long cycle" (nightly build, test suite). Also, Valgrind is not exactly designed to integrate directly into a test suite, and making it do so requires work.
The errors that are caught by valgrind (and asan) will very often not cause immediate crashes or any kind of obvious bugs. Instead, they cause intermittent bugs that are very hard to catch otherwise. They may even be security vulnerabilities.
Another good reason for using it proactively is that small memory leaks tend to sneak in over time. They might not cause any issues, and most of them aren't cumulative. However, having all those leaks in there makes it much more difficult to find big leaks when they happen, because you have to sift through all the insignificant ones before you can fix the one that's causing problems.
But it's also the most mentioned issue: https://github.com/ghostty-org/ghostty/issues/189
$ nano
Error opening terminal: xterm-ghostty.
Works fine in macOS Terminal and the built-in VS Code terminal.

Yeah, except that the specific terminfo needed for Ghostty isn't installed anywhere on the boxes you ssh into ... you need to manually install it on every single one of them.
That in and of itself makes it truly painful to switch to ghostty.
And there are still a lot of other issues; e.g. building the tip is a freaking nightmare of dependencies and weird issues (a hard reliance on specific versions of the Zig compiler and on something called "blueprint compiler", etc...)
Not ready for prime time by a mile IMO.
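For what it's worth, shipping the entry to a remote box is a one-liner, assuming Ghostty's terminfo is installed locally (`remote-host` is a placeholder); the runnable part of the sketch below is the TERM fallback for hosts you can't touch:

```shell
# One-time per host: copy the xterm-ghostty terminfo entry over ssh.
# (remote-host is a placeholder; needs infocmp locally and tic remotely.)
#   infocmp -x xterm-ghostty | ssh remote-host -- tic -x -

# Stopgap when installing terminfo isn't an option: fall back to an
# entry that exists virtually everywhere.
if ! infocmp "$TERM" >/dev/null 2>&1; then
  export TERM=xterm-256color
fi
echo "TERM=$TERM"
```

Neither step addresses the fleet-management objection below, but for a handful of machines it's a few seconds per host.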
Yeah, this is going to be an issue with any of the newer terminal emulators. No big deal. Updating terminfo is easy. If you can't, then just set TERM=xterm
> Not ready for prime time by a mile IMO.
Nah, the issue is your lack of experience and understanding of the basics of terminals.
lol.
Sure, very easy to do this on on the order of 1,000 remote machines whose various OSes are entirely managed by automation.
Says the man who accuses others of lack of expertise without a shred of evidence.
As for $TERM, you can simply default it to `xterm-256color`, which is more than enough.
Also, every program ever depends on a certain version of a compiler, so I don't understand this complaint. Ghostty requires Zig 0.14. That's it, not a specific compiler hash. blueprint-compiler is packaged for pretty much every distribution these days.
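To be fair to the complaint, Zig projects can at least declare the requirement explicitly: the `build.zig.zon` package manifest supports a `minimum_zig_version` field. A fragment along these lines (not a complete manifest; the surrounding fields are omitted) makes the floor visible up front:

```zig
// build.zig.zon fragment (sketch): declare the compiler version floor
// so `zig build` rejects too-old toolchains with a clear error.
.{
    .minimum_zig_version = "0.14.0",
}
```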
Ha, that's a nice way of wording that. I'd take it a step or two further. :)
https://bugzilla.gnome.org/show_bug.cgi?id=750994
There are no plans for a fix. The maintainer recommends waiting for Wayland.
It seems to boil down to an issue in the underlying X11 machinery and it would need to be fixed there first to build a basis on which proper fixes can be implemented.
Given that X11 is in maintenance mode (and as its fans keep saying: It works perfectly fine and doesn't need any more work done on it), it's not likely that's happening.
So, yes, given that information (and I just arrived at that bug report through your post), I would indeed say that waiting for Wayland is the only option they have. All other attempts ended up causing worse issues.
The issue comes from XInput2 (https://www.x.org/releases/X11R7.7/doc/inputproto/XI2proto.t...)
So I guess the "fix" would be to have two completely separate input handlers on X11, one supporting smooth scrolling and multitouch, the other not, and then offer users a toggle in the style of
[ ] do not ignore the first scroll input after focus loss, but disable smooth scrolling and multitouch
Plus handling all the potential issues by having two separate input handlers.
That's asking a bit much for this particular issue and smells strongly like a case of XKCD 1172
Native would be talking to the compositor directly.
GTK provides a cross-platform layer of abstractions over the compositor. That’s the opposite of native.
There are countless bugs in the Linux ports of applications (e.g. Firefox) which can't be fixed because of the abstractions done by GTK.
Linux people get really worked up when I say "platform-native". There is no such thing on Linux, but reasonable people agree that something like a GTK app (or Qt) feels "native" on *most desktops* over other applications.
https://mitchellh.com/writing/ghostty-gtk-rewrite#user-conte...
corsica•5mo ago
Maybe they're planning for more, like those GUI configuration dialogs that iterm2 has?
Kitty uses OpenGL for everything and draws its own tabs; they're fully customizable and can be made to look however you want. By not wasting time on integrating with massive frameworks for drawing tabs, Kovid was able to quickly implement really useful things that Ghostty is sorely missing, like wrapping the output of the last command in a pager (run 'ps -auxf' and press Ctrl+Shift+G — this thing is so useful it's hard to go without it now. It also works for remote shells across SSH sessions.)
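For reference, that pager behaviour is driven by kitty.conf; a sketch (the keybinding shown is, to my understanding, kitty's default, and it requires kitty's shell integration so the terminal can find command boundaries):

```conf
# kitty.conf fragment (sketch): open the last command's output in a pager.
map ctrl+shift+g show_last_command_output

# Optional: the pager program used for scrollback/output views.
scrollback_pager less --chop-long-lines --RAW-CONTROL-CHARS +INPUT_LINE_NUMBER
```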
mitchellh•5mo ago
- Tabs
- Splits
- "this process has exited" banner
- Close confirmation dialogs
- Change title dialog
- Unsafe paste detection dialogs
- Context menus
- Animated bells (opt in)
- "Quake-style" dropdown terminals (cross platform but different mechanisms)
- Progress bars (ConEmu OSC 9;4)
- macOS: Apple Shortcuts Integration
- macOS: Spotlight Integration
Probably more I'm not thinking of. It's unfair to say it's just tabs. Could we have done this without a GUI toolkit? Of course! But the whole mission statement of this project was always to use platform-native (for various definitions) toolkits so that it _feels_ native.
That's not for everyone, and that's the great thing about the wonderful vibrant terminal ecosystem we have.
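As an aside on the list above: the progress-bar entry refers to an escape sequence any program can emit. A sketch of the ConEmu-style OSC 9;4 sequence (layout: ESC ] 9 ; 4 ; state ; percent, terminated with ST):

```shell
# Sketch: emit the ConEmu-style OSC 9;4 progress sequence by hand.
# state 1 = show determinate progress at <percent>, state 0 = clear it.
show=$(printf '\033]9;4;1;%d\033\\' 42)   # show 42% progress
clr=$(printf '\033]9;4;0;0\033\\')        # remove the indicator
printf '%s' "$show"
printf '%s' "$clr"
```

In a terminal that supports it, the first sequence displays a 42% progress indicator and the second removes it; terminals that don't support the sequence simply ignore the bytes.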
> is it really worth the integration pain and now this rewrite?
There's definitely a lot more on the way.
The first goal and primary focus of the project was to build a stable, feature rich (terminal sequences) terminal emulator. We're basically there. Next up, we're expanding GUI functionality significantly, including having more escape sequences result in more native GUI elements. But also traditional things like preferences GUIs, yes.
We're also integrating a lot more deeply with native features provided by each platform (somewhat related to the GUI toolkit choice), such as automatic iCloud syncing of your configuration on macOS. Now that the terminal is stable, we can start to really lean in to application features.
This isn't for everyone. Some people like Kitty's textual tabs. That's fine! It's a tradeoff. That's the beauty of choice. :) Kitty is a great terminal, if you prefer it, please use it. But it has completely different tradeoffs than Ghostty.
troad•5mo ago
SDL2 is more of a drawing and graphics library. You tell it to put a triangle here, it puts a triangle there. But it has no idea what a button should look like - how it should behave, how it should be animated - on Mac, on Windows, on KDE, on Gnome, etc. You could try to painstakingly recreate this look and feel for each platform, but it's a lot of effort, you probably won't get it quite right, you'll make oversights on accessibility and internationalisation, and your hard work will instantly look dated once the platform evolves.
To make native GUIs, you need to talk to the libraries that draw platform-native components, like buttons, for you. But of course each platform works totally differently, and the whole affair is honestly kind of a mess, which is why truly native cross-platform applications tend to be fairly rare in practice. Maintaining five different GUI code bases for all sorts of fringe platforms is not, in most cases, a good use of time. For most apps, either you stick to native and cut less significant platforms, or you abandon native altogether and just use a cross-platform wrapper like Electron.
t_mahmood•5mo ago
And the worst part is, terminals work with text, and SDL2, to my knowledge, has no proper way to display large amounts of text, let alone select and copy it.
There are probably other issues; these are just the ones I experienced.
WesolyKubeczek•5mo ago
That’s how you invent HTML forms (those were modeled on IBM 3270)
Lerc•5mo ago
That counts for something.
Perhaps it's just that it lacks an obvious reason to move away from it. Usually, the thing that made me try another terminal was something I couldn't do. It wasn't a matter of listing all the pros and cons and going with the best one. It has just found a home with me because it hasn't outstayed its welcome.
alberth•5mo ago
It’s like the “WebKit” for terminals, as I understand it.
Anyone could drop-in libghostty, and immediately have a fully functional terminal.
sevg•5mo ago
Is this new account actually Kovid?
Context: “aumerle” had posted only about Kitty for years as though impartial. And talked about Kovid as if they were a different person. Turns out “aumerle” is Kovid and got banned. He was trash-talking ghostty and advertising Kitty in every ghostty thread. (He does the same using “aumerlex” on reddit, but nobody has called him out on there yet.)
I’m suspicious here because Kovid spent hours trying to force me to admit that he writes code faster than Mitchell XD https://news.ycombinator.com/item?id=42567224 (you’ll need “showdead” enabled, all of Kovid’s comments got flagged to death).
iammrpayments•5mo ago