It's strange that Wayland didn't do it this way from the start, given its philosophy of delegating most things to the clients. All you really need for arbitrary scaling is to tell apps "you're rendering to an MxN pixel buffer and, as a hint, the scaling factor of the output you'll be composited to is X.Y". After that the client can handle events in real coordinates and scale in the best way possible for its particular context. For a browser, PDF viewer or image-processing app that can render at arbitrary resolutions, not being able to do that is very frustrating if you want good quality and performance. Hopefully we'll finally be getting that in Wayland now.
Originally OS X defaulted to drawing at 2x scale without any scaling down because the hardware was designed to have the right number of pixels for 2x scale. The earliest retina MacBook Pro in 2012 for example was 2x in both width and height of the earlier non-retina MacBook Pro.
Eventually I guess the cost of the hardware made this too hard. I mean for example how many different SKUs are there for 27-inch 5K LCD panels versus 27-inch 4K ones?
But before Apple committed to integer scaling factors and then scaling down, it experimented with more traditional approaches. You can see this in earlier OS X releases such as Tiger or Leopard. The thing is, it probably took too much effort for even Apple itself to implement in its first-party apps, so Apple knew there would be low adoption among third-party apps. Take a look at this HiDPI rendering example in Leopard: https://cdn.arstechnica.net/wp-content/uploads/archive/revie... It was Apple's own TextEdit app and it was buggy. They did have a nice UI to change the scaling factor to be non-integral: https://superuser.com/a/13675
That's an interesting related discussion. The idea that there is a physically correct 2x scale and that fractional scaling is a tradeoff is not necessarily correct. First because different users will want to place the same monitor at different distances from their eyes, or have different eyesight, or a myriad other differences. So the ideal scaling factor for the same physical device depends on the user and the setup. But more importantly because having integer scaling be sharp and snapped to pixels while fractional scaling is a tradeoff is mostly a software limitation. GUI toolkits can still place all their UI at pixel boundaries even if you give them a target scaling of 1.785. They do need extra logic to do that and most can't. But in a weird twist of destiny the most used app these days is the browser, and the rendering engines are designed to output at arbitrary factors natively, yet in most cases they can't because the windowing system forces these extra transforms on them. 3D engines are another example, where they can output whatever arbitrary resolution is needed but aren't allowed to. Most games can probably get around that in some kind of fullscreen mode that bypasses the scaling.
I think we've mostly ignored these issues because computers are so fast and monitors have gotten so high resolution that the significant performance penalty (2x easily) and introduced blurriness mostly go unnoticed.
> Take a look at this HiDPI rendering example in Leopard
That's a really cool example, thanks. At one point Ubuntu's Unity had a fake fractional scaling slider that just used integer scaling plus font size changes for the intermediate levels. That mostly works very well from the point of view of the user. Because of the current limitations in Wayland I still mostly do that manually. It works great for a single monitor, and can work for multiple monitors if the scaling factors work out, because the font scaling is universal and not per output.
All major compositors support the fractional scaling extension these days, which allows pixel-perfect rendering AFAIK, and I believe Qt 6 and GTK 4 also support it.
https://wayland.app/protocols/fractional-scale-v1#compositor...
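For a rough idea of what the client side looks like, here's a minimal sketch against the wayland-scanner-generated fractional-scale-v1 and viewporter headers (registry plumbing, buffer management and error handling omitted):

    #include <wayland-client.h>
    #include "fractional-scale-v1-client-protocol.h"  /* generated by wayland-scanner */
    #include "viewporter-client-protocol.h"

    struct app {
        struct wl_surface *surface;
        struct wp_viewport *viewport;
        int logical_w, logical_h;   /* surface size in logical pixels */
    };

    /* The compositor sends the preferred scale as a multiple of 1/120,
       e.g. 150 means 1.25 and 180 means 1.5. */
    static void handle_preferred_scale(void *data,
            struct wp_fractional_scale_v1 *fs, uint32_t scale_120)
    {
        struct app *app = data;
        int buf_w = (app->logical_w * scale_120 + 60) / 120;  /* round to device px */
        int buf_h = (app->logical_h * scale_120 + 60) / 120;
        /* Render into a buf_w x buf_h buffer at the fractional scale, then tell
           the compositor to present it at the logical size, 1:1 with the output. */
        wp_viewport_set_destination(app->viewport, app->logical_w, app->logical_h);
        /* ... redraw, wl_surface_attach(), wl_surface_commit() ... */
    }

    static const struct wp_fractional_scale_v1_listener fs_listener = {
        .preferred_scale = handle_preferred_scale,
    };

The wp_fractional_scale_v1 object comes from wp_fractional_scale_manager_v1_get_fractional_scale() on the surface; the key point is that the client's buffer ends up exactly the size of the device pixels the surface covers, with no second resample by the compositor.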
I'm generally a strong wayland proponent and believe it's a big step forward over X in many ways, but some decisions just make me scratch my head.
This may take some getting used to if you're familiar with DPI and already know the value you like, but for non-technical users it's more approachable. Not everyone knows DPI or how many dots they want to their inches.
That the 145% is 1.45 under the hood is really an implementation detail.
I challenge you: tell a non-technical user to set two monitors (e.g. laptop and external) to display text/windows at the same size. I guarantee it will take them a significant amount of time moving those relative sliders around. If we had an absolute metric it would be trivial. Similarly, people who regularly plug into different monitors would simply set a desired DPI, and everywhere they plug in things would look the same instead of having to open the scale menu every time.
I will also say though that in the most common cases where people request mixed scale factor support from us (laptop vs. docked screen, screen vs. TV) there are also other form factor differences, such as viewing distance, that mean folks don't actually want to match DPI, and "I want things bigger/smaller there" is difficult to respond to with "calculate what that means to you in terms of DPI".
For the case "I have two 27" monitors side-by-side and only one of them is 4K and I want things to be the same size on them" I feel like the UI offering a "Match scale" action/suggestion and then still offering a single scale slider when it sees that scenario might be a nice approach.
Basically, the scale factor neatly encapsulates things like viewing distance, user eyesight, dexterity, and preference, different input device accuracy, and many others. It is easier to have the human say how big/small they want things to be than to have a gazillion flags for individual attributes and then some complicated heuristics to deduce the scale.
> It is easier to have the human say how big/small they want things to be than to have a gazillion flags for individual attributes and then some complicated heuristics to deduce the scale.
I don't understand why I need a gazillion flags; I just set the desired DPI (instead of a scale). But an absolute metric is almost always better than a relative metric, especially if the reference point is device-dependent.
Not all displays accurately report their DPI (or even can, such as projectors). Not all users, such as myself, know their monitor's DPI. Finally, the scaling algorithm will ultimately use a scale factor, so at a protocol level that might as well be what is passed.
There is of course nothing stopping a display management widget/settings page/application from asking for a DPI and then converting it to a scale factor, I just don't know of any that exist.
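The conversion itself would be trivial, something like this (a sketch assuming the conventional 96 DPI = 1.0x baseline; the EDID-reported physical size is the unreliable part):

    #include <stdio.h>

    /* Desired text DPI -> scale factor; 96 DPI is the conventional 1.0x baseline. */
    static double scale_from_dpi(double desired_dpi) {
        return desired_dpi / 96.0;
    }

    /* Physical DPI from the panel's pixel width and EDID-reported width in mm
       (often missing or wrong, which is the catch mentioned above). */
    static double physical_dpi(int width_px, double width_mm) {
        return width_px / (width_mm / 25.4);
    }

    int main(void) {
        printf("144 DPI -> %.2fx scale\n", scale_from_dpi(144));                 /* 1.50x */
        printf("27\" 4K panel: ~%.0f physical DPI\n", physical_dpi(3840, 597));  /* ~163 */
        return 0;
    }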
Because certain ratios work a lot better than others, and calculating the exact DPI to get those benefits is a lot harder than estimating the scaling factor you want.
Also the scaling factor calculation is more reliable.
The standardized protocols are more recent (and of course we heavily argued for them).
Regarding the way the protocol works and something having to be retrofitted, I think you are maybe a bit confused about the way the scale factor and buffer scale work on wl_output and wl_surface?
But in any case, yes, I think the happy camper days are coming for you! I also find the macOS approach atrocious, so I appreciate the sentiment.
When you say you supported this for quite some years was there a custom protocol in KWin to allow clients to render directly to the fractionally scaled resolution? ~4 years ago I was frustrated by this when I benchmarked a 2x slowdown going from a RAW file to the same number of pixels on screen when using fractional scaling, and at least in sway there wasn't a way to fix it or much appetite to implement one. It's great to see it is mostly in place now and just needs to be enabled across the stack.
> When you say you supported this for quite some years was there a custom protocol in KWin to allow clients to render directly to the fractionally scaled resolution?
Qt had a bunch of different mechanisms for how you could tell it to use a fractional scale factor, from setting an env var to doing it inside a "platform plugin" each Qt process loads at runtime (Plasma provides one), etc. We also had a custom-protocol-based mechanism (zwp_scaler_dev iirc) that basically had a set_scale with a 'fixed' instead of an 'int'. Ultimately this was all pretty Qt-specific in practice, though. To get adoption outside of just our stack a standard was of course needed; I guess what we can claim is that we were always pretty firm that we wanted proper fractional scaling and were willing to put in the work.
This is horrifying! It implies that, for some scaling factors, the lines of text of your terminal will be of different height.
Not that the alternative (pretend that characters can be placed at arbitrary sub-pixel positions) is any less horrifying. This would make all the lines in your terminal of the same height, alright, but then the same character at different lines would look different.
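The arithmetic is easy to see; a quick illustration of the two options (numbers are just an example, not from any particular terminal):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double scale = 1.25;   /* fractional scale factor */
        int cell_h = 15;       /* logical line height of the terminal font */
        int prev = 0;
        for (int row = 1; row <= 6; row++) {
            /* Option 1: snap every row boundary to the nearest device pixel. */
            int edge = (int)lround(row * cell_h * scale);
            printf("row %d: %d device px tall\n", row, edge - prev);
            prev = edge;
        }
        /* 15 * 1.25 = 18.75, so some rows come out 19 device pixels tall and
           others 18. Option 2 is to space every row exactly 18.75 px apart,
           which means rasterizing the same glyphs at several subpixel phases. */
        return 0;
    }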
The bitter truth is that fractional scaling is impossible. You cannot simply scale images without blurring them. Think about an alternating pattern of white and black rows of pixels. If you try to scale it to a non-integer factor the result will be either blurry or aliased.
The good news is that fractional scaling is unnecessary. You can just use fonts of any size you want. Moreover, nowadays pixels are so small that you can simply use large bitmap fonts and they'll look sharp, clean and beautiful.
(And when it's an integer multiple, you don't need scaling at all. You just need a font of that exact size.)
The way the terminal handles the (literal) edge case you mention is no different from any other time its window size is not a multiple of the line height: It shows empty rows of pixels at the top or bottom.
Fonts are only an "exact size" if they're bitmap-based (and when you scale bitmap fonts you are indeed in for sampling difficulties). More typical is to have a font storing vectors and rasterizing glyphs to the needed size at runtime.
That's overly prescriptive in terms of what users want. In my experience users who are used to macOS don't mind slightly blurred text. And users who are traditionalists and perhaps Windows users prefer crisper text at the expense of some height mismatches. It's all very subjective.
> Because what is probably 90% of wayland install base only supports communicating integer scales to clients.
As someone shipping a couple of million cars per year running Wayland, the install base is a lot bigger than you think it is :)
The reason Apple started with 2x scaling is that this turned out not to be true. Free-scaling UIs were tried for years before that and never once got to acceptable quality. Not if you want to have image assets or animations involved, or if you can't fix other people's coordinate rounding bugs.
Other platforms have much lower standards for good-looking UIs, as you can tell from eg their much worse text rendering and having all of it designed by random European programmers instead of designers.
The web is a free-scaling UI, which scales "responsively" in a seamless way from feature phones with tiny pixelated displays to huge TV-sized ultra high-resolution screens. It's fine.
They did make another attempt at it for apps with Dynamic Type though.
Thinking that two finger zooming style scaling is the goal is probably the result of misguided design-centric thinking instead of user-centric thinking.
QuickDraw in Carbon was included to allow for porting MacOS 9 apps, was always discouraged, and is long gone today (it was never supported in 64-bit).
Nobody wants to deal with vectors for everything. They're not performant enough (harder to GPU accelerate) and you couldn't do the skeuomorphic UIs of the time with them. They have gotten more popular since, thanks to flat UIs and other platforms with free scaling.
There were multiple problems making it actually look good though, ranging from making things line up properly at fractional sizes (e.g. a "1 point line" becomes blurry at 1.25 scale) to the fact that most applications used bitmap images and not vector graphics for their icons (and this includes the graphic primitives Apple used for the "lickable" buttons throughout the OS).
edit: I actually have an iMac G4 here so I took some screenshots since I couldn't find any online. Here is MacOS X 10.4 natively rendering windows at fractional sizes: https://kalleboo.com/linked/os_x_fractional_scaling/
IIRC later versions of OS X than this actually had vector graphics for buttons/window controls
Linux does not do that.
> It's strange that Wayland didn't do it this way from the start
It did (initially for integer scale factors, later also for fractional ones, though some Wayland-based environments did it earlier downstream).
It did (or at least Wayland compositors did).
> It did
It didn't.
I complained about this a few years ago on HN [0], and produced some screenshots [1] demonstrating the scaling artifacts resulting from fractional scaling (1.25).
This was before fractional scaling existed in the Wayland protocol, so I assume that if I try it again today with updated software I won't observe the issue (though I haven't tried yet).
In some of my posts from [0] I explain why it might not matter that much to most people, but essentially, modern font rendering already blurs text [2], so further blurring isn't that noticeable.
[0] https://news.ycombinator.com/item?id=32021261
The real answer is just that it's hard to bolt this on later; the UI toolkit needs to support it from the start.
I know people mention 1 pixel lines (perfectly horizontal or vertical). Then they multiply by 1.25 or whatever and go: oh look, 0.25 pixel is a lie, therefore fractional scaling is fake (the sway documentation mentions this to this day). This doesn't seem to hold in practice outside of this very niche mental exercise. At sufficiently high resolution, which is the case for the displays we are talking about, do you even want 1 pixel lines? They will be barely visible. I have this problem now on Linux. Further, if the line is draggable, the click zone becomes too small as well. You probably want something with some physical dimension, which will probably take multiple pixels anyway. At that point you probably want some antialiasing that you won't be able to see anyway. Further, single pixel lines don't have to be exactly the color the program prescribed anyway. Most of the perfectly horizontal and vertical lines on my screen are grey-ish; having some AA artifacts will change their color slightly, but I don't think it will have a material impact. If this is the case, then super-resolution should work pretty well.
Then really what you want is something as follows:
1. Super-resolution scaling for most "desktop" applications.
2. Give the native resolution to some full screen applications (games, video playback), and possibly give the native resolution of a rectangle on screen to applications like video playback. This avoids rendering at a higher resolution and then downsampling, which can introduce information loss for these applications.
3. Now do this on a per-application basis, instead of per-session basis. No Linux DE implements this. KDE implements per-session which is not flexible enough. You have to do it for each application on launch.
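For a sense of what the super-resolution path in point 1 costs compared to handing the application the real output resolution (just arithmetic, with example numbers):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* Example: a 4K panel the user wants at an effective 1.5x scale. */
        int out_w = 3840, out_h = 2160;
        double scale = 1.5;
        int logical_w = (int)round(out_w / scale);   /* 2560 */
        int logical_h = (int)round(out_h / scale);   /* 1440 */

        /* Super-resolution: render at the next integer factor (2x here) and
           let the compositor downsample the whole buffer to the panel. */
        int up = (int)ceil(scale);
        long super_px  = (long)logical_w * up * logical_h * up;   /* 5120x2880 */
        long direct_px = (long)out_w * out_h;                     /* 3840x2160 */

        printf("super-resolution: %ld px, direct fractional: %ld px (%.2fx more work)\n",
               super_px, direct_px, (double)super_px / direct_px);   /* ~1.78x */
        return 0;
    }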
Then when you're on Wayland using fractional scaling, XWayland apps look very blurry all the time while Wayland-native apps look great.
And doing so actually using X not OpenGL.
All circular UI elements are haram.
The author did exactly this:
> Even better, I didn’t mention that I wasn’t actually running this program on my laptop. It was running on my router in another room, but everything worked as if
At the moment only Windows handles that use case perfectly, not even macOS. Wayland comes second if the optional fractional scaling is implemented by the toolkit and the compositor. I am skeptical of the Linux desktop ecosystem doing the correct thing there, though. Both server-side decorations and fractional scaling being optional (i.e. requiring runtime opt-in from the compositor and the toolkit) are missteps for a desktop protocol. Both missing features are directly attributable to GNOME and their chokehold on GTK and other core libraries.
There is no mechanism for the user to specify a per-screen text DPI in X11.
(Or maybe there secretly is, and i should wait for the author to show us?)
I hadn't heard of WSLg, vcxsrv was the best I could do for free.
Every GUI application on Windows runs an infinite event loop. In that loop you handle messages like [WM_INPUT](https://learn.microsoft.com/en-us/windows/win32/inputdev/wm-...). With Windows 8.1, Microsoft added a new message type: [WM_DPICHANGED](https://learn.microsoft.com/en-us/windows/win32/hidpi/wm-dpi...). To not break existing applications with an unknown message, Windows requires applications to opt in. The application needs to report its DPI awareness using the function [SetProcessDpiAwareness](https://learn.microsoft.com/en-us/windows/win32/api/shellsca...). Setting the DPI awareness state can also be done by attaching an XML manifest to the .exe file.
With the message, Windows provides not only the exact DPI at which to render the window contents for the display, but also a suggested window rectangle for perfect pixel alignment and to prevent weird behavior while switching displays. After receiving the DPI, it is up to the application to draw things at that DPI however it desires. The OS has no direct way to dictate how it is drawn, but it does provide lots of helper libraries and functions for font rendering and for classic Windows UI elements.
If the application is using a Microsoft-implemented .NET UX library (WinForms, WPF or UWP), Microsoft has already implemented the redrawing functions. You only need to include the manifest file in the .exe resources.
After all of this implementation, why does one still get blurry apps? Because those applications don't opt in to handling WM_DPICHANGED. So the only option left for Windows is to let the application draw itself at the default DPI and then stretch its image. Windows will map the input messages to the default-DPI pixel positions.
Microsoft does provide a middle ground between a fully DPI-aware app and an unaware app, if the app uses the old Windows resource files to store the UI in the .exe resources. Since those apps are guaranteed to use standard Windows UI elements, Windows can intercept the drawing functions and at least draw the standard controls at the correct DPI. That's called "system aware". Since it is intercepting the application's way of drawing, it may result in weird UI bugs though.
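A minimal sketch of what handling that message looks like in a plain Win32 window procedure (assuming the DPI-awareness opt-in described above is already in place):

    #include <windows.h>

    static UINT g_dpi = 96;   /* 96 == 100% */

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch (msg) {
        case WM_DPICHANGED: {
            g_dpi = HIWORD(wParam);            /* new DPI for the monitor we moved to */
            RECT *suggested = (RECT *)lParam;  /* the pixel-aligned rect Windows suggests */
            SetWindowPos(hwnd, NULL,
                         suggested->left, suggested->top,
                         suggested->right - suggested->left,
                         suggested->bottom - suggested->top,
                         SWP_NOZORDER | SWP_NOACTIVATE);
            InvalidateRect(hwnd, NULL, TRUE);  /* repaint everything at the new DPI */
            return 0;
        }
        case WM_PAINT:
            /* Scale fonts, margins, icons, etc. by g_dpi / 96.0 when drawing. */
            break;
        case WM_DESTROY:
            PostQuitMessage(0);
            return 0;
        }
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }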
https://archive.org/details/xlibprogrammingm01adri/page/144/...
Xlib Programming Manual and Xlib Reference Manual, Section 6.1.4, p. 144:
>To be more precise, the filling and drawing versions of the rectangle routines don't draw even the same outline if given the same arguments.
>The routine that fills a rectangle draws an outline one pixel shorter in width and height than the routine that just draws the outline, as shown in Figure 6-2. It is easy to adjust the arguments for the rectangle calls so that one draws the outline and another fills a completely different set of interior pixels. Simply add 1 to x and y and subtract 1 from width and height. In the case of arcs, however, this is a much more difficult proposition (probably impossible in a portable fashion).
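In Xlib calls, the adjustment the manual is describing looks like this (a sketch, assuming a valid Display, Drawable and GC):

    #include <X11/Xlib.h>

    void outlined_filled_rect(Display *dpy, Drawable d, GC gc,
                              int x, int y, unsigned int w, unsigned int h)
    {
        /* Draws an outline that actually covers (w+1) x (h+1) pixels... */
        XDrawRectangle(dpy, d, gc, x, y, w, h);
        /* ...while this fills exactly w x h pixels, so to fill only the
           interior of the outline above you shrink and shift by one: */
        XFillRectangle(dpy, d, gc, x + 1, y + 1, w - 1, h - 1);
    }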
https://news.ycombinator.com/item?id=11484148
DonHopkins on April 12, 2016, on: NeWS – Network Extensible Window System
>There's no way X can do anti-aliasing, without a ground-up redesign. The rendering rules are very strictly defined in terms of which pixels get touched and how.
>There is a deep-down irreconcilable philosophical and mathematical difference between X11's discrete half-open pixel-oriented rendering model, and PostScript's continuous stencil/paint Porter/Duff imaging model.
>X11 graphics round differently when filling and stroking, define strokes in terms of square pixels instead of fills with arbitrary coordinate transformations, and is all about "half open" pixels with gravity to the right and down, not the pixel coverage of geometric region, which is how anti-aliasing is defined.
>X11 is rasterops on wheels. It turned out that not many application developers enjoyed thinking about pixels and coordinates the X11 way, displays don't always have square pixels, the hardware (cough Microvax framebuffer) that supports rasterops efficiently is long obsolete, rendering was precisely defined in a way that didn't allow any wiggle room for hardware optimizations, and developers would rather use higher level stencil/paint and scalable graphics, now that computers are fast enough to support it.
>I tried describing the problem in the Unix-Haters X-Windows Disaster chapter [1]:
>A task as simple as filling and stroking shapes is quite complicated because of X's bizarre pixel-oriented imaging rules. When you fill a 10x10 square with XFillRectangle, it fills the 100 pixels you expect. But you get extra "bonus pixels" when you pass the same arguments to XDrawRectangle, because it actually draws an 11x11 square, hanging out one pixel below and to the right!!! If you find this hard to believe, look it up in the X manual yourself: Volume 1, Section 6.1.4. The manual patronizingly explains how easy it is to add 1 to the x and y position of the filled rectangle, while subtracting 1 from the width and height to compensate, so it fits neatly inside the outline. Then it points out that "in the case of arcs, however, this is a much more difficult proposition (probably impossible in a portable fashion)." This means that portably filling and stroking an arbitrarily scaled arc without overlapping or leaving gaps is an intractable problem when using the X Window System. Think about that. You can't even draw a proper rectangle with a thick outline, since the line width is specified in unscaled pixel units, so if your display has rectangular pixels, the vertical and horizontal lines will have different thicknesses even though you scaled the rectangle corner coordinates to compensate for the aspect ratio.
[1] The X-Windows Disaster: http://www.art.net/~hopkins/Don/unix-haters/x-windows/disast...
And doing this for everything in the entire ecosystem of ancient GUI libraries? And dealing with the litany of different ways folks have done icons, text, and even just drawing lines onto the screen? That's where you run into a lot of trouble.
In the article, the author uses OpenGL to make sure that they're interacting with the screen at a "lower level" than plenty of apps that were written against X. But that's the rub: I think the author neatly sidestepped the issue by mostly using stuff that's not in "vanilla" X11. In fact, the "standard" API of X via Xlib seems to only expose functions for working in raw pixels and raw pixel coordinates without any kind of scaling awareness. See XDrawLine as an example: https://www.x.org/releases/current/doc/man/man3/XDrawLine.3....
It seems to me that the RandR extension through xrandr is the thing providing the scaling info, not X11 itself. You can see that because the author calls `XRRGetScreenResourcesCurrent()`, a function that's not part of vanilla X11 (see the list of X library functions here as an example: https://www.x.org/releases/current/doc/man/man3/ )
Now, xrandr has been a thing since the early 2000s, hence why xrandr is ubiquitous, but due to its nature as an extension and plenty of existing code sitting around that's totally scale-unaware, I can see why folks believe X11 is scale-unaware.
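For reference, this is roughly what pulling per-output physical sizes (and hence DPI) out of RandR looks like; a sketch without error handling, built with -lX11 -lXrandr, and with the caveat that the EDID millimetre values are frequently bogus:

    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <X11/extensions/Xrandr.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        XRRScreenResources *res =
            XRRGetScreenResourcesCurrent(dpy, DefaultRootWindow(dpy));

        for (int i = 0; i < res->noutput; i++) {
            XRROutputInfo *out = XRRGetOutputInfo(dpy, res, res->outputs[i]);
            if (out->connection == RR_Connected && out->crtc && out->mm_width) {
                XRRCrtcInfo *crtc = XRRGetCrtcInfo(dpy, res, out->crtc);
                double dpi = crtc->width / (out->mm_width / 25.4);
                printf("%s: %ux%u px, %lu mm wide, ~%.0f DPI\n",
                       out->name, crtc->width, crtc->height, out->mm_width, dpi);
                XRRFreeCrtcInfo(crtc);
            }
            XRRFreeOutputInfo(out);
        }
        XRRFreeScreenResources(res);
        XCloseDisplay(dpy);
        return 0;
    }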
xrandr --output eDP --scale 0.8x0.8
For years and years, and I never really noticed any problems with it. Guess I don't run any "bad" scale-unaware programs? Or maybe I just never noticed(?) At least from my perspective, for all practical purposes it seems to "just work".
--output eDP
This parameter specifies which display to scale, so only the built-in display will be scaled. Running xrandr without any parameters returns all available outputs, as well as the resolutions the currently connected displays support.
I still think X11 forwarding over SSH is a super cool and unsung/undersung feature. I know there are plenty of good reasons we don't really "do it these days" but I have had some good experiences where running the UI of a server app locally was useful. (Okay, it was more fun than useful, but it was useful.)
It felt like a prototype feature that never became production-ready for that reason alone. Then there's all the security concerns that solidify that.
But yes, it does work reasonably well, and it is actually really cool. I just wish it were... better.
For example, both versions of Emacs would download the lengths of each line on the screen when you started a selection, so you could drag and select the text and animate the selection overlay without any network traffic at all, without sending mouse move events over the network, only sending messages when you autoscrolled or released the button.
http://www.bitsavers.org/pdf/sun/NeWS/800-5543-10_The_NeWS_T... document page 2, pdf page 36:
>Thin wire
>TNT programs perform well over low bandwidth client-server connections such as telephone lines or overloaded networks because the OPEN LOOK components live in the window server and interact with the user without involving the client program at all.
>Application programmers can take advantage of the programmable server in this way as well. For example, you can download user-interaction code that animates some operation.
UniPress Emacs NeWS Driver:
https://github.com/SimHacker/NeMACS/blob/b5e34228045d544fcb7...
Selection support with local feedback:
https://github.com/SimHacker/NeMACS/blob/b5e34228045d544fcb7...
Gnu Emacs 18 NeWS Driver (search for LocalSelectionStart):
https://donhopkins.com/home/code/emacs18/src/tnt.ps
https://news.ycombinator.com/item?id=26113192
DonHopkins on Feb 12, 2021, on: Interview with Bill Joy (1984)
>Bill was probably referring to what RMS calls "Evil Software Hoarder Emacs" aka "UniPress Emacs", which was the commercially supported version of James Gosling's Unix Emacs (aka Gosling Emacs / Gosmacs / UniPress Emacs / Unimacs) sold by UniPress Software, and it actually cost a thousand or so for a source license (but I don't remember how much a binary license was). Sun had the source installed on their file servers while Gosling was working there, which was probably how Bill Joy had access to it, although it was likely just a free courtesy license, so Gosling didn't have to pay to license his own code back from UniPress to use at Sun. https://en.wikipedia.org/wiki/Gosling_Emacs
>I worked at UniPress on the Emacs display driver for the NeWS window system (the PostScript based window system that James Gosling also wrote), with Mike "Emacs Hacker Boss" Gallaher, who was in charge of Emacs development at UniPress. One day during the 80's Mike and I were wandering around an East coast science fiction convention, and ran into RMS, who's a regular fixture at such events.
>Mike said: "Hello, Richard. I heard a rumor that your house burned down. That's terrible! Is it true?"
>RMS replied right back: "Yes, it did. But where you work, you probably heard about it in advance."
>Everybody laughed. It was a joke! Nobody's feelings were hurt. He's a funny guy, quick on his feet!
In the late 80's, if you had a fast LAN and not a lot of memory and disk (like a 4 meg "dickless" Sun 3/50), it actually was more efficient to run X11 Emacs and even the X11 window manager itself over the LAN on another workstation than on your own, because then you didn't suffer from frequent context switches and paging every keystroke and mouse movement and click.
The X11 server and Emacs and WM didn't need to context switch to simply send messages over the network and paint the screen if you ran emacs and the WM remotely, so Emacs and the WM weren't constantly fighting with the X11 server for memory and CPU. Context switches were really expensive on a 68k workstation, and the way X11 is designed, especially with its outboard window manager, context switching from ping-ponging messages back and forth and back and forth and back and forth and back and forth between X11 and the WM and X11 and Emacs every keystroke or mouse movement or click or window event KILLED performance and caused huge amounts of virtual memory thrashing and costly context switching.
Of course NeWS eliminated all that nonsense gatling gun network ping-ponging and context switching, which was the whole point of its design.
That's the same reason using client-side Google Maps via AJAX of 20 years ago was so much better than the server-side Xerox PARC Map Viewer via http of 32 years ago.
https://en.wikipedia.org/wiki/Xerox_PARC_Map_Viewer
Outboard X11 ICCCM window managers are the worst possible most inefficient way you could ever possibly design a window manager, and that's not even touching on their extreme complexity and interoperability problems. It's the one program you NEED to be running in the same context as the window system to synchronously and seamlessly handle events without dropping them on the floor and deadlocking (google "X11 server grab" if you don't get what this means), but instead X11 brutally slices the server and window manager apart like King Solomon following through with his child-sharing strategy.
https://tronche.com/gui/x/xlib/window-and-session-manager/XG...
NeWS, by contrast, not only runs the window manager efficiently in the server without any context switching or network overhead, but also lets you easily plug in your own customized window frames (with tabs and pie menus), implement fancy features like rooms and virtual scrolling desktops, and all kinds of cool stuff! At Sun we were even managing X11 windows with a NeWS ICCCM window manager written in PostScript, wrapping tabbed windows with pie menus around your X-Windows!
https://donhopkins.com/home/archive/NeWS/owm.ps.txt
https://donhopkins.com/home/archive/NeWS/win/xwm.ps
https://www.donhopkins.com/home/catalog/unix-haters/x-window...
$ ssh <remote> glxgears
runs fine!
Yes, you can! Waypipe came out 6 years ago. Its express purpose is to make a Wayland equivalent to ssh -X. https://gitlab.freedesktop.org/mstoeckl/waypipe/
It's nice to not have tearing. But IMO the functionality loss vs X11 isn't worth it for anything but a dedicated media playback/editing device.
Skill issue. You probably held your keyboard wrong or something. Simple xrandr commands work fine like they have for decades. (Of course if you've moved to Wayland then who knows).
Traditionally it's used to launch a full-screen application, usually a game, but you can launch your window manager through it, if you want your desktop session to use a custom resolution with custom scaling and/or letterboxing.
Not particularly, if you are on a low-latency network. Modern UI toolkits make applications way less responsive than classical X11 applications running across gigabit ethernet.
And even on a fast network the wayland alternative of 'use RDP' is almost unusable.
The thing X11 really is missing (at least most importantly) is DPI virtualization. UI scaling isn't a feature most display servers implement because most display servers don't implement the actual UI bits. The lack of DPI virtualization is a problem though, because it leaves windows on their own to figure out how to logically scale input and output coordinates. Worse, they have to do it per monitor, and can't do anything about the fact that part of the window will look wrong if it overlaps two displays with different scaling. If anything doesn't do this or does it slightly differently, it will look wrong, and the user has little recourse beyond searching for environment variables or X properties that might make it work.
Explaining all of that is harder than saying that X11 has poor display scaling support. Saying it "doesn't support UI/display scaling" is kind of a misnomer though; that's not exactly the problem.
It's silly that people keep complaining about this. It's a very minor effect, and one that can be solved in principle only by moving to pure vector rendering for everything. Generally speaking, a window will only ever span a single screen. It's convenient to be able to drag a window to a separate monitor, but having that kind of overlap as a permanent feature of one's workflow is just crazy.
> The thing X11 really is missing (at least most importantly) is DPI virtualization.
Shouldn't that kind of DPI virtualization be a concern for toolkits rather than the X server or protocol? As long as X is getting accurate DPI information from the hardware and reporting that to clients, what else is needed?
If you have DPI virtualization, a very sufficient solution already exists: pick a reasonable scale factor for the underlying buffer and use it, then resample for any outputs that don't match. This is what happens in most Wayland compositors. Exactly what you pick isn't too important. You could pick whichever output overlaps the most with the window, or the output that has the highest scale factor, or some other criteria. It will not result in perfect pixels everywhere, but it is perfectly sufficient to clean up the visual artifacts.
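A sketch of that policy with hypothetical structs (a real compositor would track this in its scene graph, but the decision itself is just "largest overlap wins"):

    #include <stdio.h>

    struct rect   { int x, y, w, h; };
    struct output { struct rect geo; double scale; };

    static int imin(int a, int b) { return a < b ? a : b; }
    static int imax(int a, int b) { return a > b ? a : b; }

    static long overlap(struct rect a, struct rect b)
    {
        int w = imin(a.x + a.w, b.x + b.w) - imax(a.x, b.x);
        int h = imin(a.y + a.h, b.y + b.h) - imax(a.y, b.y);
        return (w > 0 && h > 0) ? (long)w * h : 0;
    }

    /* Render the window's buffer at the scale of whichever output it overlaps
       the most; the compositor resamples it on any other output it touches. */
    static double buffer_scale(struct rect win, const struct output *outs, int n)
    {
        double best_scale = 1.0;
        long best_area = -1;
        for (int i = 0; i < n; i++) {
            long a = overlap(win, outs[i].geo);
            if (a > best_area) { best_area = a; best_scale = outs[i].scale; }
        }
        return best_scale;
    }

    int main(void)
    {
        struct output outs[] = {
            { {    0, 0, 2560, 1440 }, 1.0 },   /* external monitor */
            { { 2560, 0, 3840, 2400 }, 2.0 },   /* laptop panel */
        };
        struct rect win = { 2200, 100, 800, 600 };  /* straddles the seam */
        printf("render this window at %.2fx\n", buffer_scale(win, outs, 2));
        return 0;
    }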
Another solution would be to simply only present the surface on whatever output it primarily overlaps with. MacOS does this and it's seemingly sufficient. Unfortunately, as far as I understand, this isn't really trivial to do in X11 for the same reasons why DPI virtualization isn't trivial: whether you render it or not, the window is still in that region and will still receive input there.
> Generally speaking, a window will only ever span a single screen. It's convenient to be able to drag a window to a separate monitor, but having that kind of overlap as a permanent feature of one's workflow is just crazy.
The issue with the overlap isn't that people routinely need this; if they did, macOS or Windows would also need a more complete solution. In reality though, it's just a very janky visual glitch that isn't really too consequential for your actual workflow. Still, it really can make moving windows across outputs super janky, especially since in practice different applications do sometimes choose different behaviors. (e.g. will your toolkit choose to resize the window so it has the same logical size? will this impact the window dragging operation?)
So really, the main benefit of solving this particular edge case is just to make the UX of window management better.
While UX and visual jank concerns are below concerns about functionality, I still think they have non-zero (and sometimes non-low) importance. Laptop users expect to be able to dock and manage windows effectively regardless of whether the monitors they are using have the same ideal scale factor as the laptop's internal panel; the behavior should be clean and effective and legacy apps should ideally at least appear correct even if blurry. Being able to do DPI virtualization solves the whole set of problems very cleanly. MacOS is doing this right, Windows is finally doing this right, Wayland is doing this right, X11 still can't yet. (It's not physically impossible, but it would require quite a lot of work since it would require modifying everything that handles coordinate spaces I believe.)
> Shouldn't that kind of DPI virtualization be a concern for toolkits rather than the X server or protocol? As long as X is getting accurate DPI information from the hardware and reporting that to clients, what else is needed?
Accurate DPI information is insufficient as users may want to scale differently anyways, either due to preference, higher viewing distance, or disability. So that already isn't enough.
That said, the other issue is that there already exist applications that don't do perfect per-monitor scaling, and there doesn't exist a single standard way to have per-monitor scaling preferences propagated in X11. It's not even necessarily a solved problem among the latest versions of all the toolkits, since it at minimum requires support for desktop environment settings daemons, etc.
In the past, the problem with UI toolkits doing proportional sizing was that they used bitmaps for UI elements. Since newer versions of Qt and GTK 4 render programmatically, they can do it the right way. Windows mostly does this too, even with Win32, as long as you're using the newer themes. macOS is the only one that has assets prerendered at integer factors everywhere and needs to perform framebuffer scaling to change sizes. But Apple doesn't care because they don't want you using third-party monitors anyway.
Edit: I'm not sure about Apple's new theme. Maybe this is their transition point away from fixed asset sizes.
Win32 controls have always been DPI independent, as far back as Windows 95. There is DPI choice UX as part of the "advanced" display settings.
In practice Windows and macOS both do bitmap scaling when necessary. macOS scales the whole frame buffer, Windows scales windows individually.
Can you do an entire windowing pipeline where it's vectors all the way until the actual compositing? Well, sure! We were kind of close in the pre-compositing era sometimes. Is it worth it to do so? I don't think so for now. Most desktop displays are made up of standard-ish pixels, so buffers full of pixels make a very good primitive. So making the surfaces themselves out of pixels seems like a fine approach, and the scaling problem is relatively easy to solve if you start with a clean slate. The fact that it can handle the "window splitting across outputs" case slightly better is not a particularly strong draw; I don't believe most users actually want to use windows split across outputs, it's just better UX if things at least appear correct. Same thing for legacy apps, really: if you run an old app that doesn't support scaling, it's still better for it to work and appear blurry than to be tiny and unusable.
What to make of this? Well, the desktop platform hasn't moved so fast; ten years of progress has become little more than superficial at this point. So I think we can expect, with relatively minor concessions, that barring an unforeseen change, the desktops we use 10 to 20 years from now probably won't be that different from what we have today; what we have today isn't even that different from what we already had 20 years ago as it is. And you can see that in people's attitudes; why fix what isn't broken? That's the sentiment of people who believe in an X11 future.
Of course in practice, there's nothing particularly wrong with trying to keep bashing X11 into modernity; with much pain they definitely managed to take X.org and make it shockingly good. Ironically, if some of the same people working on Wayland today had put less work into keeping X.org working well, the case for Wayland would be much stronger by now. Still, I really feel like roughly nobody actually wants to sit there and try to wedge HDR or DPI virtualization into X11, and retooling X11 without regard for backwards compatibility is somewhat silly since if you're going to break old apps you may as well just start fresh.
Wayland has always had tons of problems, yet I always bet on it as the most likely option simply because it just makes the most sense to me and I don't see any showstoppers that seem like they would be insurmountable. Lo and behold, it sure seems to me that the issues remaining for Wayland adoption have started to become more and more minor. KDE maintains a nice list of the more serious drawbacks. It used to be a whole hell of a lot larger!
https://community.kde.org/Plasma/Wayland_Known_Significant_I...
Users think they want a lot of things they don't really need. Do we really want to hand users that loaded gun so that they can choose incorrectly where to fire?
Going to OpenGL is a nice tactic, since OpenGL doesn't give a flip about screen coördinates anyway.
I miss NeWS - it actually brought a number of great capabilities to a window system - none of which, AFAIK, are offered by Wayland.
No one with a good grasp of the space ever claimed that it wasn't possible on X11 to call into APIs to retrieve physical display size and map that to how many pixels to render. This has been possible for decades, and while not completely trivial is not the hard part about doing good UI scaling.
Doing good UI scaling requires infrastructure for dynamic scaling changes, for different scale factors per display within the same scene, for guaranteeing crisp hairlines at any scale factor, and so on and so forth.
Many of these problems could have been solved in X11 with additional effort, and some even made it to partial solutions available. The community simply chose to put its energy into bringing it all together in the Wayland stack instead.
KDE developer wrote recently:
> X11 isn’t able to perform up to the standards of what people expect today with respect to .., 10 bits-per-color monitors,.. multi-monitor setups (especially with mixed DPIs or refresh rates),... [1]
Multi-monitor setups have been working for 20+ years. 10-bit color is also supported (otherwise how would the PRO versions of graphics cards support this feature?).
> chose to put its energy into bringing it all together in
I cannot recall: was there any paper analyzing why the working and almost-working X11 features do not fit, why a few additional X11 extensions cannot be proposed anymore, and why another solution from scratch is inevitable? What is the significant difference between an X11 and a Wayland protocol extension?
[1] https://pointieststick.com/2025/06/21/about-plasmas-x11-sess...
That's quite similar to how I chose to phrase it, and comes down to where the community chose to spend the effort to solve all the integration issues to make it so.
Did the community decide that after a long soul-searching process that ended with a conclusion that things were impossible to make happen in X11, and does that paper you invoke exist? No, not really. Conversations like this certainly did take place, but I would say more in informal settings, e.g. discussions on lists and at places like the X.org conference. Plenty of "Does it make sense to do that in X11 still, or do we start over?" chatter in both back in the day.
If I recall right, the most serious effort was a couple of people taking a few weeks to entertain a "Could we fix this in an X12 and how much would that break?" scenario. Digging up the old fdo wiki pages on that one would for sure be interesting for the history books.
The closest analogue I can think of that most in the HN audience are familiar with is probably the Python 2->3 transition and the decision to clean things up at the expense of backward compat. To this day, you will of course find folks arguing emotionally on either side of the Python argument as well.
For the most part, the story of how this happened is a bit simpler: It used to be that the most used X11 display server was a huge monolith that did many things the kernel would not, all the way to crazy things like managing PCI bus access in user space.
This slowly changed over the years, with strengthening kernel infra like DRM, the appearance of Kernel Mode Setting, with the evolution of libraries like Mesa. Suddenly implementing a display server became a much simpler affair that mostly could call into a bunch of stuff elsewhere.
This created an opening for a new, smaller project fully focused on the wire protocol and protocol semantics, throwing away a lot of old baggage and code. Someone took the time to do that and demonstrate what it looks like, other people liked what they saw, and Wayland was born.
This also means: Plenty of the useful code of the X11 era actually still exists. One of the biggest myths is that Wayland somehow started over from scratch. A lot of the aforementioned stuff that over the years migrated from the X11 server to e.g. the kernel is obviously still what makes things work now, and libraries such as libinput, xkbcommon that nearly every Wayland display server implementation uses are likewise factored out of the X11 stack.