
Microsoft Releases Classic MS-DOS Editor for Linux Written in Rust

https://github.com/microsoft/edit
106•ethanpil•3h ago•60 comments

Fun with uv and PEP 723

https://www.cottongeeks.com/articles/2025-06-24-fun-with-uv-and-pep-723
365•deepakjois•9h ago•117 comments

Writing toy software is a joy

https://blog.jsbarretto.com/post/software-is-joy
553•bundie•12h ago•232 comments

ChatGPT's enterprise success against Copilot fuels OpenAI/Microsoft rivalry

https://www.bloomberg.com/news/articles/2025-06-24/chatgpt-vs-copilot-inside-the-openai-and-microsoft-rivalry
179•mastermaq•11h ago•161 comments

Thick Nickels

https://thick-coins.net/?_bhlid=8a5736885893b7837e681aa73f890b9805a4673e
46•jxmorris12•3h ago•13 comments

Managing time when time doesn't exist

https://multiverseemployeehandbook.com/blog/temporal-resources-managing-time-when-time-doesnt-exist/
50•TMEHpodcast•3h ago•32 comments

Mid-sized cities outperform major metros at turning economic growth into patents

https://www.governance.fyi/p/booms-not-busts-drives-innovation
36•guardianbob•4h ago•29 comments

PlasticList – Plastic Levels in Foods

https://www.plasticlist.org/
323•homebrewer•13h ago•143 comments

Build your first iOS app on Linux / Windows

https://xtool.sh/tutorials/xtool/first-app/
25•todsacerdoti•2h ago•1 comment

Ancient X11 scaling technology

https://flak.tedunangst.com/post/forbidden-secrets-of-ancient-X11-scaling-technology-revealed
192•todsacerdoti•8h ago•146 comments

Canal Boat Simulator

https://jacobfilipp.com/boat/
33•surprisetalk•2d ago•9 comments

Finding a 27-year-old easter egg in the Power Mac G3 ROM

https://www.downtowndougbrown.com/2025/06/finding-a-27-year-old-easter-egg-in-the-power-mac-g3-rom/
319•zdw•14h ago•89 comments

PicoEMP: low-cost Electromagnetic Fault Injection (EMFI) tool

https://github.com/newaetech/chipshouter-picoemp
4•transpute•49m ago•0 comments

Subsecond: A runtime hotpatching engine for Rust hot-reloading

https://docs.rs/subsecond/0.7.0-alpha.1/subsecond/index.html
104•varbhat•8h ago•16 comments

XBOW, an autonomous penetration tester, has reached the top spot on HackerOne

https://xbow.com/blog/top-1-how-xbow-did-it/
173•summarity•12h ago•88 comments

How to Think About Time in Programming

https://shanrauf.com/archive/how-to-think-about-time-in-programming
86•rmason•7h ago•29 comments

The bitter lesson is coming for tokenization

https://lucalp.dev/bitter-lesson-tokenization-and-blt/
223•todsacerdoti•13h ago•96 comments

Starship: The minimal, fast, and customizable prompt for any shell

https://starship.rs/
385•benoitg•16h ago•180 comments

National Archives at College Park, MD, will become a restricted federal facility

https://www.archives.gov/college-park
268•LastTrain•6h ago•78 comments

The Jumping Frenchmen of Maine

https://www.amusingplanet.com/2025/06/the-jumping-frenchmen-of-maine.html
29•bookofjoe•2d ago•3 comments

Basic Facts about GPUs

https://damek.github.io/random/basic-facts-about-gpus/
247•ibobev•15h ago•55 comments

Show HN: VSCan - Detect Malicious VSCode Extensions

https://vscan.dev/
30•shadow-ninja•5h ago•22 comments

Playing First Contact in Eclipse, a 3-Day Sci-Fi Larp

https://mssv.net/2025/06/15/playing-first-contact-in-eclipse-a-spectacular-3-day-sci-fi-larp/
3•adrianhon•2d ago•0 comments

Gemini Robotics On-Device brings AI to local robotic devices

https://deepmind.google/discover/blog/gemini-robotics-on-device-brings-ai-to-local-robotic-devices/
165•meetpateltech•13h ago•65 comments

Advanced Python Function Debugging with MCP Integration

https://github.com/kordless/gnosis-mystic
6•kordlessagain•2d ago•0 comments

Show HN: Autumn – Open-source infra over Stripe

https://github.com/useautumn/autumn
106•ayushrodrigues•15h ago•32 comments

Mapping LLMs over excel saved my passion for game dev

https://danieltan.weblog.lol/2025/06/map-llms-excel-saved-my-passion-for-game-dev
51•danieltanfh95•3d ago•18 comments

Expand.ai (YC S24) is hiring a founding engineer

1•timsuchanek•10h ago

Few Americans pay for news when they encounter paywalls

https://www.pewresearch.org/short-reads/2025/06/24/few-americans-pay-for-news-when-they-encounter-paywalls/
20•mooreds•1h ago•19 comments

Timdle – Place historical events in chronological order

https://www.timdle.com/
160•maskinberg•1d ago•51 comments

Ancient X11 scaling technology

https://flak.tedunangst.com/post/forbidden-secrets-of-ancient-X11-scaling-technology-revealed
192•todsacerdoti•8h ago

Comments

cpach•8h ago
Love this post. Reminds me of my former coworker G. He had exactly this attitude, and it made it possible for him to deliver results on most tasks he set out to do.
sho_hn•7h ago
It's actually a somewhat bad and uninformed post, or perhaps the mistake (whether knowing or not is unclear) is that it sets out to disprove a claim only uninformed people ever made.

No one with a good grasp of the space ever claimed that it wasn't possible on X11 to call into APIs to retrieve the physical display size and map that to how many pixels to render. This has been possible for decades, and while not completely trivial, it is not the hard part of doing good UI scaling.

Doing good UI scaling requires infrastructure for dynamic scaling changes, for different scale factors per display within the same scene, for guaranteeing crisp hairlines at any scale factor, and so on and so forth.

Many of these problems could have been solved in X11 with additional effort, and some even had partial solutions available. The community simply chose to put its energy into bringing it all together in the Wayland stack instead.

kvemkon•6h ago
> to disprove a claim made by uninformed people

KDE developer wrote recently:

> X11 isn’t able to perform up to the standards of what people expect today with respect to .., 10 bits-per-color monitors,.. multi-monitor setups (especially with mixed DPIs or refresh rates),... [1]

Multi-monitor setups have been working for 20+ years. 10 bits per color are also supported (otherwise how would the PRO versions of graphics cards support this feature?).

> chose to put its energy into bringing it all together in

I cannot recall: was there any paper analyzing why the working and almost-working X11 features do not fit, why a few additional X11 extensions cannot be proposed anymore, and why another solution from scratch is inevitable? What is the significant difference between an X11 and a Wayland protocol extension?

[1] https://pointieststick.com/2025/06/21/about-plasmas-x11-sess...

sho_hn•6h ago
Nate (the author of the blog post you linked), who I know personally very well, is a QA/product person focused on integration and fit and finish issues. What he means to say is that as a polished product, this is now available in the form of a Wayland-based desktop session without fiddling, while the same cannot be said of X11-based ones. It's meant as a pragmatic take, not as a history lesson.

That's quite similar to how I chose to phrase it, and it comes down to where the community chose to spend the effort to solve all the integration issues and make it so.

Did the community decide that after a long soul-searching process that ended with the conclusion that things were impossible to make happen in X11, and does that paper you invoke exist? No, not really. Conversations like this certainly did take place, but I would say more in informal settings, e.g. discussions on lists and at places like the X.org conference. Plenty of "Does it make sense to do that in X11 still, or do we start over?" chatter in both back in the day.

If I recall right, the most serious effort was a couple of people taking a few weeks to entertain a "Could we fix this in an X12 and how much would that break?" scenario. Digging up the old fdo wiki pages on that one would for sure be interesting for the history books.

The closest analogue I can think of that most of the HN audience is familiar with is probably the Python 2->3 transition and the decision to clean things up at the expense of backward compat. To this day, you will of course find folks arguing emotionally on either side of the Python argument as well.

For the most part, the story of how this happened is a bit simpler: It used to be that the most used X11 display server was a huge monolith that did many things the kernel would not, all the way to crazy things like managing PCI bus access in user space.

This slowly changed over the years, with strengthening kernel infra like DRM, the appearance of Kernel Mode Setting, with the evolution of libraries like Mesa. Suddenly implementing a display server became a much simpler affair that mostly could call into a bunch of stuff elsewhere.

This created an opening for a new, smaller project fully focused on the wire protocol and protocol semantics, throwing away a lot of old baggage and code. Someone took the time to do that and demonstrate what it looks like, other people liked what they saw, and Wayland was born.

This also means: plenty of the useful code of the X11 era actually still exists. One of the biggest myths is that Wayland somehow started over from scratch. A lot of the aforementioned stuff that over the years migrated from the X11 server to e.g. the kernel is obviously still what makes things work now, and libraries such as libinput and xkbcommon that nearly every Wayland display server implementation uses are likewise factored out of the X11 stack.

denkmoon•5h ago
Multi-monitor with mixed DPIs absolutely does not work well in X11 in 2025. I don't know about 20+ years ago.
kelnos•5m ago
It could, though. GTK has support for mixed DPI, just only for Wayland. There's no reason why it couldn't work on X11. It might be more tricky to get right, but it's just a matter of work.
wosined•8h ago
Very nice.
pedrocr•8h ago
That's probably better than most scaling done on Wayland today because it's doing the rendering directly at the target resolution instead of doing the "draw at 2x scale and then scale down" dance that was popularized by OS X and copied by Linux. If you do it that way you both lose performance and get blurry output. The only corner case a compositor needs to cover is when a client is straddling two outputs. And even in that case you can render at the higher size and get perfect output on one output and the same blurriness downside on the other, so it's still strictly better.

It's strange that Wayland didn't do it this way from the start, given its philosophy of delegating most things to the clients. All you really need to do arbitrary scaling is tell apps "you're rendering to an MxN pixel buffer, and as a hint the scaling factor of the output you'll be composited to is X.Y". After that the client can handle events in real coordinates and scale in the best way possible for its particular context. For a browser, PDF viewer, or image processing app that can render at arbitrary resolutions, not being able to do that is very frustrating if you want good quality and performance. Hopefully we'll finally be getting that in Wayland now.

wmf•8h ago
None of the toolkits (Motif, Tk, Gtk, Qt, etc.) could handle fractional scaling, so if Wayland had taken the easy way out it would have broken every app.
lostmsu•7h ago
Why is Wayland trying to monkey patch something that's broken elsewhere?
wmf•7h ago
Do you want to be right or do you want to display apps.
lostmsu•59m ago
How many apps will you display if you don't display them right? Are you ready to tell me poor graphics is not one of the reasons people don't use Linux? You won't display apps to the users you lost. Instead, Windows will.
nixosbestos•7h ago
Except for the fact that Wayland has had a fractional scaling protocol for some time now. Qt implements it. There's some unknown reason that GTK won't pick it up. But anyway, it's definitely there. There's even a beta-level implementation in Firefox, etc.
kccqzy•8h ago
> doing the "draw at 2x scale and then scale down" dance that was popularized by OSX

Originally OS X defaulted to drawing at 2x scale without any scaling down because the hardware was designed to have the right number of pixels for 2x scale. The earliest retina MacBook Pro in 2012 for example was 2x in both width and height of the earlier non-retina MacBook Pro.

Eventually I guess the cost of the hardware made this too hard. I mean for example how many different SKUs are there for 27-inch 5K LCD panels versus 27-inch 4K ones?

But before Apple committed to integer scaling factors and then scaling down, it experimented with more traditional approaches. You can see this in earlier OS X releases such as Tiger or Leopard. The thing is, it probably took too much effort for even Apple itself to implement in its first-party apps so Apple knew there would be low adoption among third party apps. Take a look at this HiDPI rendering example in Leopard: https://cdn.arstechnica.net/wp-content/uploads/archive/revie... It was Apple's own TextEdit app and it was buggy. They did have a nice UI to change the scaling factor to be non-integral: https://superuser.com/a/13675

pedrocr•6h ago
> Originally OS X defaulted to drawing at 2x scale without any scaling down because the hardware was designed to have the right number of pixels for 2x scale.

That's an interesting related discussion. The idea that there is a physically correct 2x scale and that fractional scaling is a tradeoff is not necessarily correct. First because different users will want to place the same monitor at different distances from their eyes, or have different eyesight, or a myriad other differences. So the ideal scaling factor for the same physical device depends on the user and the setup. But more importantly because having integer scaling be sharp and snapped to pixels while fractional scaling is a tradeoff is mostly a software limitation. GUI toolkits can still place all their UI at pixel boundaries even if you give them a target scaling of 1.785. They do need extra logic to do that and most can't. But in a weird twist of destiny the most used app these days is the browser, and the rendering engines are designed to output at arbitrary factors natively but in most cases can't, because the windowing system forces these extra transforms on them. 3D engines are another example: they can output whatever arbitrary resolution is needed but aren't allowed to. Most games can probably get around that in some kind of fullscreen mode that bypasses the scaling.
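As a concrete example of the extra logic involved, here is a minimal sketch (my illustration, not any particular toolkit's code) of snapping to the device pixel grid at a fractional factor like 1.785:

  #include <math.h>

  /* Convert a logical coordinate to a device-pixel coordinate. */
  static int snap_px(double logical, double scale)
  {
      return (int)lround(logical * scale);
  }

  /* A 1-logical-unit hairline: snap both edges to pixel boundaries and
     derive the height from the snapped edges, so the line is always a
     whole number of device pixels (1 or 2 at scale 1.785), never a
     blurry 1.785 px smear. */
  static void hairline(double y, double scale, int *top_px, int *height_px)
  {
      int top = snap_px(y, scale);
      int bottom = snap_px(y + 1.0, scale);
      *top_px = top;
      *height_px = bottom - top;
  }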

I think we've mostly ignored these issues because computers are so fast and monitors have gotten so high resolution that the significant performance penalty (2x easily) and the introduced blurriness mostly go unnoticed.

> Take a look at this HiDPI rendering example in Leopard

That's a really cool example, thanks. At one point Ubuntu's Unity had a fake fractional scaling slider that just used integer scaling plus font size changes for the intermediate levels. That mostly works very well from the point of view of the user. Because of the current limitations in Wayland I still mostly do that manually. It works great for a single monitor, and can work for multiple monitors if the scaling factors work out, because the font scaling is universal and not per output.

sho_hn•6h ago
What you want is exactly how fractional scaling works (on Wayland) in KDE Plasma and other well-behaved Wayland software: The scale factor can be something quirky like your 1.785, and the GUI code will generally make sure that things nevertheless snap to the pixel grid to avoid blurry results, as close to the requested scaling as possible. No "extra window system transforms".
pedrocr•6h ago
That's what I referred to with "we'll be finally getting that in Wayland now". For many years the Wayland protocol could only communicate integer scale factors to clients. If you asked for 1.5 what the compositors did was ask all the clients to render at 2x at a suitably fake size and then scale that to the final output resolution. That's still mostly the case in what's shipping right now I believe. And even in integer scaling things like events are sent to clients in virtual coordinates instead of just going "here's your NxM buffer, all events are in those physical coordinates, all scaling is just metadata I give you to do whatever you want with". There were practical reasons to do that in the beginning for backwards compatibility but the actual direct scaling is having to be retrofitted now. I'll be really happy when I can just set 1.3 scaling in sway and have that just mean that sway tells Firefox that 1.3 is the scale factor and just gets back the final buffer that doesn't need any transformations. I haven't checked very recently but it wasn't possible not too long ago. If it is now I'll be a happy camper and need to upgrade some software versions.
zokier•6h ago
> That's still mostly the case in what's shipping right now I believe

All major compositors support the fractional scaling extension these days, which allows pixel-perfect rendering afaik, and I believe Qt6 and GTK4 also support it.

https://wayland.app/protocols/fractional-scale-v1#compositor...
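The client-side flow it implies is small: the compositor hands the client a preferred scale as a numerator over a fixed denominator of 120. A minimal sketch (assuming client headers generated by wayland-scanner from the protocol XML, plus wp_viewporter for the final mapping):

  #include <stdio.h>
  #include <wayland-client.h>
  #include "fractional-scale-v1-client-protocol.h" /* generated by wayland-scanner */

  static void preferred_scale(void *data, struct wp_fractional_scale_v1 *obj,
                              uint32_t scale_120)
  {
      /* e.g. 180/120 = 1.5; the compositor only sends multiples of 1/120. */
      double scale = scale_120 / 120.0;
      /* The client then sizes its buffer at round(logical * scale) and calls
         wp_viewport_set_destination(viewport, logical_w, logical_h) so the
         compositor can present it without any resampling. */
      printf("preferred scale: %.3f\n", scale);
      (void)data; (void)obj;
  }

  static const struct wp_fractional_scale_v1_listener scale_listener = {
      .preferred_scale = preferred_scale,
  };

  /* After binding wp_fractional_scale_manager_v1 from the registry:
     obj = wp_fractional_scale_manager_v1_get_fractional_scale(manager, surface);
     wp_fractional_scale_v1_add_listener(obj, &scale_listener, NULL); */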

pedrocr•6h ago
Seems like the support is getting there. I just checked Firefox and it has landed the code but still has it disabled by default. Most users that set 1.5x on their session are probably still getting needless scaling but hopefully that won't last too long.
cycomanic•5h ago
That's great, but why do we use a "scale factor" in the first place? We had a perfectly fitting metric in DPI; why can't I set the desired DPI for every monitor, but instead need to calculate some arbitrary scale factor?

I'm generally a strong wayland proponent and believe it's a big step forward over X in many ways, but some decisions just make me scratch my head.

sho_hn•5h ago
The end-user UIs don't ask you to calculate anything. Typically they have a slider from 100% to, say, 400% and let you set this to something like 145%.

This may take some getting used to if you're familiar with DPI and already know the value you like, but for non-technical users it's more approachable. Not everyone knows DPI or how many dots they want to their inches.

That the 145% is 1.45 under the hood is really an implementation detail.

atq2119•3h ago
Not to mention that only a small fraction of the world uses inches...
cycomanic•2h ago
I don't care about what we call the metric; I argue that a relative metric, where the reference point is device dependent, is simply bad design.

I challenge you: tell a non-technical user to set two monitors (e.g. laptop and external) to display text/windows at the same size. I guarantee it will take them a significant amount of time moving those relative sliders around. If we had an absolute metric it would be trivial. Similarly, people who regularly plug into different monitors would simply set a desired DPI and, everywhere they plugged in, things would look the same, instead of having to open the scale menu every time.
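To make the arithmetic concrete (my own worked example with typical numbers, not anything from the thread): matching physical sizes just means making each monitor's scale proportional to its DPI.

  #include <math.h>
  #include <stdio.h>

  /* Pixels per inch from resolution and diagonal size. */
  static double dpi(double w_px, double h_px, double diag_in)
  {
      return sqrt(w_px * w_px + h_px * h_px) / diag_in;
  }

  int main(void)
  {
      double dpi_4k   = dpi(3840, 2160, 27.0); /* ~163 DPI */
      double dpi_1440 = dpi(2560, 1440, 27.0); /* ~109 DPI */
      /* With an absolute target (say, everything sized as if 96 DPI),
         each monitor's scale factor falls out directly: */
      printf("4K scale:    %.2f\n", dpi_4k / 96.0);   /* ~1.70 */
      printf("1440p scale: %.2f\n", dpi_1440 / 96.0); /* ~1.13 */
      return 0;
  }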

sho_hn•2h ago
I see where you are coming from and it makes sense.

I will also say, though, that in the most common cases where people request mixed scale factor support from us (laptop vs. docked screen, screen vs. TV) there are also other form factor differences, such as viewing distance, that make folks not want to match DPI, and "I want things bigger/smaller there" is difficult to respond to with "calculate what that means to you in terms of DPI".

For the case "I have two 27" monitors side-by-side and only one of them is 4K and I want things to be the same size on them" I feel like the UI offering a "Match scale" action/suggestion and then still offering a single scale slider when it sees that scenario might be a nice approach.

zokier•5h ago
DPI (or PPI) is an absolute measurement. Scale factor is intentionally relative. Different circumstances will want different scale factor : DPI ratios; most software does not care whether a certain UI element is exactly x mm in size, but instead just cares that its UI element scale matches the rest of the system.

Basically scale factor neatly encapsulates things like viewing distance, user eyesight, dexterity, and preference, different input device accuracy, and many others. It is easier to have human say how big/small they want things to be than have gazillion flags for individual attributes and then some complicated heuristics to deduce the scale.

cycomanic•2h ago
I disagree; I don't want a relative metric. You're saying the scale factor neatly encapsulates viewing distance, eyesight, preference, but compared to what? Scale is meaningless if I don't have a reference point. If I have two different-size monitors, you have now created a metric where a scale of 2x means something completely different on each. So to get things to look the same I either have to manually calculate DPI or trial-and-error until it looks right. Same thing if I change monitors: I have to try until I get the desired scale, while if I had DPI I would not have to change a thing.

> It is easier to have human say how big/small they want things to be than have gazillion flags for individual attributes and then some complicated heuristics to deduce the scale.

I don't understand why I need a gazillion flags; I just set the desired DPI (instead of scale). An absolute metric is almost always better than a relative metric, especially if the reference point is device dependent.

MadnessASAP•4h ago
I'm not privy to what discussions happened during the protocol development. However using scale within the protocol seems more practical to me.

Not all displays accurately report their DPI (or even can, such as projectors). Not all users, myself included, know their monitor's DPI. Finally, the scaling algorithm will ultimately use a scale factor, so at the protocol level that might as well be what is passed.

There is of course nothing stopping a display management widget/settings page/application from asking for DPI and then converting it to a scale factor; I just don't know of any that exist.
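The conversion such a settings page would do is tiny; a sketch (assuming the usual 96 DPI baseline for 1x, and rounding to the 1/120 steps the Wayland fractional-scale protocol can carry):

  #include <math.h>

  /* Map a user-requested DPI to a compositor scale factor. */
  static double scale_from_dpi(double dpi)
  {
      double scale = dpi / 96.0;            /* 96 DPI is the 1x baseline */
      return round(scale * 120.0) / 120.0;  /* quantize to protocol granularity */
  }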

Dylan16807•14m ago
> We had a perfectly fitting metric in DPI, why can't I set the desired DPI for every monitor, but instead need to calculate some arbitrary scale factor?

Because certain ratios work a lot better than others, and calculating the exact DPI to get those benefits is a lot harder than estimating the scaling factor you want.

Also the scaling factor calculation is more reliable.

sho_hn•6h ago
In KDE Plasma we've supported the way you like for quite some years, because Qt is a cross-platform toolkit that already supported fractional scaling on e.g. Windows, and we just went ahead and put the mechanisms in place to make use of that on Wayland.

The standardized protocols are more recent (and of course we heavily argued for them).

Regarding the way the protocol works and something having to be retrofitted, I think you are maybe a bit confused about the way the scale factor and buffer scale work on wl_output and wl_surface?

But in any case, yes, I think the happy camper days are coming for you! I also find the macOS approach atrocious, so I appreciate the sentiment.

pedrocr•6h ago
Thanks! By retrofitting I mean having to have a new protocol with this new opt-in method where some apps will be getting integer scales and go through a transform and some apps will be getting a fractional scale and rendering directly to the output resolution. If this had worked "correctly" from the start the compositors wouldn't even need to know anything about scaling. As far as they knew the scaling metadata could have been an opaque value that they passed from the user config to the clients to figure out. I assume we're stuck forever with all compositors having to understand all this instead of just punting the problem completely to clients.

When you say you supported this for quite some years, was there a custom protocol in KWin to allow clients to render directly at the fractionally scaled resolution? ~4 years ago I was frustrated by this when I benchmarked a 2x slowdown going from a RAW file to the same number of pixels on screen when using fractional scaling, and at least in sway there wasn't a way to fix it or much appetite to implement it. It's great to see it is mostly in place now and just needs to be enabled by the whole stack.

sho_hn•5h ago
Oh, ok. Yeah, this I agree with, and I think plenty of people do: having integer-only scaling in the core protocol at the start was definitely a regrettable oversight and is a wart on things.

> When you say you supported this for quite some years was there a custom protocol in KWin to allow clients to render directly to the fractionally scaled resolution?

Qt had a bunch of different mechanisms for telling it to use a fractional scale factor, from setting an env var to doing it inside a "platform plugin" each Qt process loads at runtime (Plasma provides one), etc. We also had a custom-protocol-based mechanism (zwp_scaler_dev iirc) that basically had a set_scale with a 'fixed' instead of an 'int'. Ultimately this was all pretty Qt-specific in practice. To get adoption outside of just our stack a standard was of course needed; what we can claim, though, is that we were always pretty firm that we wanted proper fractional scaling and put in the work.

atq2119•3h ago
Thank you for that. The excellent fractional scaling and multi-monitor support is why I finally switched back to KDE full time (after first switching away during the KDE 3 to 4 mess).
enriquto•5h ago
> The scale factor can be something quirky like your 1.785, and the GUI code will generally make sure that things nevertheless snap to the pixel grid to avoid blurry results

This is horrifying! It implies that, for some scaling factors, the lines of text in your terminal will be of different heights.

Not that the alternative (pretending that characters can be placed at arbitrary sub-pixel positions) is any less horrifying. That would make all the lines in your terminal the same height, alright, but then the same character on different lines would look different.

The bitter truth is that fractional scaling is impossible. You cannot simply scale images without blurring them. Think about an alternating pattern of white and black rows of pixels. If you try to scale it to a non-integer factor the result will be either blurry or aliased.

The good news is that fractional scaling is unnecessary. You can just use fonts of any size you want. Moreover, nowadays pixels are so small that you can simply use large bitmap fonts and they'll look sharp, clean and beautiful.

sho_hn•5h ago
The way it works for your terminal emulator example is that it figures out what makes sense to do for a value of 1.785, e.g. rasterizing text appropriately and making sure that line heights and baselines are at sensible consistent values.
enriquto•4h ago
The problem is that there's no reasonable thing to do when the height of the terminal in pixels is not an integer multiple of the height of the font in pixels. Whatever "it" does will be wrong.

(And when it's an integer multiple, you don't need scaling at all. You just need a font of that exact size.)

sho_hn•4h ago
You're overthinking things a bit and are also a bit confused about how font sizes work and what "scaling" means in a windowing system context. You are thinking of taking a bunch of pixels and resampling them. In the context we're talking about, "scaling" means telling the software what it's expected to output and giving it an opportunity to render accordingly.

The way the terminal handles the (literal) edge case you mention is no different from any other time its window size is not a multiple of the line height: It shows empty rows of pixels at the top or bottom.

Fonts only have an "exact size" if they're bitmap-based (and when you scale bitmap fonts you are indeed in for sampling difficulties). More typical is to have a font storing vectors and rasterizing glyphs to the needed size at runtime.

bscphil•2h ago
Given that the context here is talking about terminals, they probably are literally thinking in terms of bitmap based rendering with integer scaling.
sho_hn•2h ago
Right, but most users of terminal emulators typically don't use bitmap fonts anymore and haven't for quite some time (just adding this for general clarity, I'm sure you know it).
kccqzy•3h ago
> The bitter truth is that fractional scaling is impossible.

That's overly prescriptive in terms of what users want. In my experience users who are used to macOS don't mind slightly blurred text. And users who are traditionalists and perhaps Windows users prefer crisper text at the expense of some height mismatches. It's all very subjective.

0x457•5h ago
Is it actually in Wayland, or is it "the implementation should handle it somehow" like most of Wayland? Because what is probably 90% of the Wayland install base only supports communicating integer scales to clients.
sho_hn•4h ago
It's in Wayland in the same way everything else is, i.e. fractional scaling is now a protocol included in the standard protocol suite.

> Because what is probably 90% of wayland install base only supports communicating integer scales to clients.

As someone shipping a couple of million cars per year running Wayland, the install base is a lot bigger than you think it is :)

0x457•4h ago
Hmmm, sorry, but I don't care about the install base of Wayland in a highly controlled environment (the number of different monitor panels you ship is probably smaller than the number of displays with different DPIs in my living room right now).
sho_hn•4h ago
90% is still nonsense even in desktop Linux, tho.
astrange•6h ago
> But more importantly because having integer scaling be sharp and snapped to pixels and fractional scaling a tradeoff is mostly a software limitation. GUI toolkits can still place all ther UI at pixel boundaries even if you give them a target scaling of 1.785. They do need extra logic to do that and most can't.

The reason Apple started with 2x scaling is because this turned out to not be true. Free-scaling UIs were tried for years before that and never once got to acceptable quality. Not if you want to have image assets or animations involved, or if you can't fix other people's coordinate rounding bugs.

Other platforms have much lower standards for good-looking UIs, as you can tell from eg their much worse text rendering and having all of it designed by random European programmers instead of designers.

zozbot234•5h ago
> Free-scaling UIs were tried for years before that and never once got to acceptable quality.

The web is a free-scaling UI, which scales "responsively" in a seamless way from feature phones with tiny pixelated displays to huge TV-sized ultra high-resolution screens. It's fine.

astrange•5h ago
That's actually a different kind of scaling. The one at issue here is closer to cmd-plus/minus on desktop browsers, or two-finger zooming on phones. It's hard to make that look good unless you only have simple flat UIs like the one on this website.

They did make another attempt at it for apps with Dynamic Type though.

atq2119•3h ago
I'm certain that web style scaling is what the vast majority of desktop users actually want from fractional desktop scaling.

Thinking that two finger zooming style scaling is the goal is probably the result of misguided design-centric thinking instead of user-centric thinking.

cosmic_cheese•6h ago
Even today you run into the occasional foreign UI toolkit app that only renders at 1x and gets scaled up. We’re probably still years out from all desktop apps handling scaling correctly.
zozbot234•7h ago
Isn't OS X graphics supposed to be based on Display Postscript/PDF technology throughout? Why does it have to render at 2x and downsample, instead of simply rendering vector-based primitives at native resolution?
wmf•7h ago
No, I think integer coordinates are pervasive in Carbon and maybe even Cocoa. To do fractional scaling "properly" you need to use floating point coordinates everywhere.
kalleboo•3h ago
Cocoa/Quartz 2D/Core Graphics uses floating-point coordinates everywhere and drawing is resolution-independent (e.g., the exact same drawing commands are used for screen vs print). Apple used to tout OS X drawing was "based on PDF" but I think that only meant it had the same drawing primitives and could be captured in a PDF output context.

QuickDraw in Carbon was included to allow for porting MacOS 9 apps, was always discouraged, and is long gone today (it was never supported in 64-bit).

astrange•6h ago
No, CoreGraphics just happened to have drawing primitives similar to PDF.

Nobody wants to deal with vectors for everything. They're not performant enough (harder to GPU-accelerate) and you couldn't do the skeuomorphic UIs of the time with them. They have gotten more popular since, thanks to flat UIs and other platforms with free scaling.

qarl•5h ago
You're thinking of NeXTSTEP. Before OS X.
kergonath•3h ago
NeXTSTEP was Display PostScript. Mac OS X has used Display PDF since way back in the developer previews.
kalleboo•3h ago
OS X could do it, they actually used to support enabling fractional rendering like this through a developer tool (Quartz Debug)

There were multiple problems making it actually look good, though, ranging from making things line up properly at fractional sizes (e.g. a "1 point line" becomes blurry at 1.25 scale) to the fact that most applications use bitmap images rather than vector graphics for their icons (and this includes the graphic primitives Apple used for the "lickable" buttons throughout the OS).

edit: I actually have an iMac G4 here so I took some screenshots since I couldn't find any online. Here is MacOS X 10.4 natively rendering windows at fractional sizes: https://kalleboo.com/linked/os_x_fractional_scaling/

IIRC later versions of OS X than this actually had vector graphics for buttons/window controls

sho_hn•7h ago
> doing the "draw at 2x scale and then scale down" dance that was popularized by OSX and copied by Linux

Linux does not do that.

> It's strange that Wayland didn't do it this way from the start

It did (initially for integer scale factors, later also for fractional ones, though some Wayland-based environments did it earlier downstream).

maxdamantus•3h ago
> Linux does not do that.

It did (or at least Wayland compositors did).

> It did

It didn't.

I complained about this a few years ago on HN [0], and produced some screenshots [1] demonstrating the scaling artifacts resulting from fractional scaling (1.25).

This was before fractional scaling existed in the Wayland protocol, so I assume that if I try it again today with updated software I won't observe the issue (though I haven't tried yet).

In some of my posts from [0] I explain why it might not matter that much to most people, but essentially, modern font rendering already blurs text [2], so further blurring isn't that noticeable.

[0] https://news.ycombinator.com/item?id=32021261

[1] https://news.ycombinator.com/item?id=32024677

[2] https://news.ycombinator.com/item?id=43418227

sho_hn•2h ago
The "It did" was about the mechanism (Wayland did tell the clients the scale and expected them to render acccordingly). Yes, fractional wasn't in the core protocol at the start, but that wasn't the object of discussion (it was elsewhere, as you can see in the sibling threads that evolved, where I also totally agree this was a huge wart).
ndiddy•7h ago
Wayland has supported X11 style fractional scaling since 2022: https://wayland.app/protocols/fractional-scale-v1 . Both Qt and GTK support fractional scaling on Wayland.
bscphil•2h ago
Rather annoyingly, the compositor support table on this page seems to be showing only the latest version of each compositor (plus or minus a month or two, e.g. it's behind on KWin). I assume support for the protocol predates these versions for the most part? Do you know when the first versions of KDE and Gnome to support the protocol were released? Asking because some folks in this thread have claimed that a large majority of shipped Wayland systems don't support it, and it would be interesting to know if that's not the case (e.g. if Debian stable had support in Qt and GTK applications).
sho_hn•2h ago
We first shipped support for wp-fractional-scale-v1 in Plasma 5.27 in early 2023, support for it in our own software vastly improved with Plasma 6 (and Qt 6) however.
6510•6h ago
If the initial picture is large enough the blur from down-scaling isn't so bad. Say 1.3 pixel per pixel vs 10.1 pixels per pixel.
jdsully•5h ago
Windows tried this for a long time and literally no app was able to make it work properly. I spent years of my life making Excel have a sane rendering model that worked on device-independent pixels and all that, but it's just really hard for people not to think in raw pixels.
kllrnohj•3h ago
And yet every Android app does it just fine :)

The real answer is just it's hard to bolt this on later, the UI toolkit needs to support it from the start

pwnna•2h ago
So I don't understand where the meme of blurry super-resolution-based downsampling comes from. If that were the case, what would super-resolution antialiasing [1] be? An image rendered at a higher resolution and then downsampled is usually sharper than an image rendered directly at the lower resolution, because it preserves the high-frequency components of the signal better. There are multiple other downsampling-based antialiasing techniques, which all boost the signal-to-noise ratio. Does this not work for UI as well? Most of it is vector graphics. Bitmap icons will need to be updated, but the rest of the UI (text) should be sharp.

I know people mention 1-pixel lines (perfectly horizontal or vertical). Then they multiply by 1.25 or whatever and go: oh look, 0.25 pixel is a lie, therefore fractional scaling is fake (sway's documentation mentions this to this day). This doesn't seem to hold in practice outside this very niche mental exercise. At sufficiently high resolutions, which is the case for the displays we are talking about, do you even want 1-pixel lines? They would be barely visible. I have this problem now on Linux. Further, if the line is draggable, the click zone becomes too small as well. You probably want something of some physical dimension, which will take multiple pixels anyway. At that point you probably want some antialiasing that you won't be able to see anyway. Further, single-pixel lines don't have to be exactly the color the program prescribed anyway; most of the perfectly horizontal and vertical lines on my screen are grey-ish, and some AA artifacts would change their color slightly without material impact. If this is the case, then super-resolution should work pretty well.

Then really what you want is something as follows:

1. Super-resolution scaling for most "desktop" applications.

2. Give the native resolution to some full screen applications (games, video playback), and possibly give the native resolution of a rectangle on screen to applications like video playback. This avoids rendering at a higher resolution then downsampling which can introduce information loss for these applications.

3. Now do this on a per-application basis, instead of a per-session basis. No Linux DE implements this. KDE implements per-session, which is not flexible enough: you have to do it for each application on launch.

[1]: https://en.wikipedia.org/wiki/Supersampling

resonious•1h ago
As someone who just uses Linux but doesn't write compositor code or really know how they work: Wayland supports fractional scaling way better than X11. At least I was unable to get X11 to do 1.5x scale at all. The advice was always "just increase font size in every app you use".

Then when you're on Wayland using fractional scaling, XWayland apps look very blurry all the time while Wayland-native apps look great.

wmf•8h ago
Drawing a circle is kind of cheating. The hard part of scaling is drawing UI elements like raster icons or 1px hairlines to look non-blurry.
phkahler•7h ago
>> The hard part of scaling is drawing UI elements like raster icons or 1px hairlines to look non-blurry.

And doing so actually using X not OpenGL.

kllrnohj•3h ago
Yeah, this is kinda the big elephant in the room here. They didn't prove what they set out to prove. Yes, obviously OpenGL does scaling just fine; the entire point of Wayland is to get the compositor to just being a compositor. They didn't do any scaling with X. They didn't do anything at all with X other than ask it some basic display information.
slackfan•2h ago
X shouldn't be displaying anything that isn't a right angle anyway.

All circular UI elements are haram.

kelnos•10m ago
Toolkits don't use X to do much (if any) drawing these days. They all use something like cairo or skia or -- yes -- OpenGL to render offscreen, and then upload to X for display (or in the case of OpenGL, they can also do direct rendering).
dark-star•7h ago
Yeah, exactly. Nobody claimed that it is impossible to determine the physical geometry of your display (but that might be tricky for remote X sessions; I don't know if it would work there too?)
kvemkon•7h ago
> tricky for remote X sessions, I don't know if it would work there too

The author did exactly this:

> Even better, I didn’t mention that I wasn’t actually running this program on my laptop. It was running on my router in another room, but everything worked as if

okanat•7h ago
And also doing it for multiple monitors with differing scales. Nobody claims X11 doesn't support different DPIs. The problems occur when you have monitors with differing pixel densities.

At the moment only Windows handles that use case perfectly, not even macOS. Wayland comes second, if the optional fractional scaling is implemented by the toolkit and the compositor. I am skeptical of the Linux desktop ecosystem doing the correct thing there, though. Both server-side decorations and fractional scaling being optional (i.e. requiring runtime opt-in from the compositor and the toolkit) are missteps for a desktop protocol. Both missing features are directly attributable to GNOME and its chokehold on GTK and other core libraries.

akdor1154•6h ago
This is exactly right.

There is no mechanism for the user to specify a per-screen text DPI in X11.

(Or maybe there secretly is, and I should wait for the author to show us?)

okanat•5h ago
Natively in X11? No, not even with XRandR. But you can obtain the display size and then draw things differently using OpenGL, but now you're reinventing the display protocol in your drawing engine (which is what GLX is, after all, but I digress). You need to onboard every toolkit to your protocol.
somat•2h ago
X11 has had this since day one. However, the trade-offs to actually employing it are... unfortunate. It leans real hard on the application to actually cross screen boundaries, and very few applications were willing to put the work in. So xrandr was invented, which does more of what people want with multiple screens by treating them as parts of one large virtual screen, but you lose the per-screen DPI.

http://wok.oblomov.eu/tecnologia/mixed-dpi-x11/

axus•6h ago
Speaking of X11 and Windows, any recommended Windows X servers to add to this StackOverflow post? https://stackoverflow.com/questions/61110603/how-to-set-up-w...

I hadn't heard of WSLg; vcxsrv was the best I could do for free.

okanat•5h ago
With WSLg, Windows runs a native Wayland server under Windows and it will use Xwayland to display X11 apps. You should be able to use any GUI app without any extra setup. You should double check the environment variables though. Sometimes .bashrc etc. or WSL's systemd support interferes with them.
Avamander•4h ago
Where does Windows handle it? It's a hodgepodge of different frameworks that often look absolutely abysmal at any scale besides 100%.
okanat•3h ago
Every UI framework that runs on Windows has to communicate using the Win32 API at the lowest level. Here is the guide: https://learn.microsoft.com/en-us/windows/win32/hidpi/high-d...

Every GUI application on Windows runs an infinite event loop. In that loop it handles messages like [WM_INPUT](https://learn.microsoft.com/en-us/windows/win32/inputdev/wm-...). With Windows 8, Microsoft added a new message type: [WM_DPICHANGED](https://learn.microsoft.com/en-us/windows/win32/hidpi/wm-dpi...). To avoid breaking existing applications with an unknown message, Windows requires applications to opt in. The application reports its DPI awareness using the function [SetProcessDpiAwareness](https://learn.microsoft.com/en-us/windows/win32/api/shellsca...). DPI awareness can also be set by attaching an XML manifest file to the .exe file.

With the message, Windows not only provides the exact DPI at which to render the window contents for the display, but also the size of the window rectangle for perfect pixel alignment and to prevent weird behavior while switching displays. After receiving the DPI, it is up to the application to draw things at that DPI however it desires. The OS has no direct way to dictate how it is drawn, but it does provide lots of helper libraries and functions for font rendering and for classic Windows UI elements.

If the application is using a Microsoft-implemented .NET UX library (WinForms, WPF or UWP), Microsoft has already implemented the redrawing functions. You only need to include the manifest file in the .exe resources.

After all of this implementation, why does one still get blurry apps? Because those applications don't opt in to handling WM_DPICHANGED. So the only option left for Windows is to let the application draw itself at the default DPI and then stretch its image. Windows will map the input messages to the default-DPI pixel positions.

Microsoft does provide a halfway point between a fully DPI-aware app and an unaware app, if the app uses the old Windows resource files to store the UI in the .exe resources. Since those apps are guaranteed to use standard Windows UI elements, Windows can intercept the drawing functions and at least draw the standard controls at the correct DPI. That's called "system aware". Since it intercepts the application's way of drawing, it may result in weird UI bugs though.
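For readers who want to see what that opt-in plus message handling looks like in practice, a minimal sketch (my own illustration based on the documentation linked above, not production code):

  #include <windows.h>
  #include <shellscalingapi.h>  /* SetProcessDpiAwareness; link with Shcore.lib */

  LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
  {
      switch (msg) {
      case WM_DPICHANGED: {
          UINT dpi = LOWORD(wParam);        /* new DPI (X and Y are identical) */
          RECT *suggested = (RECT *)lParam; /* window rect suggested by Windows */
          /* Move/resize to the suggested rect so the window keeps the same
             physical size and stays pixel-aligned on the new display. */
          SetWindowPos(hwnd, NULL,
                       suggested->left, suggested->top,
                       suggested->right - suggested->left,
                       suggested->bottom - suggested->top,
                       SWP_NOZORDER | SWP_NOACTIVATE);
          /* ...then re-create DPI-dependent resources (fonts, icons) for `dpi`. */
          (void)dpi;
          return 0;
      }
      }
      return DefWindowProc(hwnd, msg, wParam, lParam);
  }

  int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPSTR cmd, int show)
  {
      /* Opt in before creating any windows; without this, Windows falls back
         to bitmap-stretching the whole window on DPI changes. */
      SetProcessDpiAwareness(PROCESS_PER_MONITOR_DPI_AWARE);
      /* ...register the window class, create the window, run the message loop. */
      return 0;
  }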

zozbot234•7h ago
That depends on what kind of filtering is used when upscaling those icons. If you use modern resampling filters, you are more likely to get a subtle "oil painting" or "watercolor"-like effect with some very minor ringing effects next to sharp transitions (the effect of correctly-applied antialiasing, with a tight limit on spatial frequencies) as opposed to any visible blur. These filters may be somewhat compute-intensive when used for upscaling the entire screen - but if you only upscale small raster icons or other raster images, and use native-resolution rendering for everything else, that effect is negligible.
DonHopkins•2h ago
Ha ha, funny you should mention circles! It's just so much fun filling and stroking arcs and circles correctly with X11. From the horse's mouth:

https://archive.org/details/xlibprogrammingm01adri/page/144/...

Xlib Programming Manual and Xlib Reference Manual, Section 6.1.4, pp 144:

>To be more precise, the filling and drawing versions of the rectangle routines don't draw even the same outline if given the same arguments.

>The routine that fills a rectangle draws an outline one pixel shorter in width and height than the routine that just draws the outline, as shown in Figure 6-2. It is easy to adjust the arguments for the rectangle calls so that one draws the outline and another fills a completely different set of interior pixels. Simply add 1 to x and y and subtract 1 from width and height. In the case of arcs, however, this is a much more difficult proposition (probably impossible in a portable fashion).

https://news.ycombinator.com/item?id=11484148

DonHopkins on April 12, 2016 | parent | context | favorite | on: NeWS – Network Extensible Window System

>There's no way X can do anti-aliasing, without a ground-up redesign. The rendering rules are very strictly defined in terms of which pixels get touched and how.

>There is a deep-down irreconcilable philosophical and mathematical difference between X11's discrete half-open pixel-oriented rendering model, and PostScript's continuous stencil/paint Porter/Duff imaging model.

>X11 graphics round differently when filling and stroking, define strokes in terms of square pixels instead of fills with arbitrary coordinate transformations, and is all about "half open" pixels with gravity to the right and down, not the pixel coverage of geometric region, which is how anti-aliasing is defined.

>X11 is rasterops on wheels. It turned out that not many application developers enjoyed thinking about pixels and coordinates the X11 way, displays don't always have square pixels, the hardware (cough Microvax framebuffer) that supports rasterops efficiently is long obsolete, rendering was precisely defined in a way that didn't allow any wiggle room for hardware optimizations, and developers would rather use higher level stencil/paint and scalable graphics, now that computers are fast enough to support it.

>I tried describing the problem in the Unix-Haters X-Windows Disaster chapter [1]:

>A task as simple as filing and stroking shapes is quite complicated because of X's bizarre pixel-oriented imaging rules. When you fill a 10x10 square with XFillRectangle, it fills the 100 pixels you expect. But you get extra "bonus pixels" when you pass the same arguments to XDrawRectangle, because it actually draws an 11x11 square, hanging out one pixel below and to the right!!! If you find this hard to believe, look it up in the X manual yourself: Volume 1, Section 6.1.4. The manual patronizingly explains how easy it is to add 1 to the x and y position of the filled rectangle, while subtracting 1 from the width and height to compensate, so it fits neatly inside the outline. Then it points out that "in the case of arcs, however, this is a much more difficult proposition (probably impossible in a portable fashion)." This means that portably filling and stroking an arbitrarily scaled arc without overlapping or leaving gaps is an intractable problem when using the X Window System. Think about that. You can't even draw a proper rectangle with a thick outline, since the line width is specified in unscaled pixel units, so if your display has rectangular pixels, the vertical and horizontal lines will have different thicknesses even though you scaled the rectangle corner coordinates to compensate for the aspect ratio.

[1] The X-Windows Disaster: http://www.art.net/~hopkins/Don/unix-haters/x-windows/disast...
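In code form, the Section 6.1.4 quirk looks like this (a sketch following the manual's advice; assumes an open Display, a Drawable, and a GC):

  #include <X11/Xlib.h>

  static void outlined_filled_rect(Display *dpy, Drawable d, GC gc)
  {
      /* Fills the 100x100 pixel area x=10..109, y=10..109. */
      XFillRectangle(dpy, d, gc, 10, 10, 100, 100);

      /* Draws a 101x101 outline, x=10..110, y=10..110: one "bonus pixel"
         below and to the right of the filled area above. */
      XDrawRectangle(dpy, d, gc, 10, 10, 100, 100);

      /* The manual's suggested fix, so the fill sits exactly inside the
         outline: add 1 to x and y, subtract 1 from width and height. */
      XFillRectangle(dpy, d, gc, 11, 11, 99, 99);
  }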

wmf•4m ago
I think that stuff was all fixed long ago by Cairo/Skia on XRender.
creatonez•7h ago
> Perhaps not the most exciting task, but I figure it’s isomorphic to any other scaling challenge

And doing this for everything in the entire ecosystem of ancient GUI libraries? And dealing with the litany of different ways folks have done icons, text, and even just drawing lines onto the screen? That's where you run into a lot of trouble.

q3k•7h ago
Right, it's much easier to also re-engineer all of X11 into Wayland at the same time :).
kllrnohj•3h ago
The whole point of Wayland is to not include all of X11. There's a lot of X11 that's just obsolete cruft, like the entire drawing model.
lelandbatey•7h ago
I admire your tenacity. I think folks say "X11 doesn't support DPI scaling" when they should say "most programs written against X11 using official Xlib functionality don't understand scaling".

In the article, the author uses OpenGL to make sure that they're interacting with the screen at a "lower level" than plenty of apps that were written against X. But that's the rub: I think the author neatly sidestepped the issue by mostly using stuff that's not in "vanilla" X11. In fact, the "standard" API of X via Xlib seems to only expose functions for working in raw pixels and raw pixel coordinates without any kind of scaling awareness. See XDrawLine as an example: https://www.x.org/releases/current/doc/man/man3/XDrawLine.3....

It seems to me that the RandR extension through xrandr is the thing providing the scaling info, not X11 itself. You can see that because the author calls `XRRGetScreenResourcesCurrent()`, a function that's not part of vanilla X11 (see the list of X library functions here as an example: https://www.x.org/releases/current/doc/man/man3/ ).

Now, xrandr has been a thing since the early 2000s, hence xrandr is ubiquitous, but due to its nature as an extension and all the existing code sitting around that's totally scale-unaware, I can see why folks believe X11 is scale-unaware.
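To illustrate the point, here's roughly what that XRandR-based approach looks like (a sketch along the lines of what the article does, not its actual code):

  /* Compile with: cc xrr-dpi.c -lX11 -lXrandr */
  #include <stdio.h>
  #include <X11/Xlib.h>
  #include <X11/extensions/Xrandr.h>

  int main(void)
  {
      Display *dpy = XOpenDisplay(NULL);
      if (!dpy) return 1;
      XRRScreenResources *res =
          XRRGetScreenResourcesCurrent(dpy, DefaultRootWindow(dpy));

      for (int i = 0; i < res->noutput; i++) {
          XRROutputInfo *out = XRRGetOutputInfo(dpy, res, res->outputs[i]);
          if (out->connection == RR_Connected && out->crtc && out->mm_width) {
              XRRCrtcInfo *crtc = XRRGetCrtcInfo(dpy, res, out->crtc);
              /* EDID-reported physical size (mm) vs. pixels on the CRTC. */
              double dpi = crtc->width * 25.4 / out->mm_width;
              printf("%s: %ux%u px, %lux%lu mm, ~%.0f DPI, scale %.2f\n",
                     out->name, crtc->width, crtc->height,
                     out->mm_width, out->mm_height, dpi, dpi / 96.0);
              XRRFreeCrtcInfo(crtc);
          }
          XRRFreeOutputInfo(out);
      }
      XRRFreeScreenResources(res);
      XCloseDisplay(dpy);
      return 0;
  }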

arp242•7h ago
So on my laptop I've been doing:

  xrandr --output eDP --scale 0.8x0.8
For years and years, and I never really noticed any problems with it. Guess I don't run any "bad" scale-unaware programs? Or maybe I just never noticed(?)

At least from my perspective, for all practical purposes it seems to "just work".

nixosbestos•7h ago
Good luck if you plug in an external monitor. (Not to speak of refresh rates)
rwmj•6h ago
At the office I plug in a monitor over USB-C and that just works on my X11 laptop. If something in a browser on the monitor was too large or too small I'd just zoom in/out until it was fine.
arp242•5h ago
I don't know about that; I use just one screen (laptop or HDMI, not both at the same time which is presumably what you're referring to) and it works for that. That's not really what the previous person was talking about either.
0x457•4h ago
If you have two monitors with very different DPIs (for example, I almost poked my eyes out when I tried a 5K and a 1440p together), you only have two choices: render for 5K and scale down to 1440p, or render at 1440p and upscale to 5K. Well, you can also pick a middle ground that makes both monitors look blurry. Either way, at least one monitor will be _very_ blurry.
dlcarrier•3h ago

    --output eDP
This parameter specifies which display to scale, so only the built-in display will be scaled. Running xrandr without any parameters returns all available outputs, as well as the resolutions the currently connected displays support.
kunzhi•7h ago
Interesting article, I'll admit when I first saw the title I was thinking of a different kind of "scaling" - namely the client/server decoupling in X11.

I still think X11 forwarding over SSH is a super cool and unsung/undersung feature. I know there are plenty of good reasons we don't really "do it these days" but I have had some good experiences where running the UI of a server app locally was useful. (Okay, it was more fun than useful, but it was useful.)

xioxox•7h ago
It's certainly very useful. I do half my work using X11 over ssh and it works reasonably well over a LAN (at least using emacs, plotting, etc).
inetknght•3h ago
"reasonably well" as in... yeah it works. But it's extremely laggy (for comparison, I know people who forwarded DirectX calls over 10Mbit ethernet and could get ~15 frames/sec playing Unreal Tournament in the early 00's), and any network blip is liable to cause a window that you can neither interact with nor forcefully close.

It felt like a prototype feature that never became production-ready for that reason alone. Then there's all the security concerns that solidify that.

But yes, it does work reasonably well, and it is actually really cool. I just wish it were... better.

DonHopkins•2h ago
I worked on the NeWS drivers for Emacs (both "Evil Software Hoarder" Gosling UniPress Emacs 2.20 and later "Free" Gnu Emacs 18), which were extremely efficient and smoothly interactive over low baud rate modems (which we called "thin wire" as opposed to i.e. the "thick wire" coaxial 10BASE5 Ethernet of the time), because instead of using the extraordinarily inefficient, chatty, pong-pongy X-Windows protocol, Emacs could simply download PostScript code to the window server that defined a highly optimized application specific client/server protocol and intelligent front-end (now termed "AJAX"), which performed as much real time interaction in the window system as possible, without any network activity, like popping up and tracking pie menus, and providing real time feedback and autoscroll when selecting and highlighting text.

For example, both versions of Emacs would download the lengths of each line on the screen when you started a selection, so you could drag and select the text and animate the selection overlay without any network traffic at all, without sending mouse move events over the network, only sending messages when you autoscrolled or released the button.

http://www.bitsavers.org/pdf/sun/NeWS/800-5543-10_The_NeWS_T... document page 2, pdf page 36:

>Thin wire

>TNT programs perform well over low bandwidth client-server connections such as telephone lines or overloaded networks because the OPEN LOOK components live in the window server and interact with the user without involving the client program at all.

>Application programmers can take advantage of the programmable server in this way as well. For example, you can download user-interaction code that animates some operation.

UniPress Emacs NeWS Driver:

https://github.com/SimHacker/NeMACS/blob/b5e34228045d544fcb7...

Selection support with local feedback:

https://github.com/SimHacker/NeMACS/blob/b5e34228045d544fcb7...

Gnu Emacs 18 NeWS Driver (search for LocalSelectionStart):

https://donhopkins.com/home/code/emacs18/src/tnt.ps

https://news.ycombinator.com/item?id=26113192

DonHopkins on Feb 12, 2021 | parent | context | favorite | on: Interview with Bill Joy (1984)

>Bill was probably referring to what RMS calls "Evil Software Hoarder Emacs" aka "UniPress Emacs", which was the commercially supported version of James Gosling's Unix Emacs (aka Gosling Emacs / Gosmacs / UniPress Emacs / Unimacs) sold by UniPress Software, and it actually cost a thousand or so for a source license (but I don't remember how much a binary license was). Sun had the source installed on their file servers while Gosling was working there, which was probably how Bill Joy had access to it, although it was likely just a free courtesy license, so Gosling didn't have to pay to license his own code back from UniPress to use at Sun. https://en.wikipedia.org/wiki/Gosling_Emacs

>I worked at UniPress on the Emacs display driver for the NeWS window system (the PostScript based window system that James Gosling also wrote), with Mike "Emacs Hacker Boss" Gallaher, who was in charge of Emacs development at UniPress. One day during the 80's, Mike and I were wandering around an East coast science fiction convention and ran into RMS, who's a regular fixture at such events.

>Mike said: "Hello, Richard. I heard a rumor that your house burned down. That's terrible! Is it true?"

>RMS replied right back: "Yes, it did. But where you work, you probably heard about it in advance."

>Everybody laughed. It was a joke! Nobody's feelings were hurt. He's a funny guy, quick on his feet!

In the late 80's, if you had a fast LAN and not a lot of memory and disk (like a 4 meg "dickless" Sun 3/50), it actually was more efficient to run X11 Emacs and even the X11 window manager itself over the LAN on another workstation than on your own, because then you didn't suffer from frequent context switches and paging on every keystroke, mouse movement, and click.

The X11 server and Emacs and the WM didn't need to context switch to simply send messages over the network and paint the screen if you ran Emacs and the WM remotely, so Emacs and the WM weren't constantly fighting with the X11 server for memory and CPU. Context switches were really expensive on a 68k workstation, and the way X11 is designed, especially with its outboard window manager, the context switching from ping-ponging messages back and forth between X11 and the WM, and between X11 and Emacs, on every keystroke, mouse movement, click, or window event KILLED performance and caused huge amounts of virtual memory thrashing and costly context switching.

Of course NeWS eliminated all that nonsense gatling gun network ping-ponging and context switching, which was the whole point of its design.

That's the same reason using client-side Google Maps via AJAX of 20 years ago was so much better than the server-side Xerox PARC Map Viewer via http of 32 years ago.

https://en.wikipedia.org/wiki/Xerox_PARC_Map_Viewer

Outboard X11 ICCCM window managers are the worst possible most inefficient way you could ever possibly design a window manager, and that's not even touching on their extreme complexity and interoperability problems. It's the one program you NEED to be running in the same context as the window system to synchronously and seamlessly handle events without dropping them on the floor and deadlocking (google "X11 server grab" if you don't get what this means), but instead X11 brutally slices the server and window manager apart like King Solomon following through with his child-sharing strategy.

https://tronche.com/gui/x/xlib/window-and-session-manager/XG...

NeWS, by contrast, not only runs the window manager efficiently in the server without any context switching or network overhead, but also lets you easily plug in your own customized window frames (with tabs and pie menus), implement fancy features like rooms and virtual scrolling desktops, and all kinds of cool stuff! At Sun we were even managing X11 windows with a NeWS ICCCM window manager written in PostScript, wrapping tabbed windows with pie menus around your X-Windows!

https://donhopkins.com/home/archive/NeWS/owm.ps.txt

https://donhopkins.com/home/archive/NeWS/win/xwm.ps

https://www.donhopkins.com/home/catalog/unix-haters/x-window...

kragen•6h ago
I think it was just yesterday that people on HN were saying GLX doesn't work over the network?
rwmj•6h ago

  $ ssh <remote> glxgears
runs fine!
jeffbee•4h ago
Are there people who believe this? What do they think Indirect GLX is? XQuartz as the server and some Linux box as the client has always worked perfectly for me, GLX included.
rwmj•6h ago
It's like the "oh no, X11 suffers from tearing video" problem that they pull out all the time. (A) I have no idea what "video tear" is and (B) I play video all the time on my crappy laptop running X11 and it seems fine for me. But can I ssh to my remote server and run emacs or another program completely transparently yet with Wayland? Nope. I do that with X11 continuously.
toast0•6h ago
I've seen video tearing on X. Usually it doesn't happen, and it's often hard to notice anyway; when it did happen and was annoying, I was just missing some setting or other. No big deal.
ndiddy•4h ago
> But can I ssh to my remote server and run emacs or another program completely transparently yet with Wayland? Nope.

Yes, you can! Waypipe came out 6 years ago. Its express purpose is to make a Wayland equivalent to ssh -X. https://gitlab.freedesktop.org/mstoeckl/waypipe/
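
Rough usage, assuming waypipe is installed on both ends (weston-terminal is just a stand-in for any Wayland app):

  $ waypipe ssh user@server weston-terminal   # remote Wayland app, displayed locally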

nullc•3h ago
If someone shows it to you, you'll recognize it. It's when one frame shows part of the prior frame and part of the next. It's most visible in moderate speed horizontal pans as an interruption in vertical lines in the picture.

It's nice to not have tearing. But IMO the functionality loss vs X11 isn't worth it for anything but a dedicated media playback/editing device.

kiwijamo•28m ago
I used to see it all the time on X11. I'd see it on YouTube/Firefox. I'd see it on VLC. I'd see it on MPV. With any video player, playing any fast-paced video, you'd see X11 struggle so badly to keep up with drawing full frames that it'd just give up, draw half of one frame and half of another, and call it a day. The Intel driver luckily had an xorg.conf setting I could add to make this less of an issue -- I guess it turned on some internal Intel driver logic to skip frames, or something, when it wasn't able to draw the entire video frame in time for display. However, as soon as Debian made Wayland the default, this issue 100% disappeared and I no longer needed to edit a conf file to make my display work correctly. This is hands-down the singular reason I love Wayland. It just works, without any faffing around, as Windows, MacOS, etc. have done since the mid 1990's. Wayland has achieved more in 5 years than X11 has in the last 25.
jekwoooooe•6h ago
It’s astounding to me that on Linux, in 2025, I can’t just output a custom resolution. You are probably typing a response right now with some xrandr nonsense, and I PROMISE you, it won’t do it. I can’t even scale my screen within a normal resolution to make it fit within a boundary. But I can do this on Windows with an Nvidia GPU. Crazy.
lmm•3h ago
> You are probably typing a response right now with some xrandr nonsense and I PROMISE you, it won’t do it.

Skill issue. You probably held your keyboard wrong or something. Simple xrandr commands work fine like they have for decades. (Of course if you've moved to Wayland then who knows).
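
For the record, the usual recipe for a truly custom mode is only a few commands (HDMI-1 and the resolution are placeholders; whether your GPU and monitor actually accept the mode is another matter):

  $ cvt 2560 1080 60                                        # print a modeline for the target mode
  $ xrandr --newmode "2560x1080_60.00" <numbers from cvt>   # register it with the X server
  $ xrandr --addmode HDMI-1 "2560x1080_60.00"               # attach it to an output
  $ xrandr --output HDMI-1 --mode "2560x1080_60.00"         # switch to it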

dlcarrier•3h ago
You don't have to use xrandr to create a custom framebuffer with scaling and/or centering, although it is capable of doing so. You can also use Gamescope (https://wiki.archlinux.org/title/Gamescope), which works on both X11 and Wayland, and with any GPU.

Traditionally it's used to launch a full-screen application, usually a game, but you can launch your window manager through it, if you want your desktop session to use a custom resolution with custom scaling and/or letterboxing.
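
Roughly like this (all the numbers and the program name are illustrative):

  $ gamescope -w 1920 -h 1080 -W 3840 -H 2160 -f -- mygame  # render at 1920x1080, present scaled to a 3840x2160 fullscreen output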

jekwoooooe•1h ago
Thanks I will look into this
compiler-devel•6h ago
Brilliant. This is another piece of evidence on the pile of why we got Wayland: it's because people who understood X11 mostly retired and everyone else couldn't be bothered to learn X11 because it's "yucky C code" or something. And it bothers me that we lose remote rendering with Wayland (unless one fights with waypipe) that was just built-in to X11. Yes, it was slow, but actually if you're running a VM on your local system and using SSH to connect to it, then it works great. Sigh. I'm an old person yelling at clouds.
sho_hn•6h ago
This is nonsensical myth-making. Despite the clickbait title, the APIs called in those code samples are very basic and not some forgotten wizardry.
compiler-devel•6h ago
What part is nonsensical? Because Wayland is basically a fulfillment of jwz's CADT.
sho_hn•6h ago
The part where we got Wayland because we lost a magic caste of rockstar engineers who could call XRRGetOutputInfo/XRRGetCrtcInfo.
nullc•3h ago
> Yes, it was slow,

Not particularly, if you are on a low latency network. Modern UI toolkits make applications way less responsive than classical X11 applications running across gigabit ethernet.

And even on a fast network the wayland alternative of 'use RDP' is almost unusable.

kllrnohj•37m ago
The approach used in this blog post requires RDP. It's not drawing using X, so there's no vector network transparency.
jchw•6h ago
Sigh. And now that it's been long enough, everyone will conveniently forget all of the reasons why this wound up being insufficient, and think that all of the desktop environment and toolkit developers are simply stupid. (Importantly, applications actually did do this by default at one point. I remember a wonky-looking nvidia-xsettings because of this.)

The thing X11 really is missing (at least most importantly) is DPI virtualization. UI scaling isn't a feature most display servers implement because most display servers don't implement the actual UI bits. The lack of DPI virtualization is a problem though, because it leaves windows on their own to figure out how to logically scale input and output coordinates. Worse, they have to do it per monitor, and can't do anything about the fact that part of the window will look wrong if it overlaps two displays with different scaling. If anything doesn't do this or does it slightly differently, it will look wrong, and the user has little recourse beyond searching for environment variables or X properties that might make it work.
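
(These are the kind of knobs I mean -- the names are real, but note that none of them is per-monitor:)

  $ export GDK_SCALE=2                   # GTK: integer UI scale
  $ export QT_SCALE_FACTOR=1.5           # Qt: fractional UI scale
  $ echo "Xft.dpi: 144" | xrdb -merge    # Xft DPI, honored by many toolkits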

Explaining all of that is harder than saying that X11 has poor display scaling support. Saying it "doesn't support UI/display scaling" is kind of a misnomer though; that's not exactly the problem.

zozbot234•6h ago
> can't do anything about the fact that part of the window will look wrong if it overlaps two displays with different scaling

It's silly that people keep complaining about this. It's a very minor effect, and one that can be solved in principle only by moving to pure vector rendering for everything. Generally speaking, a window will only ever span a single screen. It's convenient to be able to drag a window to a separate monitor, but having that kind of overlap as a permanent feature of one's workflow is just crazy.

> The thing X11 really is missing (at least most importantly) is DPI virtualization.

Shouldn't that kind of DPI virtualization be a concern for toolkits rather than the X server or protocol? As long as X is getting accurate DPI information from the hardware and reporting that to clients, what else is needed?

jchw•5h ago
> It's silly that people keep complaining about this. It's a very minor effect, and one that can be solved in principle only by moving to pure vector rendering for everything.

If you have DPI virtualization, a very sufficient solution already exists: pick a reasonable scale factor for the underlying buffer and use it, then resample for any outputs that don't match. This is what happens in most Wayland compositors. Exactly what you pick isn't too important. You could pick whichever output overlaps the most with the window, or the output that has the highest scale factor, or some other criteria. It will not result in perfect pixels everywhere, but it is perfectly sufficient to clean up the visual artifacts.

Another solution would be to simply only present the surface on whatever output it primarily overlaps with. MacOS does this and it's seemingly sufficient. Unfortunately, as far as I understand, this isn't really trivial to do in X11 for the same reasons why DPI virtualization isn't trivial: whether you render it or not, the window is still in that region and will still receive input there.

> Generally speaking, a window will only ever span a single screen. It's convenient to be able to drag a window to a separate monitor, but having that kind of overlap as a permanent feature of one's workflow is just crazy.

The issue with the overlap isn't that people routinely need this; if they did, macOS or Windows would also need a more complete solution. In reality though, it's just a very janky visual glitch that isn't really too consequential for your actual workflow. Still, it really can make moving windows across outputs super janky, especially since in practice different applications do sometimes choose different behaviors. (e.g. will your toolkit choose to resize the window so it has the same logical size? will this impact the window dragging operation?)

So really, the main benefit of solving this particular edge case is just to make the UX of window management better.

While UX and visual jank concerns are below concerns about functionality, I still think they have non-zero (and sometimes non-low) importance. Laptop users expect to be able to dock and manage windows effectively regardless of whether the monitors they are using have the same ideal scale factor as the laptop's internal panel; the behavior should be clean and effective and legacy apps should ideally at least appear correct even if blurry. Being able to do DPI virtualization solves the whole set of problems very cleanly. MacOS is doing this right, Windows is finally doing this right, Wayland is doing this right, X11 still can't yet. (It's not physically impossible, but it would require quite a lot of work since it would require modifying everything that handles coordinate spaces I believe.)

> Shouldn't that kind of DPI virtualization be a concern for toolkits rather than the X server or protocol? As long as X is getting accurate DPI information from the hardware and reporting that to clients, what else is needed?

Accurate DPI information is insufficient as users may want to scale differently anyways, either due to preference, higher viewing distance, or disability. So that already isn't enough.

That said, the other issue is that there already exist applications that don't do perfect per-monitor scaling, and there doesn't exist a single standard way to have per-monitor scaling preferences propagated in X11. It's not even necessarily a solved problem among the latest versions of all of the toolkits, since it at minimum requires support for desktop environment settings daemons, etc.

BearOso•4h ago
I think having any kind of "scaling" preferences focuses too much on the technical aspect. It could be narrowed down to one setting like "zoom level" or just "size." This would mean that all UI elements change size exactly proportionately to one another. Ideally, rendering should happen at the exact resolution of the display, and scaling, as in resizing a bitmap using bilinear interpolation or whatever, doesn't need to be part of the pipeline except for outdated legacy programs.

In the past, the problem with UI toolkits doing proportional sizing was because they used bitmaps for UI elements. Since newer versions of Qt and Gtk 4 render programmatically, they can do it the right way. Windows mostly does this, too, even with win32 as long as you're using the newer themes. MacOS is the only one that has assets prerendered at integer factors everywhere and needs to perform framebuffer scaling to change sizes. But Apple doesn't care because they don't want you using third-party monitors anyway.

Edit: I'm not sure about Apple's new theme. Maybe this is their transition point away from fixed asset sizes.

zozbot234•3h ago
> Windows mostly does this, too, even with win32 as long as you're using the newer themes.

Win32 controls have always been DPI independent, as far back as Windows 95. There is DPI choice UX as part of the "advanced" display settings.

jchw•2h ago
Using vector pipelines isn't new, of course: Windows has been doing DPI-independent rendering since almost the beginning with GDI. The actual issue with GDI's scaling is all about text: for something to be "scalable" it has to maintain its proportions when the scale factor changes, but this was not the case for text in Win32/GDI, due to pixel grid fitting. Because of this, it was common in the Windows XP era to see ill-sized text when changing the DPI to anything other than 96, resulting in things being cut off and generally broken. Also, although the rendering itself was DPI-independent and scalable, that doesn't mean that applications would properly handle scalable rendering themselves, when they do things like deal with pixels directly or what have you. If you did this again today, you could almost certainly account for this and make an API much harder to misuse. HTML applications really have to try to not be resolution-independent, for example.

In practice Windows and macOS both do bitmap scaling when necessary. macOS scales the whole frame buffer, Windows scales windows individually.

Can you do an entire windowing pipeline where it's vectors all the way until the actual compositing? Well, sure! We were kind of close in the pre-compositing era sometimes. Is it worth it to do so? I don't think so for now. Most desktop displays are made up of standard-ish pixels so buffers full of pixels makes a very good primitive. So making the surfaces themselves out of pixels seems like a fine approach, and the scaling problem is relatively easy to solve if you start with a clean slate. The fact that it can handle the "window splitting across outputs" case slightly better is not a particularly strong draw; I don't believe most users actually want to use windows split across outputs, it's just better UX if things at least appear correct. Same thing for legacy apps, really: if you run an old app that doesn't support scaling it's still better for it to work and appear blurry than to be tiny and unusable.

What to make of this? Well, the desktop platform hasn't moved fast; ten years of progress has amounted to little more than superficial change at this point. So barring an unforeseen shift, I think the desktops we use 10 to 20 years from now probably won't be that different from what we have today; what we have today isn't even that different from what we already had 20 years ago. And you can see that in people's attitudes: why fix what isn't broken? That's the sentiment of people who believe in an X11 future.

Of course in practice, there's nothing particularly wrong with trying to keep bashing X11 into modernity; with much pain they definitely managed to take X.org and make it shockingly good. Ironically, if some of the same people working on Wayland today had put less work into keeping X.org working well, the case for Wayland would be much stronger by now. Still, I really feel like roughly nobody actually wants to sit there and try to wedge HDR or DPI virtualization into X11, and retooling X11 without regard for backwards compatibility is somewhat silly, since if you're going to break old apps you may as well just start fresh.

Wayland has always had tons of problems, yet I always bet on it as the most likely option, simply because it makes the most sense to me and I don't see any showstoppers that look insurmountable. Lo and behold, it sure seems to me that the issues remaining for Wayland adoption have become more and more minor. KDE maintains a nice list of the more serious drawbacks. It used to be a whole hell of a lot larger!

https://community.kde.org/Plasma/Wayland_Known_Significant_I...

archy_•27m ago
>users may want to scale differently anyways

Users think they want a lot of things they don't really need. Do we really want to hand users that loaded gun so that they can choose incorrectly where to fire?

kelnos•2m ago
[delayed]
jeffbee•4h ago
I sort of wanted Fresco (previously Berlin, inspired by InterViews) to succeed, because in their model the UI toolkits really were server-side and they could be changed out while the application was running. Because they were targeting an abstract device (could be a 1200 dpi printer and a 72 dpi display at the same time) they got the property you mentioned, for free.
GranPC•6h ago
Can this handle the case in which you have two displays with different DPIs side-by-side, and a window is placed in the middle of them both?
amiga386•4h ago
It's not "can you provide the screen DPI to a window?" people bemoan, it's "can you draw one window across two screens with differing DPIs, transparent to the application?"
kelnos•8m ago
You can absolutely do that on X11, but you have to do it in the client, and no one cares to do it.
erlkonig•2h ago
This whole issue just seems so pathetic. PostScript and DPS, notably NeWS, have device-independent scaling from the outset - you can completely omit even mentioning pixels, even though they're 2D. Wayland braying on about scalability here just highlights how they don't even understand the game.

Going to OpenGL is a nice tactic, since OpenGL doesn't give a flip about screen coördinates anyway.

I miss NeWS - it actually brought a number of great capabilities to a window system - none of which, AFAIK, are offered by Wayland.

NooneAtAll3•1h ago
I hope XLibre will keep making X11 better