What you're referring to stems from an assumption Microsoft made a long time ago to simplify the development of Windows, later adopted as a de facto standard by most computer software: that the pixel density of every display is 96 pixels per inch [1].
As the pixel density of today's displays has grown far beyond that, a trend largely popularized by Apple's Retina displays, a solution was needed to accommodate legacy software written under this assumption. The result was the decoupling of "logical" pixels from "physical" pixels: the logical resolution is most commonly defined as "what the resolution of the display would be given its physical size and a PPI of 96" [2], while the physical resolution is the actual number of pixels on the panel. The 100x100 and 200x200 values in your example are, respectively, the logical and physical resolutions of your screenshot.
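As a rough sketch of that definition: the mapping is just a scale factor of physical PPI over the 96 PPI baseline. A minimal Python illustration, where the 192 PPI (2x) density and the helper name are hypothetical, not taken from your setup:

    BASELINE_PPI = 96  # the legacy "every display is 96 PPI" assumption

    def logical_size(physical_px: int, display_ppi: float) -> int:
        """What a physical dimension looks like to software that assumes 96 PPI."""
        scale = display_ppi / BASELINE_PPI  # e.g. 2.0 on a 192 PPI panel
        return round(physical_px / scale)

    # A 200x200-pixel screenshot taken on a hypothetical 192 PPI (2x) display
    # works out to 100x100 logical pixels, matching the example above.
    print(logical_size(200, 192), "x", logical_size(200, 192))  # -> 100 x 100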
Different software vendors refer to these "logical" pixels differently, but the names you're most likely to encounter are points (Apple), density-independent pixels ("DPs", Google), and device-independent pixels ("DIPs", Microsoft).
[1]: https://learn.microsoft.com/en-us/archive/blogs/fontblog/whe...
[2]: https://developer.mozilla.org/en-US/docs/Web/API/Window/devi...
From what I recall, only Microsoft had problems with this, and specifically on Windows. You might be right about software that was exclusive to desktop Windows; I don't remember having scaling issues even on other Microsoft products such as Windows Mobile.
It's mostly desktop software that had problems scaling. I'm not sure about Windows Mobile; Windows Phone and UWP adopted an Android-like model.
If a video file only stores a single color value for each pixel, why does it care what shape the pixel is when it's displayed? It would be filled in with that single color value regardless.
drmpeg•1h ago
Before HD, almost all video used non-square pixels. DVD is 720x480, and SD channels on cable TV systems are 528x480.
m132•1h ago
Correct. This came from the ITU-R BT.601 standard, one of the first digital video standards, whose authors chose to define digital video as a sampled analog signal. Analog video never had a concept of pixels; it operated on lines instead. The rate at which you sampled it could be arbitrary and affected only the horizontal resolution. The rate chosen by BT.601 was 13.5 MHz, which resulted in a 10/11 pixel aspect ratio for 4:3 NTSC video and 59/54 for 4:3 PAL.
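A quick way to reproduce both ratios: treat the pixel aspect ratio as the sampling rate that would have produced square pixels for each analog system, divided by the 13.5 MHz that BT.601 actually chose. A small Python check; the square-pixel rates used here (12 + 3/11 MHz for NTSC, 14.75 MHz for PAL) are an assumption of this sketch rather than something stated above:

    from fractions import Fraction

    MHZ = 1_000_000
    BT601_RATE = Fraction(27, 2) * MHZ  # 13.5 MHz, the rate chosen by BT.601

    # Sampling rates that would have yielded square pixels for a 4:3 picture
    # in each analog system (assumed values, derived from the active line timing).
    SQUARE_PIXEL_RATE = {
        "NTSC 4:3": Fraction(135, 11) * MHZ,  # 12 + 3/11 MHz
        "PAL 4:3": Fraction(59, 4) * MHZ,     # 14.75 MHz
    }

    for system, square_rate in SQUARE_PIXEL_RATE.items():
        # Sampling faster than the square-pixel rate packs more, narrower samples
        # into the same active line, so PAR = square-pixel rate / actual rate.
        par = square_rate / BT601_RATE
        print(f"{system}: PAR = {par} ({float(par):.4f})")

    # NTSC 4:3: PAR = 10/11 (0.9091)
    # PAL 4:3: PAR = 59/54 (1.0926)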
>SD channels on cable TV systems are 528x480
I'm not actually sure about America, but here in Europe most digital cable and satellite SDTV is delivered as 720x576i 4:2:0 MPEG-2 Part 2. There are some outliers that use 544x576i, however.
drmpeg•47m ago
https://www.w6rz.net/528x480.ts
https://www.w6rz.net/528x480sp.ts
m132•28m ago
Doing my part and sending you some samples of UPC cable from the Czech Republic :)
720x576i 16:9: https://0x0.st/P-QU.ts
720x576i 4:3: https://0x0.st/P-Q0.ts
That one weird 544x576i channel I found: https://0x0.st/P-QG.ts
I also have a few decrypted samples from the Hot Bird 13E, public DVB-T and T2 transmitters and Vectra DVB-C from Poland, but for that I'd have to dig through my backups.
ranger_danger•1h ago
My understanding is that televisions would mostly have square/rectangular pixels, while computer monitors often had circular pixels.
Or are you perhaps referring to pixel aspect ratios instead?
binaryturtle•56m ago
For example, in the case of a "4:3 720x480" frame, a quick test: 720/4 = 180 and 480/3 = 160. The results differ, which means the pixels of this frame are not square, just rectangular. Alternatively, comparing 720/480 with 4/3 works too, of course.
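That test written out as a tiny Python helper; the function name and the 640x480 comparison frame are just illustrative:

    from fractions import Fraction

    def has_square_pixels(width: int, height: int, dar_w: int, dar_h: int) -> bool:
        """True if the frame's storage shape already matches its display
        aspect ratio, i.e. each pixel is square."""
        return Fraction(width, height) == Fraction(dar_w, dar_h)

    # The "4:3 720x480" frame from above: 720/4 = 180 vs. 480/3 = 160 disagree,
    # so the pixels are rectangular rather than square.
    print(has_square_pixels(720, 480, 4, 3))  # False
    print(has_square_pixels(640, 480, 4, 3))  # True (square pixels, for comparison)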