If you look at the raw footage on an OLED HDR monitor, it's like looking out of a window! You get a feeling that you could just reach out and touch the objects behind the panel.
I've seen a few modern HDR productions that have the same "realism aesthetic" and I rather liked it. I've also enjoyed HDR used as a "special effect" with gleaming highlights everywhere.
Both styles have their place.
Tbh so should the excessive messing with colours that's in fashion these days.
You didn’t need new displays to make use of it. It wasn’t suddenly brighter or darker.
The change from 15 to 16 bit color was at least visible: because 16-bit color has far less precision than 30-bit color, you could actually see the color banding improve. But it wasn't some new world of color, the way HDR is sold.
Manufacturers want to keep the sales boom that large cheap TVs brought when we moved away from CRTs. That was probably a “golden age” for screen makers.
So they went from failing to sell 3D screens to semi-successfully getting everyone to replace their SDR screen with an HDR screen, even though almost no one can see the difference in those color depths when shown with everything else being equal.
What really cheeses me on things like this is that TV and monitor manufacturers seem to gate the “blacker blacks” and “whiter whites” behind HDR modes and disable those features for SDR content. That is indefensible.
The same way I could instantly tell when I saw a screen showing footage at more than 40 fps. And I constantly see incorrectly converted footage on YouTube, from 24 fps to 25 fps, where one frame every second jumps / is duplicated.
IMO the difference between LCD and OLED is massive and "worth buying a new tv" over.
I've never tried doing an 8-bit vs 10-bit-per-color "blind" test, but I think I'd be able to see it?
> What really cheeses me on things like this is that TV and monitor manufacturers seem to gate the “blacker blacks” and “whiter whites” behind HDR modes and disable those features for SDR content. That is indefensible.
This 100%. The hackery I have to regularly perform just to get my "HDR" TV to show an 8-bit-per-color "SDR" signal with its full range of brightness is maddening.
It's only really visible on subtle gradients on certain colours, especially sky blue, where 8 bits isn't sufficient and would result in visible "banding".
In older SDR footage this is hidden using film grain, which is essentially a type of spatial & temporal dithering.
HDR allows smooth gradients without needing film grain.
In my tests with assorted 24-bit sRGB monitors, a difference of 1 in a single channel is almost always indistinguishable (and this might be a matter of monitor tuning); even a difference of 1 simultaneously in all three channels is only visible in a few places along the lerps. (Contrast all those common shitty 18-bit monitors. On those, even with temporal dithering, the contrast between adjacent colors is always glaringly distracting.)
(If testing yourself, note that there are 8 corners of the color cube, so 8×7÷2=28 unique pairs. You should use blocks of pixels, not single pixels - 16x16 is nice even though it requires scrolling or wrapping on most monitors, since 16×256 = 4096. 7 pixels wide will fit on a 1920-pixel-wide screen naturally.)
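A minimal sketch of such a test chart, assuming Pillow is installed (the block size, row height, and output filename here are just my choices):

    # Sketch of a banding test chart: 28 ramps, one per pair of RGB-cube corners.
    # Assumes Pillow (PIL); a block width of 7 px keeps 256 steps inside a 1920-wide screen.
    from itertools import combinations
    from PIL import Image

    CORNERS = [(r, g, b) for r in (0, 255) for g in (0, 255) for b in (0, 255)]
    BLOCK, ROW_H = 7, 64                       # 256 * 7 = 1792 px wide, 64 px per ramp

    pairs = list(combinations(CORNERS, 2))     # 8 corners -> 8*7/2 = 28 unique pairs
    img = Image.new("RGB", (256 * BLOCK, ROW_H * len(pairs)))

    for row, (a, b) in enumerate(pairs):
        for step in range(256):
            t = step / 255.0
            color = tuple(round(a[i] + (b[i] - a[i]) * t) for i in range(3))
            img.paste(color, (step * BLOCK, row * ROW_H, (step + 1) * BLOCK, (row + 1) * ROW_H))

    img.save("banding_test.png")               # hypothetical filename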
So HDR is only a win if it adds to the "top". But frankly, most people's monitors are too bright and cause strain to their eyes anyway, so maybe not even then.
More likely the majority of the gain has nothing to do with 10-bit color channels, and much more to do with improving the quality ("blacker blacks" as you said) of the monitor in general. But anybody who is selling something must necessarily be dishonest, so will never help you get what you actually want.
(For editing of course, using 16-bit color channels is a good idea to prevent repeated loss of precision. If also using separate alpha per channel, that gives you a total of 96 bits per pixel.)
I can no longer see banding if I add dither, though, and the extra noise is imperceptible when done well, especially at 4k and with a temporal component.
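A rough sketch of that idea, assuming NumPy; the triangular ±1 LSB noise here is one common choice, not necessarily what any particular tool does:

    # Sketch: quantize a subtle gradient to 8 bits with and without dither.
    import numpy as np

    rng = np.random.default_rng(0)
    gradient = np.linspace(0.2, 0.3, 1920)             # smooth ramp spanning ~25 8-bit levels

    banded = np.round(gradient * 255)                  # hard quantization -> visible steps
    tpdf = rng.uniform(-0.5, 0.5, gradient.shape) + rng.uniform(-0.5, 0.5, gradient.shape)
    dithered = np.round(gradient * 255 + tpdf)         # +/-1 LSB triangular noise hides the steps

    # For a temporal component, redraw the noise every frame so the error averages out over time.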
Meanwhile, YouTube is incredibly sluggish on my computer, with visible incremental rendering of the page UI, and seeking in a video easily takes 500~1000 ms. It's an embarrassment that the leading video platform, belonging to a multi-billion-dollar company, has a worse user experience than a simple video file with only the web browser's built-in UI controls.
To save readers a "View Source", this is the typical progressive file download user experience with CDNs that support byte-range requests.
<video id="DebunkingHDR" width="100%" height="auto" controls="" autoplay="" preload="preload" bgcolor="black" onended="backtopage()" poster="www.yedlin.net/images/DebunkingHDR_Poster.png">
<source src="https://yedsite.sfo2.cdn.digitaloceanspaces.com/Debunking_HDR_v102.mp4">
Your browser does not support HTML5 video.
</video>
It looks like he’s using DigitalOcean’s CDN though. This isn’t an mov file thrown on an Apache vhost. And it’s probably not gone viral.
You're surprised because you view Youtube as a video platform. It was that once but now it's an advertising platform that happens to show videos.
Luckily, for now you can pay for YT Premium. It makes the experience so much better: no checking for ads every time you skip.
I think the point that SDR inputs (to a monitor) can be _similar_ to HDR input to monitors that have high dynamic ranges is obvious if you look at the maths involved. Higher dynamic range gives you more precision in the information; you can choose what to do with it: higher maximum luminosity, better blacks with less noise, more detail in the midtones, etc.
Of course we should also see "HDR" as a social movement, a new way to communicate between engineers, manufacturers and consumers, it's not "only" a math conversion formula.
I believe we could focus first on comparing SDR and HDR black-and-white images, to see how higher dynamic range in luminosity alone is in itself very interesting to experience.
But in the beginning he is saying the images look similar on both monitors. Surely we could find counterexamples, and that only applies to his cinema stills? If he can show this is true for all images, then indeed he can show that "SDR input to an HDR monitor" is good enough for all human vision. I'm not sure this is true: as I do psychedelic animation, I like to use the whole gamut of colors I have at hand, and I don't care about representing scenes from the real world. I just want maximum color p0rn to feed my acid brain, and 30 bits per pixel surely improves that, as well as wider color gamuts / new LED wavelengths not used before.
Most displays have the ability to simulate their HDR range on SDR input, I believe by dynamically inferring the contrast and seeing if they can punch up small local bright areas.
I wish he shared his code though. Part of the problem is he can't operate like a normal scientist when all the best color grading tools are proprietary.
I think it would be really cool to make an open source color grading software that simulates the best film looks. But there isn't enough information on Yedlin's website to exactly reproduce all the research he's done with open source tools.
Secondly, Rec. 2100 defines more than just a colorspace. A coordinate triple in the Rec. 2100 colorspace does not dictate both luminance and chromaticity. You need to also specify a _transfer function_, of which Rec. 2100 defines two: PQ and HLG. They have different nominal maximum luminance: 10,000 nits for PQ and 1,000 nits for HLG. Without specifying a transfer function, a coordinate triple merely identifies chromaticity. This is true of _all_ color spaces.
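To make the "transfer function decides the luminance" point concrete, here is a sketch of the PQ (SMPTE ST 2084) EOTF; the constants are the ones published in the spec, and the example values in the final comment are approximate:

    # Sketch of the PQ (SMPTE ST 2084) EOTF: normalized code value -> absolute luminance in nits.
    m1 = 2610 / 16384
    m2 = 2523 / 4096 * 128
    c1 = 3424 / 4096
    c2 = 2413 / 4096 * 32
    c3 = 2392 / 4096 * 32

    def pq_eotf(code: float) -> float:
        """PQ code value in [0, 1] -> luminance in cd/m^2."""
        e = code ** (1 / m2)
        return 10000.0 * (max(e - c1, 0.0) / (c2 - c3 * e)) ** (1 / m1)

    # The same pixel value means very different light output depending on the transfer function:
    # pq_eotf(0.5) ~= 92 nits, pq_eotf(0.75) ~= 1000 nits, pq_eotf(1.0) = 10000 nits.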
On the other hand his feet/meters analogy is excellent and I’m going to steal it next time I need to explain colorspace conversion to someone.
The presentation could surely be condensed, but also depends on prior knowledge and familiarity with the concepts.
I don’t want to criticize too much, though. Like I said I’ve only watched 15 minutes, and IIRC this is also the guy who convinced a lot of cinematographers that digital was finally good enough.
Back in the day I made ray tracers and such, and going from an internal SDR representation to an internal HDR representation was a complete game changer, especially for multiple reflections. That was a decade or more before any consumer HDR monitors were released, so it was all tonemapped to SDR before displaying.
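Something like the classic Reinhard curve is one common way that last tonemapping step gets done; a minimal sketch, assuming NumPy and a linear-light float buffer (the exposure parameter is an arbitrary knob):

    # Sketch: collapse a linear-light HDR render buffer down to a displayable SDR image.
    import numpy as np

    def tonemap_reinhard(hdr: np.ndarray, exposure: float = 1.0) -> np.ndarray:
        """hdr: float array of linear radiance, values can far exceed 1.0 after many bounces."""
        scaled = hdr * exposure
        mapped = scaled / (1.0 + scaled)                   # compresses highlights, keeps shadows
        srgb = np.clip(mapped, 0.0, 1.0) ** (1 / 2.2)      # rough gamma encode for an SDR display
        return (srgb * 255).astype(np.uint8)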
That said, I would really like to see his two monitors display something with really high dynamic range. From the stills I saw in the video, they all seemed quite limited.
Anyway, something to watch fully tomorrow, perhaps he addresses this.
_wire_•3d ago
As to what was to be debunked, the presentation not only fails to set out a thesis in the introduction, it doesn't even pose a question, so you've got to watch hours to get to the point: SDR and HDR are two measurement systems which, when correctly used, must for most cases (legacy and conventional content) produce the same visual result. The increased fidelity of HDR makes it possible to expand the sensory response and achieve some very realistic new looks that were impossible with SDR, but the significance and value of any look is still up to the creativity of the photographer.
This point could be more easily conveyed by this presentation if the author explained that, across the history of reproduction technology, human visual adaptation exposes a moment-by-moment contrast window of about 100:1, constantly adjusting over time based on average luminance to create a much larger window of perception of billions:1(+) that allows us to operate under the luminance conditions on earth. But until recently, we haven't expected electronic display media to be used in every condition on earth, and even if it can work, you don't pick everywhere as your reference environment for system alignment.
(+) Regarding the difference between numbers such as 100 and billions, don't let your common sense about big or small values faze your thinking about differences: perception is logarithmic; it's the degree of the ratios that matters more than the absolute magnitude of the numbers. As a famous acoustics engineer (Paul Klipsch) said about where to focus design optimization of response traits of reproduction systems: "If you can't double it or halve it, don't worry about it."
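To make the ratio point concrete, a quick sketch converting contrast ratios to doublings; the 2,000,000,000:1 figure is just an illustrative stand-in for "billions":

    # Ratios expressed as doublings ("stops"), which is closer to how perception scales.
    import math

    for name, ratio in [("momentary window", 100), ("adapted range", 2_000_000_000)]:
        print(f"{name}: {ratio:>13,}:1  ~ {math.log2(ratio):.1f} stops")

    # ~6.6 stops at any instant vs ~31 stops once the eye adapts: a gigantic ratio,
    # but only about 5x bigger on the logarithmic scale that perception actually works on.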
strogonoff•2h ago
That does not seem a meaningful statement. Information, and by far most of it, is necessarily discarded. The creative task of the photographer is in deciding what is to be discarded (both at shooting time and at editing time) and shaping the remaining data to make the optimal use of the available display space. Various ways of compressing dynamic range are often part of this process.
> like a compressor in audio production
Audio is a decent analogy and an illustration of why it is a subjective and creative process. You don't want to just naively compress everything into a wall of illegible sound; you want to make some things pop at the expense of other things, which is a similar task in photography. Like with photography, you must lose a lot of information in the process, because if you preserved all the finest details no one would be able to hear much in real-life circumstances.
sansseriff•11h ago
Gamma encoding, which has been around since the earliest CRTs, was a very basic solution to this fact. Nowadays it's silly for any high-dynamic-range image recording format not to encode data in a log format, because it's so much more representative of human vision.
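A toy comparison of how the two encodings spend 8-bit code values per stop; the gamma exponent and the log normalization below are made up for illustration and are not any real camera curve:

    # Toy sketch: how gamma vs. log encoding spend 8-bit code values across scene stops.
    import math

    def gamma_encode(linear, gamma=2.2):
        return round(255 * linear ** (1 / gamma))

    def log_encode(linear, stops=14.0):
        # spread `stops` stops below 1.0 evenly across 0..255 in log2 space
        v = (math.log2(max(linear, 2 ** -stops)) + stops) / stops
        return round(255 * v)

    for ev in range(0, -8, -1):                            # 1.0 down to 1/128, one stop at a time
        lin = 2.0 ** ev
        print(f"{ev:+d} EV  gamma={gamma_encode(lin):3d}  log={log_encode(lin):3d}")

    # Gamma gives the brightest stops far more codes than the shadows; the log curve gives every
    # stop the same number of codes, which is why camera log/raw-style formats prefer it.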
lucyjojo•9h ago
demosaicing is a first point of data loss (the sensor is a tiling of small monochrome photosites; you reconstruct color from little bunches of them with various algorithms, see the little sketch below)
there is also a mapping to a color space of your choosing (probably mentioned in the op video, i apologize for i have not watched it yet...). the sensor color space does not need to match that rendered color space...
note of interest being that sensors actually capture some infrared light (modulo physical filters to remove that). so yeah if you count that as color, it gets removed. (infrared photography is super cool!)
then there is denoising/sharpening etc. that mess with your image.
there might be more stuff i am not aware of too. i have very limited knowledge of the domain...
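a minimal sketch of that first demosaic step, assuming numpy, even image dimensions, and an RGGB pattern (the naive reconstruction here is just illustrative; real raw converters interpolate much more cleverly):

    # sketch: why demosaicing discards data. a bayer sensor records ONE channel per photosite;
    # the other two have to be interpolated. crudest possible 2x2-block reconstruction.
    import numpy as np

    def mosaic_rggb(rgb):
        h, w, _ = rgb.shape
        raw = np.zeros((h, w), dtype=np.float32)
        raw[0::2, 0::2] = rgb[0::2, 0::2, 0]   # R sites
        raw[0::2, 1::2] = rgb[0::2, 1::2, 1]   # G sites
        raw[1::2, 0::2] = rgb[1::2, 0::2, 1]   # G sites
        raw[1::2, 1::2] = rgb[1::2, 1::2, 2]   # B sites
        return raw

    def demosaic_nearest(raw):
        h, w = raw.shape
        r = raw[0::2, 0::2]
        g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0
        b = raw[1::2, 1::2]
        out = np.zeros((h, w, 3), dtype=np.float32)
        for c, plane in enumerate((r, g, b)):
            out[:, :, c] = np.repeat(np.repeat(plane, 2, axis=0), 2, axis=1)
        return out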
strogonoff•2h ago
In a typical scene shot with existing light outdoors it is probably 98%+.
dperfect•4h ago
- It was clearly a mistake to define HDR transfer functions using absolute luminance values. That mistake has created a cascade of additional problems
- HDR is not what it was marketed to be: it's not superior in many of the ways people think it is, and in some ways (like efficiency) it's actually worse than SDR
- The fundamental problems with HDR formats have resulted in more problems: proprietary formats like Dolby Vision attempting to patch over some of the issues (while being more closed and expensive, yet failing to fully solve the problem), consumer devices that are forced to render things worse than they might be in SDR due to the fact that it's literally impossible to implement the spec 100% (they have to make assumptions that can be very wrong), endless issues with format conversions leading to inaccurate color representation and/or color banding, and lower quality streaming at given bit rates due to HDR's reliance on higher bit depths to achieve the same tonal gradation as SDR
- Not only is this a problem for content delivery, but it's also challenging in the content creation phase as filmmakers and studios sometimes misunderstand the technology, changing their process for HDR in a way that makes the situation worse
Being somewhat of a film nerd myself and dealing with a lot of this first-hand, I completely agree with the overall sentiment and really hope it can get sorted out in the future with a more pragmatic solution that gives filmmakers the freedom to use modern displays more effectively, while not pretending that they should have control over things like the absolute brightness of a person's TV (when they have no idea what environment it might be in).