ansgri•1h ago
The issue with color-to-grayscale conversion for human consumption is that, in most cases, there is no well-defined ground truth. People don’t see in grayscale, so the appearance-preservation approach doesn’t work. And the source image was most likely heavily color-corrected to match a certain aesthetic. So the problem becomes “preserve as much information as possible, both content and aesthetic, within the constraints of the target grayscale medium”.
The bottom line is: use some standardized conversion (like the one described here — just to avoid surprising users) if images don’t actually matter, some contrast-preserving method if content matters, and edit creatively otherwise.
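For reference, the standardized route usually means a fixed weighted sum of the channels. A minimal sketch, assuming the ITU-R BT.709 luminance weights applied to linear-light RGB (other standards such as BT.601 use different weights; names here are illustrative):

    # Sketch: standardized grayscale via ITU-R BT.709 luminance weights.
    # Assumes float RGB already linearized (gamma removed); BT.601 would
    # use 0.299/0.587/0.114 instead.
    import numpy as np

    def to_grayscale_bt709(rgb_linear: np.ndarray) -> np.ndarray:
        """rgb_linear: (..., 3) array in [0, 1], linear light."""
        weights = np.array([0.2126, 0.7152, 0.0722])
        return rgb_linear @ weights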
ChrisMarshallNY•1h ago
I used to do a lot of image processing programming.
The basic way to do it is with weighted LUTs. The "poor man's conversion" was to just use the green channel, and toss out the red and blue.
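A sketch of both of those approaches (the function names and the LUT layout are illustrative, not the poster's actual code):

    # Sketch: "poor man's conversion" vs. weighted per-channel LUTs,
    # both operating on 8-bit images.
    import numpy as np

    def gray_green_only(rgb_u8: np.ndarray) -> np.ndarray:
        # Keep green, toss red and blue.
        return rgb_u8[..., 1]

    def gray_weighted_lut(rgb_u8: np.ndarray, luts: np.ndarray) -> np.ndarray:
        # luts: (3, 256) float table, one precomputed curve per channel.
        # Each channel is pushed through its LUT, then the results summed.
        out = (luts[0][rgb_u8[..., 0]]
             + luts[1][rgb_u8[..., 1]]
             + luts[2][rgb_u8[..., 2]])
        return np.clip(out, 0, 255).astype(np.uint8)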
uninformedprior•40m ago
I ran into this subjectiveness in graphics recently. I thought I was doing the "correct" thing by blending in linear space, but it turns out blending in sRGB looks a lot better for certain applications, and that's what most popular applications do.
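For concreteness, the two blends being compared look roughly like this; the transfer functions are the standard sRGB ones, everything else is illustrative:

    # Sketch: alpha-blending two sRGB colors either directly on the
    # gamma-encoded values, or in linear light.
    import numpy as np

    def srgb_to_linear(c):
        c = np.asarray(c, dtype=float)
        return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

    def linear_to_srgb(c):
        c = np.asarray(c, dtype=float)
        return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1 / 2.4) - 0.055)

    def blend_srgb(a, b, t):
        # Naive blend on the gamma-encoded values.
        return (1 - t) * np.asarray(a) + t * np.asarray(b)

    def blend_linear(a, b, t):
        # Decode, blend in linear light, re-encode.
        return linear_to_srgb((1 - t) * srgb_to_linear(a) + t * srgb_to_linear(b))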
zokier•36m ago
For blending, Oklab almost always works better than sRGB (linear or gamma).
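A sketch of what blending through Oklab involves, using the conversion matrices published in Björn Ottosson's Oklab write-up (transcribed here, so verify against the original before relying on them; input and output are assumed to be linear sRGB):

    # Sketch: interpolating between two linear-sRGB colors via Oklab.
    import numpy as np

    M1 = np.array([[0.4122214708, 0.5363325363, 0.0514459929],
                   [0.2119034982, 0.6806995451, 0.1073969566],
                   [0.0883024619, 0.2817188376, 0.6299787005]])
    M2 = np.array([[0.2104542553,  0.7936177850, -0.0040720468],
                   [1.9779984951, -2.4285922050,  0.4505937099],
                   [0.0259040371,  0.7827717662, -0.8086757660]])

    def linear_srgb_to_oklab(rgb):
        lms = M1 @ np.asarray(rgb, dtype=float)
        return M2 @ np.cbrt(lms)

    def oklab_to_linear_srgb(lab):
        # Invert the two stages: Lab -> cube-rooted LMS -> LMS -> RGB.
        lms = (np.linalg.inv(M2) @ np.asarray(lab, dtype=float)) ** 3
        return np.linalg.inv(M1) @ lms

    def blend_oklab(a, b, t):
        # Blend in the perceptual space, then convert back.
        la, lb = linear_srgb_to_oklab(a), linear_srgb_to_oklab(b)
        return oklab_to_linear_srgb((1 - t) * la + t * lb)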
uninformedprior•9m ago
It's possible I wasn't specific enough when I said "graphics". Typically I blend in CIELAB when interpolating between colors.
But I'm unaware of rendering engines that do alpha blending in something other than linear or sRGB. Photoshop, for instance, blends in sRGB by default, while renderers that simulate light physically will blend in linear RGB (to the best of my knowledge).
It depends on the GPU and the implementation, but I personally would not want to spend the compute on per-pixel CIELAB conversions for blending.
ChrisMarshallNY•31m ago
The thing that is difficult to "math" is that we perceive color in a certain way (if you ever look at the CIELAB[0] space, that's based on human eye perception). So there's a lot of "it just don't look right" involved.
I have found that the way to execute the conversion is to take weighted LUTs that have been extracted from some process (math, contrast measurements, user testing, etc.) and simply apply them; generating the LUTs is the tricky part. It's not always best handled by a formula. I guess you could really go crazy and generate the LUT on the fly, as a per-pixel conversion (we actually did something like this for RAW conversion).
[0] https://en.wikipedia.org/wiki/CIELAB_color_space
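As an illustration of the LUT idea from the comment above: the expensive curve is evaluated once per table entry up front, then applied per pixel as a plain lookup. The gamma curve here is a hypothetical stand-in for whatever process (math, measurements, user testing) produced the table:

    # Sketch: precompute an arbitrary per-channel curve as a 256-entry
    # LUT, then apply it to a whole 8-bit channel by indexing.
    import numpy as np

    def build_lut(curve) -> np.ndarray:
        x = np.arange(256) / 255.0
        return np.clip(curve(x) * 255.0, 0, 255).astype(np.uint8)

    lut = build_lut(lambda x: x ** (1 / 2.2))   # hypothetical example curve

    def apply_lut(channel_u8: np.ndarray, lut: np.ndarray) -> np.ndarray:
        return lut[channel_u8]   # one table lookup per pixel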