This seems more like a limitation of monitors. If you had very large bit depth, couldn't you just display images in linear light without gamma correction?
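To put rough numbers on that (a quick Python sketch of my own, assuming the standard sRGB transfer curve): at 8 bits, linear coding spends almost no code values on the shadows, which is a big part of why gamma encoding sticks around.

```python
def srgb_encode(linear: float) -> float:
    # Standard sRGB transfer function: linear light in [0, 1] -> encoded value in [0, 1].
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

codes = 256          # 8-bit storage
dark_cutoff = 0.01   # "the darkest 1% of linear luminance"

# How many of the 256 codes land in that darkest region?
linear_codes = sum(1 for i in range(codes) if i / (codes - 1) <= dark_cutoff)
gamma_codes = sum(1 for i in range(codes) if i / (codes - 1) <= srgb_encode(dark_cutoff))

print(linear_codes, gamma_codes)  # 3 codes vs ~26 codes covering the same shadow range
```

At 14-16 bits of linear data that shortage mostly disappears, which is basically the point being made.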
A better discriminator might be global edits vs. local edits, with local edits being things like retouching specific parts of the image to make desired changes. One could argue that local edits are "more fake" than global edits, but it still depends on a thousand factors, most importantly intent.
"Fake" images are images with intent to deceive. By that definition, even an image that came straight out of the camera can be "fake" if it's showing something other than what it's purported to (e.g. a real photo of police violence but with a label saying it's in a different country is a fake photo).
What most people think when they say "fake", though, is a photo that has had filters applied, which makes zero sense. As the post shows, all photos have filters applied. We should get over that specific editing process, it's no more fake than anything else.
Even that isn't all that clear-cut. Is noise removal a local edit? It only touches some pixels, but obviously, that's a silly take.
Is automated dust removal still global? The same idea, just a bit more selective. If we let it slide, what about automated skin blemish removal? Depth map + relighting, de-hazing, or fake bokeh? I think that modern image processing techniques really blur the distinction here because many edits that would previously need to be done selectively by hand are now a "global" filter that's a single keypress away.
Intent is the defining factor, as you note, but intent is... often hazy. If you dial down the exposure to make the photo more dramatic / more sinister, you're manipulating emotions too. Yet, that kind of editing is perfectly OK in photojournalism. Adding or removing elements for dramatic effect? Not so much.
The only process in the article that involves nearby pixels is combining R, G, and B (and the other G) into one screen pixel. (In principle these could be mapped to subpixels.) Everything fancier than that can reasonably be called some fake cosmetic bullshit.
Removing dust and blemishes entails looking at more than one pixel at a time.
Nothing in the basic processing described in the article does that.
The ones that make the annual rounds up here in New England are those foliage photos with saturation jacked. “Look at how amazing it was!” They’re easy to spot since doing that usually wildly blows out the blues in the photo unless you know enough to selectively pull those back.
If you want reality, go there in person and stop looking at photos. Viewing imagery is a fundamentally different type of experience.
We’ve had similar debates about art using miniatures and lens distortions versus photos since photography was invented — and digital editing fell on the lens trick and miniature side of the issue.
Artists, who use these tools with clear vision and intent to achieve specific goals, strangely never have this problem.
So there are levels of image processing, and it would be wrong to dump them all in the same category.
What about this? https://news.ycombinator.com/item?id=35107601
You can look it up because it's published on the web but IIRC it's generally what you'd expect. It's okay to do whole-image processing where all pixels have the same algorithm applied, like the basic brightness, contrast, color, tint, gamma, levels, cropping, scaling, etc. filters that have been standard for decades. The usual debayering and color space conversions are also fine. Selectively removing, adding, or changing only some pixels or objects is generally not okay for journalistic purposes. Obviously, the per-object AI enhancement of the type many mobile phones and social media apps apply by default doesn't meet such standards.
Filters themselves don't make it fake, just like words themselves don't make something a lie. How the filters and words are used, whether they bring us closer or further from some truth, is what makes the difference.
Photos implicitly convey, usually, 'this is what you would see if you were there'. Obviously filters can help with that, as in the OP, or hurt.
i.e. Camera+Lens+ISO+SS+FStop+FL+TC (If present)+Filter (If present). Add focus distance if being super duper proper.
And some of that is to help at least provide the right requirements to try to recreate the shot.
A fun tangent on the "green cast" mentioned in the post: the reason the Bayer pattern is RGGB (50% green) isn't just about color balance, but spatial resolution. The human eye is most sensitive to green light, so that channel effectively carries the majority of the luminance (brightness/detail) data. In many advanced demosaicing algorithms, the pipeline actually reconstructs the green channel first to get a high-resolution luminance map, and then interpolates the red/blue signals—which act more like "color difference" layers—on top of it. We can get away with this because the human visual system is much more forgiving of low-resolution color data than it is of low-resolution brightness data. It’s the same psycho-visual principle that justifies 4:2:0 chroma subsampling in video compression.
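To make that last analogy concrete, here's a toy sketch (my own, using the BT.601 weights, not anything from a real codec) of the 4:2:0 idea: luma stays at full resolution while chroma gets averaged down to quarter resolution.

```python
import numpy as np

def to_ycbcr_420(rgb: np.ndarray):
    """Toy 4:2:0 split: full-resolution luma, 2x2-averaged chroma (BT.601 weights)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma carries the detail
    cb = 0.564 * (b - y)                    # blue-difference chroma
    cr = 0.713 * (r - y)                    # red-difference chroma
    h, w = y.shape                          # assumes even width/height
    # Throw away 3/4 of the chroma samples by averaging each 2x2 block.
    cb_sub = cb.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    cr_sub = cr.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return y, cb_sub, cr_sub

y, cb, cr = to_ycbcr_420(np.random.rand(4, 4, 3))   # a random 4x4 "image"
print(y.shape, cb.shape, cr.shape)                   # (4, 4) (2, 2) (2, 2)
```

The demosaiced green channel plays roughly the same role as Y here: it's the plane you want at full resolution.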
Also, for anyone interested in how deep the rabbit hole goes, looking at the source code for dcraw (or libraw) is a rite of passage. It’s impressive how many edge cases exist just to interpret the "raw" voltages from different sensor manufacturers.
When I worked at Amazon on the Kindle Special Offers team (ads on your eink Kindle while it was sleeping), the first implementation of auto-generated ads was by someone who didn't know that properly converting RGB to grayscale was a smidge more complicated than just averaging the RGB channels. So for ~6 months in 2015ish, you may have seen a bunch of ads that looked pretty rough. I think I just needed to add a flag to the FFmpeg call to get it to convert RGB to luminance before mapping it to the 4-bit grayscale needed.
I remember trying out some of the home-made methods while I was implementing a creative work section for a school assignment. It’s surprising how "flat" the basic average looks until you actually respect the coefficients (usually some flavor of 0.21R + 0.72G + 0.07B). I bet it's even more apparent in a 4-bit display.
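For anyone curious, a minimal side-by-side (Python, using the Rec. 709 flavor of those weights) of why the naive average looks so off:

```python
def gray_average(r: float, g: float, b: float) -> float:
    # Naive approach: treats all channels as equally bright.
    return (r + g + b) / 3

def gray_luma(r: float, g: float, b: float) -> float:
    # Rec. 709-style luma weights: green dominates perceived brightness.
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# A saturated green vs. a saturated blue pixel (values in 0..255):
print(gray_average(0, 255, 0), gray_average(0, 0, 255))  # 85.0 vs 85.0 -- identical greys
print(gray_luma(0, 255, 0), gray_luma(0, 0, 255))        # ~182 vs ~18  -- green is far brighter
```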
These are the coefficients I use regularly.
The JPEGs cameras produce are heavily processed, and they are emphatically NOT "original". Taking manual control of that process to produce an alternative JPEG with different curves, mappings, calibrations, is not a crime.
this is totally out of my own self-interest, no problems with its content
also your question implies a bad assumption even if you disclaim it. if you don't want to imply a bad assumption the way to do that is to not say the words, not disclaim them
“NO EM DASHES” is common system prompt behavior.
I'm imagining a sort of Logan's Run-like scifi setup where only people with a documented em dash before November 30, 2022, i.e. D(ash)-day, are left with permission to write.
what's so special about green? oh so just because our eyes are more sensitive to green we should dedicate double the area to green in camera sensors? i mean, probably yes. but still. (⩺_⩹)
Is the output produced by the sensor RGB or a single value per pixel?
  R G B
  B R G
  G B R
?
  R G
  G B
Then at a later stage the image is green because "There are twice as many green pixels in the filter matrix". Each RGB pixel would be a 2x2 grid of
```
G R
B G
```
So G appears twice as often as the other colors (this is mostly the same for both the screen and sensor technology).
There are different ways to do the color filter layouts for screens and sensors (Fuji X-Trans has a different layout, for example):
  G G R R
  G G R R
  B B G G
  B B G G
[0]: https://en.wikipedia.org/wiki/Bayer_filter

In front of the sensor is a Bayer filter, which results in each physical pixel seeing illumination filtered to R, G, or B.
From there, the software onboard the camera or in your RAW converter does interpolation to create RGB values at each pixel. For example, if the local pixel is R filtered, it then interpolates its G & B values from nearby G- and B-filtered pixels.
https://en.wikipedia.org/wiki/Bayer_filter
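A very stripped-down sketch of that interpolation step (a toy of my own, nothing like a real camera pipeline; it assumes an RGGB layout and just averages nearby samples of each color):

```python
import numpy as np

def demosaic_bilinear(mosaic: np.ndarray) -> np.ndarray:
    """Toy demosaic of an RGGB Bayer mosaic (one measured value per pixel in)."""
    h, w = mosaic.shape
    # Which filter sits over each photosite (RGGB tiling).
    r_mask = np.zeros((h, w), bool)
    r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool)
    b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    out = np.zeros((h, w, 3))
    for c, mask in enumerate([r_mask, g_mask, b_mask]):
        plane = np.where(mask, mosaic, 0.0)
        padded = np.pad(plane, 1)                 # zero-pad so edges still work
        counts = np.pad(mask.astype(float), 1)
        acc = np.zeros((h, w))
        n = np.zeros((h, w))
        # Average the samples of this color inside each pixel's 3x3 neighborhood.
        for dy in (0, 1, 2):
            for dx in (0, 1, 2):
                acc += padded[dy:dy + h, dx:dx + w]
                n += counts[dy:dy + h, dx:dx + w]
        out[..., c] = acc / np.maximum(n, 1)
    return out

# Sanity check: a flat grey scene should come back roughly grey at every pixel.
rgb = demosaic_bilinear(np.full((6, 6), 0.5))
print(rgb[2, 2])  # ~[0.5 0.5 0.5]
```

Real pipelines use much smarter, edge-aware interpolation, but the shape of the problem is the same.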
There are alternatives such as what Fuji does with its X-trans sensor filter.
https://en.wikipedia.org/wiki/Fujifilm_X-Trans_sensor
Another alternative is Foveon (owned by Sigma now), which makes full color pixel sensors, but they have not kept up with the state of the art.
https://en.wikipedia.org/wiki/Foveon_X3_sensor
This is also why Leica B&W sensor cameras have higher apparent sharpness & ISO sensitivity than the related color sensor models, because there is no filter in front and no software interpolation happening.
Works great. Most astro shots are taken using a monochrome sensor and filter wheel.
> filters are something like quantum dots that can be turned on/off
If anyone has this tech, plz let me know! Maybe an etalon?
https://en.wikipedia.org/wiki/Fabry%E2%80%93P%C3%A9rot_inter...
I have no idea, it was my first thought when I thought of modern color filters.
Edit: or maybe it does work? I've watched at least one movie on a DLP-type video projector with sequential colour and not noticed colour fringing. But still photos have much higher demands here.
He takes a few minutes to get to the punch line. Feel free to skip ahead to around 5:30.
Generally we shoot “flat” (there are so many caveats to this but I don’t feel like getting bogged down in all of it. If you plan on getting down and dirty with colors and really grading, you generally shoot flat). The image that we hand over to DIT/editing can be borderline grayscale in its appearance. The colors are so muted, the dynamic range is so wide, that you basically have a highly muted image. The reason for this is you then have the freedom to “push” the color and look in almost any direction, versus if you have a very saturated, high contrast image, you are more “locked” into that look. This matters more and more when you are using a compressed codec and not something with an incredibly high bitrate or raw codecs, which is a whole other world and I am also doing a bit of a disservice to by oversimplifying.
Though this being HN it is incredibly likely I am telling few to no people anything new here lol
It's sort of the opposite of what's going on with photography, where you have a dedicated "raw" format with linear readings from the sensor. Without these formats, someone would probably have invented "log JPEG" or something like that to preserve more data in highlights and in the shadows.
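Roughly what a log encode buys you (a toy curve of my own, not any vendor's actual transfer function): every stop of scene light gets about the same slice of the signal range, instead of the brightest stop taking half of it the way linear does.

```python
import math

def log_encode(linear: float, stops: float = 14.0) -> float:
    """Toy log curve: map scene-linear [2**-stops .. 1.0] onto [0 .. 1]."""
    linear = max(linear, 2.0 ** -stops)      # clamp the deep shadows
    return 1.0 + math.log2(linear) / stops   # one stop ~= 1/stops of the range

# Compare how much of the output range each stop below clipping occupies.
for stop in range(0, 5):
    hi, lo = 2.0 ** -stop, 2.0 ** -(stop + 1)
    lin_share = hi - lo                          # linear: the top stop takes half the range
    log_share = log_encode(hi) - log_encode(lo)  # log: every stop takes the same share
    print(f"stop {stop}: linear {lin_share:.3f}  log {log_share:.3f}")
```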
[0] - https://en.wikipedia.org/wiki/Super_CCD#/media/File:Fuji_CCD...
Processing these does seem like more fun though.
It gets even wilder when perceiving space and time as additional signal dimensions.
I imagine a sort of absolute reality that is the universe. And we’re all just sensor systems observing tiny bits of it in different and often overlapping ways.
I’ve been staring at 16-bit HDR greyscale space for so long…
But anyway, I enjoyed the article.