I would love for them to provide an option to view it with film simulation vs without.
One of my favorite movies of all time, The Holdovers, did film simulation extremely well. It's set in the '70s so it attempts to look like a movie of that era.
It looked great to me, but if you're an actual film nerd you're going to notice a lot of things aren't exactly accurate.
Maybe in the near future we'll see Netflix being able to process some post effects on the client. So if you're color blind, you get a mode for that. If you don't want fake grain you can turn it off.
What parameters would that be? Make it look like Eastman Ektachrome High-Speed Daylight Film 7251 400D? For years, people have run film negative through telecines to create grain footage to be used as overlays. For years, colorists have come up with ways of simulating the color of specific film stocks by using reference film with test patterns that has been made available.
If a director/producer wants film grain added to their digital content, it should be done in post, not by some devs working for a streaming platform. The use of grain (or not) is a creative decision made by the creators of the work, and that's where it should remain.
Why? If you're spending a significant chunk of your bits just transmitting data that could be effectively recreated on the client for free, isn't that wasteful? Sure, maybe the grains wouldn't be at the exact same coordinates, but it's not like the director purposefully placed each grain in the first place.
I recognize that the locally-produced grain doesn't look quite right at the moment, but travel down the hypothetical with me for a moment. If you could make this work, why wouldn't you?
--------
...and yes, I acknowledge that once the grain is being added client side, the next logical step would be "well, we might as well let viewers turn it off." But, once we've established that client-side grain makes sense, what are you going to do about people having preferences? Should we outlaw de-noising video filters too?
I agree that the default setting should always match what the film maker intended—let's not end up with a TV motion smoothing situation, please for the love of god—but if someone actively decides "I want to watch this without the grain for my own viewing experience"... okay? You do you.
...and I will further acknowledge that I would in fact be that person! I hate grain. I modded Cuphead to remove the grain and I can't buy the Switch version because I know it will have grain. I respect the artistic decision but I don't like it and I'm not hurting anyone.
I'm sorry your tech isn't good enough to recreate the original. That does not mean you get to change the original because your tech isn't up to the task. Update your tech to better handle the original. That's like saying an image of The Starry Night doesn't retain the details, so we're going to smear the original to fit the tech better. No. Go fix the tech. And no, this is not fixing the tech. It is a band-aid to cover the flaws in the tech.
In theory though, I don't see any reason why client-side grain that looks identical to the real thing shouldn't be achievable, with massive bandwidth savings in the process.
It won't be, like, pixel-for-pixel identical, but that was why I said no director is placing individual grain specks anyway.
The market has spoken and it says that people want to watch movies even when they don't have access to a 35mm projector or a projector that can handle digital cinema packages, so nobody is seeing the original outside a theater.
Many viewers are bandwidth-limited, so there are tradeoffs... if this film grain stuff improves picture quality at a given bandwidth, that's a win. IMHO, Netflix's blog posts about codecs seem to focus on bandwidth reduction, so I'm never sure whether users with ample bandwidth end up getting less quality or not; that's a valid question to ask.
That's true, but at a given bitrate (until you get to very high bitrates), the compressed original will usually look worse and less sharp because so many bits are spent trying to encode the original grain. As a result, that original grain tends to get "smeared" over larger areas, making it look muddy. You lose sharpness in areas of the actual scene because it's trying (and often failing) to encode sharp grains.
Film Grain Synthesis makes sense for streaming where bandwidth is limited, but I'll agree that in the examples, the synthesized grain doesn't look very grain-like. And, depending on the amount and method of denoising, it can definitely blur details from the scene.
I can see why they want to compare against the actual local copy of the video with the natural grain. But that’s the perfect copy that they can’t actually hope to match.
Isn't that the image captioned "Regular AV1 (without FGS) @ 8274 kbps"?
But still, they have:
> A source video frame from They Cloned Tyrone
> Regular AV1 (without FGS) @ 8274 kbps
> AV1 with FGS @ 2804 kbps
Just to emphasize the problem, wouldn't it be nice to see:
Regular AV1 (without FGS) @ 2804 kbps
It should look really bad, right? Which would emphasize their results.
That's an understatement. It just looks like an RGB noise effect was added, and film grain does not look like RGB noise. To me, film grain is only one part of what gave film the film look: the way the highlights bloom rather than clip, and the way it was more natural/organic/some descriptor other than the ultra-sharp look of modern digital acquisition. Using SoftFX or Black Mist type filters helps, but it's just not the same, since it's a digital vs. analog kind of acquisition. All of these attempts at making something look like what it's not just keep falling down in the same ways. But hey, there's a cool tech blog about it this time. Film grain filters have been around for a long time, yet people just don't care for them. Even in the Blu-ray time frame, there were attempts at removing the grain in the encode and applying it at playback. Netflix isn't coming up with anything new, and apparently nothing exciting either, based on the results.
A few things to note:
- still-frames are also a mediocre way to evaluate video quality.
- a theoretically perfect[1] noise-removal filter will always look less detailed than the original source, since your brain/eye system will invent more detail for a noisy image than for a blurry image.
1: By which I mean a filter that preserves 100% of the non-grain detail present, not one that magically recovers detail lost due to noise.
ANY noticeable perceived "flaw" in any creative medium will eventually become an aesthetic choice.
People remember the emotions the artwork engendered, and thus the whole work is associated with the feelings, flaws and all. If the work is particularly widely known, the flaws can become a stand-in for the work itself.
I see this in video games - I'm fond of the NES-era "flaws" and limitations (palette limits, sprite limits, sound channel limits), but less connected to the Atari 2600 or SNES/PS1/NDS/etc flaws. Shovel Knight is charming; A Short Hike, while great, doesn't resonate on a style level.
There's an influx of high-profile directors/films, current and in the pipeline, shot for IMAX (F1: The Movie, I think, Mission: Impossible, etc.), and Christopher Nolan's Odyssey, coming next year, was shot entirely on IMAX film with newly developed smaller/quieter cameras made to accomplish it.
I've read that a 15-perf 65mm IMAX negative shot with slower film stocks is "virtually grainless", even when viewed on a 70ft screen. Grain is apparently noticeable in IMAX films when large/fast stocks are used and pushed toward their limits, and (of course) when smaller-format film stocks have been blown up.
It just adds visual noise that obscures details of the authentic scene, and nothing prevents nostalgia from being tied to many of the more prominent visual cues like old actors or your own old memories from when you watched it first...
> contributing to [film's] realism
But there is no grain in reality, so it does the opposite
Otherwise I'm glad AV1 marches along and, instead of wasting bitrate encoding visual garbage, has an algorithmic replacement mechanism -- which also means you could turn it off more easily.
Does it add any more than modern video compression techniques? What constitutes noise in cinema, is somewhat subjective.
Well ackchually -- illumination is inherently random, so all time-bounded captures of a scene (including what your eyes do) are subject to shot noise: https://en.wikipedia.org/wiki/Shot_noise
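For anyone who wants to see it rather than take the physics on faith, here's a tiny sketch of my own (not from the article): photon arrivals are roughly Poisson, so the relative noise in a capture shrinks as the exposure grows.

    # Rough illustration of shot noise: photon counts are Poisson-distributed,
    # so relative noise falls off like 1/sqrt(mean photons collected).
    import numpy as np

    rng = np.random.default_rng(0)
    for mean_photons in (10, 100, 10_000):
        counts = rng.poisson(mean_photons, size=1_000_000)
        print(mean_photons, counts.std() / counts.mean())  # roughly 1/sqrt(mean)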
1. You prefer Betamax or VHS to digital media (highly unlikely)
2. You own laserdiscs (limited to 480i)
3. You own 35mm prints of film.
Since all other formats film has been made available on are both digital media and compressed.
All that is 24fps.
That's without audio, which I assume you also want to be uncompressed.
Fake lights, fake shadows, fake sky, ...
Also, the author had me at God of Gamblers 2. So good. I will take him up on his recommendation to rewatch.
That's not to say that all noise and grain is good. It can be unavoidable, due to inferior technology, or a result of poor creative choices. It can even be distracting. But the alternative where everything undergoes denoising (which many of our cameras do by default now) is much worse in my opinion. To my eyes, the smoothing that happens with denoising often looks unrealistic and far more distracting.
a) Compressed original with significant artifacts from the codec trying to represent original grain
b) A denoised version with fewer compression artifacts, but looks "smoothed" by the denoising
c) A denoised version with synthesized grain that looks almost as good as the original, though the grain doesn't exactly match
I personally think the FGS needs better grain simulation (to look more realistic), but even in its current state, I think I'd probably go with choice C. I'm all for showing the closest thing to the author's intent. We just need to remember that compression artifacts are not the author's intent.
In an ideal world where we can deliver full, uncompressed video to everyone, then obviously - don't mess with it at all!
It's like reducing an image to tiny dots with dithering (reminds me of Atkinson dithering). Those grains are not noise; they are detail, actual data. That's why real grain looks good, IMO.
There are two possible advantages for this kind of grain synthesis. For Netflix, they could produce the same perceived quality at lower bitrates, which reduces costs per view and allows customers with marginally slow connections to get a higher quality version. For a consumer, the advantage would be getting more non-grain detail for a fixed bitrate.
You are right that if you subtract the denoised frame from the raw one, showing only the estimated noise, you would get some impression of the scene. I think there are two reasons for this. Firstly, the places where the denoiser produced a blurry line that should be sharp may show up as faint lines. I don’t think this is ‘hidden information’ so much as it is information lost to lossy compression. In the same way, if you look at the difference between a raw image and one with compression, you may see some emphasized edges due to compression artefacts. Secondly, the less exposed regions of the film will have more noise, so noisiness becomes a proxy for darkness, allowing some reproduction of the scene. I would expect this detail to be lost after adjusting for the piecewise linear function for grain intensity at different brightness levels.
Perhaps a third thing is the level of noise in the blacks and the ‘grain size’ or other statistical properties tell you about the kind of film being used, but I think those things are captured in the film grain simulation model.
Possibly there are some other artefacts like evidence of special effects, post processing, etc.
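If anyone wants to see those "grain only" ghost patterns for themselves, something along these lines works. This is my own sketch using OpenCV; the filter choice, strengths, and file names are placeholders, not anything from the article.

    # Denoise a frame, subtract the result from the original, and boost the
    # residual so the leftover grain layer (and any ghost image in it) is visible.
    import cv2
    import numpy as np

    frame = cv2.imread("frame.png")  # hypothetical 8-bit source frame
    denoised = cv2.fastNlMeansDenoisingColored(frame, None, 10, 10, 7, 21)

    residual = frame.astype(np.float32) - denoised.astype(np.float32)
    # Re-center on mid-gray and exaggerate so positive and negative noise both show.
    ghost = np.clip(residual * 4 + 128, 0, 255).astype(np.uint8)
    cv2.imwrite("grain_residual.png", ghost)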
When you watch a high-quality encode that includes the actual noise, there is a startling increase in resolution from seeing a still to seeing the video. The noise is effectively dancing over a signal, and at 24 fps the signal is still perfectly clear behind it.
Whereas if you lossily encode a still that discards the noise and then adds back artificial noise to match the original "aesthetically", the original detail is non-recoverable if this is done frame-by-frame. Watching at 24 fps produces a fundamentally blurrier viewing experience. And it's not subtle -- on old noisy movies the difference in detail can be 2x.
Now, if h.265 or AV1 is actually building its "noise-removed" frames by always taking into account several preceding and following frames while accounting for movement, it could in theory discover the signal of the full detail across time and encode that, and there wouldn't be any loss in detail. But I don't think it does? I'd love to know if I'm mistaken.
But basically, the point is: comparing noise removal and synthesis can't be done using still images. You have to see an actual video comparison side-by-side to determine if detail is being thrown away or preserved. Noise isn't just noise -- noise is detail too.
Regarding aesthetics, I don't think AV1 synthesized grain takes into account the size of the grains in the source video, so chunky grain from an old film source, with its big silver halide crystals, will appear as fine grain in the synthesis, which looks wrong (this might be mitigated by a good film denoiser). It also doesn't model film's separate color components properly, but supposedly that doesn't matter because Netflix's video sources are often chroma subsampled to begin with: https://norkin.org/pdf/DCC_2018_AV1_film_grain.pdf
Disclaimer: I just read about this stuff casually so I could be wrong.
Smoothing the noise out doesn't make use of that additional resolution, unless the smoothing happens over the time axis as well.
Perfectly replicating the noise doesn't help in this situation.
[1]: https://telescope.live/blog/improve-image-quality-dithering
[2]: https://electronics.stackexchange.com/questions/69748/using-...
Noise is reduced to make the frame more compressible. This reduces the resolution of the original only because it inevitably removes some of the signal that can't be differentiated from noise. But even after noise reduction, successive frames of a still scene retain some frame-to-frame variance, unless the noise removal is too aggressive. When you play back that sequence of noise-reduced frames you still get a temporal dithering effect.
> With no dither, each analog input voltage is assigned one and only one code. Thus, there is no difference in the output for voltages located on the same "step" of the ADC's "staircase" transfer curve. With dither, each analog input voltage is assigned a probability distribution for being in one of several digital codes. Now, different voltages within the same "step" of the original ADC transfer function are assigned different probability distributions. Thus, one can see how the resolution of an ADC can be improved to below an LSB.
In actual film, I presume the random inconsistencies of the individual silver halide grains is the noise source, and when watching such a film, I presume the eyes are doing the averaging through persistence of vision[2].
In either case, a key point is that you can't bring back any details by adding noise after the fact.
[1]: https://www.ti.com/lit/an/snoa232/snoa232.pdf section 3.0 - Dither
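To make the quoted point concrete, here's a toy version of the same effect (mine, not from [1]): a constant signal of 0.3 LSB always quantizes to 0 without dither, but with dither the average of many quantized samples recovers it.

    import numpy as np

    rng = np.random.default_rng(0)
    true_value = 0.3          # in units of one LSB
    n = 100_000

    no_dither = np.round(np.full(n, true_value))                  # all zeros
    dithered = np.round(true_value + rng.uniform(-0.5, 0.5, n))   # mix of 0s and 1s

    print(no_dither.mean())   # 0.0 -- the sub-LSB information is simply gone
    print(dithered.mean())    # ~0.3 -- recovered by averaging over time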
> In this case, L = 0 corresponds to the case of modeling Gaussian noise whereas higher values of L may correspond to film grain with larger size of grains.
That might seem like a reasonable assumption, but in practice it’s not really the case. Due to nonlinear response curves, adding noise to a bright part of an image has far less effect than a darker part. If the image is completely blown out the grain may not be discernible at all. So practically speaking, grain does travel with objects in a scene.
This means detail is indeed encoded in grain to an extent. If you algorithmically denoise an image and then subtract the result from the original to get only the grain, you can easily see “ghost” patterns in the grain that reflect the original image. This represents lost image data that cannot be recovered by adding synthetic grain.
The synthesized grain is dependent on the brightness. If you were to just replace the frames with the synthesized grain described in the OP post instead of adding it, you would see something very similar.
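To illustrate that brightness dependence (and, loosely, the grain-size point from the quote above), here's a toy grain synthesizer in the spirit of FGS. It is not the actual AV1 decoder path: the smoothing kernel stands in for the autoregressive model, and the strength values are invented.

    # Toy "film grain synthesis": correlated Gaussian noise whose strength is a
    # piecewise-linear function of local brightness, added back onto the frame.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def synth_grain(luma, grain_size=3, points=((0, 10.0), (128, 6.0), (255, 2.0))):
        noise = np.random.normal(0.0, 1.0, luma.shape)
        grain = uniform_filter(noise, size=grain_size)   # larger size -> coarser grain
        xs, ys = zip(*points)
        scale = np.interp(luma, xs, ys)                  # brightness-dependent strength
        return grain * scale

    luma = np.full((480, 640), 96.0)                     # hypothetical flat gray frame
    grainy = np.clip(luma + synth_grain(luma), 0, 255)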
Sorry if I wasn't clear -- I was referring to the underlying objects moving. The codec is trying to capture those details, the same way our eye does.
But regardless of that, you absolutely cannot compare stills. Stills do not allow you to compare against the detail that is only visible over a number of frames.
Here's an example that might help you intuit why this is true.
Let's suppose you have a digital camera and walk towards a radiation source and then away. Each radioactive particle that hits the CCD causes it to oversaturate, creating visible noise in the image. The noise it introduces is random (Poisson), but your movement isn't.
Now think about how noise is introduced. There are a lot of ways, actually, but I'm sure this thought exercise will reveal to you how some cause noise across frames to be dependent. Maybe as a first thought, think about film sitting on a shelf degrading.
If you have a few static frames and average them, you improve SNR by retaining the unchanged signal and having the purely random noise cancel itself out. Retaining noise itself is not useful.
I suspect the effect you might be seeing is either just an aesthetic preference for the original grain behavior, or that you are comparing low bandwidth content with heavy compression artifacts like smoothing/low pass filtering (not storing fine detail saves significant bandwidth) to high bandwidth versions that maintain full detail, entirely unrelated to the grain overlaid on top.
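A quick numeric check of the averaging claim above, with purely synthetic numbers of my own:

    # Averaging N static frames with independent noise cuts the noise by ~sqrt(N).
    import numpy as np

    rng = np.random.default_rng(1)
    scene = rng.uniform(0, 255, (480, 640))                # hypothetical static scene
    frames = [scene + rng.normal(0, 8.0, scene.shape) for _ in range(16)]

    print(np.std(frames[0] - scene))                # ~8.0 for a single frame
    print(np.std(np.mean(frames, axis=0) - scene))  # ~2.0, i.e. 8 / sqrt(16)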
Eastman Business Park in Rochester has been demolished.
Also, please stop putting dust and scratches on YouTube videos. Thank you.
I'm in my early 50s so I remember film quite well. Just like vinyl or cassettes, I ain't going back and unless it is an artistic choice I don't want films to emulate what I consider to be an inferior technology.
It's not like we're on Pentium II processors anymore -- I can filter just about anything with ShaderGlass [0] on a shitty computer (and some of the CRT shaders, like crt-hyllian-curvature, are brilliant, especially on old shows like NewsRadio that only exist on DVD)... and I'm shocked that Netflix doesn't just have this built into their Apple TV app or whatever. I'm shocked Plex doesn't have it! (that I know of)
I made a comment on a different post about imagining a world where local AI/LLM/whatever does some favorable processing for you, by you, on your device, of web content, to enhance your experience. I really believe media (streamers all the way down to open source devs) need to begin to incorporate whatever's out there that reduces friction and increases joy. It's all out there already! The heavy lifting has been done! Just make Family Matters look like how it looked when I was locking in on a Friday night for TGIF LOL
jedbrooke•8h ago
I never understood the “grain = realism” thing. my real eyes don’t have grain. I do appreciate the role of grain as an artistic tool though, so this is still cool tech
GuB-42•8h ago
I don't know the psychovisuals behind that. Maybe it adds some high frequencies that compression often washes out, or maybe it acts like some kind of dithering.
As for your eyes, I am pretty sure that they have grain, that's how quantum physics work, you just don't perceive it because your brain filters it out. But again, I don't know how it interacts with film grain.
dinfinity•7h ago
And lots of it, actually. Just close your eyes or look at any non-textured surface. Tons of noise.
The decreasing signal-to-noise ratio is also highly noticeable when it gets darker.
observationist•8h ago
A child watching a Buster Keaton skit and gasping and giggling and enjoying it is going to have a different subjective aesthetic experience of the media than a film critic who knows exactly what type of film and camera were used, and what the meaning of all the different abstractions imply about the scene, and the fabric of Keaton's costume, and so on, and so forth.
Subjective aesthetic preferences are in the realm of cognition - we need a formal theory of intelligence mapped to the human brain, and all of these subjective phenomena collapse into individualized data processing and initial conditions.
There's something about film grain contrasted against clean cel animation which might make it easier for people to suspend disbelief. They are conditioned to think that absence of grain is associated with unreal animation, particular types of media, and CGI. Home video and news and so forth had grain and low quality, so grain gets correlated with "real". In my view, there's nothing deeper than that - we're the product of our times. In 40 years, media will have changed, and it may be that film grain is associated with surrealism, or edited out completely, as it's fundamentally noise.
Kina•8h ago
I have to imagine past glassmakers would have been absolutely enthralled by the ability we now have to make uniform, large sheets of glass, but here we are emulating the compromises they had to make because we are used to how it looks.
throw0101d•7h ago
It is more than just 'feeling correct': windows and their various (sub-)elements that make them up (can) change the architectural proportions and how the building is perceived as a whole:
* https://www.youtube.com/watch?v=uAMyUoDz4Og
* https://www.youtube.com/watch?v=_c8Ahs9Tcnc&t=49
It is similar with columns: they're not just 'tall-and-narrow', but rather have certain proportions and shapes depending on the style and aesthetic/feeling one wishes to convey:
* https://en.wikipedia.org/wiki/Classical_order
And these proportions can even be 'fractal': the window panes related to windows as a whole, related to the building as a whole:
* https://www.youtube.com/watch?v=J-0XJpPnlrA&t=3m13s
* https://en.wikipedia.org/wiki/Golden_rectangle
* https://en.wikipedia.org/wiki/List_of_works_designed_with_th...
* https://www.nngroup.com/articles/golden-ratio-ui-design/
throw0101d•6h ago
Perhaps, but if you're going to have them anyways you might as well make a conscious choice as to how they add to the overall design of the structure.
sneak•7h ago
this is likely the result of ~100 years of film-based filmmaking and projection. hell, we still call it filmmaking.
kderbe•7h ago
Look around you: nearly all surfaces have some kind of fine texture and are not visually uniform. When this is recorded as video, the fine texture is diminished due to things like camera optics, limited resolution, and compression smoothing. Film grain supplies some of the high frequency visual stimulus that was lost.
Our eyes and brains like that high frequency stimulation and aren't choosy about whether the exact noise pattern from the original scene is reproduced. That's why the x265 video encoder (which doesn't have grain synthesis since it produces H.265 video) has a psy-rd parameter that basically says, "try to keep the compressed video as 'energetic' as the original, even if the energy isn't in the exact same spot", and even a psy-rdoq parameter that says, "prefer higher 'energy' in general". These parameters can be adjusted to make a compressed video look better without needing to store more data.
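For anyone who wants to experiment, this is roughly how you'd pass those parameters to x265 through ffmpeg; the numeric values are arbitrary starting points of mine, not recommendations from this thread.

    # Encode with libx265 via ffmpeg, nudging the psychovisual settings mentioned above.
    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "input.mkv",
        "-c:v", "libx265", "-crf", "20",
        "-x265-params", "psy-rd=2.0:psy-rdoq=1.0",
        "-c:a", "copy",
        "output.mkv",
    ], check=True)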
dmbche•6h ago
It might be that there is a large part of the population that still has that association.
Cinephiles are also more likely to watch older (i.e. grainy) movies that ARE well shot and beautiful (which is why they are classics and watched by cinephiles) and not to see the bad film-era movies, only the cream of the crop, while being exposed to the whole gamut of quality when watching today's digitally shot movies. That would reinforce the idea that grain = good without it necessarily being the case - and their opinion might be heard more than the general population's.
At any rate, it can be a neat tool to lower sharpness!
crazygringo•4h ago
They definitely do at night when it's dark out. There's a kind of "sparkling" or "static" that comes in faint light.
Fortunately, our eyes have way better sensitivity than cameras. But the "realism" just comes from how it was captured using the technology of the day. It's no different from phonograph hiss or the way a CRT signal blurs. The idea is to be "real" to the technology that the filmmaker used, and the way they knew their movie would be seen.
It's the same way Van Gogh's brush strokes were real to his paintings. You wouldn't want his oil paintings sanded down to become flat. It's the reality of the original medium. And so even when we have a digital print of the film, we want to retain as much of the reality of the original as we can.
crazygringo•1h ago
On the other hand, a small amount of constant grain or noise is often intentionally introduced because otherwise images feel too static and end up looking almost fake. Similarly, dithering is intentionally added to audio, such as when mastering CDs or tracks. It helps prevent artifacts in both video and audio.
kuschku•33m ago
Often it can also make sense to modify the grain for aesthetics. Denoising usually produces a less detailed result, but what you can do is denoise only the color channels, not the brightness channel. Brightness noise looks normal to us, while color noise tends to look very artificial. But by keeping the brightness noise, you avoid losing detail to the denoiser.
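A minimal sketch of that chroma-only approach, assuming OpenCV and made-up filter strengths:

    # Denoise only the color (chroma) planes; leave the luma plane -- and its
    # grain/detail -- untouched.
    import cv2

    bgr = cv2.imread("noisy_frame.png")                  # hypothetical input
    y, cr, cb = cv2.split(cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb))

    cr = cv2.fastNlMeansDenoising(cr, None, 10, 7, 21)
    cb = cv2.fastNlMeansDenoising(cb, None, 10, 7, 21)

    out = cv2.cvtColor(cv2.merge([y, cr, cb]), cv2.COLOR_YCrCb2BGR)
    cv2.imwrite("chroma_denoised.png", out)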