I would love for them to provide an option to view it with film simulation vs without.
One of my favorite movies of all time, The Holdovers, did film simulation extremely well. It's set in the '70s so it attempts to look like a movie of that era.
It looked great to me, but if you're an actual film nerd you're going to notice a lot of things aren't exactly accurate.
Maybe in the near future we'll see Netflix being able to process some post effects on the client. So if you're color blind, you get a mode for that. If you don't want fake grain you can turn it off.
What parameters would that be? Make it look like Eastman Ektachrome High-Speed Daylight Film 7251 400D? For years, people have run film negative through telecines to create grain footage for use as overlays. For years, colorists have come up with ways of simulating the color of specific film stocks using reference film with test patterns that has been made available.
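For illustration, here's a minimal sketch of that overlay approach in Python, assuming a pre-scanned grain plate stored as a neutral mid-gray image; the helper name and the strength value are hypothetical:

    import numpy as np

    def apply_grain_plate(frame, plate, strength=0.5):
        # frame and plate are float arrays in [0, 1]; the scanned plate is
        # assumed to be neutral gray (0.5) wherever the stock shows no grain.
        return np.clip(frame + (plate - 0.5) * strength, 0.0, 1.0)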
If a director/producer wants film grain added to their digital content, that's where it should be done in post. Not by some devs working for a streaming platform. The use of grain or not is a creative decision made by the creators of the work. That's where it should remain
Why? If you're spending a significant chunk of your bits just transmitting data that could be effectively recreated on the client for free, isn't that wasteful? Sure, maybe the grains wouldn't be at the exact same coordinates, but it's not like the director purposefully placed each grain in the first place.
I recognize that the locally-produced grain doesn't look quite right at the moment, but travel down the hypothetical with me for a moment. If you could make this work, why wouldn't you?
--------
...and yes, I acknowledge that once the grain is being added client side, the next logical step would be "well, we might as well let viewers turn it off." But, once we've established that client-side grain makes sense, what are you going to do about people having preferences? Should we outlaw de-noising video filters too?
I agree that the default setting should always match what the film maker intended—let's not end up with a TV motion smoothing situation, please for the love of god—but if someone actively decides "I want to watch this without the grain for my own viewing experience"... okay? You do you.
...and I will further acknowledge that I would in fact be that person! I hate grain. I modded Cuphead to remove the grain and I can't buy the Switch version because I know it will have grain. I respect the artistic decision but I don't like it and I'm not hurting anyone.
I'm sorry your tech isn't good enough to recreate the original. That does not mean you get to change the original because your tech isn't up to the task. Update your tech to better handle the original. That's like saying a photo of The Starry Night doesn't retain the details, so we're going to smear the original to fit the tech better. No. Go fix the tech. And no, this is not fixing the tech. It is a band-aid to cover the flaws in the tech.
In theory though, I don't see any reason why client-side grain that looks identical to the real thing shouldn't be achievable, with massive bandwidth savings in the process.
It won't be, like, pixel-for-pixel identical, but that was why I said no director is placing individual grain specks anyway.
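As a toy sketch of the hypothetical (this is not Netflix's actual mechanism; the per-frame seed idea is an assumption): the client could regenerate statistically similar grain from a seed, so the grain itself never has to be transmitted:

    import numpy as np

    def add_client_side_grain(frame, frame_index, sigma=0.03):
        # Deterministic per-frame seed: identical grain on every playback,
        # but not pixel-for-pixel identical to the photographed grain.
        rng = np.random.default_rng(seed=frame_index)
        grain = rng.normal(0.0, sigma, frame.shape)
        return np.clip(frame + grain, 0.0, 1.0)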
The market has spoken and it says that people want to watch movies even when they don't have access to a 35mm projector or a projector that can handle digital cinema packages, so nobody is seeing the original outside a theater.
Many viewers are bandwidth limited, so there are tradeoffs... if this film grain stuff improves the picture quality available at a given bandwidth, that's a win. IMHO, Netflix's blog posts about codecs seem to focus on bandwidth reduction, so I'm never sure whether users with ample bandwidth end up getting less quality or not; that's a valid question to ask.
That's true, but at a given bitrate (until you get to very high bitrates), the compressed original will usually look worse and less sharp because so many bits are spent trying to encode the original grain. As a result, that original grain tends to get "smeared" over larger areas, making it look muddy. You lose sharpness in areas of the actual scene because it's trying (and often failing) to encode sharp grains.
Film Grain Synthesis makes sense for streaming where bandwidth is limited, but I'll agree that in the examples, the synthesized grain doesn't look very grain-like. And, depending on the amount and method of denoising, it can definitely blur details from the scene.
I can see why they want to compare against the actual local copy of the video with the natural grain. But that’s the perfect copy that they can’t actually hope to match.
Isn't that the image captioned "Regular AV1 (without FGS) @ 8274 kbps"?
But still, they have:
> A source video frame from They Cloned Tyrone
> Regular AV1 (without FGS) @ 8274 kbps
> AV1 with FGS @ 2804 kbps
Just to emphasize the problem, wouldn't it be nice to also see:
Regular AV1 (without FGS) @ 2804 kbps
It should look really bad, right? That would emphasize their results.
That's an understatement. It just looks like an RGB noise effect was added, and film grain does not look like RGB noise. To me, grain is only one part of what gave film the film look: the way the highlights bloom rather than clip, and an acquisition that was more natural and organic than the ultra-sharp look of modern digital capture. Using SoftFX or Black Mist type filters helps, but it's just not the same, since it's a digital vs. analog difference in acquisition. All of these attempts at making something look like what it's not keep falling down in the same ways. But hey, there's a cool tech blog about it this time. Film grain filters have been around for a long time, yet people just don't care for them. Even in the Blu-ray era, there were attempts at removing the grain in the encode and applying it at playback. Netflix isn't coming up with anything new, and apparently nothing exciting either, based on the results.
A few things to note:
- still-frames are also a mediocre way to evaluate video quality.
- a theoretically perfect[1] noise-removal filter will always look less detailed than the original source, since your brain/eye system will invent more detail for a noisy image than for a blurry image.
1: By which I mean a filter that preserves 100% of the non-grain detail present, not one that magically recovers detail lost due to noise.
ANY noticeable perceived "flaw" in any creative medium will eventually become an aesthetic choice.
People remember the emotions the artwork engendered, and thus the whole work is associated with the feelings, flaws and all. If the work is particularly widely known, the flaws can become a stand-in for the work itself.
I see this in video games - I'm fond of the NES-era "flaws" and limitations (palette limits, sprite limits, sound channel limits), but less connected to the Atari 2600 or SNES/PS1/NDS/etc flaws. Shovel Knight is charming; A Short Hike, while great, doesn't resonate on a style level.
There's an influx of high-profile directors/films right now and in the pipeline shot for IMAX (F1: The Movie, I think, Mission: Impossible, etc.), and Christopher Nolan's The Odyssey coming next year, shot entirely on IMAX film with newly developed smaller, quieter cameras made to accomplish it.
I've read that a 15-perf 65mm IMAX negative shot with slower film stocks is "virtually grainless", even when viewed on a 70ft screen. Grain is apparently noticeable in IMAX films when large/fast stocks are used and pushed toward their limits, and (of course) when smaller-format film stocks have been blown up.
It just adds visual noise that obscures details of the authentic scene, and nothing prevents nostalgia from being tied to many of the more prominent visual cues like old actors or your own old memories from when you watched it first...
> contributing to [film's] realism
But there is no grain in reality, so it does the opposite.
Otherwise, I'm glad AV1 marches along and, instead of wasting bitrate encoding visual garbage, has an algorithmic replacement mechanism, which also means you could turn it off more easily.
Does it add any more than modern video compression techniques do? What constitutes noise in cinema is somewhat subjective.
Well ackchually -- illumination is inherently random, so all time-bounded captures of a scene (including what your eyes do) are subject to shot noise: https://en.wikipedia.org/wiki/Shot_noise
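A quick numerical check of this (simulated photon counts; since Poisson noise has a standard deviation of sqrt(mean), the relative noise falls as the light increases):

    import numpy as np

    rng = np.random.default_rng(0)
    for mean_photons in (10, 100, 10_000):
        counts = rng.poisson(mean_photons, size=1_000_000)
        # relative shot noise ~ 1/sqrt(mean): ~0.32, ~0.10, ~0.01
        print(mean_photons, counts.std() / counts.mean())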
1. You prefer Betamax or VHS to digital media (highly unlikely)
2. You own laserdiscs (limited to 480i)
3. You own 35mm prints of film.
That's since all the other formats film has been made available on are both digital and compressed.
All that is 24fps.
That's without audio, which I assume you also want to be uncompressed.
Fake lights, fake shadows, fake sky, ...
Also, the author had me at God of Gamblers 2. So good. I will take him up on his recommendation to rewatch.
That's not to say that all noise and grain is good. It can be unavoidable, due to inferior technology, or a result of poor creative choices. It can even be distracting. But the alternative where everything undergoes denoising (which many of our cameras do by default now) is much worse in my opinion. To my eyes, the smoothing that happens with denoising often looks unrealistic and far more distracting.
a) Compressed original with significant artifacts from the codec trying to represent original grain
b) A denoised version with fewer compression artifacts, but looks "smoothed" by the denoising
c) A denoised version with synthesized grain that looks almost as good as the original, though the grain doesn't exactly match
I personally think the FGS needs better grain simulation (to look more realistic), but even in its current state, I think I'd probably go with choice C. I'm all for showing the closest thing to the author's intent. We just need to remember that compression artifacts are not the author's intent.
In an ideal world where we can deliver full, uncompressed video to everyone, then obviously - don't mess with it at all!
It's like reducing an image to tiny dots with dithering (reminds me of Atkinson dithering). Those grains are not noise, they are detail, actual data. That's why real grain looks good, IMO.
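For reference, a minimal Atkinson dithering sketch (it diffuses 6/8 of each pixel's quantization error to nearby pixels, which is why the dot pattern carries real image detail rather than noise):

    import numpy as np

    def atkinson_dither(gray):
        # gray: 2D float array in [0, 1]; returns a 0/1 image.
        img = gray.astype(np.float64).copy()
        h, w = img.shape
        # Atkinson diffuses only 6/8 of the error; each neighbor gets 1/8.
        neighbors = [(0, 1), (0, 2), (1, -1), (1, 0), (1, 1), (2, 0)]
        for y in range(h):
            for x in range(w):
                new = 1.0 if img[y, x] >= 0.5 else 0.0
                err = (img[y, x] - new) / 8.0
                img[y, x] = new
                for dy, dx in neighbors:
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        img[yy, xx] += err
        return img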
There are two possible advantages for this kind of grain synthesis. For Netflix, they could produce the same perceived quality at lower bitrates, which reduces costs per view and allows customers with marginally slow connections to get a higher quality version. For a consumer, the advantage would be getting more non-grain detail for a fixed bitrate.
You are right that if you subtract the denoised frame from the raw one, showing only the estimated noise, you would get some impression of the scene. I think there are two reasons for this. Firstly, the places where the denoiser produced a blurry line that should be sharp may show up as faint lines. I don't think this is 'hidden information' so much as it is information lost to lossy compression. In the same way, if you look at the difference between a raw image and a compressed one, you may see some emphasized edges due to compression artefacts. Secondly, the less exposed regions of the film will have more noise, so noisiness becomes a proxy for darkness, allowing some reproduction of the scene. I would expect this detail to be lost after adjusting for the piecewise-linear function for grain intensity at different brightness levels.
Perhaps a third thing is the level of noise in the blacks and the ‘grain size’ or other statistical properties tell you about the kind of film being used, but I think those things are captured in the film grain simulation model.
Possibly there are some other artefacts like evidence of special effects, post processing, etc.
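For what it's worth, a sketch of that piecewise-linear intensity scaling, which is the kind of luma-dependent function AV1's grain model signals; the breakpoints below are invented for illustration, not values from the spec:

    import numpy as np

    def scale_grain_by_luma(grain, luma):
        # Grain strength as a piecewise-linear function of brightness.
        # Illustrative breakpoints: stronger grain in shadows/midtones.
        luma_pts  = np.array([0.0, 0.25, 0.75, 1.0])
        scale_pts = np.array([0.8, 1.0, 0.6, 0.2])
        return grain * np.interp(luma, luma_pts, scale_pts)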
When you watch a high-quality encode that includes the actual noise, there is a startling increase in resolution from seeing a still to seeing the video. The noise is effectively dancing over a signal, and at 24 fps the signal is still perfectly clear behind it.
Whereas if you lossily encode a still that discards the noise and then adds back artificial noise to match the original "aesthetically", the original detail is non-recoverable if this is done frame-by-frame. Watching at 24 fps produces a fundamentally blurrier viewing experience. And it's not subtle -- on old noisy movies the difference in detail can be 2x.
Now, if H.265 or AV1 actually built its "noise-removed" frames by taking several preceding and following frames into account while compensating for movement, it could in theory recover the full-detail signal across time and encode that, and there wouldn't be any loss in detail. But I don't think it does? I'd love to know if I'm mistaken.
But basically, the point is: comparing noise removal and synthesis can't be done using still images. You have to see an actual video comparison side-by-side to determine if detail is being thrown away or preserved. Noise isn't just noise -- noise is detail too.
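The "signal behind the dancing noise" claim is easy to check numerically: averaging N noisy frames of a static scene cuts the noise by about sqrt(N) while the underlying detail survives (a toy sketch that ignores motion):

    import numpy as np

    rng = np.random.default_rng(1)
    scene = rng.random((480, 640))            # stand-in for a static scene
    frames = [scene + rng.normal(0, 0.1, scene.shape) for _ in range(24)]

    err_single   = np.abs(frames[0] - scene).mean()
    err_averaged = np.abs(np.mean(frames, axis=0) - scene).mean()
    print(err_single / err_averaged)          # ~sqrt(24), i.e. about 4.9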
Regarding aesthetics, I don't think AV1 synthesized grain takes into account the size of the grains in the source video, so chunky grain from an old film source, with its big silver halide crystals, will appear as fine grain in the synthesis, which looks wrong (this might be mitigated by a good film denoiser). It also doesn't model film's separate color components properly, but supposedly that doesn't matter because Netflix's video sources are often chroma subsampled to begin with: https://norkin.org/pdf/DCC_2018_AV1_film_grain.pdf
Disclaimer: I just read about this stuff casually so I could be wrong.
Smoothing the noise out doesn't make use of that additional resolution, unless the smoothing happens over the time axis as well.
Perfectly replicating the noise doesn't help in this situation.
[1] https://telescope.live/blog/improve-image-quality-dithering
[2] https://electronics.stackexchange.com/questions/69748/using-...
Noise is reduced to make the frame more compressible. This reduces the resolution of the original only because it inevitably removes some of the signal that can't be differentiated from noise. But even after noise reduction, successive frames of a still scene retain some frame-to-frame variance, unless the noise removal is too aggressive. When you play back that sequence of noise-reduced frames you still get a temporal dithering effect.
> In this case, L = 0 corresponds to the case of modeling Gaussian noise whereas higher values of L may correspond to film grain with larger size of grains.
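Roughly what that autoregressive model does, sketched in Python (the lag and coefficient here are arbitrary; AV1 signals per-title values): each grain sample mixes fresh Gaussian noise with already-generated neighbors within lag L, so a larger L yields spatially correlated, chunkier grain:

    import numpy as np

    def synthesize_ar_grain(h, w, lag=2, coeff=0.05, sigma=1.0, seed=0):
        # lag=0 degenerates to plain Gaussian noise; larger lags correlate
        # nearby samples, producing bigger apparent "grains".
        rng = np.random.default_rng(seed)
        g = np.zeros((h + lag, w + 2 * lag))    # padded borders stay zero
        for y in range(lag, h + lag):
            for x in range(lag, w + lag):
                acc = rng.normal(0.0, sigma)
                for dy in range(-lag, 1):       # causal neighborhood only
                    for dx in range(-lag, lag + 1):
                        if dy == 0 and dx >= 0:
                            break
                        acc += coeff * g[y + dy, x + dx]
                g[y, x] = acc
        return g[lag:, lag:w + lag]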
If you have a few static frames and average them, you improve SNR by retaining the unchanged signal and having the purely random noise cancel itself out. Retaining noise itself is not useful.
I suspect the effect you might be seeing is either just an aesthetic preference for the original grain behavior, or that you are comparing low bandwidth content with heavy compression artifacts like smoothing/low pass filtering (not storing fine detail saves significant bandwidth) to high bandwidth versions that maintain full detail, entirely unrelated to the grain overlaid on top.
Eastman Business Park in Rochester has been demolished.
Also, please stop putting dust and scratches on YouTube videos. Thank you.
I'm in my early 50s so I remember film quite well. Just like vinyl or cassettes, I ain't going back and unless it is an artistic choice I don't want films to emulate what I consider to be an inferior technology.
jedbrooke•7h ago
I never understood the "grain = realism" thing. My real eyes don't have grain. I do appreciate the role of grain as an artistic tool, though, so this is still cool tech.
GuB-42•6h ago
I don't know the psychovisuals behind that. Maybe it adds some high frequencies that compression often washes out, or maybe it acts like some kind of dithering.
As for your eyes, I am pretty sure they do have grain (that's how quantum physics works); you just don't perceive it because your brain filters it out. But again, I don't know how that interacts with film grain.
dinfinity•5h ago
And lots of it, actually. Just close your eyes or look at any non-textured surface. Tons of noise.
The decreasing signal-to-noise ratio is also highly noticeable when it gets darker.
observationist•6h ago
A child watching a Buster Keaton skit and gasping and giggling and enjoying it is going to have a different subjective aesthetic experience of the media than a film critic who knows exactly what type of film and camera were used, and what the meaning of all the different abstractions imply about the scene, and the fabric of Keaton's costume, and so on, and so forth.
Subjective aesthetic preferences are in the realm of cognition - we need a formal theory of intelligence mapped to the human brain, and all of these subjective phenomena collapse into individualized data processing and initial conditions.
There's something about film grain contrasted against clean cel animation which might make it easier for people to suspend disbelief. They are conditioned to think that absence of grain is associated with unreal animation, particular types of media, and CGI. Home video and news and so forth had grain and low quality, so grain gets correlated with "real". In my view, there's nothing deeper than that - we're the product of our times. In 40 years, media will have changed, and it may be that film grain is associated with surrealism, or edited out completely, as it's fundamentally noise.
Kina•6h ago
I have to imagine past glassmakers would have been absolutely enthralled by the ability we now have to make uniform, large sheets of glass, but here we are emulating the compromises they had to make because we are used to how it looks.
throw0101d•5h ago
It is more than just 'feeling correct': windows and their various (sub-)elements that make them up (can) change the architectural proportions and how the building is perceived as a whole:
* https://www.youtube.com/watch?v=uAMyUoDz4Og
* https://www.youtube.com/watch?v=_c8Ahs9Tcnc&t=49
It is similar with columns: they're not just 'tall-and-narrow', but rather have certain proportions and shapes depending on the style and aesthetic/feeling one wishes to convey:
* https://en.wikipedia.org/wiki/Classical_order
And these proportions can even be 'fractal': the window panes related to windows as a whole, related to the building as a whole:
* https://www.youtube.com/watch?v=J-0XJpPnlrA&t=3m13s
* https://en.wikipedia.org/wiki/Golden_rectangle
* https://en.wikipedia.org/wiki/List_of_works_designed_with_th...
* https://www.nngroup.com/articles/golden-ratio-ui-design/
throw0101d•5h ago
Perhaps, but if you're going to have them anyways you might as well make a conscious choice as to how they add to the overall design of the structure.
sneak•5h ago
This is likely the result of ~100 years of film-based filmmaking and projection. Hell, we still call it filmmaking.
kderbe•5h ago
Look around you: nearly all surfaces have some kind of fine texture and are not visually uniform. When this is recorded as video, the fine texture is diminished due to things like camera optics, limited resolution, and compression smoothing. Film grain supplies some of the high frequency visual stimulus that was lost.
Our eyes and brains like that high frequency stimulation and aren't choosy about whether the exact noise pattern from the original scene is reproduced. That's why the x265 video encoder (which doesn't have grain synthesis since it produces H.265 video) has a psy-rd parameter that basically says, "try to keep the compressed video as 'energetic' as the original, even if the energy isn't in the exact same spot", and even a psy-rdoq parameter that says, "prefer higher 'energy' in general". These parameters can be adjusted to make a compressed video look better without needing to store more data.
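For instance, through ffmpeg's libx265 wrapper (the values here are illustrative starting points, not recommendations):

    ffmpeg -i input.mkv -c:v libx265 -crf 20 \
        -x265-params psy-rd=2.0:psy-rdoq=1.0 \
        -c:a copy output.mkv

Higher psy-rd/psy-rdoq settings bias the encoder toward keeping grain-like texture at a given CRF, at the cost of some measured (PSNR-style) fidelity.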
dmbche•4h ago
It might be that there is a large part of the population that still has that association.
Cinephiles are also more likely to watch older (i.e. grainy) movies that ARE well shot and beautiful (which is why they are classics and watched by cinephiles) and never see the bad films, only the cream of the crop, while being exposed to the whole gamut of quality in today's digitally shot movies. That would reinforce "grain = good" without it necessarily being the case, and their opinion might be heard more than the general population's.
At any rate, it can be a neat tool to lower sharpness!
crazygringo•3h ago
They definitely do at night when it's dark out. There's a kind of "sparkling" or "static" that comes in faint light.
Fortunately, our eyes have way better sensitivity than cameras. But the "realism" just comes from how it was captured using the technology of the day. It's no different from phonograph hiss or the way a CRT signal blurs. The idea is to be "real" to the technology that the filmmaker used, and the way they knew their movie would be seen.
It's the same way Van Gogh's brush strokes were real to his paintings. You wouldn't want his oil paintings sanded down to become flat. It's the reality of the original medium. And so even when we have a digital print of the film, we want to retain as much of the reality of the original as we can.