Just for context: it is a very cool tool, but not a silver bullet.
The way it goes is: use high-resolution models and textures everywhere; rely on the engine to render everything in real time (with techniques like ray tracing); realize that no reasonable GPU can run the thing at full resolution; use AI (like DLSS) to upscale and compensate; find that it is still too much; so use AI to generate extra frames.
The end result is often not that great: there are limits to how well AI can fill in the gap, especially in real time. Frame generation is even worse, as it introduces lag: the generated frame doesn't take the latest player actions into account.
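As a rough illustration of where that lag comes from (a toy sketch in Python, with made-up names and timings, not how DLSS frame generation actually works internally): an interpolated frame needs both of its real neighbours, so it can only be shown after the later real frame has been rendered, and everything on screen ends up trailing the newest input.

```python
# Toy timeline (illustration only, not DLSS internals). The engine renders
# real frames at 60 fps; a frame generator inserts one interpolated frame
# between each pair, doubling what the display shows to ~120 "fps".
# Because the in-between frame needs BOTH neighbours, it can only be shown
# after the later real frame exists, so the screen trails the newest input.

RENDER_INTERVAL_MS = 1000 / 60  # ~16.7 ms between real (engine-rendered) frames

def presented_frames(num_real_frames):
    """Yield (label, newest_input_ms, present_ms) for each displayed frame."""
    for i in range(num_real_frames - 1):
        t_prev = i * RENDER_INTERVAL_MS        # real frame i is ready here
        t_next = (i + 1) * RENDER_INTERVAL_MS  # real frame i+1 is ready here
        # The interpolated frame reflects input no newer than frame i, yet it
        # cannot be displayed before frame i+1 has been rendered.
        yield (f"interp {i}/{i+1}", t_prev, t_next)
        # Real frame i+1 is then shown half an interval later to keep the
        # on-screen pacing even, delaying it relative to when it was ready.
        yield (f"real {i+1}", t_next, t_next + RENDER_INTERVAL_MS / 2)

for label, newest_input, shown_at in presented_frames(4):
    print(f"{label:11s} input from {newest_input:5.1f} ms, on screen at "
          f"{shown_at:5.1f} ms (lag {shown_at - newest_input:4.1f} ms)")
```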
Video game optimization is an art form that goes beyond making code run fast. Artists and programmers are supposed to work together and spend the performance budget wisely by cutting the right corners. For example, in a combat scene players won't look at the background, so make it as simple as possible and save the details for when the player doesn't have to focus on a bunch of monsters. It may even result in gameplay changes. AI can't replicate that (yet?).
It's fine for video games to employ tricks that limit what they have to make look good on screen. Games like Silent Hill did a pretty good job by filling the world with heavy fog, or by giving you a flashlight and leaving everything outside of a small well-lit circle too dark to see. That actually added to the atmosphere and allowed them to make what little you could see look great (for the time).
Guessing at what players might be looking at and making everything else look like garbage is doomed to fail, though. It punishes anyone who dares to look half an inch away from what the designers think you should be paying attention to, it is distracting even if you are trying to focus where they want you to, and anyone who wants to take a cool screenshot of the action gets screwed as well.
FYI, the experiment is not as insane as the article makes it seem.
The subjects knew there would be a drop involved, and they timed others doing the drop first before estimating the elapsed time in their own drop.
Unfortunately for those of you who want to try for yourselves, it closed down in 2021.
When you get better at juggling, objects really do start falling in slow motion (e.g., a glass falling from a cupboard).
I guess my brain stores trajectories in a cache instead of having to compute them, and I get higher fps than I used to.
Doesn't make it much easier though as the window for when you should hit that dodge button is still narrow.
Human time sense is just so weird when you pay attention to it a little.
And now, when there's an accidental falling object, often my hand just moves to the exact correct position to catch it.
One tangential optical effect I only recently noticed: when I shift my eyes quickly to a spinning ceiling fan, there is a moment where the fan blade(s) appear effectively stationary -- and then they transition to the blur that one normally sees.
The fan, I believe, is similar to the ticking-clock illusion and a type of saccadic masking. (1)
Related to the optokinetic response. (2)
I’m sure someone with much more knowledge than I could clarify this better, however.
I'm not aware of any movies shot in non-interleaved 120fps (AFAIK all the movies advertised as "120fps" are 2*60fps stereoscopic with the frames interleaved between the eyes). Considering how much better games look in high frame rate compared to 60fps I'd love to see a non-interleaved 120fps movie.
I think Billy Lynn's Long Halftime Walk? Although I'm not sure if you can actually watch it at 120fps.
> To accommodate the film's wide release, various additional versions of the film were created.[3] They include 120 fps in 2D and 60 fps in 3D as well as today's current standard of 24 fps. The film also received a Dolby Cinema release, with two high dynamic range versions that can accommodate 2D and 3D, with up to 120 fps in 2K resolution.
https://en.wikipedia.org/wiki/Billy_Lynn%27s_Long_Halftime_W...
Obviously you can make 60+ FPS cinematography work well; games do it all the time. But whether that's practical in live action, I'm not sure: I certainly haven't seen an example that didn't make me cringe. Even in non-cinema settings, such as on YouTube, the presentation style usually needs a bit of adjusting.
Speaking of which, I do at least wish that all the cinematography-focused content creators on YouTube would stop using 24 and 30 FPS out of vanity... Though it would help if YouTube rolled out support for HFR (120+ fps) as well, so that those who include 24p movie snippets don't need to compromise.
If you think about how light falls off in proportion to the square of its distance from the source—and that actors generally don’t stand in one place, but move through large spaces where they must appear to be lit evenly—you start to see that this is not just a question of “efficient LED lighting.” Shooting at high frame rates requires an enormous amount of light that cannot easily (read: cheaply or quickly) be brought to bear in a normal production outside of controlled studio conditions.
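Some back-of-the-envelope numbers (my own rough arithmetic in Python, assuming a standard 180-degree shutter and made-up distances): going from 24 fps to 120 fps cuts the per-frame exposure time five-fold, and inverse-square falloff means an actor who doubles their distance from the lamp needs four times the output to stay evenly lit.

```python
# Rough exposure arithmetic (assumes a 180-degree shutter; a real production
# also juggles shutter angle, ISO, aperture and filtration on top of this).
import math

def shutter_time_s(fps, shutter_angle_deg=180):
    """Per-frame exposure time for a given frame rate and shutter angle."""
    return (shutter_angle_deg / 360) / fps

t24 = shutter_time_s(24)    # 1/48 s
t120 = shutter_time_s(120)  # 1/240 s
light_factor = t24 / t120   # extra light needed for the same exposure
print(f"120 fps needs {light_factor:.0f}x the light of 24 fps "
      f"(~{math.log2(light_factor):.1f} stops)")

# Inverse-square falloff: illuminance ~ source output / distance^2, so an
# actor who walks from 2 m to 4 m away from the lamp needs 4x the output
# to stay at the same exposure.
def output_ratio(d_near_m, d_far_m):
    return (d_far_m / d_near_m) ** 2

print(f"2 m -> 4 m from the source: {output_ratio(2, 4):.0f}x more output needed")
```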
Unless you are James Cameron shooting Avatar III on a soundstage with (close to) a blank cheque from the studio, you are still limited in terms of space (the constraints of the location given the size of the light and its supporting stand), time (the time to set up and adjust each light properly, including last-minute adjustments), labor (someone’s got to plug all that in, run the cables, etc.), and cost/availability (you don’t always get the lights you want for a given budget).
Beyond that, you’re also considering aperture and ISO from a creative standpoint; maybe you don’t want to shoot wide open for reasons of image control, and so you may spend your lumen budget on ensuring that a particular scene can be exposed at, say, f/5.6 at ISO 100. Or you may want to spend your lumens on lens filtration, which produces a specific effect but further cuts down the incident light.
In short, no, you do not have 10x light available to spend on frame rate, and for any marginal gains in raw output, most cinematographers are thinking about what creative choices it opens up for the film; I would never burn additional lumens to shoot at 120fps just for the sake of A/V fanboys on the internet, unless the scene requires slow-motion or high-speed capture for postproduction reasons. Technical choices in this industry should always be motivated by the need to solve creative problems effectively, quickly, and within budget.
The Matrix also did an interesting take on freeze-frame animation with the 360-degree simultaneous camera capture thing. They'd use CGI for that nowadays. Actually, they already used a lot more CGI in the fight scenes in the sequels, which I think is a shame.
1) Back in the day, you'd use slowmo if you wanted to make something look bigger and more impressive, like scale model work or making a human-sized person look like a giant[0]. Maybe people just figured out the same effect works at 1:1 scale. Or maybe it started working at 1:1 scale after people got used to it being associated with big and impressive things.
2) It's just become a lot easier and cheaper, in the same sort of way that shallow depth of field was everywhere after large-sensor consumer video cameras started appearing (notably the Canon 5D Mark II). You don't even have to remember to overcrank the camera; you can fake it in post with Twixtor or its descendants.
3) Not sure what the state of play is now, but for a while higher frame rates were one of the main things distinguishing "cinema" cameras. E.g., maybe you could shoot at 180fps, but only with an extra crop factor or with certain codecs. Maybe that focused filmmakers' minds on it a bit.
4) I don't think you ever see step printing [1] anymore (which is when you repeat frames, instead of overcranking or interpolating them). Maybe it's due a comeback.
So I don’t think it’s quite that uncommon. For editors it serves a useful purpose as an effect that feels perceptually different than regular slow motion and adds variety to cuts.
This was used to good effect in Spider-Man: Into the Spider-Verse, where they mixed animating on ones (24 fps) and twos (12 fps)[2], making one character appear more skilled than the other, for example.
[1]: https://businessofanimation.com/why-animation-studios-are-an...
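For what it's worth, here is a tiny sketch of the step printing and on-twos holds mentioned above (illustration only, with placeholder frame names): both techniques just hold each source frame across multiple output frames, unlike overcranking (capturing more frames) or interpolation (synthesizing new ones).

```python
# Illustration with placeholder frames: step printing / animating "on twos"
# just repeats each source frame, instead of shooting extra frames
# (overcranking) or synthesizing new in-betweens (interpolation).

def step_print(frames, hold=2):
    """Repeat each source frame `hold` times; hold=2 is 'on twos'."""
    return [frame for frame in frames for _ in range(hold)]

poses = ["A", "B", "C", "D"]           # drawings or shot frames
on_twos = step_print(poses, hold=2)    # ['A', 'A', 'B', 'B', 'C', 'C', 'D', 'D']
print(on_twos)

# Played back at 24 fps: a live-action clip step-printed this way runs at
# half speed, while an animated character drawn on twos still moves at
# normal speed but only updates 12 times per second. Mixing characters on
# ones and on twos in the same shot gives the Spider-Verse contrast
# described above.
```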