The technique here seems to rely more on eyeballing a plausible penumbra than on explicitly modelling the size of the light source, though I don't quite understand the core intuition.
The article would probably benefit from having figure captions below each image stating whether the image is interactive or not.
Alternatively, instead of captions about interactivity, a small symbol could be shown in a corner of each interactive image. In that case, the intro should also explain that symbol and what it means before the first image that carries it.
The demo at the end has bad banding issues (which the article does acknowledge).
It seems like a cheat-ish improvement to both of these would be a blur applied at the end.
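To be concrete about the cheat: it would just be a cheap separable pass over the shadow/occlusion buffer before compositing, something like the horizontal half sketched below (buffer layout and radius are illustrative, not from the article). The obvious tradeoff is that it also smears shadows across depth edges unless it's made edge-aware.

```c
/* One horizontal box-blur pass over a w*h shadow/occlusion buffer; run the
 * same thing with x and y swapped for the vertical pass. The radius is
 * whatever hides the band width. */
void box_blur_h(const float *src, float *dst, int w, int h, int radius) {
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f;
            int count = 0;
            for (int k = -radius; k <= radius; ++k) {
                int xx = x + k;
                if (xx < 0 || xx >= w) continue;   /* skip samples outside the buffer */
                sum += src[y * w + xx];
                ++count;
            }
            dst[y * w + x] = sum / (float)count;
        }
    }
}
```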
Small step sizes are doubly bad because low-spec shader targets like WebGL 1 and D3D9-era shader models limit the number of loop iterations, so no matter how powerful your GPU is, the step loop terminates somewhat early and produces results that don't resemble the ground truth.
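Roughly the situation, as a CPU-side C sketch of that kind of capped loop (the toy scene, constants, and names here are mine, not the article's):

```c
#include <math.h>

#define MAX_STEPS 64  /* low-spec shader targets want a fixed, compile-time bound */

typedef struct { float x, y, z; } vec3;

/* toy scene so the sketch is self-contained: a unit sphere at the origin */
static float scene_sdf(vec3 p) {
    return sqrtf(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;
}

static vec3 add_scaled(vec3 a, vec3 d, float t) {
    vec3 r = { a.x + d.x * t, a.y + d.y * t, a.z + d.z * t };
    return r;
}

/* 1.0 = point sees the light, 0.0 = occluded. If MAX_STEPS runs out before
 * t reaches max_t we have to guess, which is where the "doesn't resemble the
 * ground truth" artifacts come from when individual steps are small. */
float hard_shadow(vec3 origin, vec3 dir, float max_t) {
    float t = 0.01f;                        /* small offset to avoid self-hits */
    for (int i = 0; i < MAX_STEPS; ++i) {
        float d = scene_sdf(add_scaled(origin, dir, t));
        if (d < 0.001f) return 0.0f;        /* hit an occluder */
        t += d;                             /* sphere-trace step */
        if (t >= max_t) return 1.0f;        /* reached the light */
    }
    return 1.0f;                            /* budget exhausted: optimistic guess */
}
```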
Right at the end:
> The random jitter ensures that pixels next to each other don’t end up in the same band. This makes the result a little grainy which isn’t great. But I think looks better than banding… This is an aspect of the demo that I’m still not satisfied with, so if you have ideas for how to improve it please tell me!
I implemented a similar algorithm myself and ran into the same issue. I did find a solution without that particular aliasing, though it comes with its own tradeoffs. I guess I should write it up as a blog post sometime.
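For anyone who hasn't seen the trick spelled out: the jitter is presumably just a per-pixel random offset of the ray's starting distance, so neighbouring pixels quantise into different bands and the banding turns into grain. A minimal C-style sketch, where hash21 is a made-up stand-in for whatever noise source the shader really uses (a blue-noise texture would look less grainy than white noise like this):

```c
#include <stdint.h>

/* cheap integer hash -> [0,1); stand-in for a per-pixel noise texture lookup */
static float hash21(int x, int y) {
    uint32_t h = (uint32_t)x * 374761393u + (uint32_t)y * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (float)(h & 0xFFFFFFu) / 16777216.0f;
}

/* per-pixel offset for the shadow ray's starting distance, in [0, step_size) */
float jittered_start(int px, int py, float step_size) {
    return hash21(px, py) * step_size;
}
```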
The primary force behind real soft shadows is obviously that real lights are not point sources. I wonder how much worse the performance would be if, instead of the first two (kinda hacky) soft shadow rules, we replaced the light with maybe five lights representing random points on a small circular light source. Maybe you'd get too much banding unless you used a much higher number of light samples, but at the very least it would be an interesting comparison to justify using the approximation.
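Roughly what I have in mind, as a CPU-side C sketch (hard_shadow is assumed to be the usual sphere-traced visibility test, and the disc sampling and sample count are just illustrative):

```c
#include <math.h>
#include <stdlib.h>

typedef struct { float x, y, z; } vec3;

/* assumed: the usual sphere-traced hard-shadow visibility test (1 = visible) */
float hard_shadow(vec3 origin, vec3 dir, float max_t);

static vec3  vsub(vec3 a, vec3 b)    { vec3 r = {a.x-b.x, a.y-b.y, a.z-b.z}; return r; }
static vec3  vadd(vec3 a, vec3 b)    { vec3 r = {a.x+b.x, a.y+b.y, a.z+b.z}; return r; }
static vec3  vscale(vec3 a, float s) { vec3 r = {a.x*s, a.y*s, a.z*s}; return r; }
static float vlen(vec3 a)            { return sqrtf(a.x*a.x + a.y*a.y + a.z*a.z); }
static vec3  vcross(vec3 a, vec3 b) {
    vec3 r = {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
    return r;
}

/* Average visibility of a disc light of radius `radius` centred at `light_pos`,
 * seen from surface point `p`, using `n` shadow rays to random points on the disc. */
float area_light_shadow(vec3 p, vec3 light_pos, float radius, int n) {
    vec3 to_light = vsub(light_pos, p);
    vec3 dir = vscale(to_light, 1.0f / vlen(to_light));

    /* any two axes spanning the disc's plane (perpendicular to dir) */
    vec3 up = fabsf(dir.y) < 0.99f ? (vec3){0, 1, 0} : (vec3){1, 0, 0};
    vec3 u  = vcross(dir, up);
    u = vscale(u, 1.0f / vlen(u));
    vec3 v  = vcross(dir, u);

    float sum = 0.0f;
    for (int i = 0; i < n; ++i) {
        float a = 6.2831853f * (float)rand() / (float)RAND_MAX;
        float r = radius * sqrtf((float)rand() / (float)RAND_MAX);  /* uniform on the disc */
        vec3 s = vadd(light_pos, vadd(vscale(u, r * cosf(a)), vscale(v, r * sinf(a))));
        vec3 to_s = vsub(s, p);
        float dist = vlen(to_s);
        sum += hard_shadow(p, vscale(to_s, 1.0f / dist), dist);
    }
    return sum / (float)n;   /* with n = 5 this will be visibly noisy/banded */
}
```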
edit: here’s one. I’m not sure this is the one I was thinking of, but I think it does validate your hypothesis that you can reduce the number of steps needed by looking at gradients. https://hal.science/hal-02507361/file/lipschitz-author-versi...
Fun fact: you can use very similar logic to do single-sample depth of field and/or antialiasing. The core idea, which this blog post maybe doesn't quite spell out, is that you're tracing a thin cone, not just a ray. You can track the distance to anything the ray grazes, assume it's an edge that partially covers your cone (think of dividing a circle into two parts with an arbitrary straight line and keeping whichever part contains the center), and that gives you a way to compute both soft shadows and partial pixel or circle-of-confusion coverage. You can do a lot of really cool effects with such a simple trick!
I searched briefly and found another nice blog post and demo about this: https://blog.42yeah.is/rendering/2023/02/25/dof.html
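If it helps, the "circle cut by a straight line" coverage term above is just the area of a circular segment. A small C sketch of that part (function names and the march bookkeeping in the trailing comment are mine, not from any particular implementation):

```c
#include <math.h>

/* Fraction of a disc of radius r that stays visible when a straight occluder
 * edge crosses it at signed distance d from the centre: d >= r means the edge
 * misses the disc entirely, d <= -r means the disc is fully covered, and in
 * between we subtract the area of the circular segment on the occluder's side. */
float disc_visibility(float d, float r) {
    if (d >=  r) return 1.0f;
    if (d <= -r) return 0.0f;
    float occluded = (r * r * acosf(d / r) - d * sqrtf(r * r - d * d))
                   / (3.14159265f * r * r);
    return 1.0f - occluded;
}

/* During a cone march, the cone radius grows with distance along the axis
 * (r = t * tan(half_angle)) and the SDF value at each step stands in for d;
 * keeping the minimum disc_visibility over the march gives a soft shadow, or,
 * with a per-pixel / circle-of-confusion cone, antialiasing and depth of field. */
```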
I recall a paper published by Valve that showed their approach to using SDFs to pack glyphs into low res textures while still rendering them at high resolution:
https://steamcdn-a.akamaihd.net/apps/valve/2007/SIGGRAPH2007...
https://youtube.com/watch?v=btWy-BAERoY&t=1929s&pp=2AGJD5ACA...
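As I remember the paper, the render-time side is tiny: sample the low-res distance texture (encoded so 0.5 lies exactly on the glyph outline) and push it through a narrow smoothstep to get a crisp, antialiased edge at any magnification. A hedged C-style sketch, with sample_sdf_texture standing in for the atlas fetch:

```c
/* clamp + smoothstep, written out so the sketch stands alone */
static float smoothstepf(float e0, float e1, float x) {
    float t = (x - e0) / (e1 - e0);
    t = t < 0.0f ? 0.0f : (t > 1.0f ? 1.0f : t);
    return t * t * (3.0f - 2.0f * t);
}

/* assumed: bilinear fetch from the low-res glyph atlas, with the distance
 * remapped so that 0.5 sits on the glyph outline */
static float sample_sdf_texture(float u, float v);

/* alpha for one screen pixel; aa_width is roughly one screen pixel expressed
 * in the distance field's units, giving a thin antialiased transition */
float glyph_alpha(float u, float v, float aa_width) {
    float dist = sample_sdf_texture(u, v);
    return smoothstepf(0.5f - aa_width, 0.5f + aa_width, dist);
}
```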
My iPhone is 1320 × 2868. That’s more than 1080p. So I would not consider it a “small resolution”!