I'm not sure either way, would you say this makes it easier to read and I should make it the default?
How much of it is convention versus measurable outcomes is up for debate, but at least that's the tradition nearly every formally trained designer/visual artist in the West comes from.
What? I'm pretty sure that if I pick any book on my shelf, it's going to be justified.
> Most web media are narrow-column format, so tend to be fully justified.
What #2? 99% of web media is ragged-right, the biggest reason being that it's the default, and that browsers have terrible line-wrapping and historically had no support for hyphenation. And justified text gets worse the shorter the lines are, because there are fewer options on where to insert newlines, leading to large spaces between words. Also, good justification requires fine-grained word spacing control, which doesn't work well with traditional low-resolution displays.
My MSc thesis advisor recently told me that thesis documents should apparently be submitted with ragged-right lines these days, because that makes them easier to read for dyslexics; it makes sense, but it must be quite a new guideline.
On displays, readability works out differently, and that's why I speculate this has changed. For example, printed media uses serif fonts to aid readability, but on displays, sans-serif works better, especially at lower resolutions.
So in this current case, since OP's blog is on the internet and not printed, I would suggest unjustified.
This was kick-started by my desire to write about the Dual-Kawase Blur, a technique I stumbled upon while ricing my Linux distro.
Using the Gaussian function directly for the coefficients is fine for large sigmas, but for a small blur radius you should integrate it properly over each pixel's footprint. Luckily, the C++ standard library has the std::erf function you'll need for the proper formula. Here's more info: https://bartwronski.com/2021/10/31/practical-gaussian-filter...
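A minimal sketch of what that integration looks like (my own example, not code from the linked post): each weight is the Gaussian integrated over its pixel's one-texel bin, which erf gives in closed form.

```cpp
#include <cmath>
#include <vector>

// Kernel weights by integrating the Gaussian over each pixel's [i-0.5, i+0.5]
// bin instead of point-sampling it; important when sigma is small relative to
// the texel size. weight(i) = 0.5 * (erf((i+0.5)/s) - erf((i-0.5)/s)),
// with s = sigma * sqrt(2).
std::vector<double> gaussianKernel(int radius, double sigma) {
    std::vector<double> w(2 * radius + 1);
    const double s = sigma * std::sqrt(2.0);
    double sum = 0.0;
    for (int i = -radius; i <= radius; ++i) {
        double v = 0.5 * (std::erf((i + 0.5) / s) - std::erf((i - 0.5) / s));
        w[i + radius] = v;
        sum += v;
    }
    for (double& v : w) v /= sum;  // normalize so the weights sum to 1
    return w;
}
```

For large sigmas this converges to the usual point-sampled Gaussian, but for sigma well below 1 texel the point-sampled version over-weights the center badly, while this one stays correct.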
You can, of course, implement the algorithm as is on a compute shader with texture sampling.
But I have a situation where the inputs and outputs should be in shared memory. I'm trying to avoid writing the results out to off-chip DRAM, which would be necessary to be able to use texture sampling.
I spent some time looking into a way of doing an efficient compute shader blur using warp/wave/subgroup intrinsics to downsample the image and then do some kind of gaussian-esque weighted average. The hard part here is that the Kawase blur samples the input at "odd" locations but warp intrinsics are limited to "even" locations if that makes sense.
I would appreciate if anyone knows any prior art in this department.
I didn't really understand why every image is slowly moving around. It says:
> Above the box you have an Animate button, which will move the scene around to tease out problems of upcoming algorithms. Movement happens before our blur will be applied, akin to the player character moving.
I don't really understand the explanation - the movement just seemed a bit annoying.
This is not the first time [1] I've heard the critique that movement by default is annoying. Should I just turn it off by default?
I don't understand its explanation therefore it's annoying
These blur effects, like any other graphical effect, can show similar problems when combined with motion. The animate function is there to bring those issues out, if there are any.
Spherical Harmonic Lighting is an alternative to, or a supplement for, the HDR cube maps generated for global illumination. For the very diffuse part of global illumination, Spherical Harmonics (often called light probes) serve as a very cheap approximation of the environment that is easy to get in and out of shaders and to interpolate between. They are just a bunch of RGB floats, so you can place a ton of them in the environment to capture a scene's direct or indirect lighting. For specular reflections you still have to use HDR cube maps, as light probes don't hold enough information for that task.
Blurs are a basic building block of image processing and are used in a ton of effects. The topics are related in a way: the cube maps generated to cover different roughness levels for PBR rendering are, in fact, blurs! https://threejs.org/docs/#api/en/extras/PMREMGenerator But that's not the topic of the article; it's about how to do blurs efficiently in real time in the first place.
I use box/gaussian blurs often, but for rendering outlines/highlights of objects.
Here's a breakdown of how Doom (2016) does it: https://www.adriancourreges.com/blog/2016/09/09/doom-2016-gr...
Yes, bokeh blur is way more pleasing. In my article the Gaussian-likes are the focus, for their use as a basic building block in other effects like frosted glass, heat distortion, bloom and the like.
Specifically, the 2015 Dual Kawase was created in the context of mobile graphics, with its weak memory throughput. But even on my RTX 4090, near the fastest consumer hardware available, those unoptimized, non-separable, naive Gaussian implementations bring it to a crawl, and `samplePosMultiplier` has a non-insignificant performance hit, so texture caches still play a role.
At today's high resolutions and especially on mobile, we still need smart and optimized algorithms like the dual kawase.
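For reference, a CPU sketch of the dual Kawase downsample pass as commonly described (center tap weighted 4x plus four half-pixel diagonal taps, averaged). The Image type and bilinear sampler here are my own stand-ins, not the article's code:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Grayscale image with clamp-to-edge bilinear sampling in normalized UV space,
// mimicking what the GPU's texture unit does for free.
struct Image {
    int w, h;
    std::vector<float> px;
    float texel(int x, int y) const {
        x = std::clamp(x, 0, w - 1);
        y = std::clamp(y, 0, h - 1);
        return px[(size_t)y * w + x];
    }
    float sample(float u, float v) const {  // bilinear, uv in [0,1]
        float fx = u * w - 0.5f, fy = v * h - 0.5f;
        int x0 = (int)std::floor(fx), y0 = (int)std::floor(fy);
        float tx = fx - x0, ty = fy - y0;
        float a = texel(x0, y0) * (1 - tx) + texel(x0 + 1, y0) * tx;
        float b = texel(x0, y0 + 1) * (1 - tx) + texel(x0 + 1, y0 + 1) * tx;
        return a * (1 - ty) + b * ty;
    }
};

// One dual Kawase downsample pass to half resolution: center tap weighted 4x
// plus four half-pixel diagonal taps, divided by 8. Each bilinear tap already
// averages 4 texels, which is where the bandwidth savings come from.
Image kawaseDownsample(const Image& src) {
    Image dst{src.w / 2, src.h / 2, {}};
    dst.px.resize((size_t)dst.w * dst.h);
    float hu = 0.5f / dst.w, hv = 0.5f / dst.h;  // half-pixel offset in dst UV
    for (int y = 0; y < dst.h; ++y)
        for (int x = 0; x < dst.w; ++x) {
            float u = (x + 0.5f) / dst.w, v = (y + 0.5f) / dst.h;
            float s = src.sample(u, v) * 4.0f
                    + src.sample(u - hu, v - hv) + src.sample(u + hu, v - hv)
                    + src.sample(u - hu, v + hv) + src.sample(u + hu, v + hv);
            dst.px[(size_t)y * dst.w + x] = s / 8.0f;
        }
    return dst;
}
```

Chaining a few of these passes, then the matching upsample passes, is what gives the large effective blur radius from only a handful of taps per pixel.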
If you've ever seen Ghost in the Shell, or like cyberpunk stuff and don't mind a first-person shooter in a universe like that, prepare to have a lot of fun getting your butt kicked by veterans.
The community deserves a remake. ;-)