This time I will show you a simple approach to an effect that helps give our renderer a more “artistic” look: light shafts, also known as god-rays. Needless to say, the source code and assets are available at the bottom of this post.
You can find a good description of the effect here. It’s a more physically based tutorial, so if integral signs send shivers down your spine, skip it and follow my trail. The effect I implemented can be described as a simple radial blur that uses the light position (in screen space) as its center and is masked by a texture.
The masking is needed to keep foreground objects from bleeding into the scene, as the light is the only thing that needs to be “shafted”. Keep in mind that this example/tutorial is only valid for directional lights, but it can easily be extended to support point lights.
To find the center of the radial blur, you need to project the light position into screen space. As we are dealing with a directional light, which has no position, you can use a simple trick to create a fake one: camera position + light direction * (some value between the near and far planes). And the hack season has begun!
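In Python-flavored pseudocode (the function name, the NumPy plumbing and the default distance are my own, not from the source), that fake-position projection boils down to:

```python
import numpy as np

def fake_light_screen_pos(camera_pos, light_dir, view_proj, distance=500.0):
    """Project a fake world-space position for a directional light into NDC.
    'distance' is any value between the near and far planes (illustrative)."""
    world_pos = np.append(camera_pos + light_dir * distance, 1.0)  # homogeneous
    clip = view_proj @ world_pos
    ndc = clip[:3] / clip[3]       # perspective divide -> [-1, 1] range
    return ndc[:2], clip[3]        # screen-space xy, plus w to detect "behind camera"
```

With a real view-projection matrix you would also want to check the sign of w before trusting the xy result.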
With this projected value in hand (in the range [-1..1]), we compute an “intensity” factor. The effect should be at its maximum when the light is right in front of the viewer, and fade out smoothly as it moves toward the screen border. You can see what I’m doing in the file PostProcessingComponent.cs, in the PreRender method.
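A minimal sketch of such a falloff (a hypothetical formula of mine; the actual computation lives in PostProcessingComponent.cs):

```python
def shaft_intensity(light_ndc_xy, w):
    """Fade the effect as the projected light nears the screen border.
    light_ndc_xy is in [-1, 1]; w is the clip-space w from the projection."""
    if w <= 0.0:                   # light is behind the camera: no shafts at all
        return 0.0
    # distance from screen center along the dominant axis, clamped to [0, 1]
    d = min(1.0, max(abs(light_ndc_xy[0]), abs(light_ndc_xy[1])))
    return 1.0 - d                 # 1 at the center, 0 at the border
```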
Note: in the last post I talked about a game framework using components, and that’s what I’m using from now on. The rendering code itself is not tied to the game layer, so if you don’t like that approach, just use the .fx and LightShaftEffect.cs code.
Next, we need to figure out what will be blurred. We can’t just blur everything: we need to detect what belongs to the light layer, in our case the background (remember that I’m dealing only with directional lights in this example). Thankfully, we already have what we need: the depth buffer! Since we are using an LPP approach, it’s already in our G-Buffer, and even downsampled 2x to save some bandwidth. All we need to do is mask out the foreground pixels, and voilà, it’s done. Actually, I’m using the z value itself, not a binary mask. This allows the foreground objects to bleed a little, just to give it a special taste. Check out the file LightShaftEffect.fx, method PixelShaderConvertRGB. Remember, folks: this is not a physically based approach!
In the same pass where I write this mask to the alpha channel of a render target (I’m using a 1/4-sized RT), I downsample the color buffer with simple linear filtering. This will save some texture bandwidth in the next step.
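Here’s a rough CPU-side sketch of that combined pass, with a 2x box downsample standing in for the GPU’s linear filtering (the function name and the direct depth-to-alpha mapping are my own simplifications):

```python
import numpy as np

def build_blur_source(color, depth):
    """Build the blur source: RGB = box-downsampled color, A = depth used as mask.
    color is (H, W, 3); depth is (H, W) in [0, 1], where larger means farther."""
    h, w = depth.shape
    h2, w2 = h // 2, w // 2
    # 2x2 box average as a stand-in for one bilinear fetch per output pixel
    rgb = color[:h2 * 2, :w2 * 2].reshape(h2, 2, w2, 2, 3).mean(axis=(1, 3))
    a = depth[:h2 * 2, :w2 * 2].reshape(h2, 2, w2, 2).mean(axis=(1, 3))
    return np.concatenate([rgb, a[..., None]], axis=2)  # (h2, w2, 4) RGBA
```

Keeping the depth gradient in alpha (instead of a hard 0/1 mask) is what lets foreground edges bleed slightly, as described above.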
This is the most costly step, but it is as simple as it gets: for each pixel, we take many texture samples along the line from the pixel’s position to the light’s position (in screen space). I’m applying some attenuation based on the distance to the light, and the spacing between texture fetches is also customizable. In fact, there are lots of customizable parameters; take a look at the shader and at the level file, coluna_level.xml, where the component containing the post-processing effect is stored along with the default values for its parameters.
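The inner loop is essentially the following (a Python sketch; sample() stands in for the texture fetch, and the parameter names are illustrative, not necessarily the ones in LightShaftEffect.fx or coluna_level.xml):

```python
def radial_blur(sample, uv, light_uv, num_samples=40, density=1.0, decay=0.95):
    """March from 'uv' toward the light's screen position, accumulating samples.
    sample(uv) returns an (r, g, b, a) tuple for a texture coordinate."""
    step = tuple((l - p) * density / num_samples for p, l in zip(uv, light_uv))
    pos = list(uv)
    weight = 1.0
    acc = [0.0, 0.0, 0.0, 0.0]
    for _ in range(num_samples):
        s = sample(tuple(pos))
        acc = [a + c * weight for a, c in zip(acc, s)]
        weight *= decay                       # attenuate samples farther along the ray
        pos[0] += step[0]
        pos[1] += step[1]
    return [a / num_samples for a in acc]
```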
I’m doing 40 texture fetches per pixel, and that is a lot. It’s important to keep the blur source (the mask + downsampled RGB) small to avoid texture cache misses. As it’s blurred as hell, you will end up losing the high frequencies anyway.
The output can then be computed as a single sum: original source + (blurred version * blurred version’s alpha). You could use some luminance and threshold formulas, but for the sake of simplicity I’m doing just what I described above.
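As a one-liner sketch, with plain tuples standing in for float3/float4 shader values:

```python
def composite(original, blurred):
    """Final mix: original RGB plus the blurred RGB scaled by the blurred alpha
    (the alpha carries the depth-based mask from the earlier pass)."""
    r, g, b = original
    br, bg, bb, ba = blurred
    return (r + br * ba, g + bg * ba, b + bb * ba)
```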
Bonus: Tone Mapping
To give it an even sexier look, I added a tone mapping step to the final mix, so you can control how the colors are displayed on screen. You can change the contrast, saturation, exposure and color balance of the scene. Thanks to our LPP renderer, the input is already in HDR, so we won’t get color banding when doing this color-space transformation. The technique I’m using is explained here. Here are some examples of the same view with different tone mapping parameters:
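A hedged sketch of what such a grading chain can look like, using generic formulas (exposure and color balance as multipliers, saturation as a lerp against luminance, contrast around mid-gray); these are not necessarily the exact formulas from the linked technique:

```python
def tone_map(rgb, exposure=1.0, saturation=1.0, contrast=1.0, balance=(1.0, 1.0, 1.0)):
    """Illustrative HDR grading: exposure/balance, then saturation, then contrast."""
    r, g, b = (c * exposure * w for c, w in zip(rgb, balance))
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b      # Rec. 709 luminance weights
    # saturation: lerp each channel between its luminance and itself
    r, g, b = (luma + (c - luma) * saturation for c in (r, g, b))
    # contrast: scale the distance from mid-gray
    return tuple(0.5 + (c - 0.5) * contrast for c in (r, g, b))
```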
That’s it folks, I hope you enjoyed it. As usual, the source code is here (***my public dropbox folder is down for a while, I will try to move the source files somewhere else. Sorry about that***). Feel free to use it, at your own risk!! Any comments, suggestions and donations are appreciated.
Errata: in the previous post, I forgot to add a serializer class, so the loading code was duplicating all the entries instead of sharing them (the SharedResourceList stuff). It’s fixed now.