XNA Light Pre-Pass: culling, blending and particles

After the shortest summer of my life, I'm back! Thanks to all who donated, I'm close to buying an Xbox 360 with the money from this blog!

This time I'm releasing the feature that everybody was asking for: blending! I'm not doing any lighting on the blended objects, though (sorry!), but it's a good starting point: we can have explosions, sparkles, transparency and other effects without any lighting. The full source code and assets are here; use them at your own risk!

I changed the code a lot, so I will divide the changes into three topics: culling (for both main rendering and shadows), blending and particles.

Culling

Obviously every renderer needs some sort of culling. Although I'm only doing frustum culling, I added a new step to the model pre-processor: the generation of metadata for each sub mesh. At compile time, I loop through all the sub meshes of a given model and compute their local bounding boxes. I assign this information, plus some other properties like "cast shadows" and "render queue", to their "Tag" property (a modern "void pointer"). I need to do this because sometimes you have a single model with hundreds of sub meshes, so using just one big volume around all of them is not a good approach. I created a SubMesh class that holds that metadata, the bounding box transformed into global space, and the information for that sub mesh (effect, transform, modelMeshPart), so I can cull every sub mesh individually.
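
As a rough sketch of how this could fit together (this SubMesh layout and the Cull helper are my assumptions for illustration, not the exact code from the sample):

```csharp
// Sketch only: field names and the Cull helper are hypothetical.
public class SubMesh
{
    public BoundingBox LocalBoundingBox;   // computed at build time, read from Tag
    public BoundingBox GlobalBoundingBox;  // recomputed when the transform changes
    public Matrix Transform;
    public Effect Effect;
    public ModelMeshPart MeshPart;
    public bool CastShadows;
    public int RenderQueue;

    public void UpdateGlobalBoundingBox()
    {
        // transform the 8 corners of the local box and rebuild an AABB around them
        Vector3[] corners = LocalBoundingBox.GetCorners();
        for (int i = 0; i < corners.Length; i++)
            corners[i] = Vector3.Transform(corners[i], Transform);
        GlobalBoundingBox = BoundingBox.CreateFromPoints(corners);
    }
}

// per-frame culling: only sub meshes whose world-space box touches the frustum survive
List<SubMesh> Cull(List<SubMesh> all, BoundingFrustum cameraFrustum)
{
    var visible = new List<SubMesh>();
    foreach (SubMesh subMesh in all)
        if (cameraFrustum.Intersects(subMesh.GlobalBoundingBox))
            visible.Add(subMesh);
    return visible;
}
```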

You can also extend the processor to read some extra information, like the "cast shadows" and "render queue" values above, since right now they are the same for the whole model.

For shadow rendering, I also use those bounding boxes to check whether the casters lie inside the light volume. For spot lights it's very easy, since I use a frustum as their volume. For directional lights, I ignore the near clip plane when the view has the same direction as the light, as the geometry can be behind the frustum and still cast shadows into it.
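
A minimal sketch of the spot-light case, assuming hypothetical lightView/lightProjection matrices and a DrawShadowCaster helper:

```csharp
// Sketch: cull shadow casters against a spot light's volume by building a
// BoundingFrustum from the light's view and projection matrices.
BoundingFrustum lightFrustum = new BoundingFrustum(lightView * lightProjection);

foreach (SubMesh subMesh in allSubMeshes)
{
    if (!subMesh.CastShadows)
        continue;
    // skip casters completely outside the light volume
    if (!lightFrustum.Intersects(subMesh.GlobalBoundingBox))
        continue;
    DrawShadowCaster(subMesh); // hypothetical helper
}
```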

Blending

Using blending (additive, alpha, whatever) sounds easy, and it is: we just need to do it at the right stage. In my case, that is right after the lighting is reconstructed, so all the solid objects (or almost all of them) are on screen and the z-buffer is properly constructed.

I introduced the Render Queue concept into the renderer: a list of objects to be drawn at a specific stage. I have only three stages for now:

  • Default: for objects that need the full LPP pipeline, i.e. rendering to the GBuffer and reconstructing the lighting;
  • SkipGBuffer: for objects that don't need to be drawn to the GBuffer, like skyboxes, purely reflective models or other crazy things;
  • Blend: for objects that need to be drawn after all the opaque ones, like particle systems or transparent models
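
The stages above could be wired up roughly like this (a sketch: the enum matches the stages described, but the dispatch method, the "queues" dictionary and the helper names are my assumptions):

```csharp
// Sketch only: helper names are hypothetical.
public enum RenderQueue
{
    Default,     // full LPP path: GBuffer pass + lighting reconstruction
    SkipGBuffer, // skips the GBuffer, drawn with custom shaders (skybox, etc.)
    Blend        // drawn last, blended on top of the opaque results
}

void DrawScene(Dictionary<RenderQueue, List<SubMesh>> queues)
{
    DrawToGBuffer(queues[RenderQueue.Default]);
    DrawLights();
    ReconstructLighting(queues[RenderQueue.Default]);
    DrawWithCustomShaders(queues[RenderQueue.SkipGBuffer]);

    // by now the z-buffer is complete, so blended objects sort correctly
    // against the opaque ones without writing depth themselves
    GraphicsDevice.BlendState = BlendState.AlphaBlend;
    GraphicsDevice.DepthStencilState = DepthStencilState.DepthRead;
    DrawWithCustomShaders(queues[RenderQueue.Blend]);
}
```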

In this example, that model would be drawn in the "Blend" stage, using a custom shader (you can also use a custom shader in the "SkipGBuffer" stage). That shader draws only the outline of the mesh, using Fresnel math and additive blending. Have a look at the .fx file to see how it's done.

Particles

At first I thought about using some 3rd-party particle library, but I changed my mind because I don't want to tie myself (or anyone who downloads this code) to any library. So I took the excellent XNA Particles sample from MSDN and modified it to fit my pipeline. I didn't convert it into sub meshes or another generic mesh: I just store the emitters in a list of visible emitters and render them after the "Blend" meshes. I compute an approximate local bounding box for every particle emitter at creation time (using velocity, lifetime and particle size), and then use it together with the emitter's transform to generate a global bounding box for culling. I also sort the emitters (not the individual particles) back to front, to get a better composition. Note that the particles in the sample are very fill-rate intensive: they are huge, and worse, they are so transparent that we need lots of them on screen. Remember that even if a texture is fully transparent, it costs the same to render as if it were opaque.
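
A sketch of the emitter bounding-box approximation and the back-to-front sort (the ParticleSettings fields come from the MSDN sample, but the exact math and the surrounding names here are my assumptions, not the sample's code):

```csharp
// Sketch: conservative local bounding box for an emitter, computed once at
// creation time. This ignores vertical velocity and gravity for brevity.
BoundingBox ComputeLocalBoundingBox(ParticleSettings settings)
{
    // farthest a particle can travel before it dies, plus half its size
    float reach = settings.MaxHorizontalVelocity
                  * (float)settings.Duration.TotalSeconds
                  + settings.MaxStartSize * 0.5f;
    Vector3 extents = new Vector3(reach);
    return new BoundingBox(-extents, extents);
}

// back-to-front emitter sort (not per particle): farthest emitters draw first
visibleEmitters.Sort((a, b) =>
    Vector3.DistanceSquared(cameraPosition, b.Position)
        .CompareTo(Vector3.DistanceSquared(cameraPosition, a.Position)));
```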

Bonus: other optimizations

The last thing I did was to cache lots of shader parameters, to avoid accessing the parameter map every frame. There are some left to be done, but I will get to them next time. I also removed lots of per-frame memory allocations, so if you disable text rendering (which creates a lot of strings every frame), you will only see List.Sort() allocating memory (if you know how to fix that while keeping the same List class, please let me know!).
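
The parameter caching boils down to something like this (a sketch: the class and the parameter names "World" and "WorldViewProjection" are placeholders, not necessarily the ones in the source):

```csharp
// Sketch: look the parameters up once at load time, then just SetValue per frame.
public class CachedEffect
{
    private readonly Effect _effect;
    private readonly EffectParameter _world;
    private readonly EffectParameter _worldViewProjection;

    public CachedEffect(Effect effect)
    {
        _effect = effect;
        // Parameters["..."] does a string lookup, so do it here instead of per frame
        _world = effect.Parameters["World"];
        _worldViewProjection = effect.Parameters["WorldViewProjection"];
    }

    public void SetTransforms(Matrix world, Matrix worldViewProjection)
    {
        _world.SetValue(world);
        _worldViewProjection.SetValue(worldViewProjection);
    }
}
```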

Next time I will add the Xbox version, and also the specular light buffer: on the Xbox, the light buffer is HdrBlendable, which means we have only 2 bits for specular (it's a shame we can't use RGBA64). I did this in my own XNA engine, and I didn't see any performance issues.

See you next time!

-J.Coluna

About jcoluna

Game developer and musician
This entry was posted in XNA. Bookmark the permalink.

23 Responses to XNA Light Pre-Pass: culling, blending and particles

  1. c00ler says:

    Simply amazing, good work man and good luck for next

  2. Great, please continue the series🙂

  3. Pingback: Windows Client Developer Roundup 079 for 8/22/2011 - Pete Brown's 10rem.net

  4. Jens-Axel Grünewald says:

    Great series! Thank you very much. I have refactored your source a bit; if you are interested, I will send you a copy.

  5. bayu says:

    good work
    I'm learning graphics programming and XNA, and your writing has helped me to learn.
    Can you explain how to apply the "LPP + shadowing" to a terrain?

  7. bayu says:

    hi,
    after spending some time in front of the computer, I can finally make the terrain support the "LPP and shadowing".
    Because I lost the tangent and binormal, I replaced the "NormalMapToSpaceNormal" function with

    normalMap = half3(normal * -normalMap.z);

    NB: negative normalMap.z, since the normal needs to be inverted🙂

    My terrain also supports multitexturing (blend map), detail textures, and normal mapping.
    One of the problems I have since moving the terrain to "LPP and shadowing" is the "Light Direction" used in the normal mapping:

    float3 n = normalize(tex2D(normalMap, texCoord).rgb * 2.0f - 1.0f);
    float nDotL = saturate(dot(n, lightDir));

    So for now I am using a static "Light Direction".
    Any suggestion on how to make it dynamic?

    Thanks
    Bayu

    • jcoluna says:

      You shouldn't use a different shader for the terrain lighting (unless you are skipping the LPP and doing a forward-rendering approach). You should just output the normals in view space, and the default lighting shader will do the rest. Which shader are you using? When you replaced the NormalMapToSpaceNormal function, did you remember to convert the normal into view space?


      • bayu says:

        sorry,
        maybe my question was ambiguous, so you may have misunderstood.😦
        I have successfully brought the terrain into the LPP + shadowing.
        I had already made an LOD terrain that supports multitexturing, normal mapping and fog effects. Then I copied the "LPPMainEffect" shader to a "TerrainEffect" to get an independent effect for the terrain, and customized some code in it so it supports the LPP and shadowing. These are the changes I made:
        1. NormalMapToSpaceNormal function
        half3 NormalMapToSpaceNormal(half3 normalMap, float3 normal)
        {
            normalMap = normalMap * 2 - 1;
            normalMap = half3(normal * -normalMap.z);
            return normalMap;
        }
        because my terrain lost its binormal and tangent.
        2. RenderToGBuffer PS
        outputMap += blendMap.r * normalMap2 + blendMap.g * normalMap3 + blendMap.b * normalMap4;
        half3 normalViewSpace = NormalMapToSpaceNormal(outputMap.xyz, input.Normal);
        3. ReconstructShading PS
        float3 base = PS_GetTerrainColor(Diffuse1Sampler, Normal1Sampler, input.TexCoord * TextureTiling, LightDirection);
        float3 rTex = PS_GetTerrainColor …..

        float4 PS_GetTerrainColor(const sampler2D colorMap,
                                  const sampler2D normalMap,
                                  const float2 texCoord,
                                  const float3 lightDir)
        {
            float3 n = normalize(tex2D(normalMap, texCoord).rgb * 2.0f - 1.0f);

            float4 ambient = BaseLightColor * AmbientColor;
            float4 diffuse = BaseLightColor * DiffuseColor * saturate(dot(n, lightDir));

            return (ambient + diffuse) * tex2D(colorMap, texCoord);
        }

        I still use your "LightingLPP" shaders to generate the shadows and shading for my terrain, with a little extra to support soft shadowing; the only difference is the "TerrainEffect" shader. The "LightDirection" I called static is not the one from "LightingLPP", but the one from the "TerrainEffect" shader. Before the terrain was moved to "LPP + shadowing", LightDirection was needed for the normal mapping; now that it is implemented, how can I remove the dependency on "LightDirection"? I think I don't need it if the lighting and shadowing are already handled by the "LPP and shadowing".

        This is a screenshot of my terrain with LPP:
        https://picasaweb.google.com/lh/photo/RQ31XcP9Sw5ra7b2sN0xADK2f-t3ro7aWifXI_a1BDM?feat=directlink

        Thanks
        Bayu

  8. jcoluna says:

    Yes, you don't need the "LightDirection" parameter. Instead, you need to read the light buffer. Look for something like this in the shaders:

    float4 lightColor = tex2D(lightSampler, screenPos) * LightBufferScaleInv;

    and then your PS_GetTerrainColor would look like this:

    float4 PS_GetTerrainColor(const sampler2D colorMap, const float2 texCoord)
    {
        return DiffuseColor * tex2D(colorMap, texCoord);
    }

    and after you sum up all your multitexture results, you should multiply them by the "lightColor" value.

    I hope that helps!
    -J.Coluna

  9. Robot97 says:

    Hello

    Can I change the color of the shadows? I think they are too dark.🙂

    Thanks
    Robot97

    • jcoluna says:

      You can add an ambient color in the main shader, in the ReconstructLight stage. You can either use a fixed color, or use something clever like SH or irradiance cubemaps. Right before returning the color, you do something like finalColor += diffuseColor * ambientColor;

      You could use this “ambientColor” as a shader parameter, so you can set it per-level, or per-region in your level.


  10. Robot97 says:

    Hello again🙂

    Now I've got a problem with the directional light's shadow:

    Can I fix it?

    Thanks
    Robot97

    • Robot97 says:

      I fixed it:

      In ShadowRenderer.cs, in the GenerateShadowTextureDirectionalLight method, I changed:

      for (int index = 0; index < meshes.Count; index++)
      {
          Mesh mesh = meshes[index];
          // cull meshes outside the light volume
          if (!_tempFrustum.Intersects(mesh.GlobalBoundingBox))
              continue;
          for (int j = 0; j < mesh.SubMeshes.Count; j++)
          {

      To:

      for (int index = 0; index 200)
      continue;
      for (int j = 0; j < mesh.SubMeshes.Count; j++)
      {

      • Robot97 says:

        Sorry I changed:
        for (int index = 0; index 200)
        continue;
        for (int j = 0; j < mesh.SubMeshes.Count; j++)
        {

        instead of:
        for (int index = 0; index 200)
        continue;
        for (int j = 0; j < mesh.SubMeshes.Count; j++)
        {
        (I copied the wrong code…)
        So it's fixed…

  11. Robot97 says:

    I can't post my code to your website because it always changes to
    for (int index = 0; index 200)
    continue;
    for (int j = 0; j < mesh.SubMeshes.Count; j++)
    {

  12. Neil Knight says:

    When is the next tutorial going to become available?

  13. Feedidy says:

    I used to be able to change textures of meshes by doing
    Model.Meshes[“MeshName”].Effects[0].Parameters[“DiffuseMap”].SetValue(NewTexture)

    But now the models just disappear in the renderer.

    Is there any way to change mesh textures at runtime?

  14. Mick says:

    I've tried rebuilding the sample.
    I don't know why the value "Visible Particle Sys" keeps increasing endlessly.
    Could somebody please help me?

    P.S. Has anybody developed a simpler way to load objects (such as a class in which the constructor loads the objects in the scene)?
