After my first LPP implementation, I decided to go for the mesh-based approach for rendering point lights. It consists of rendering a convex mesh (usually a sphere) that fits the light’s volume. This way we can use some depth/stencil tricks to reject pixels outside the light’s range and save some pixel processing.

As I was using screen-aligned quads before, it was easy to compute each pixel’s view-space position from them. However, as I’m using a mesh now, I had to figure out a way to recreate the position using some projection tricks.

I didn’t find any good resource on Google, so I decided to create my own shaders and release them here, with some basic explanation.

First, I store the **linear depth in the range [0..1]**, where 0 is at the camera origin (not the near plane) and 1 is at the far plane, and all my code is valid for **view space** and **perspective projection** only.
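To make that depth convention concrete, here is a minimal sketch (plain Python, illustrative only; the real G-buffer pass would be HLSL, and the `FarClip` value of 100 is just an example):

```python
FAR_CLIP = 100.0  # example far-plane distance

def encode_depth(view_z):
    """View space looks down -z, so -view_z is the distance from the camera origin."""
    return -view_z / FAR_CLIP  # 0 at the camera origin, 1 at the far plane

def decode_depth(stored):
    return stored * FAR_CLIP   # back to a distance in [0..FarClip]

# a point 25 units in front of the camera (view_z = -25) stores as 0.25
d = encode_depth(-25.0)
```

Note that 0 sits at the camera origin, not the near plane, which is exactly what lets the shader later recover the distance with a single multiply by `FarClip`.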

Second, we need to send our current **Tangent(camera_fovY / 2)** and **camera_aspect * Tangent(camera_fovY / 2)** to our shader.

I’m negating **camera.TanFovy** due to some shader black magic: screen-space y is flipped relative to clip-space y, so baking the negation into the parameter avoids an extra negate inside the pixel shader.

We also need to send our camera’s **FarClip** to the shader, and the **WorldViewProjection** matrix for the current light-volume mesh.
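The application-side setup might look like this (a Python sketch with assumed example values; in the original XNA project this would be C# setting effect parameters, and `TanAspect` is the two-component parameter name used by the shader below):

```python
import math

fov_y = math.radians(60.0)   # example vertical field of view
aspect = 16.0 / 9.0          # example aspect ratio
far_clip = 100.0             # example far-plane distance

tan_fovy = math.tan(fov_y / 2.0)
# TanAspect.x carries the aspect ratio; TanAspect.y is negated,
# matching the camera.TanFovy negation described above
tan_aspect = (aspect * tan_fovy, -tan_fovy)
# tan_aspect, far_clip and the light's WorldViewProjection matrix
# would then be uploaded as shader constants
```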

With this in hand, we can use the concept of similar triangles and deduce that

posViewSpace.x = posClipSpace.x * TanAspect.x * depth;
posViewSpace.y = posClipSpace.y * TanAspect.y * depth;

where **posClipSpace** is in the range [-1..1] and **depth** is in the range [0..FarClip]. Here is the source, in HLSL (I’m using it in my XNA project):

```hlsl
float2 PostProjectionSpaceToScreenSpace(float4 pos)
{
    float2 screenPos = pos.xy / pos.w;
    return (0.5f * (float2(screenPos.x, -screenPos.y) + 1));
}

struct VertexShaderOutputMeshBased
{
    float4 Position : POSITION0;
    float4 TexCoordScreenSpace : TEXCOORD0;
};

VertexShaderOutputMeshBased PointLightMeshVS(VertexShaderInput input)
{
    VertexShaderOutputMeshBased output = (VertexShaderOutputMeshBased)0;
    output.Position = mul(input.Position, WorldViewProjection);

    // we will compute our texture coords based on pixel position further
    output.TexCoordScreenSpace = output.Position;
    return output;
}

float4 PointLightMeshPS(VertexShaderOutputMeshBased input) : COLOR0
{
    // as we are using a sphere mesh, we need to recompute each pixel position
    // into texture space coords. GBufferPixelSize is used to fetch the texel's center
    float2 screenPos = PostProjectionSpaceToScreenSpace(input.TexCoordScreenSpace) + GBufferPixelSize;

    // read the depth value
    float depthValue = tex2D(depthSampler, screenPos).r;
    depthValue *= FarClip;

    // Reconstruct position from the depth value, the FOV, aspect and pixel position
    // Convert screenPos to [-1..1] range
    // We negate the depthValue since it goes to -FarClip in view space
    float3 pos = float3(TanAspect * (screenPos * 2 - 1) * depthValue, -depthValue);
    …
}
```

As you can see, we don’t need a matrix multiplication inside the pixel shader to reconstruct the position, only a few multiplies, which helps with processing cost. Hope it helps. See ya!

Coluna
