Reconstructing view-space position from depth

After my first LPP implementation, I decided to go with the mesh-based approach for rendering point lights. It consists of rendering a convex mesh (usually a sphere) that fits the light's volume. This way we can use some depth/stencil tricks to reject pixels outside the light's range and save some pixel processing.

As I was using screen-aligned quads before, it was easy to compute each pixel's view-space position that way. However, as I'm using a mesh now, I had to figure out a way to recreate the position using some projection tricks.

I didn't find any good resource on Google, so I decided to write my own shaders and release them here, with some basic explanation.

First, I store the linear depth in the range [0..1], where 0 is at the camera origin (not the near plane) and 1 is at the far plane. All of my code is valid for view space and for perspective projection only.
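To make this convention concrete, here is a minimal sketch in Python (the actual G-buffer pass does this in HLSL; `FAR_CLIP` and the function names are mine, for illustration only):

```python
# Illustrative sketch of the linear-depth convention
# (names are hypothetical, not from the actual renderer).

FAR_CLIP = 100.0  # assumed far-plane distance

def store_depth(view_z):
    # View-space z is negative in front of the camera; encode it to [0..1],
    # measured from the camera origin (not the near plane).
    return -view_z / FAR_CLIP

def decode_depth(stored):
    # Recover the distance along -z, in [0..FarClip].
    return stored * FAR_CLIP

print(store_depth(-100.0))               # 1.0 at the far plane
print(decode_depth(store_depth(-25.0)))  # 25.0
```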

Second, we need to send tan(camera_fovY / 2) and camera_aspect * tan(camera_fovY / 2) to our shader, packed into a single float2, like this:

_lighting.Parameters["TanAspect"].SetValue(new Vector2(camera.TanFovy * camera.Aspect, -camera.TanFovy));

I'm negating camera.TanFovy because screen-space y points down while view-space y points up; baking the sign flip into the parameter avoids extra negation operations inside the shader.

We also need to send our camera's FarClip to the shader, and the WorldViewProjection matrix for the current light-volume mesh.

With this in hand, we can use the concept of similar triangles and deduce that

posViewSpace.x = posClipSpace.x*TanAspect.x*depth;
posViewSpace.y = posClipSpace.y*TanAspect.y*depth;

where posClipSpace is in the range [-1..1], and depth is in the range [0..FarClip]. Here is the source, in HLSL (I'm using it in my XNA project):

float2 PostProjectionSpaceToScreenSpace(float4 pos)
{
    float2 screenPos = pos.xy / pos.w;
    return (0.5f * (float2(screenPos.x, -screenPos.y) + 1));
}

struct VertexShaderOutputMeshBased
{
    float4 Position : POSITION0;
    float4 TexCoordScreenSpace : TEXCOORD0;
};

VertexShaderOutputMeshBased PointLightMeshVS(VertexShaderInput input)
{
    VertexShaderOutputMeshBased output = (VertexShaderOutputMeshBased)0;
    output.Position = mul(input.Position, WorldViewProjection);

    //we will compute our texture coords based on the pixel position further on
    output.TexCoordScreenSpace = output.Position;
    return output;
}

float4 PointLightMeshPS(VertexShaderOutputMeshBased input) : COLOR0
{
    //as we are using a sphere mesh, we need to recompute each pixel position
    //into texture-space coords. GBufferPixelSize is used to fetch the texel's center
    float2 screenPos = PostProjectionSpaceToScreenSpace(input.TexCoordScreenSpace) + GBufferPixelSize;

    //read the depth value and scale it from [0..1] to [0..FarClip]
    float depthValue = tex2D(depthSampler, screenPos).r * FarClip;

    //reconstruct the position from the depth value, the FOV, aspect and pixel position;
    //convert screenPos to the [-1..1] range, and negate the depth since z goes to -FarClip in view space
    float3 pos = float3(TanAspect * (screenPos * 2 - 1) * depthValue, -depthValue);

    //...the lighting computation using 'pos' follows from here
}

As you can see, we don't need a matrix multiplication inside the pixel shader to reconstruct the position, only a few multiplies, which saves processing cost. Hope it helps. See ya!
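To convince yourself that the similar-triangles math round-trips correctly, here is a small CPU-side check in Python with made-up camera values (it works directly in NDC and skips the screen-space y-flip that the shader handles via the negated TanFovy):

```python
import math

# Hypothetical camera parameters, just for this check.
fov_y = math.radians(60.0)
aspect = 16.0 / 9.0
far_clip = 100.0
tan_fovy = math.tan(fov_y / 2.0)

# A view-space point in front of the camera (right-handed, -z forward).
vx, vy, vz = 3.0, -2.0, -25.0

# Forward: standard right-handed perspective projection to NDC [-1..1].
ndc_x = (vx / (aspect * tan_fovy)) / -vz
ndc_y = (vy / tan_fovy) / -vz

# Linear depth as stored in the G-buffer (0 at camera origin, 1 at far plane),
# then rescaled to [0..FarClip] as the reconstruction formula expects.
depth = (-vz / far_clip) * far_clip

# Reconstruction by similar triangles, as in the article:
rx = ndc_x * tan_fovy * aspect * depth
ry = ndc_y * tan_fovy * depth
rz = -depth

print(rx, ry, rz)  # recovers (3.0, -2.0, -25.0) up to float rounding
```

The per-axis scale factors cancel exactly against the projection, which is why no matrix multiply is needed in the pixel shader.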



About jcoluna

Game developer and musician
This entry was posted in XNA.

