Apr 6, 2026

Texture Space Shading

I was studying how light baking works. A baked lightmap is just a texture. The lighting is computed once and stored in UV space, not screen space. That made me curious about the broader idea: what if you did all your shading in UV space instead?

So I implemented it. Here's how it works.

The Idea

Normal rendering writes shading results to screen space. Each pixel on screen gets a color, and that color is computed for that screen position.

Texture space shading flips this. Instead of rendering to the screen, you render to a texture using the mesh's UV coordinates as the output position. The fragment shader runs in UV space. The result gets stored in the texture. Then a second pass samples that texture and draws the mesh to screen as usual.

In the first pass, the vertex shader outputs the UV coordinates as the clip-space position:

// first pass: render into UV space
vec2 uv = aTexCoords * 2.0 - 1.0;  // remap [0,1] to [-1,1]
gl_Position = vec4(uv, 0.0, 1.0);

The rasterizer then fills the triangles as they're laid out in UV space. The fragment shader runs once for each texel the mesh occupies, computes lighting there, and writes the result to the framebuffer. That framebuffer is your shading texture.
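
Fleshed out, the first-pass shaders look something like the sketch below. The names (aNormal, uModel, uLightDir and friends) are placeholders of mine, and the lighting is a single directional Lambert term just to keep it short. One detail the snippet above glosses over: even though the clip-space position is the UV, the vertex shader still has to pass world-space data through for the lighting math.

// first pass vertex shader (sketch): placement comes entirely from
// the UVs, so the model-space position isn't even needed here
#version 330 core
layout(location = 1) in vec3 aNormal;
layout(location = 2) in vec2 aTexCoords;

uniform mat4 uModel;   // placeholder name

out vec3 vNormal;

void main() {
    // mat3(uModel) is fine for rotation and uniform scale;
    // use the inverse transpose otherwise
    vNormal = mat3(uModel) * aNormal;
    vec2 uv = aTexCoords * 2.0 - 1.0;   // remap [0,1] to [-1,1]
    gl_Position = vec4(uv, 0.0, 1.0);
}

// first pass fragment shader (sketch): one directional Lambert term
#version 330 core
in vec3 vNormal;

uniform vec3 uLightDir;     // unit vector toward the light (placeholder)
uniform vec3 uLightColor;   // placeholder

out vec4 fragColor;

void main() {
    float ndotl = max(dot(normalize(vNormal), normalize(uLightDir)), 0.0);
    // alpha = 1 marks this texel as covered, which comes in
    // handy for the padding trick discussed later
    fragColor = vec4(uLightColor * ndotl, 1.0);
}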

The second pass is just normal mesh rendering with one extra texture sample:

// second pass: sample the shading texture
vec3 shading = texture(uShadingTex, vTexCoords).rgb;
fragColor = vec4(shading, 1.0);
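
For completeness, the second-pass vertex shader is entirely ordinary; here's a sketch, with uMVP again a placeholder name. The contrast with pass one is the point: geometry goes back to its usual place on screen, and the UVs go back to being texture coordinates.

// second pass vertex shader (sketch): normal MVP transform;
// the UVs now address the shading texture instead of defining position
#version 330 core
layout(location = 0) in vec3 aPos;
layout(location = 2) in vec2 aTexCoords;

uniform mat4 uMVP;   // placeholder name

out vec2 vTexCoords;

void main() {
    vTexCoords = aTexCoords;
    gl_Position = uMVP * vec4(aPos, 1.0);
}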

Why This Is Useful

Shading cost is now tied to the texture resolution, not the screen resolution. That decoupling buys a few things: you can shade at a lower resolution than you display, you can reuse the shading texture across frames and only re-run the lighting pass when something changes, and in VR you can shade once and render the result from both eye positions. Viewed through the lightmap lens from the intro, it's a lightmap you re-bake on the fly.

One Catch

UV layouts need to be clean. Overlapping UVs mean multiple surface points write to the same texel. Seams and padding matter more than they do for regular texturing. If the mesh wasn't unwrapped with this in mind, the output will have artifacts along UV boundaries.
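
A common way to handle the padding part is a dilation pass between the two passes: flood shaded texels outward into the empty gutter around each UV island, so bilinear filtering near seams samples valid colors instead of the clear color. Here's a minimal sketch, run as a full-screen pass over the shading texture. It assumes the shading pass wrote alpha = 1 for covered texels (as the pass-one sketch above does) and that the framebuffer was cleared to alpha = 0; uTexelSize is a made-up uniform holding 1 / resolution.

// dilation pass (sketch): full-screen pass over the shading texture,
// writing into a second texture. vTexCoords comes from a standard
// full-screen-triangle vertex shader.
#version 330 core
in vec2 vTexCoords;

uniform sampler2D uShadingTex;
uniform vec2 uTexelSize;   // 1.0 / texture resolution (placeholder name)

out vec4 fragColor;

void main() {
    vec4 c = texture(uShadingTex, vTexCoords);
    if (c.a > 0.0) {       // covered texel: keep it as-is
        fragColor = c;
        return;
    }
    // empty gutter texel: borrow the first shaded neighbor we find
    for (int y = -1; y <= 1; y++) {
        for (int x = -1; x <= 1; x++) {
            vec4 n = texture(uShadingTex, vTexCoords + vec2(x, y) * uTexelSize);
            if (n.a > 0.0) {
                fragColor = n;
                return;
            }
        }
    }
    fragColor = c;   // nothing shaded nearby; leave it empty
}

One iteration pads by a single texel; running the pass a few times, ping-ponging between two textures, widens the padding enough to survive mipmapping.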