Volumetric Light Scattering as a Custom Renderer Feature in URP

Learn how to create your own custom rendering features with Unity’s Universal Render Pipeline by adding some volumetric light scattering to a small animated scene. By Ignacio del Barrio.


Implementing a Custom Render Pass

If you recall, the code template included two classes, one of which is a custom render pass. That's what you'll work with now.

Start by going back to VolumetricLightScattering.cs. Look for CustomRenderPass and you’ll see this:

Render pass class code
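Stripped down, the generated template looks roughly like this. This is only a sketch of Unity's renderer feature template; the boilerplate comments Unity generates in your project may differ:

class CustomRenderPass : ScriptableRenderPass
{
    // Called before executing the render pass.
    // Configure render targets and their clear state here.
    public override void OnCameraSetup(CommandBuffer cmd, ref RenderingData renderingData)
    {
    }

    // The rendering logic. Called every frame for each camera that uses this renderer.
    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
    {
    }

    // Called when the camera finishes rendering. Release allocated resources here.
    public override void OnCameraCleanup(CommandBuffer cmd)
    {
    }
}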

CustomRenderPass derives from the base abstract class, ScriptableRenderPass, which provides methods to implement a logical rendering pass.

Just as with ScriptableRendererFeature, Unity calls a few methods while the script is running. Here are the ones you need to know about for this tutorial:

  • OnCameraSetup(): Called before rendering the camera, so you can configure render targets here.
  • Execute(): Called every frame to run the rendering logic.
  • OnCameraCleanup(): Called after this render pass executes, so you can clean up any allocated resources, usually render targets.

There are other methods you can override that you won't use in this tutorial, including:

  • Configure(): An alternative place to configure render targets before the render pass executes. It runs right after OnCameraSetup().
  • OnFinishCameraStackRendering(): Called once after the last camera in the camera stack finishes rendering. Use it to clean up resources allocated for the entire camera stack.
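For reference, their override signatures look like this. You won't implement either of them in this tutorial, and the comments below only restate what each one is for:

public override void Configure(CommandBuffer cmd, 
    RenderTextureDescriptor cameraTextureDescriptor)
{
    // Alternative place to configure render targets for this pass.
}

public override void OnFinishCameraStackRendering(CommandBuffer cmd)
{
    // Clean up resources once every camera in the stack has rendered.
}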

Setting up the Light Scattering Pass

Next, you’ll be setting up the light scattering pass.

Rename CustomRenderPass to LightScatteringPass. Use your code editor’s rename function as the term occurs in multiple places. Then, declare the following variables above OnCameraSetup():

private readonly RenderTargetHandle occluders = 
    RenderTargetHandle.CameraTarget;
private readonly float resolutionScale;
private readonly float intensity;
private readonly float blurWidth;

Here’s what you’re doing above:

  • occluders: The RenderTargetHandle you'll use to create the occluders texture.
  • resolutionScale: The scale of the occluders texture relative to the camera resolution.
  • intensity: The strength of the effect.
  • blurWidth: The width of the radial blur.

You define resolutionScale, intensity and blurWidth in VolumetricLightScatteringSettings.
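As a reminder, those three values come from the settings class you created alongside the feature. It looks roughly like this; the ranges and default values shown here are only illustrative, so keep whatever you set up earlier:

[System.Serializable]
public class VolumetricLightScatteringSettings
{
    [Range(0.1f, 1.0f)]
    public float resolutionScale = 0.5f;

    [Range(0.0f, 1.0f)]
    public float intensity = 1.0f;

    [Range(0.0f, 1.0f)]
    public float blurWidth = 0.85f;
}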

Your next step is to declare a constructor to initialize these variables. Do this by adding the following code below the variables you just added:

public LightScatteringPass(VolumetricLightScatteringSettings settings)
{
    occluders.Init("_OccludersMap");
    resolutionScale = settings.resolutionScale;
    intensity = settings.intensity;
    blurWidth = settings.blurWidth;
}

Here, LightScatteringPass is the render pass constructor. You inject the settings instance that you created for the feature class.

The first step is to initialize occluders by calling Init() with the texture name _OccludersMap. Then you copy resolutionScale, intensity and blurWidth from the settings.

Next, replace Create() in VolumetricLightScattering with the following:

public override void Create()
{
    m_ScriptablePass = new LightScatteringPass(settings);
    m_ScriptablePass.renderPassEvent = 
        RenderPassEvent.BeforeRenderingPostProcessing;
}

Here, you call the pass constructor and inject the settings as an argument. You also configure where to inject the render pass. In this case, you inject it before the renderer executes post-processing.

Note: You can also inject render pass events at a specific point by adding an offset to a RenderPassEvent.
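For example, a line like the following, which is purely illustrative and not part of this tutorial, would schedule the pass two steps after the transparents finish rendering:

m_ScriptablePass.renderPassEvent = 
    RenderPassEvent.AfterRenderingTransparents + 2;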

Configuring the Occluders Map

Now, you’ll create an off-screen texture to store the silhouettes of all the objects that occlude the light source. You’ll do this in OnCameraSetup(). Replace this method with the following code:

public override void OnCameraSetup(CommandBuffer cmd, ref RenderingData renderingData)
{
    // 1
    RenderTextureDescriptor cameraTextureDescriptor = 
        renderingData.cameraData.cameraTargetDescriptor;

    // 2
    cameraTextureDescriptor.depthBufferBits = 0;

    // 3
    cameraTextureDescriptor.width = Mathf.RoundToInt(
        cameraTextureDescriptor.width * resolutionScale);
    cameraTextureDescriptor.height = Mathf.RoundToInt(
        cameraTextureDescriptor.height * resolutionScale);

    // 4
    cmd.GetTemporaryRT(occluders.id, cameraTextureDescriptor, 
        FilterMode.Bilinear);

    // 5
    ConfigureTarget(occluders.Identifier());
}

Here's what each numbered step does:

  1. First, you get a copy of the current camera’s RenderTextureDescriptor. This descriptor contains all the information you need to create a new texture.
  2. Then, you disable the depth buffer because you aren’t going to use it.
  3. You scale the texture dimensions by resolutionScale.
  4. To create a new texture, you issue a GetTemporaryRT() graphics command. The first parameter is the ID of occluders. The second parameter is the texture configuration you take from the descriptor you created and the third is the texture filtering mode.
  5. Finally, you call ConfigureTarget() with the texture's RenderTargetIdentifier to finish the configuration.

Note: It's important to understand that you issue all rendering commands through a CommandBuffer. You set up the commands you want to execute and then hand them over to the scriptable render pipeline to actually run them. You should never call CommandBuffer.SetRenderTarget(). Instead, call ConfigureTarget() and ConfigureClear().
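If you ever need a pass's render target cleared before drawing into it, the companion call looks like this. It's shown only as an example of the API; whether you need it depends on your pass:

ConfigureClear(ClearFlag.All, Color.black);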

Save the script and go back to the editor.

Implementing the Occluders Shader

Next, you need to create your own unlit shader. Why write your own instead of using the default unlit shader?

The default shader takes the fog settings into account, using them to affect the color of distant objects when it renders them. This is nice for the final image, but not for this texture map. That’s why you create a custom unlit shader and declare Fog {Mode Off} in the SubShader.

Inside RW/Shaders, select Create ▸ Shader ▸ Unlit Shader and name it UnlitColor. Double-click UnlitColor.shader to launch the editor, then replace all the lines with:

Shader "Hidden/RW/UnlitColor"
{
    Properties
    {
        _Color("Main Color", Color) = (0.0, 0.0, 0.0, 0.0)
    }

    SubShader
    {
        Tags { "RenderType" = "Opaque" }
        Fog {Mode Off}
        Color[_Color]

        Pass {}
    }
}

Here, you create a shader that takes a color property named _Color and passes it to the Color shader command. This is all you need to draw the objects in black.

Save the shader code and switch to the editor to compile it.

Executing the Render Pass

First, you need to create a material using the shader. Go back to VolumetricLightScattering.cs and, in LightScatteringPass, add the following line above the constructor:

private readonly Material occludersMaterial;

This will hold the material instance.

Now, add this line in the constructor:

occludersMaterial = new Material(Shader.Find("Hidden/RW/UnlitColor"));

This creates a new material instance with the UnlitColor shader. You use Shader.Find() to get a reference to the shader using the shader name.
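Shader.Find() returns null when no shader with that name is loaded, so if you want to be defensive you could write this step roughly as follows. This is optional; the one-liner above is what the rest of the tutorial assumes:

// Only create the material if the shader was actually found.
Shader unlitColorShader = Shader.Find("Hidden/RW/UnlitColor");
if (unlitColorShader != null)
{
    occludersMaterial = new Material(unlitColorShader);
}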

To execute the rendering logic, locate Execute() and replace it with the following:

public override void Execute(ScriptableRenderContext context, 
    ref RenderingData renderingData)
{
    // 1
    if (!occludersMaterial)
    {
        return;
    }

    // 2
    CommandBuffer cmd = CommandBufferPool.Get();

    // 3
    using (new ProfilingScope(cmd, 
        new ProfilingSampler("VolumetricLightScattering")))
    {
        // TODO: 1

        // TODO: 2
    }

    // 4
    context.ExecuteCommandBuffer(cmd);
    CommandBufferPool.Release(cmd);
}

Here’s what’s going on in the code above:

  1. You stop the pass from executing if the material is missing.
  2. As you know by now, you issue graphics commands via command buffers. CommandBufferPool is simply a collection of pre-created command buffers that are ready to use. You request one with Get().
  3. You wrap the graphics commands inside a ProfilingScope so they show up under their own label in the Profiler and the Frame Debugger.
  4. Once you've added all the commands to the CommandBuffer, you schedule it for execution with ExecuteCommandBuffer() and then release it back to the pool.