Volumetric Light Scattering as a Custom Renderer Feature in URP

Learn how to create your own custom rendering features with Unity’s Universal Render Pipeline by adding some volumetric light scattering to a small animated scene.

Version

  • C# 7.3, Unity 2020.3

One of the most effective ways to make your scenes look absolutely stunning is to add a volumetric lighting effect. Here, you’ll learn how you can easily achieve this stunning effect yourself by harnessing the power of Unity’s Universal Render Pipeline (URP).

Note: This tutorial assumes that you’re already comfortable working with Unity, C# programming and shaders. An understanding of vector algebra is helpful, but you can still learn a lot without it.

If you’re new to Unity development, check out our Getting Started in Unity and Introduction to Shaders in Unity tutorials.

Getting Started

Download the materials for the project using the Download Materials button at the top or bottom of this tutorial. Once you’ve done that, unzip the package and open the starter folder inside Unity.

Next, look at the contents of RW in the Project window:

The Project window

Here’s a quick breakdown of what each folder contains:

  • Animations: Animations for the player character.
  • Materials: Materials to use for the player and environment.
  • Models: Models for the environment and the player.
  • Scenes: The sample scene where you’ll work.
  • Scripts: A bunch of scripts for the sample project.
  • Settings: Assets for Universal Render Pipeline settings.
  • Shaders: An empty folder where your shaders will go.
  • Textures: Textures for the player and debugging.

Don’t worry too much about the contents of all these folders; you’ll mostly work with Scripts and Shaders.

Now, open the Sunset God Rays scene inside RW/Scenes. Look at the scene view and click Play to try the game:

Playing the starter project

You’ll see an overview of the game level. Use the A and D keys to move the player left and right. Jump by pressing Space.

Your goal in this tutorial is to learn how to improve the visuals of the game with some cool volumetric effects, making it more interesting and engaging. Your first step is to understand what volumetric lighting is and how to implement it.

Volumetric Light Scattering

In the real world, light rarely travels through a vacuum; there's almost always something between you and the object you're looking at. Unless, of course, you're in space. :]

In real-time rendering, this is known as the effect of participating media on light transport. The most common phenomenon is fog.

Real-life photograph of crepuscular rays

When the density of particles in the air is high enough, objects that partially occlude a light source will cast shadows on those particles in the form of light beams or rays.

In game design, these are known as god rays or light shafts. This effect lets you enhance the realism and polish of your scenes, making everything look simply beautiful. :]

Next, you’ll look at how to achieve this effect in your Unity games.

Using the Screen Space Method

While not physically accurate, the screen space method is quite simple. First, you render the light source color, then you paint all the objects in your scene black. You do this in an off-screen texture called an occluders map. It looks like this:

Occluders map example

After that, you apply radial blur to the image in post-processing. Starting from the light source center, you take multiple color samples along a vector that goes from the light origin toward the current pixel you’re evaluating. You set the final color of this pixel to the weighted sum of these samples.

An overview of how to sample pixels

Finally, you overlay this image on top of the original color image.

Each step of the screen space method
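The sampling step above can be sketched in plain C#. This is an illustration of the math only, not code you'll add to the project; SampleColor stands in for a texture lookup:

```csharp
// For one pixel at 'uv' (coordinates in [0,1]^2), accumulate samples
// along the ray from the light source toward the pixel.
Color RadialBlurPixel(Vector2 uv, Vector2 lightCenter, float blurWidth,
    int numSamples, System.Func<Vector2, Color> SampleColor)
{
    Color accumulated = Color.black;
    Vector2 ray = uv - lightCenter; // from the light toward this pixel

    for (int i = 0; i < numSamples; i++)
    {
        // scale shrinks from 1 toward (1 - blurWidth), pulling each
        // sample back along the ray toward the light source.
        float scale = 1f - blurWidth * (i / (float)(numSamples - 1));
        accumulated += SampleColor(lightCenter + ray * scale) / numSamples;
    }

    return accumulated; // the weighted sum of the samples
}
```

You'll implement the real thing as a fragment shader later in this tutorial.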

Now that you understand the process, you’ll see how to apply this concept to your own game.

Creating a Custom Renderer Feature

The Universal Render Pipeline provides a script template to create features. Now you’ll create your very own custom renderer feature.

Inside RW/Scripts, select Create ▸ Rendering ▸ Universal Render Pipeline ▸ Renderer Feature and name the feature VolumetricLightScattering.

Next, double-click VolumetricLightScattering.cs to launch the code editor. You’ll see the following:

How the renderer feature class appears in the editor

This is your renderer feature class. It’s derived from the base abstract class, ScriptableRendererFeature, which enables you to inject render passes into the renderer and execute them upon different events.

ScriptableRendererFeatures are composed of one or many ScriptableRenderPasses. By default, Unity provides you with an empty pass named CustomRenderPass. You’ll learn about the details of this class when you write your custom pass. For now, focus on VolumetricLightScattering.

Unity calls a few methods in a predetermined order as the script runs:

  • Create(): Called when the feature first loads. You’ll use it to create and configure all ScriptableRenderPass instances.
  • AddRenderPasses(): Called every frame, once per camera. You’ll use it to inject your ScriptableRenderPass instances into the ScriptableRenderer.
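In case your template differs slightly between URP versions, the generated class looks roughly like this (a sketch, not necessarily verbatim):

```csharp
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class VolumetricLightScattering : ScriptableRendererFeature
{
    class CustomRenderPass : ScriptableRenderPass
    {
        public override void OnCameraSetup(CommandBuffer cmd,
            ref RenderingData renderingData) { }

        public override void Execute(ScriptableRenderContext context,
            ref RenderingData renderingData) { }

        public override void OnCameraCleanup(CommandBuffer cmd) { }
    }

    CustomRenderPass m_ScriptablePass;

    // Called when the feature first loads: create the pass instances.
    public override void Create()
    {
        m_ScriptablePass = new CustomRenderPass();
        m_ScriptablePass.renderPassEvent = RenderPassEvent.AfterRenderingOpaques;
    }

    // Called once per camera, every frame: enqueue the pass.
    public override void AddRenderPasses(ScriptableRenderer renderer,
        ref RenderingData renderingData)
    {
        renderer.EnqueuePass(m_ScriptablePass);
    }
}
```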

First, you’ll define some settings to configure this feature. Start by adding the following class above VolumetricLightScattering:

[System.Serializable]
public class VolumetricLightScatteringSettings
{
    [Header("Properties")]
    [Range(0.1f, 1f)]
    public float resolutionScale = 0.5f;

    [Range(0.0f, 1.0f)]
    public float intensity = 1.0f;

    [Range(0.0f, 1.0f)]
    public float blurWidth = 0.85f;
}

VolumetricLightScatteringSettings is a data container for the following feature settings:

  • resolutionScale: Configures the size of your off-screen texture.
  • intensity: Manages the brightness of the light rays you’re generating.
  • blurWidth: The radius of the blur you use when you combine the pixel colors.

Note: You add the System.Serializable attribute above the class so its fields are editable through the inspector.

Next, declare an instance of the settings by adding the following line above Create():

public VolumetricLightScatteringSettings settings = 
    new VolumetricLightScatteringSettings();

This is all you need to get your custom renderer feature ready to use. Save your script and switch to Unity.

Adding the Render Feature to the Forward Renderer

Now that you have a spiffy new renderer feature, you need to add it to the forward renderer.

To do this, find RW/Settings in the Project window and select ForwardRenderer.

The forward renderer inspector

In the Inspector window, select Add Renderer Feature ▸ Volumetric Light Scattering.

Adding the Volumetric Light Scattering feature to the forward renderer

The renderer now uses your renderer feature. Click Settings to show the properties you just defined.

Now, click Play and… you’ll see that nothing has changed. This is because the feature’s render pass doesn’t do anything yet.

Note: If you don’t see the settings in the inspector, try reloading VolumetricLightScattering.cs. Right-click on the script and select Reimport. Go back to the forward renderer — the settings should be visible now.

Implementing a Custom Render Pass

If you recall, the code template included two classes, one of which is a custom render pass. That’s what you’ll use now.

Start by going back to VolumetricLightScattering.cs. Look for CustomRenderPass and you’ll see this:

Render pass class code

CustomRenderPass derives from the base abstract class, ScriptableRenderPass, which provides methods to implement a logical rendering pass.

Just as with ScriptableRendererFeature, Unity calls a few methods while the script is running. Here are the ones you need to know about for this tutorial:

  • OnCameraSetup(): Called before rendering a camera. You’ll use it to configure render targets.
  • Execute(): Called every frame to run the rendering logic.
  • OnCameraCleanup(): Called after this render pass executes. Use it to clean up any allocated resources, usually render targets.

There are other methods you can override that you won’t need in this tutorial, including:

  • Configure(): An alternative place to configure render targets; Unity calls it right after OnCameraSetup().
  • OnFinishCameraStackRendering(): Called once after the last camera in the camera stack renders. Use it to clean up resources once all cameras in the stack have finished rendering.

Setting up the Light Scattering Pass

Next, you’ll be setting up the light scattering pass.

Rename CustomRenderPass to LightScatteringPass. Use your code editor’s rename function as the term occurs in multiple places. Then, declare the following variables above OnCameraSetup():

private readonly RenderTargetHandle occluders = 
    RenderTargetHandle.CameraTarget;
private readonly float resolutionScale;
private readonly float intensity;
private readonly float blurWidth;

Here’s what you’re doing above:

  • occluders: You need a RenderTargetHandle to create a texture.
  • resolutionScale: The resolution scale.
  • intensity: The effect intensity.
  • blurWidth: The radial blur width.

You define resolutionScale, intensity and blurWidth in the settings.

Your next step is to declare a constructor to initialize these variables. Do this by adding the following code below the variables you just added:

public LightScatteringPass(VolumetricLightScatteringSettings settings)
{
    occluders.Init("_OccludersMap");
    resolutionScale = settings.resolutionScale;
    intensity = settings.intensity;
    blurWidth = settings.blurWidth;
}

Here, LightScatteringPass is the render pass constructor. You inject the settings instance that you created for the feature class.

The first step is to initialize occluders by calling Init() with a texture name. Then you copy resolutionScale, intensity and blurWidth from the settings.

Next, replace Create() in VolumetricLightScattering with the following:

public override void Create()
{
    m_ScriptablePass = new LightScatteringPass(settings);
    m_ScriptablePass.renderPassEvent = 
        RenderPassEvent.BeforeRenderingPostProcessing;
}

Here, you call the pass constructor and inject the settings as an argument. You also configure where to inject the render pass. In this case, you inject it before the renderer executes post-processing.

Note: You can also inject render pass events at a specific point by adding an offset to a RenderPassEvent.
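For example, RenderPassEvent values are backed by integers with gaps between them, so you can nudge a pass a few slots past a built-in event. The offset value here is arbitrary, just an illustration:

```csharp
// Inject the pass two "slots" after the built-in event, so it runs
// after any other passes registered exactly at that event.
m_ScriptablePass.renderPassEvent =
    RenderPassEvent.BeforeRenderingPostProcessing + 2;
```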

Configuring the Occluders Map

Now, you’ll create an off-screen texture to store the silhouettes of all the objects that occlude the light source. You’ll do this in OnCameraSetup(). Replace this method with the following code:

public override void OnCameraSetup(CommandBuffer cmd, ref RenderingData renderingData)
{
    // 1
    RenderTextureDescriptor cameraTextureDescriptor = 
        renderingData.cameraData.cameraTargetDescriptor;

    // 2
    cameraTextureDescriptor.depthBufferBits = 0;

    // 3
    cameraTextureDescriptor.width = Mathf.RoundToInt(
        cameraTextureDescriptor.width * resolutionScale);
    cameraTextureDescriptor.height = Mathf.RoundToInt(
        cameraTextureDescriptor.height * resolutionScale);

    // 4
    cmd.GetTemporaryRT(occluders.id, cameraTextureDescriptor, 
        FilterMode.Bilinear);

    // 5
    ConfigureTarget(occluders.Identifier());
}

Here’s what’s going on in the code above, step by step:

  1. First, you get a copy of the current camera’s RenderTextureDescriptor. This descriptor contains all the information you need to create a new texture.
  2. Then, you disable the depth buffer because you aren’t going to use it.
  3. You scale the texture dimensions by resolutionScale.
  4. To create a new texture, you issue a GetTemporaryRT() graphics command. The first parameter is the ID of occluders. The second parameter is the texture configuration you take from the descriptor you created and the third is the texture filtering mode.
  5. Finally, you call ConfigureTarget() with the texture’s RenderTargetIdentifier to finish the configuration.

Note: It’s important to understand that you issue all rendering commands through a CommandBuffer. You set up the commands you want to execute and then hand them over to the scriptable render pipeline to actually run them. You should never call CommandBuffer.SetRenderTarget(). Instead, call ConfigureTarget() and ConfigureClear().

Save the script and go back to the editor.

Implementing the Occluders Shader

Next, you need to create your own unlit shader. Why write your own instead of using the default unlit shader?

The default shader takes the fog settings into account, using them to affect the color of distant objects when it renders them. This is nice for the final image, but not for this texture map. That’s why you create a custom unlit shader and declare Fog {Mode Off} in the SubShader.

Inside RW/Shaders, select Create ▸ Shader ▸ Unlit Shader and name it UnlitColor. Double-click UnlitColor.shader to launch the editor, then replace all the lines with:

Shader "Hidden/RW/UnlitColor"
{
    Properties
    {
        _Color("Main Color", Color) = (0.0, 0.0, 0.0, 0.0)
    }

    SubShader
    {
        Tags { "RenderType" = "Opaque" }
        Fog {Mode Off}
        Color[_Color]

        Pass {}
    }
}

Here, you create a shader that takes a color property named _Color and passes it to the Color shader command. This is all you need to draw the objects in black.

Save the shader code and switch to the editor to compile it.

Executing the Render Pass

First, you need to create a material using the shader. Go back to VolumetricLightScattering.cs and, in LightScatteringPass, add the following line above the constructor:

private readonly Material occludersMaterial;

This will hold the material instance.

Now, add this line in the constructor:

occludersMaterial = new Material(Shader.Find("Hidden/RW/UnlitColor"));

This creates a new material instance with the UnlitColor shader. You use Shader.Find() to get a reference to the shader using the shader name.
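Shader.Find() returns null if the shader isn’t included in the build, which can happen with Hidden shaders that nothing in a scene references. If you ever hit that, a slightly more defensive version of this line might look like the following. This is an optional variant, not something the tutorial requires:

```csharp
// Look up the shader first so a missing shader fails loudly instead of
// throwing inside the Material constructor.
Shader unlitColorShader = Shader.Find("Hidden/RW/UnlitColor");
if (unlitColorShader != null)
{
    occludersMaterial = new Material(unlitColorShader);
}
else
{
    Debug.LogError("Hidden/RW/UnlitColor not found. " +
        "Was it stripped from the build?");
}
```

In a build, you can guarantee the shader ships by adding it to the Always Included Shaders list in Project Settings ▸ Graphics.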

To execute the rendering logic, locate Execute() and replace it with the following:

public override void Execute(ScriptableRenderContext context, 
    ref RenderingData renderingData)
{
    // 1
    if (!occludersMaterial)
    {
        return;
    }

    // 2
    CommandBuffer cmd = CommandBufferPool.Get();

    // 3
    using (new ProfilingScope(cmd, 
        new ProfilingSampler("VolumetricLightScattering")))
    {
        // TODO: 1

        // TODO: 2
    }

    // 4
    context.ExecuteCommandBuffer(cmd);
    CommandBufferPool.Release(cmd);
}

Here’s what’s going on in the code above:

  1. You stop the pass rendering if the material is missing.
  2. As you know by now, you issue graphic commands via command buffers. CommandBufferPool is just a collection of pre-created command buffers that are ready to use. You can request one using Get().
  3. You wrap the graphic commands inside a ProfilingScope, which ensures that FrameDebugger can profile the code.
  4. Once you add all the commands to CommandBuffer, you schedule it for execution and release it.

Drawing the Light Source

With the material created, you’ll now draw the actual light source.

Replace // TODO: 1 with the following lines:

context.ExecuteCommandBuffer(cmd);
cmd.Clear();

This prepares the command buffer so you can start adding commands to it.

The first graphic command you’ll issue will render the main light source. To keep it simple, you’ll draw the skybox, which contains the shape of the sun. This should yield very good results!

Add these lines below the previous code:

Camera camera = renderingData.cameraData.camera;
context.DrawSkybox(camera);

For ScriptableRenderContext to provide DrawSkybox, it needs a reference to the camera. You get this reference from RenderingData, a struct that provides information about the scene.

Referencing Unity Default Shaders

Next, you’ll draw the occluders, which are all the objects in the scene that can potentially block the light source. Instead of keeping track of these objects, you’ll use their shaders to reference them during rendering.

In this project, the objects all use Unity’s default shaders. To support them, you have to fetch the shader tag IDs for all the default shader passes. You’ll do this once and cache the results in a list.

To use C# lists, you must add this line at the top of the file:

using System.Collections.Generic;

Next, declare the following field at the top of LightScatteringPass:

private readonly List<ShaderTagId> shaderTagIdList = 
    new List<ShaderTagId>();

Then, add the following code in the constructor to populate the list:

shaderTagIdList.Add(new ShaderTagId("UniversalForward"));
shaderTagIdList.Add(new ShaderTagId("UniversalForwardOnly"));
shaderTagIdList.Add(new ShaderTagId("LightweightForward"));
shaderTagIdList.Add(new ShaderTagId("SRPDefaultUnlit"));

Drawing the Occluders

Once you have the default shader IDs, you can draw the objects that use those shaders.

Go back to the DrawSkybox() call and add the following lines below it:

// 1
DrawingSettings drawSettings = CreateDrawingSettings(shaderTagIdList, 
    ref renderingData, SortingCriteria.CommonOpaque);
// 2
drawSettings.overrideMaterial = occludersMaterial;

Here’s what you’re doing with this code:

  1. Before you draw anything, you need to set up a few things. DrawingSettings describes how to sort the objects and which shader passes are allowed. You create this by calling CreateDrawingSettings(). You supply this method with the shader passes, a reference to RenderingData and the sorting criteria for visible objects.
  2. You use the material override to replace the objects’ materials with occludersMaterial.

Next, add the following after the previous code:

context.DrawRenderers(renderingData.cullResults, 
    ref drawSettings, ref filteringSettings);

DrawRenderers handles the actual draw call. It needs to know which objects are currently visible, which is what the culling results are for. Additionally, you must supply the drawing settings and filtering settings. You pass both structs by reference.

You’ve defined the drawing settings already, but not the filtering settings. You only need to declare these once, so add this line at the top of the class:

private FilteringSettings filteringSettings = 
    new FilteringSettings(RenderQueueRange.opaque);

FilteringSettings indicates which render queue range is allowed: opaque, transparent or all. With this line, you set the range to filter any objects that aren’t part of the opaque render queue.
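If you ever wanted only specific objects to act as occluders, FilteringSettings also accepts a layer mask. The following is a hypothetical variant; this project has no "Occluders" layer:

```csharp
// In the constructor: only opaque objects on a dedicated layer get
// drawn into the occluders map. The "Occluders" layer is hypothetical.
filteringSettings = new FilteringSettings(
    RenderQueueRange.opaque, LayerMask.GetMask("Occluders"));
```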

The last thing you’ll do is clean up the resources you allocated when executing this render pass. To do that, replace OnCameraCleanup() with:

public override void OnCameraCleanup(CommandBuffer cmd)
{
    cmd.ReleaseTemporaryRT(occluders.id);
}

Congratulations, you’ve made a lot of progress! Save the script, click Play and, guess what… everything’s still the same. Don’t worry, you’ll see why in the next section.

Inspecting With the Frame Debugger

While you can’t see anything new in the scene, something different is happening under the hood. Next, you’ll use the Frame Debugger to inspect the renderer and see if the texture is being drawn properly.

Make sure you’re still in Play mode with the Game view selected. Select Window ▸ Analysis ▸ Frame Debugger, which will open a new window. Dock the window next to the Scene tab, press Enable and you’ll see this:

Frame debugger window

The main list shows the sequence of graphics commands in the form of a hierarchy that identifies where they originated.

Select VolumetricLightScattering and expand it. You’ll notice that the Game view changes. If rendering happens in a RenderTexture at the selected draw call, the contents of that RenderTexture display in the Game view. This is the occluders map!

The frame debugger window and the game view

If you kept the default resolution scale settings, you’ll see the texture is half the screen size. You can select the individual draw calls to inspect what they do. You can even step through each draw call:

Stepping through the draw calls for the occluders map

OK, the occluders map works. Click Disable to stop debugging.

Refining the Image in Post-Processing

Now to refine the image by blurring it in post-processing.

Implementing the Radial Blur Shader

Radial blur is achieved by creating a post-processing fragment shader.

Go to RW/Shaders, select Create ▸ Shader ▸ Image Effect Shader and name it RadialBlur.

Next, open RadialBlur.shader and replace the name declaration with:

Shader "Hidden/RW/RadialBlur"

This defines the new radial blur shader.

Next, you need to tell the shader about the settings you defined in the renderer feature. In the property block, add the following below _MainTex:

_BlurWidth("Blur Width", Range(0,1)) = 0.85
_Intensity("Intensity", Range(0,1)) = 1
_Center("Center", Vector) = (0.5,0.5,0,0)

_BlurWidth and _Intensity control how your light rays look. _Center is a Vector for the screen space coordinates of the sun, the origin point for the radial blur.

Combining the Images

For your next step, you’ll execute this shader on the occluders texture map and overlay the resulting color on top of the main camera color texture. You’ll use blend modes to determine how to combine the two images.

Start by going to SubShader and removing this code:

// No culling or depth
Cull Off ZWrite Off ZTest Always

Replace it with this:

Blend One One

This line configures the blend mode as additive. This adds both images’ values to the color channels and clamps them to the maximum value of 1.

Next, declare the following attributes above frag():

#define NUM_SAMPLES 100

float _BlurWidth;
float _Intensity;
float4 _Center;

The first line defines the number of samples to take to blur the image. A high number yields better results, but is also less performant. The other lines are the same variables you declared in the property block.

The actual magic happens inside the fragment shader. Replace frag() with this code:

fixed4 frag(v2f i) : SV_Target
{
    //1
    fixed4 color = fixed4(0.0f, 0.0f, 0.0f, 1.0f);

    //2
    float2 ray = i.uv - _Center.xy;

    //3
    for (int i = 0; i < NUM_SAMPLES; i++)
    {
        float scale = 1.0f - _BlurWidth * (float(i) / 
            float(NUM_SAMPLES - 1));
        color.xyz += tex2D(_MainTex, (ray * scale) + 
            _Center.xy).xyz / float(NUM_SAMPLES);
    }

    //4
    return color * _Intensity;
}

In the code above, you:

  1. Declare color with a default value of black.
  2. Calculate the ray that goes from the center point towards the current pixel UV coordinates.
  3. Sample the texture along the ray and accumulate the fragment color.
  4. Multiply color by intensity and return the result.

Save the shader and go back to Unity. You’ll test the new shader by creating a new material and dragging the shader file onto it.

Start by selecting the material and assigning occludersMapExample.png in the texture slot. Find this texture in RW/Textures.

Now, you’ll see the shader effect on the preview window. Change the preview shape to a plane and play around with the values to get a better understanding of the shader attributes.

Testing the radial blur shader

Fantastic, the effect is almost done.

Adding the Radial Blur Material Instance

The following steps are similar to what you did for the occluders map. Go back to VolumetricLightScattering.cs and, in LightScatteringPass, declare a material above the constructor:

private readonly Material radialBlurMaterial;

Then, add this line inside the constructor:

radialBlurMaterial = new Material(
    Shader.Find("Hidden/RW/RadialBlur"));

Nothing new so far, you’re just creating the material instance. Make sure that the material isn’t missing by replacing this if statement:

if (!occludersMaterial)
{
    return;
}

With this one:

if (!occludersMaterial || !radialBlurMaterial)
{
    return;
}

Now you’re all set to configure the blur material.

Configuring the Radial Blur Material

The blur shader needs to know the sun’s location so it can use it as the center point. Within Execute(), replace // TODO: 2 with the following lines:

// 1
Vector3 sunDirectionWorldSpace = 
    RenderSettings.sun.transform.forward;
// 2
Vector3 cameraPositionWorldSpace = 
    camera.transform.position;
// 3
Vector3 sunPositionWorldSpace = 
    cameraPositionWorldSpace + sunDirectionWorldSpace;
// 4
Vector3 sunPositionViewportSpace = 
    camera.WorldToViewportPoint(sunPositionWorldSpace);

This might seem like some crazy voodoo magic, but it’s actually quite simple:

  1. You get a reference to the sun from RenderSettings. You need the forward vector of the sun because directional lights don’t have a position in space.
  2. Get the camera position.
  3. Adding the direction to the camera position gives you a point one unit away from the camera along that direction. You’ll use this point as the sun’s position.
  4. The shader expects a viewport space position, but you did your calculations in world space. To fix this, you use WorldToViewportPoint() to transform the point into the camera’s viewport space.

Good job, you’ve finished the hardest part.

Now, pass the data to the shader with the following code:

radialBlurMaterial.SetVector("_Center", new Vector4(
    sunPositionViewportSpace.x, sunPositionViewportSpace.y, 0, 0));
radialBlurMaterial.SetFloat("_Intensity", intensity);
radialBlurMaterial.SetFloat("_BlurWidth", blurWidth);

Keep in mind that you only really need the x and y components of sunPositionViewportSpace since it represents a pixel position on the screen.

Blurring the Occluders Map

Finally, you need to blur the occluders map.

Add the following line of code to run the shader:

Blit(cmd, occluders.Identifier(), cameraColorTargetIdent, 
    radialBlurMaterial);

ScriptableRenderPass provides Blit(), a helper that copies a source texture into a destination texture using a shader. It executes your shader with occluders as the source texture, then stores the output in the camera color target. Because of the blend mode you defined in the shader, it combines both the output and the color target.

To get a reference to the camera color target, add the following line at the top of LightScatteringPass:

private RenderTargetIdentifier cameraColorTargetIdent;

Add a new function below the constructor to set this variable:

public void SetCameraColorTarget(RenderTargetIdentifier cameraColorTargetIdent)
{
    this.cameraColorTargetIdent = cameraColorTargetIdent;
}

In VolumetricLightScattering, add this line at the end of AddRenderPasses():

m_ScriptablePass.SetCameraColorTarget(renderer.cameraColorTarget);

This will pass the camera color target to the render pass, which Blit() requires.
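With that addition, and assuming the default template body, AddRenderPasses() should look roughly like this:

```csharp
public override void AddRenderPasses(ScriptableRenderer renderer,
    ref RenderingData renderingData)
{
    renderer.EnqueuePass(m_ScriptablePass);
    // Hand the camera color target to the pass so Blit() has a
    // destination to write the blurred occluders map into.
    m_ScriptablePass.SetCameraColorTarget(renderer.cameraColorTarget);
}
```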

Save the script and go back to the editor. Congratulations, you completed the effect!

Go to the Scene view and you’ll see your lighting effects at work. Click Play and move around the level to see it in action:

Playing the game with volumetric light scattering

Where to Go From Here?

Download the final project by using the Download Materials button at the top or bottom of this tutorial.

In this tutorial, you learned how to write your own renderer features to extend the Universal Render Pipeline. You also learned a cool post-processing technique to visually enhance your projects.

Feel free to explore the project files and experiment with the effect. To see how everything works together, inspect the render pass with the Frame Debugger.

There’s a small surprise hidden in the scene. Look at the day and night controller attached to the directional light. Activate Auto Increment and enjoy the sunrise. :]

If you want to learn more about the Universal Render Pipeline, it’s helpful to look at the source code. Yes! It’s available to anyone for free, and it’s a great source of learning material. You can find it in Packages/Universal RP.

Also, look at the Boat Attack Demo by Andre McGrail, which uses custom renderer features for water effects and other cool things.

We hope you enjoyed this tutorial! If you have any questions or comments, feel free to join the forum discussion below.
