
Core Image Tutorial for iOS: Custom Filters

Learn to create your own Core Image filters using the Metal Shading Language to build kernels that provide pixel-level image processing.


  • Swift 5, iOS 15, Xcode 13

Core Image is a powerful and efficient image processing framework. You can create beautiful effects using the built-in filters the framework provides, as well as create custom filters and image processors. You can adjust an image's color and geometry, and even perform complex convolutions.

Creating beautiful filters is an art, and one of the greatest artists was Leonardo da Vinci. In this tutorial, you’ll add some interesting touches to da Vinci’s famous paintings.

In the process, you’ll:

  • Get an overview of Core Image’s classes and built-in filters.
  • Create a filter using built-in filters.
  • Transform an image’s color using a custom color kernel.
  • Transform the geometry of an image using a custom warp kernel.
  • Learn to debug Core Image issues.

Get your paintbrushes, oops, I mean your Xcode ready. It’s time to dive into the amazing world of Core Image!

Getting Started

Download the project by clicking Download Materials at the top or bottom of this page. Open the RayVinci project in the starter folder. Build and run.

RayVinci Getting Started

You’ll see four of Leonardo da Vinci’s most famous works. Tapping a painting opens a sheet, but the image’s output is empty.

In this tutorial, you’ll create filters for these images and then see the result of applying a filter in the output.

Swipe down to dismiss the sheet. Next, tap Filter List on the top right.

That button should show a list of available built-in filters. But wait, it’s currently empty. You’ll fix that next. :]

Introducing Core Image Classes

Before you populate the list of filters, you need to understand the Core Image framework’s basic classes.

  • CIImage: Represents an image that is either ready for processing or produced by the Core Image filters. A CIImage object has all the image’s data within it but isn’t actually an image. It’s like a recipe that contains all the ingredients to make a dish but isn’t the dish itself.

    You’ll see how to render the image for display later in this tutorial.

  • CIFilter: Takes one or more images, processes each image by applying transformations and produces a CIImage as its output. You can chain multiple filters to create interesting effects. CIFilter objects are mutable and not thread-safe.
  • CIContext: Renders the processed results from the filter. For example, CIContext helps create a Quartz 2D image from a CIImage object.
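These three classes form a simple pipeline. Here's an illustrative sketch, not part of the sample project — the sepia filter and the 0.8 intensity are just one possible choice:

```swift
import CoreImage
import UIKit

// Illustrative only: a CIImage flows through a CIFilter, and a
// CIContext renders the result into a bitmap-backed UIImage.
func applySepia(to uiImage: UIImage) -> UIImage? {
  guard let input = CIImage(image: uiImage) else { return nil }
  guard let filter = CIFilter(name: "CISepiaTone") else { return nil }
  filter.setValue(input, forKey: kCIInputImageKey)
  filter.setValue(0.8, forKey: kCIInputIntensityKey)
  guard let output = filter.outputImage else { return nil }
  // Nothing renders until the context creates the CGImage.
  let context = CIContext()
  guard let cgImage = context.createCGImage(output, from: output.extent) else {
    return nil
  }
  return UIImage(cgImage: cgImage)
}
```

Note the "recipe" behavior: `filter.outputImage` is cheap, and the actual pixel work happens only when the context renders.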

To learn more about these classes, refer to the Core Image Tutorial: Getting Started.

Now that you’re familiar with the Core Image classes, it’s time to populate the list of filters.

Fetching the List of Built-In Filters

Open RayVinci and select FilterListView.swift. Replace filterList in FilterListView with:

let filterList = CIFilter.filterNames(inCategory: nil)

Here, you fetch the list of all available built-in filters provided by Core Image by using filterNames(inCategory:) and passing nil as the category. You can view the list of available categories in CIFilter's developer documentation.
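For instance, passing one of those category constants instead of nil narrows the list. A quick sketch, assuming you only care about blur filters:

```swift
import CoreImage

// Only filters in the blur category, e.g. CIGaussianBlur.
let blurFilters = CIFilter.filterNames(inCategory: kCICategoryBlur)
```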

Open FilterDetailView.swift. Replace Text("Filter Details") in body with:

// 1
if let ciFilter = CIFilter(name: filter) {
  // 2
  ScrollView {
    Text(ciFilter.attributes.description)
  }
} else {
  // 3
  Text("Unknown filter!")
}

Here, you:

  1. Initialize a filter, ciFilter, using the filter name. Since the name is a string and can be misspelled, the initializer returns an optional. For this reason, you’ll need to check for the existence of a filter.
  2. You can inspect the filter’s various attributes using attributes. Here, you create a ScrollView and populate the description of the attributes in a Text view if the filter exists.
  3. If the filter doesn’t exist or isn’t known, you show a Text view explaining the situation.

Build and run. Tap Filter List. Whoa, that’s a lot of filters!

Tap any filter to see its attributes.

Fetch List of available filters

Remarkable, isn’t it? You’re just getting started! In the next section, you’ll use one of these built-in filters to make the sun shine on the “Mona Lisa”. :]

Using Built-In Filters

Now that you’ve seen the list of available filters, you’ll use one of these to create an interesting effect.

Open ImageProcessor.swift. At the top, before the class declaration, add:

enum ProcessEffect {
  case builtIn
  case colorKernel
  case warpKernel
  case blendKernel
}

Here, you declare ProcessEffect as an enum. It has all the filter cases you’ll work on in this tutorial.

Add the following to ImageProcessor:

// 1
private func applyBuiltInEffect(input: CIImage) {
  // 2
  let noir = CIFilter(
    name: "CIPhotoEffectNoir",
    parameters: ["inputImage": input])?
    .outputImage
  // 3
  let sunGenerate = CIFilter(
    name: "CISunbeamsGenerator",
    parameters: [
      "inputStriationStrength": 1,
      "inputSunRadius": 300,
      "inputCenter": CIVector(
        x: input.extent.width - input.extent.width / 5,
        y: input.extent.height - input.extent.height / 10)
    ])?
    .outputImage
  // 4
  let compositeImage = input.applyingFilter(
    "CIBlendWithMask",
    parameters: [
      kCIInputBackgroundImageKey: noir as Any,
      kCIInputMaskImageKey: sunGenerate as Any
    ])
}

Here, you:

  1. Declare a private method that takes a CIImage as input and applies a built-in filter.
  2. You start by creating a darkened, moody noir effect using CIPhotoEffectNoir. CIFilter takes a string as the name and parameters in the form of a dictionary. You fetch the resulting filtered image from outputImage.
  3. Next, you create a generator filter using CISunbeamsGenerator. This creates a sunbeams mask. In the parameters, you set:
    • inputStriationStrength: Represents the intensity of the sunbeams.
    • inputSunRadius: Represents the radius of the sun.
    • inputCenter: The x and y position of the center of the sunbeam. In this case, you set the position to the top right of the image.
  4. Here, you create a stylized effect by using CIBlendWithMask. You apply the filter on the input by setting the result of CIPhotoEffectNoir as the background image and sunGenerate as the mask image. The result of this composition is a CIImage.

ImageProcessor has output, a published property which is a UIImage. You’ll need to convert the result of the composition to a UIImage to display it.

In ImageProcessor, add the following below @Published var output = UIImage():

let context = CIContext()

Here, you create an instance of CIContext that all the filters will use.

Add the following to ImageProcessor:

private func renderAsUIImage(_ image: CIImage) -> UIImage? {
  if let cgImage = context.createCGImage(image, from: image.extent) {
    return UIImage(cgImage: cgImage)
  }
  return nil
}

Here, you use context to create an instance of CGImage from CIImage.

Using cgImage, you then create a UIImage. The user will see this image.

Displaying a Built-In Filter’s Output

Add the following to the end of applyBuiltInEffect(input:):

if let outputImage = renderAsUIImage(compositeImage) {
  output = outputImage
}

This converts compositeImage, which is a CIImage, to a UIImage using renderAsUIImage(_:). You then save the result to output.

Add the following new method to ImageProcessor:

// 1
func process(painting: Painting, effect: ProcessEffect) {
  // 2
  guard
    let paintImage = UIImage(named: painting.image),
    let input = CIImage(image: paintImage)
  else {
    print("Invalid input image")
    return
  }
  switch effect {
  // 3
  case .builtIn:
    applyBuiltInEffect(input: input)
  default:
    print("Unsupported effect")
  }
}

Here, you:

  1. Create a method that acts as an entry point to ImageProcessor. It takes an instance of Painting and an effect to apply.
  2. Check for a valid image.
  3. If the effect is of type .builtIn, you call applyBuiltInEffect(input:) to apply the filter.

Open PaintingWall.swift. Below selectedPainting = paintings[index] in the action closure of Button, add:

var effect = ProcessEffect.builtIn
if let painting = selectedPainting {
  switch index {
  case 0:
    effect = .builtIn
  default:
    effect = .builtIn
  }
  ImageProcessor.shared.process(painting: painting, effect: effect)
}

Here, you set the effect to .builtIn for the first painting and also make it the default effect. Then you apply the filter by calling process(painting:effect:) on ImageProcessor.

Build and run. Tap the “Mona Lisa”. You’ll see a built-in filter applied in the output!

Applying Built-in Filter

Great job making the sun shine on Mona Lisa. No wonder she’s smiling! Now it’s time to create your own filter using CIKernel.

Meet CIKernel

With CIKernel, you can put in place custom code, called a kernel, to manipulate an image pixel by pixel. The GPU processes these pixels. You write kernels in the Metal Shading Language, which offers the following advantages over the older Core Image Kernel Language, deprecated since iOS 12:

  • Supports all the great features of Core Image kernels like concatenation and tiling.
  • Compiles at build time with error diagnostics, so you don’t need to wait until runtime for errors to surface.
  • Offers syntax highlighting and syntax checking.

There are different types of kernels:

  • CIColorKernel: Changes the color of a pixel but doesn’t know the pixel’s position.
  • CIWarpKernel: Changes the position of a pixel but doesn’t know the pixel’s color.
  • CIBlendKernel: Blends two images in an optimized way.

To create and apply a kernel, you:

  1. First, add custom build rules to the project.
  2. Then, add the Metal source file.
  3. Load the kernel.
  4. Finally, initialize and apply the kernel.

Custom kernel steps

You’ll implement each of these steps next. Get ready for a fun ride!

Creating Build Rules

You need to compile the Core Image Metal code and link it with special flags.

Custom kernel build rules

Select the RayVinci target in the Project navigator. Then, select the Build Rules tab. Add a new build rule by clicking +.

Then, set up the first new build rule:

  1. Set Process to Source files with names matching. Then set *.ci.metal as the value.
  2. Uncheck Run once per architecture.
  3. Add the following script:
    xcrun metal -c -fcikernel "${INPUT_FILE_PATH}" \
      -o "${SCRIPT_OUTPUT_FILE_0}"

    This calls the Metal compiler with the required -fcikernel flag.

  4. Add the following in Output Files:
    $(DERIVED_FILE_DIR)/$(INPUT_FILE_BASE).air

    This produces an output binary that ends in .ci.air.

Metal To Air file

Next, add another new build rule by clicking + again.

Follow these steps for the second new build rule:

  1. Set Process to Source files with names matching. Then set *.ci.air as the value.
  2. Uncheck Run once per architecture.
  3. Add the following script:
    xcrun metallib -cikernel "${INPUT_FILE_PATH}" -o "${SCRIPT_OUTPUT_FILE_0}"

    This calls the Metal linker with the required -cikernel flag.

  4. Add the following in Output Files:
    $(METAL_LIBRARY_OUTPUT_DIR)/$(INPUT_FILE_BASE).metallib

    This produces a file ending with .ci.metallib in the app bundle.

Air to Metallib

Next, it’s time to add the Metal source.

Adding the Metal Source

First, you’ll create a source file for a color kernel. In the Project navigator, highlight RayVinci right under the RayVinci project.

Right-click and choose New Group. Name this new group Filters. Then, highlight the group and add a new Metal file named ColorFilterKernel.ci.metal.

Open the file and add:

// 1
#include <CoreImage/CoreImage.h>
// 2
extern "C" {
  namespace coreimage {
    // 3
    float4 colorFilterKernel(sample_t s) {
      // 4
      float4 swappedColor;
      swappedColor.r = s.g;
      swappedColor.g = s.b;
      swappedColor.b = s.r;
      swappedColor.a = s.a;
      return swappedColor;
    }
  }
}

Here’s a code breakdown:

  1. Including the Core Image header lets you access the classes the framework provides. This automatically includes the Core Image Metal Kernel Library, CIKernelMetalLib.h.
  2. The kernel needs to be inside an extern "C" enclosure so it’s accessible by name at runtime. Next, you specify the coreimage namespace. All Core Image kernel extensions are declared in the coreimage namespace to avoid conflicts with Metal’s own functions.
  3. Here, you declare colorFilterKernel, which takes an input of type sample_t. sample_t represents a single color sample from an input image. colorFilterKernel returns a float4 that represents the RGBA value of the pixel.
  4. Then, you declare a new float4, swappedColor, and swap the RGBA values from the input sample. You then return the sample with the swapped values.
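To make the swap concrete, here's the same per-pixel logic sketched in plain Swift. The RGBA struct is hypothetical, purely for illustration — the real work happens in the Metal kernel:

```swift
// CPU-side sketch of the kernel's channel swap: r←g, g←b, b←r,
// alpha untouched.
struct RGBA {
  var r, g, b, a: Double
}

func swapChannels(_ s: RGBA) -> RGBA {
  RGBA(r: s.g, g: s.b, b: s.r, a: s.a)
}
```

A pure red pixel (1, 0, 0, 1) comes out as pure blue (0, 0, 1, 1), which is why the painting's colors shift so dramatically.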

Next, you’ll write the code to load and apply the kernel.

Loading the Kernel Code

To load and apply a kernel, start by creating a subclass of CIFilter.

Create a new Swift file in the Filters group. Name it ColorFilter.swift and add:

// 1
import CoreImage

class ColorFilter: CIFilter {
  // 2
  var inputImage: CIImage?

  // 3
  static var kernel: CIKernel = { () -> CIColorKernel in
    guard
      let url = Bundle.main.url(
        forResource: "ColorFilterKernel.ci",
        withExtension: "metallib"),
      let data = try? Data(contentsOf: url)
    else {
      fatalError("Unable to load metallib")
    }

    guard let kernel = try? CIColorKernel(
      functionName: "colorFilterKernel",
      fromMetalLibraryData: data)
    else {
      fatalError("Unable to create color kernel")
    }

    return kernel
  }()

  // 4
  override var outputImage: CIImage? {
    guard let inputImage = inputImage else { return nil }
    return ColorFilter.kernel.apply(
      extent: inputImage.extent,
      roiCallback: { _, rect in
        return rect
      },
      arguments: [inputImage])
  }
}

Here, you:

  1. Start by importing the Core Image framework.
  2. Subclassing CIFilter involves two main steps:
    • Specifying the input parameters. Here, you use inputImage.
    • Overriding outputImage.
  3. Then, you declare a static property, kernel, that loads the contents of the ColorFilterKernel.ci.metallib. This way, the library loads only once. You then create an instance of CIColorKernel with the contents of the ColorFilterKernel.ci.metallib.
  4. Next, you override outputImage. Here, you apply the kernel by using apply(extent:roiCallback:arguments:). The extent determines how much of the input image gets passed to the kernel.

    You pass the entire image, so the filter applies to the entire image. roiCallback determines the rect of the input image needed to render a given rect of outputImage. Here, the rects of inputImage and outputImage don’t differ, so you return the same value. You pass inputImage in the arguments array to the kernel.
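The identity roiCallback works here because a color kernel reads only the pixel it writes. As a hypothetical contrast, a kernel that sampled neighboring pixels within some radius would need to request a larger input rect:

```swift
import CoreImage

// Hypothetical: a kernel sampling within `radius` of each output
// pixel needs the input rect grown by that radius on every side.
let radius: CGFloat = 4
let roi: CIKernelROICallback = { _, rect in
  rect.insetBy(dx: -radius, dy: -radius)
}
```

Getting the ROI wrong typically shows up as clamped or black edges in the output, so it's worth thinking through for any kernel that isn't purely per-pixel.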

Now that you’ve created the color kernel filter, you’ll apply it to an image.

Applying the Color Kernel Filter

Open ImageProcessor.swift. Add the following method to ImageProcessor:

private func applyColorKernel(input: CIImage) {
  let filter = ColorFilter()
  filter.inputImage = input
  if let outputImage = filter.outputImage,
    let renderImage = renderAsUIImage(outputImage) {
    output = renderImage
  }
}

Here, you declare applyColorKernel(input:). This takes a CIImage as input. You create the custom filter by creating an instance of ColorFilter.

The filter’s outputImage has the color kernel applied. You then create an instance of UIImage using renderAsUIImage(_:) and set this as the output.

Next, handle .colorKernel in process(painting:effect:) as shown below. Add this new case above default:

case .colorKernel:
  applyColorKernel(input: input)

Here, you call applyColorKernel(input:) to apply your custom color kernel filter.

Finally, open PaintingWall.swift. Add the following in the switch statement right below case 0 in the Button‘s action closure:

case 1:
  effect = .colorKernel

This sets the effect to .colorKernel for the second painting.

Build and run. Now tap the second painting, “The Last Supper”. You’ll see the color kernel filter applied and the RGBA values swapped in the image.

Color Kernel Filter

Great job! Next, you’ll create a cool warp effect on da Vinci’s mysterious “Salvator Mundi.”

Creating a Warp Kernel

Similar to the color kernel, you’ll start by adding a Metal source file. Create a new Metal file in the Filters group named WarpFilterKernel.ci.metal. Open the file and add:

#include <CoreImage/CoreImage.h>
extern "C" {
  namespace coreimage {
    float2 warpFilter(destination dest) {
      float y = dest.coord().y + tan(dest.coord().y / 10) * 20;
      float x = dest.coord().x + tan(dest.coord().x / 10) * 20;
      return float2(x, y);
    }
  }
}

Here’s what you added:

  1. Like in the color kernel Metal source, you include the Core Image header and enclose the method in an extern "C" enclosure. Then you specify the coreimage namespace.
  2. Next, you declare warpFilter(_:) with an input parameter of type destination, allowing access to the position of the pixel you’re currently computing. It returns the position in the input image coordinates you can then use as a source.

    You access the x and y coordinates of the destination pixel using coord(). Then, you apply simple math to transform the coordinates and return them as source pixel coordinates to create an interesting tile effect.

    Note: Try replacing tan with sin in warpFilter(_:) and you’ll get an interesting distortion effect! :]
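Read as plain math, the kernel maps each destination coordinate back to a source coordinate to sample. Here's the same mapping sketched in Swift, purely for illustration — the real computation runs in the Metal kernel:

```swift
import Foundation

// For each destination pixel, return the source coordinate to
// sample. The tan term repeats periodically, producing the tiles.
func warpSource(x: Double, y: Double) -> (x: Double, y: Double) {
  (x + tan(x / 10) * 20, y + tan(y / 10) * 20)
}
```

At the origin the mapping is the identity, since tan(0) is 0; farther along each axis, pixels are pulled increasingly far from their destination until the pattern repeats.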

Loading the Warp Kernel

Similar to the filter you created for the color kernel, you’ll create a custom filter to load and initialize the warp kernel.

Create a new Swift file in the Filters group. Name it WarpFilter.swift and add:

import CoreImage

// 1
class WarpFilter: CIFilter {
  var inputImage: CIImage?
  // 2
  static var kernel: CIWarpKernel = { () -> CIWarpKernel in
    guard
      let url = Bundle.main.url(
        forResource: "WarpFilterKernel.ci",
        withExtension: "metallib"),
      let data = try? Data(contentsOf: url)
    else {
      fatalError("Unable to load metallib")
    }

    guard let kernel = try? CIWarpKernel(
      functionName: "warpFilter",
      fromMetalLibraryData: data)
    else {
      fatalError("Unable to create warp kernel")
    }

    return kernel
  }()

  // 3
  override var outputImage: CIImage? {
    guard let inputImage = inputImage else { return .none }

    return WarpFilter.kernel.apply(
      extent: inputImage.extent,
      roiCallback: { _, rect in
        return rect
      },
      image: inputImage,
      arguments: [])
  }
}

Here, you:

  1. Created WarpFilter as a subclass of CIFilter with inputImage as the input parameter.
  2. Next, you declare the static property kernel to load the contents of WarpFilterKernel.ci.metallib. You then create an instance of CIWarpKernel using the contents of .metallib.
  3. Finally, you provide the output by overriding outputImage. Within the override, you apply the kernel to inputImage using apply(extent:roiCallback:image:arguments:) and return the result.

Applying the Warp Kernel Filter

Open ImageProcessor.swift. Add the following to ImageProcessor:

private func applyWarpKernel(input: CIImage) {
  let filter = WarpFilter()
  filter.inputImage = input
  if let outputImage = filter.outputImage,
    let renderImage = renderAsUIImage(outputImage) {
    output = renderImage
  }
}

Here, you declare applyWarpKernel(input:), which takes a CIImage as input. You then create an instance of WarpFilter and set inputImage.

The filter’s outputImage has the warp kernel applied. You then create an instance of UIImage using renderAsUIImage(_:) and save it to output.

Next, add the following case to process(painting:effect:), below case .colorKernel:

case .warpKernel:
  applyWarpKernel(input: input)

Here, you handle the case for .warpKernel and call applyWarpKernel(input:) to apply the warp kernel filter.

Finally, open PaintingWall.swift. Add the following case in the switch statement right below case 1 in action:

case 2:
  effect = .warpKernel

This sets the effect to .warpKernel for the third painting.

Build and run. Tap the painting of Salvator Mundi. You’ll see an interesting warp-based tile effect applied.

Warp Kernel Filter

Congrats! You applied your own touch to a masterpiece! ;]

Challenge: Implementing a Blend Kernel

The CIBlendKernel is optimized for blending two images. As a fun challenge, implement a custom filter for CIBlendKernel. Some hints:

  1. Create a subclass of CIFilter that takes in two images: an input image and a background image.
  2. Use the built-in available CIBlendKernel kernels. For this challenge, use the built-in multiply blend kernel.
  3. Create a method in ImageProcessor that applies the blend kernel filter to the image and sets the result as the output. You can use the multi_color image provided in the project assets as the background image for the filter. In addition, handle the case for .blendKernel.
  4. Apply this filter to the fourth image in PaintingWall.swift.
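If you want a nudge before diving in, here's a sketch of the core call — a hint only, not the full solution. The filter subclass and ImageProcessor wiring follow the same pattern as ColorFilter:

```swift
import CoreImage

// CIBlendKernel exposes built-in blend modes as class properties;
// multiply blends each pixel pair by multiplying their color values.
func multiplyBlend(_ input: CIImage, over background: CIImage) -> CIImage? {
  CIBlendKernel.multiply.apply(foreground: input, background: background)
}
```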

You’ll find the solution implemented in the final project available in the downloaded materials. Good luck!

Debugging Core Image Issues

Knowing how Core Image renders an image can help you debug when the image doesn’t appear the way you expect. The easiest way is to use Core Image Quick Look while debugging.

Using Core Image Quick Look

Open ImageProcessor.swift. Put a breakpoint on the line where you set output in applyColorKernel(input:). Build and run. Tap “The Last Supper”.

Viewing Core Image graph using quick look

When you hit the breakpoint, hover over outputImage. You’ll see a small popover that shows the address.

Click the eye symbol. A window will appear that shows the graph that makes the image. Pretty cool, huh?


Using CI_PRINT_TREE

CI_PRINT_TREE is a debugging feature built on the same infrastructure as Core Image Quick Look. It has several modes and operations.

Edit the RayVinci scheme: select the Run action, then add CI_PRINT_TREE as a new environment variable with a value of 7 pdf.

CI print tree environment variable

The value of CI_PRINT_TREE takes the form graph_type output_type options.

graph_type denotes the stages of the Core Image render. Here are the values you can specify:

  • 1: The initial graph, showing the color spaces.
  • 2: The optimized graph, showing exactly how Core Image optimizes the render.
  • 4: The concatenated graph, showing how much intermediate memory the render needs.
  • 7: Verbose logging. This prints all the above graphs.

For output_type, you can specify either pdf or png. Core Image saves the resulting documents to a temporary directory.

Build and run. Select “The Last Supper” in the simulator. Now, open the temporary directory on your Mac by navigating to /tmp using the terminal.

You’ll see all the graphs as PDF files. Open one of the files with _initial_graph.pdf as the suffix.

Color kernel graph render

The input is at the bottom, and the output is at the top. The red nodes represent the color kernels, while the green nodes represent the warp kernels. You’ll also see each step’s ROI and extent.

To learn more about the various options you can set for CI_PRINT_TREE, check out this WWDC session: Discover Core Image debugging techniques.

Where to Go From Here?

You can download the completed version of the project using the Download Materials button at the top or bottom of this tutorial.

In this tutorial, you learned to create and apply custom filters using Metal-based Core Image kernels. To learn more, check out Apple’s WWDC sessions on Core Image. You can also refer to Apple’s guide, Metal Shading Language for Core Image Kernels.

I hope you enjoyed this tutorial. If you have any questions or comments, please join the forum discussion below.

