Image Depth Maps Tutorial for iOS: Getting Started

Learn how to use the incredibly powerful image manipulation frameworks on iOS to work with image depth maps in only a few lines of code. By Owen L Brown.


Color Highlight Filter

Return to DepthImageFilters.swift once more and add the following method:

func createColorHighlight(
  for image: SampleImage,
  withFocus focus: CGFloat
) -> UIImage? {
  let mask = createMask(for: image, withFocus: focus)
  let grayscale = image.filterImage.applyingFilter("CIPhotoEffectMono")
  let output = image.filterImage.applyingFilter("CIBlendWithMask", parameters: [
    "inputBackgroundImage" : grayscale,
    "inputMaskImage": mask
  ])

  guard let cgImage = context.createCGImage(output, from: output.extent) else {
    return nil
  }

  return UIImage(cgImage: cgImage)
}

This should look familiar. It’s almost exactly the same as createSpotlightImage(for:withFocus:), which you just wrote. The difference is that this time, you set the background image to be a grayscale version of the original image.

This filter shows full color around the focal point set by the slider position and fades to grayscale away from it.
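
If you want to see exactly what CIBlendWithMask is working with, it can help to render the mask by itself. Here's a small debugging helper you could add to DepthImageFilters.swift; it isn't part of the tutorial, but it only relies on createMask(for:withFocus:) and the context you already use:

// Debug helper (not in the finished project): renders the raw mask so you
// can inspect it. White areas of the mask keep the full-color image, while
// black areas fall through to the grayscale background.
func previewMask(for image: SampleImage, withFocus focus: CGFloat) -> UIImage? {
  let mask = createMask(for: image, withFocus: focus)
  guard let cgImage = context.createCGImage(mask, from: mask.extent) else {
    return nil
  }
  return UIImage(cgImage: cgImage)
}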

Open DepthImageViewController.swift and, in the same switch statement, replace the code for (.filtered, .color) with the following:

return depthFilters.createColorHighlight(for: image, withFocus: focus)

This calls your new filter method and displays the result.

Build and run to see the magic:

[Screenshot: the color highlight filter in action]

Don’t you hate it when you take a picture only to discover later that the camera focused on the wrong object? What if you could change the focus after the fact?

That’s exactly what the depth-inspired filter you’ll write next does!

Change the Focal Length

Under createColorHighlight(for:withFocus:) in DepthImageFilters.swift, add one last method:

func createFocalBlur(
  for image: SampleImage,
  withFocus focus: CGFloat
) -> UIImage? {
  // 1
  let mask = createMask(for: image, withFocus: focus)

  // 2
  let invertedMask = mask.applyingFilter("CIColorInvert")

  // 3
  let output = image.filterImage.applyingFilter(
    "CIMaskedVariableBlur", 
    parameters: [
      "inputMask" : invertedMask,
      "inputRadius": 15.0
    ])

  // 4
  guard let cgImage = context.createCGImage(output, from: output.extent) else {
    return nil
  }

  // 5
  return UIImage(cgImage: cgImage)
}

This filter is a little different from the other two.

  1. First, you get the initial mask that you’ve used previously.
  2. You then use CIColorInvert to invert the mask.
  3. Then you apply CIMaskedVariableBlur, a filter introduced in iOS 11. It blurs each pixel using a radius equal to inputRadius multiplied by the mask's value at that pixel. Where the mask value is 1.0, the blur is at its maximum, which is why you needed to invert the mask first.
  4. Once again, you generate a CGImage using CIContext.
  5. You use that CGImage to create a UIImage and return it.
Note: If you run into performance issues, try decreasing inputRadius. Gaussian blurs are computationally expensive, and the larger the blur radius, the more work each output pixel requires.
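
One low-effort way to do that, not from the tutorial itself, is to derive the radius from the image size so that smaller images stay cheap to render. The sketch below would replace step 3 above:

// Hypothetical tweak: scale the blur radius with the image width so the
// cost stays roughly proportional to the amount of image being blurred.
let radius = min(15.0, image.filterImage.extent.width / 100.0)
let output = image.filterImage.applyingFilter(
  "CIMaskedVariableBlur",
  parameters: [
    "inputMask": invertedMask,
    "inputRadius": radius
  ])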

Before you can run, you need to update the switch statement in DepthImageViewController.swift once more. To use your shiny new method, change the code under (.focused, .blur) to:

return depthFilters.createFocalBlur(for: image, withFocus: focus)

Build and run.

[Screenshot: the focal blur filter applied to the bike image]

It’s… so… beautiful!


More About AVDepthData

Remember how you scaled the mask in createMask(for:withFocus:)? You had to do this because the depth data the iPhone captures is at a lower resolution than the camera sensor. It's closer to 0.5 megapixels than the 12 megapixels the camera can take.
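
That size mismatch is why the mask has to be stretched up to the full image size before blending. Here is a rough sketch of the kind of scaling createMask(for:withFocus:) performs on an earlier page of this tutorial; depthImage is an illustrative name for the depth-based CIImage:

// Sketch only: scale a small depth-derived CIImage up to the extent of the
// full-resolution image so the two can be combined pixel for pixel.
let scaleX = image.filterImage.extent.width / depthImage.extent.width
let scaleY = image.filterImage.extent.height / depthImage.extent.height
let scaledMask = depthImage.transformed(
  by: CGAffineTransform(scaleX: scaleX, y: scaleY))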

Another important thing to know is the data can be filtered or unfiltered. Unfiltered data may have holes represented by NaN, which stands for Not a Number — a possible value in floating point data types. If the phone can’t correlate two pixels or if something obscures just one of the cameras, it uses these NaN values for disparity.

Pixels with a value of NaN display as black. Since multiplying by NaN will always result in NaN, these black pixels will propagate to your final image, and they’ll look like holes.
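
If you do need to work with unfiltered data, one option is to scrub the holes out of the disparity buffer yourself before building a mask. The helper below is a sketch, not part of the sample project, and assumes you've converted the AVDepthData to 32-bit disparity first, for example with converting(toDepthDataType: kCVPixelFormatType_DisparityFloat32):

import AVFoundation

// Replace NaN holes in an unfiltered disparity buffer with 0 so they don't
// propagate through later filter math. Assumes 32-bit float disparity data.
func scrubNaNs(in pixelBuffer: CVPixelBuffer) {
  CVPixelBufferLockBaseAddress(pixelBuffer, [])
  defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

  guard let base = CVPixelBufferGetBaseAddress(pixelBuffer) else { return }
  let width = CVPixelBufferGetWidth(pixelBuffer)
  let height = CVPixelBufferGetHeight(pixelBuffer)
  let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)

  for row in 0..<height {
    let pixels = (base + row * bytesPerRow).assumingMemoryBound(to: Float32.self)
    for col in 0..<width where pixels[col].isNaN {
      pixels[col] = 0  // treat holes as the farthest possible disparity
    }
  }
}

Zeroing a hole effectively pushes it to the far side of the scene; depending on the effect you're after, copying a neighboring value may look better.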

As this can be a pain to deal with, Apple gives you filtered data, when available, to fill in these gaps and smooth out the data.

If you’re unsure, check isDepthDataFiltered to find out if you’re dealing with filtered or unfiltered data.
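
If you capture your own photos rather than using the bundled sample images, you can also opt into filtering at capture time and check the flag on the result. These are standard AVFoundation properties, but the snippet below is only a sketch, not part of the sample project:

import AVFoundation

// Sketch: ask AVFoundation to fill depth holes for you when capturing.
// Depth data delivery must also be enabled on the AVCapturePhotoOutput.
let settings = AVCapturePhotoSettings()
settings.isDepthDataDeliveryEnabled = true
settings.isDepthDataFiltered = true

// Later, on the AVDepthData attached to the captured photo,
// isDepthDataFiltered tells you whether the holes were filled in.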

Where to Go From Here?

Download the final project using the Download Materials button at the top or bottom of this tutorial.

There are tons more Core Image filters available. Check Apple’s Core Image Filter Reference for a complete list. Many of these filters create interesting effects when you combine them with depth data.

Additionally, you can capture depth data with video, too! Think of the possibilities.
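
If you want to explore that, AVCaptureDepthDataOutput is the entry point. Here's a minimal sketch of the session wiring, with delegate setup, error handling and session start omitted; treat it as a starting point rather than a finished implementation:

import AVFoundation

// Sketch: stream depth data alongside video from the dual camera.
let session = AVCaptureSession()
session.beginConfiguration()

if let camera = AVCaptureDevice.default(.builtInDualCamera, for: .video, position: .back),
   let input = try? AVCaptureDeviceInput(device: camera),
   session.canAddInput(input) {
  session.addInput(input)
}

let depthOutput = AVCaptureDepthDataOutput()
if session.canAddOutput(depthOutput) {
  session.addOutput(depthOutput)
  depthOutput.isFilteringEnabled = true  // filled-in depth frames, as above
}

session.commitConfiguration()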

I hope you had fun building these image filters. If you have any questions or comments, please join the forum discussion below!