
Swift Accelerate and vImage: Getting Started

Learn how to process images using Accelerate and vImage in a SwiftUI application.

5/5 7 Ratings

Version

  • Swift 5, iOS 14, Xcode 12

The most valuable benefit of modern computing might be the ability to do complicated math quickly and accurately. Early computers existed almost exclusively to automate tedious and error-prone calculations previously done by hand.

Today, a phone can do calculations in a fraction of a second that once would have taken a team of people weeks or even months. This increased ability invites programmers to add more complicated calculations to these devices, increasing the utility of a phone.

The Accelerate framework gives app developers an efficient, high-speed library for large-scale mathematical or image-based calculations. It uses the vector-processing capabilities on modern purpose-built CPUs to perform calculations quickly while maintaining efficient energy usage.

Getting Started

First, you need to understand a bit about what Accelerate is and which of its components you'll use in this tutorial. So before diving into code, take a look at what makes up Accelerate.

Accelerate comprises several related libraries, each of which performs a dedicated type of mathematical process. The libraries are:

  • BNNS: for training and running neural networks
  • vImage: an image processing library
  • vDSP: a library of digital signal processing functions
  • vForce: to perform arithmetic and transcendental calculations on large sets of numbers
  • Sparse Solvers, BLAS and LAPACK: for linear algebra calculations

Apple also uses these libraries as building blocks of other frameworks. For example, CoreML builds on top of BNNS. The archive and compression frameworks also build on top of Accelerate. Because Apple uses these frameworks extensively, you’ll also find support on all current Apple platforms.

In this tutorial, you’ll explore the Accelerate framework using the vImage library. All the libraries work similarly, and the vImage library provides clear visual examples that will be easier to understand than more complex tasks like digital signal processing.

Introducing vImage and vImage_Buffer

vImage gives you a way to manipulate large images using the CPU's vector instruction set. These instructions let you write apps that perform complex image calculations quickly while placing less strain on a mobile device's battery than general-purpose instructions would. vImage works well when you need to perform very complex calculations, process real-time video or achieve high accuracy.

Accelerate is somewhat unusual among older Apple frameworks in that it has been partially updated for Swift. Unlike many older frameworks, much of its functionality works as you'd expect from a native Swift library. But you can't ignore the framework's older origins, because many calls still expect and use pre-Swift idioms and patterns. You'll see how to manage those later in this tutorial.

For an image to work with vImage, you must first convert it to the native format for vImage — a vImage_Buffer. This buffer represents raw image data, and vImage functions treat it more as a set of numbers than as image data, keeping with the vector processing paradigm of Accelerate.
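To make the buffer concrete, here's a minimal sketch that creates a buffer from an image and prints the handful of fields a vImage_Buffer actually carries: a raw data pointer, the pixel dimensions and the stride of each row. The inspectBuffer(of:) helper is invented for illustration and isn't part of the tutorial project.

```swift
import UIKit
import Accelerate

// Illustration only: inspectBuffer(of:) is an invented helper, not part
// of the tutorial project. It shows the fields a vImage_Buffer carries.
func inspectBuffer(of image: UIImage) {
  guard
    let cgImage = image.cgImage,
    var buffer = try? vImage_Buffer(cgImage: cgImage)
  else { return }
  defer { buffer.free() }  // the buffer owns dynamically allocated memory

  print("width:", buffer.width)        // pixels per row (vImagePixelCount)
  print("height:", buffer.height)      // number of rows (vImagePixelCount)
  print("rowBytes:", buffer.rowBytes)  // bytes per row; may include padding
  print("data:", buffer.data as Any)   // untyped pointer to the raw pixels
}
```

Note that rowBytes can exceed width times the bytes per pixel because the library may pad rows for alignment, which is one reason vImage functions need the whole buffer struct rather than a bare pointer.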

Creating vImage Buffers

Time to start coding!

Download the starter project by clicking the Download Materials button at the top or bottom of this tutorial. Open the starter project, then build and run.

Waterfall converted from image to buffer and vice versa

You’ll see an app that allows you to select a photo from the camera roll. It then displays the selected image. In UIKit and SwiftUI, you usually work with a UIImage. Because vImage does not understand this format, you’ll first convert this UIImage to something it can use.

Create a new Swift file named VImageWrapper.swift. Replace the contents of the file with:

import UIKit
import Accelerate

struct VImageWrapper {
  // 1
  let vNoFlags = vImage_Flags(kvImageNoFlags)
}

extension VImageWrapper {
  // 2
  func printVImageError(error: vImage_Error) {
    let errDescription = vImage.Error(vImageError: error).localizedDescription
    print("vImage Error: \(errDescription)")
  }
}

This file contains the start of a Swift wrapper for several vImage processes you’ll create in this tutorial. Begin by importing UIKit for access to UIImage along with the Accelerate framework. The rest of the code will be useful later:

  1. Most Accelerate functions expect a flags parameter used to restrict or provide context to the function. For this tutorial, you won’t need to provide this, and vNoFlags = vImage_Flags(kvImageNoFlags) provides a handy constant for the value reflecting that.
  2. Even with the improved Swift integration, many methods still return an Objective-C-style value to indicate the method's status. This method converts the returned vImage_Error value to a Swift-friendly vImage.Error and prints its description to the console for debugging. You'll use this method to handle errors throughout this tutorial.

Converting UIImage to vImage

Next, add the following code to the VImageWrapper struct:

var uiImage: UIImage

init(uiImage: UIImage) {
  self.uiImage = uiImage
}

This code creates a UIImage property along with a custom initializer accepting a UIImage.

Next, add the following method to the structure.

func createVImage(image: UIImage) -> vImage_Buffer? {
  guard
    // 1
    let cgImage = uiImage.cgImage,
    // 2
    let imageBuffer = try? vImage_Buffer(cgImage: cgImage)
  else {
    // 3
    return nil
  }
  // 4
  return imageBuffer
}

Notice the use of the guard statement to ensure that each step succeeds. If any step fails, the else clause returns nil to indicate something went wrong. You'll use this pattern often in this tutorial.

  1. Accelerate doesn’t provide a direct conversion from a UIImage to a vImage_Buffer. It does support converting a CGImage to a vImage_Buffer. Because a UIImage (usually) contains the CGImage in the cgImage property, you attempt to access the underlying CGImage of the UIImage.
  2. With a CGImage, you attempt to create a vImage_Buffer from it. The creation of this buffer can throw an error, so you use the try? operator to get a nil if an error occurs.
  3. If either of these steps fails, then you return nil.
  4. Otherwise, you return the vImage_Buffer.

That concludes the setup necessary for converting your images into a buffer that vImage can handle. You’ll use this extensively throughout the tutorial.

You'll also need a way to convert back from a vImage_Buffer into a UIImage. So, write that code now.

Converting vImage to UIImage

Add the following method at the end of the VImageWrapper struct:

func convertToUIImage(buffer: vImage_Buffer) -> UIImage? {
  guard
    // 1
    let originalCgImage = uiImage.cgImage,
    // 2
    let format = vImage_CGImageFormat(cgImage: originalCgImage),
    // 3
    let cgImage = try? buffer.createCGImage(format: format)
  else {
    return nil
  }
  let image = UIImage(cgImage: cgImage)
  return image
}

You can see it takes a bit more work to go from the vImage_Buffer back to the UIImage:

  1. To begin, you’ll get the CGImage for the UIImage as earlier.
  2. As mentioned earlier, the vImage_Buffer contains only the image data. It has no information about what the buffer represents, and it needs to know the order and number of bits used for the image data. You can obtain this information from the original image, and that’s what you do here. You create a vImage_CGImageFormat object which contains information needed to interpret the image data of the original image.
  3. You call createCGImage(format:) on the buffer to convert the image data into an image using the format determined in the previous step.

Now, to see your work, add the following optional property to the end of the list of properties of the VImageWrapper struct:

var processedImage: UIImage?

Then, add the following code at the end of the init(uiImage:):

if let buffer = createVImage(image: uiImage), 
  let converted = convertToUIImage(buffer: buffer) {
    processedImage = converted
}

The block of code above attempts to convert the image to a buffer and then back to a UIImage.

Next, open ContentView.swift and, after the existing ImageView, add the following code:

if let image = processedImage {
  ImageView(
    title: "Processed",
    image: image)
  .padding(3)
}

When a processed image exists, the app will display it using the ImageView defined in the starter project. You need to make one more change before you can see your work.

Instantiating the Wrapper

Inside the sheet(isPresented:onDismiss:content:), replace print("Dismissed") with the following:

let imageWrapper = VImageWrapper(uiImage: originalImage)
processedImage = imageWrapper.processedImage

This code instantiates the wrapper with the selected image and then assigns the processedImage property of the object to the processedImage property of the view. Because you’re not changing the image yet, you’ll see the processed image matches the original one.

Build and run. Tap Select Photo and choose any photo in the simulator other than the red flowers. You’ll see things look good.

Waterfall converted from image to buffer and vice versa

But, as you might have guessed from that restriction, there’s an issue. Tap Select Photo and choose that photo of the red flowers, and you’ll see things are a bit upside down.

Red flowers with processed image upside down

The processed photo looks upside down. Specifically, it’s rotated 180° from the original. What’s causing the rotated image? You’ll fix that now.

Managing Image Orientation

The rotated image bug comes in the conversion back to a UIImage. Remember from earlier that the vImage_Buffer contains only the raw pixel data for the image. There’s no context for what those pixels mean. During the conversion back to a UIImage, you created a vImage_CGImageFormat based on the original CGImage to provide that format. So what’s missing?

It turns out that a UIImage holds more information about an image than a CGImage. One of those properties is the imageOrientation. This property specifies the intended display orientation for an image. Typically this value will be .up. But for this image, it reads .down, defining that the displayer should rotate the image 180° from its original pixel data orientation.

Because you don’t set this value when converting the vImage_Buffer back to a UIImage, you get the default of .up. The simple fix: Use the information of the original UIImage when doing the conversion.

Open VImageWrapper.swift and find the convertToUIImage(buffer:). Change the let image = ... line to read:

let image = UIImage(
  cgImage: cgImage,
  scale: 1.0,
  orientation: uiImage.imageOrientation
)

This constructor allows you to specify the orientation when creating the image. You use the value from the original image’s imageOrientation property.

Build and run. Tap Select Photo and choose that photo of the red flowers. You’ll now see the processed image looks correct.

Correctly oriented red flowers

Image Processing

Now that you can convert an image to and from the format vImage requires, you can start using the library to process images. You’re going to implement a few different ways to manipulate images, playing around with their colors.

Implementing Equalize Histogram

First, you’ll implement the equalize histogram process. This process transforms an image so that it has a more uniform histogram. The resulting image should have each intensity of color occur almost equally. The result is often more interesting than visually appealing. But it’s a clear visual distinction on most images.

To begin, open VImageWrapper.swift. Add the following method to the end of the struct:

// 1
mutating func equalizeHistogram() {
  guard
    // 2
    let image = uiImage.cgImage,
    var imageBuffer = createVImage(image: uiImage),
    // 3
    var destinationBuffer = try? vImage_Buffer(
      width: image.width,
      height: image.height,
      bitsPerPixel: UInt32(image.bitsPerPixel))
  else {
    // 4
    print("Error creating image buffers.")
    processedImage = nil
    return
  }
  // 5
  defer {
    imageBuffer.free()
    destinationBuffer.free()
  }
}

This code sets up the values needed for the image change. The conversion to a buffer should look familiar, but there’s some new work because you need a place to put the image processing results:

  1. With a struct, you must declare the method mutating so you can update the processedImage property.
  2. You get the CGImage of the UIImage along with a vImage_Buffer for the original image. You’ll see why you need the CGImage in a later step.
  3. Some processing functions place the results back into the original buffer. Most expect a second destination buffer to hold the results. This constructor allows you to specify the buffer’s width and height along with the number of bits used for each pixel in the image. You obtain these values from the CGImage obtained in step two.
  4. If any of these steps fails, then the method prints an error to the console, sets the processedImage property to nil and returns.
  5. When you create a vImage_Buffer, the library dynamically allocates memory. You must let the library know when you have finished with the object by calling free(). You wrap this call inside a defer so that it is called whenever this method returns.

Processing the Image

With the setup done, you can now process the image. Add the following code to the end of the equalizeHistogram():

// 1
let error = vImageEqualization_ARGB8888(
  &imageBuffer,
  &destinationBuffer,
  vNoFlags)

// 2
guard error == kvImageNoError else {
  printVImageError(error: error)
  processedImage = nil
  return
}

// 3
processedImage = convertToUIImage(buffer: destinationBuffer)

You can see after the setup that the actual image processing takes little code:

  1. vImageEqualization_ARGB8888(_:_:_:) takes three parameters. The first and second are the source and destination buffers you created. You pass the vNoFlags constant you defined earlier because you have no special instructions for this function. This is the function that performs the actual histogram equalization for you.
  2. You check to see if there was an error in the function. If so, then you print the error to the console. Afterward, you clear the processedImage property and return from the method. As a reminder, thanks to the defer keyword, the free() executes now.
  3. You convert the buffer back to a UIImage and store it in the processedImage property. Once the method returns from here, the defer block frees both buffers.

Note the _ARGB8888 suffix on the function name. Because the buffer holds only image data without any context, most vImage functions come in several variants whose suffixes tell the function how to interpret that data.

This format specifies that the image data contains the alpha, red, green and blue channels, in that order, for each pixel. The numbers indicate that each channel uses eight bits per pixel.
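As an aside, you could describe this layout explicitly with a vImage_CGImageFormat. This is a sketch for illustration; the tutorial instead derives the format from the original CGImage rather than building it by hand.

```swift
import Accelerate

// Describing the ARGB8888 layout explicitly: 8 bits per channel,
// 32 bits per pixel, alpha stored first. Illustration only — the
// tutorial derives its format from the original CGImage instead.
let argb8888Format = vImage_CGImageFormat(
  bitsPerComponent: 8,   // eight bits for each channel
  bitsPerPixel: 32,      // four channels of eight bits each
  colorSpace: CGColorSpaceCreateDeviceRGB(),
  bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.first.rawValue))
```

The initializer is failable because not every combination of bit depth, color space and bitmap info describes a format vImage can work with.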

Note: You’ll find more detail on image formats at the start of Image Processing in iOS Part 1: Raw Bitmap Modification.

Hooking it up to the UI

This pattern holds for most vImage processing routines. You do the setup to create appropriate source and destination buffers, call the function for the desired image processing with a suffix stating the data format and then check for errors and process the destination buffer as necessary.
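Because the pattern repeats, you could factor it into a single helper that accepts any vImage function with the (source, destination, flags) shape. This is a sketch, not part of the starter project; the process(using:) name is invented:

```swift
import UIKit
import Accelerate

// Hedged sketch: one helper capturing the setup/process/teardown
// pattern. The closure parameter fits any vImage function taking
// (source, destination, flags); process(using:) is an invented name.
extension VImageWrapper {
  mutating func process(
    using operation: (UnsafePointer<vImage_Buffer>,
                      UnsafePointer<vImage_Buffer>,
                      vImage_Flags) -> vImage_Error
  ) {
    guard
      let image = uiImage.cgImage,
      var imageBuffer = createVImage(image: uiImage),
      var destinationBuffer = try? vImage_Buffer(
        width: image.width,
        height: image.height,
        bitsPerPixel: UInt32(image.bitsPerPixel))
    else {
      print("Error creating image buffers.")
      processedImage = nil
      return
    }
    defer {
      imageBuffer.free()
      destinationBuffer.free()
    }

    let error = operation(&imageBuffer, &destinationBuffer, vNoFlags)
    guard error == kvImageNoError else {
      printVImageError(error: error)
      processedImage = nil
      return
    }
    processedImage = convertToUIImage(buffer: destinationBuffer)
  }
}
```

With a helper like this, the body of equalizeHistogram() would reduce to a single call: process(using: vImageEqualization_ARGB8888).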

It’s time to see this in action. Open ContentView.swift. Between the two ImageViews, add the following code:

HStack {
  Button("Equalize Histogram") {
    var imageWrapper = VImageWrapper(uiImage: originalImage)
    imageWrapper.equalizeHistogram()
    processedImage = imageWrapper.processedImage
  }
}
.disabled(originalImage.cgImage == nil)

You’ve added a button that will be active only after an image containing valid CGImage loads. When tapped, the button calls equalizeHistogram() and places the result into the view’s processedImage property.

Build and run. Select a photo and tap the Equalize Histogram button. You’ll notice the dramatic change.

Green leaves with equalize histogram implemented

That’s a neat transformation, but it’s now time to look at another transformation.

Implementing Image Reflection

The steps that you followed for histogram equalization will work for almost any vImage image processing function. Now, you’ll add similar code to implement horizontal image reflection.

Open VImageWrapper.swift and add the following method to the struct:

mutating func reflectImage() {
  guard
    let image = uiImage.cgImage,
    var imageBuffer = createVImage(image: uiImage),
    var destinationBuffer = try? vImage_Buffer(
      width: image.width,
      height: image.height,
      bitsPerPixel: UInt32(image.bitsPerPixel))
  else {
    print("Error creating image buffers.")
    processedImage = nil
    return
  }
  defer {
    imageBuffer.free()
    destinationBuffer.free()
  }

  let error = vImageHorizontalReflect_ARGB8888(
    &imageBuffer,
    &destinationBuffer,
    vNoFlags)

  guard error == kvImageNoError else {
    printVImageError(error: error)
    processedImage = nil
    return
  }

  processedImage = convertToUIImage(buffer: destinationBuffer)
}

The only difference between this method and equalizeHistogram() is that it calls vImageHorizontalReflect_ARGB8888(_:_:_:) instead of vImageEqualization_ARGB8888(_:_:_:).

Open ContentView.swift and add the following code after your Equalize Histogram button at the end of the HStack:

Spacer()
Button("Reflect") {
  var imageWrapper = VImageWrapper(uiImage: originalImage)
  imageWrapper.reflectImage()
  processedImage = imageWrapper.processedImage
}

Build and run. Select a photo and tap the new Reflect button. You’ll see you have implemented further image manipulation with only a small change.

Original waterfall and reflected waterfall

Now that you have a basic grasp of using vImage, you’ll explore a more complex task that will require you to delve into Objective-C patterns.

Histograms

An image histogram represents the distribution of tonal values in an image. It divides an image’s tones into bins displayed along the horizontal axis. The height of the histogram at each spot represents the number of pixels with that tone. At a glance, it provides an understanding of the overall exposure and balance of exposure in a photo. In image processing, the histogram can help with edge detection and segmentation tasks.
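To make binning concrete, here's a tiny, self-contained Swift sketch using an invented four-pixel grayscale image. Real images get one such array per channel, as you'll see shortly.

```swift
// Count how many pixels fall into each of 256 tone bins.
// The four-pixel "image" here is made up purely for illustration.
let pixels: [UInt8] = [0, 128, 128, 255]

var bins = [Int](repeating: 0, count: 256)
for pixel in pixels {
  bins[Int(pixel)] += 1
}

print(bins[0], bins[128], bins[255])  // 1 2 1
```

The vImage histogram functions do exactly this counting, only vectorized and across all four channels of the image at once.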

In this section, you’ll see how to get the histogram data for an image. In the process, you’ll learn more complex interactions with the vImage library, including working with patterns still built upon Objective-C.

Getting the Histogram

Open VImageWrapper.swift and add the following code before the VImageWrapper struct:

enum WrappedImage {
  case original
  case processed
}

You’ll use this enum type to distinguish between the original or processed image. Now, add the following code to the end of the VImageWrapper struct:

func getHistogram(_ image: WrappedImage) -> HistogramLevels? {
  guard
    // 1
    let cgImage =
      image == .original ? uiImage.cgImage : processedImage?.cgImage,
    // 2
    var imageBuffer = try? vImage_Buffer(cgImage: cgImage)
  else {
    return nil
  }
  // 3
  defer {
    imageBuffer.free()
  }
}

Nothing new here:

  1. You use the value of the WrappedImage enum to select either the original image or the processed image.
  2. Then, you create a vImage_Buffer for the image.
  3. Again you use defer to call free() when exiting the scope.

Now, add the following code at the end of the getHistogram(_:):

var redArray: [vImagePixelCount] = Array(repeating: 0, count: 256)
var greenArray: [vImagePixelCount] = Array(repeating: 0, count: 256)
var blueArray: [vImagePixelCount] = Array(repeating: 0, count: 256)
var alphaArray: [vImagePixelCount] = Array(repeating: 0, count: 256)

The histogram's raw contents are the number of pixels in each bin, a value of type vImagePixelCount. So you create four 256-element arrays — one for each color channel plus the alpha channel. Each array holds 256 bins, one for every distinct value that eight bits of channel data in the ARGB8888 format can represent.

Working with Pointers

Now, you come to pointers, something you probably hoped you'd avoided by using Swift! Add the following code after the array definitions you just added:

// 1
var error: vImage_Error = kvImageNoError
// 2
redArray.withUnsafeMutableBufferPointer { rPointer in
  greenArray.withUnsafeMutableBufferPointer { gPointer in
    blueArray.withUnsafeMutableBufferPointer { bPointer in
      alphaArray.withUnsafeMutableBufferPointer { aPointer in
        // 3
        var histogram = [
          rPointer.baseAddress, gPointer.baseAddress,
          bPointer.baseAddress, aPointer.baseAddress
        ]
        // 4
        histogram.withUnsafeMutableBufferPointer { hPointer in
          // 5
          if let hBaseAddress = hPointer.baseAddress {
            error = vImageHistogramCalculation_ARGB8888(
              &imageBuffer,
              hBaseAddress,
              vNoFlags
            )
          }
        }
      }
    }
  }
}

The Objective-C roots of the Accelerate framework show through here, despite the work to make the libraries more Swift-friendly. The function to calculate a histogram expects a pointer to an array that contains four more pointers. Each of these four pointers will point to an array that will receive the counts for one channel. Almost all of this code changes a set of Swift arrays to this Objective-C pattern.

Here’s what the code does:

  1. Working with pointers in Swift is easiest inside closures, but because you want to set the error value several closures deep and still use it afterward, you declare it before entering them.
  2. Swift provides several ways to access pointers for legacy needs such as this. withUnsafeMutableBufferPointer(_:) creates a pointer you can access within the method’s closure. You make one for each channel array. The pointers are only valid inside the closure passed to withUnsafeMutableBufferPointer(_:). This is why you must nest these calls.
  3. You create a new array whose elements are these four pointers. Note the order of the arrays here. The baseAddress property gets a pointer to the first element of a buffer — in this case, your array for each channel. At this point, you’ve built the structure that vImage expects for the histogram function call.
  4. You get a pointer to the array you just created, just as you did with the channel arrays, and work with it inside the closure.
  5. You unwrap the pointer to the first element of the histogram array, then call vImageHistogramCalculation_ARGB8888, passing the image buffer, the unwrapped pointer and, once again, vNoFlags to indicate no special instructions.

Finalizing the Histogram Data

Finally, you can go back to normal Swift-land because either the calculated values reside in the arrays you created earlier or something went wrong. Add the following code to finish out the method:

// 1
guard error == kvImageNoError else {
  printVImageError(error: error)
  return nil
}

// 2
let histogramData = HistogramLevels(
  red: redArray,
  green: greenArray,
  blue: blueArray,
  alpha: alphaArray
)

// 3
return histogramData

It’s nice to be back in standard Swift, isn’t it? Here’s what you do here:

  1. Check for any error and if there is one, print it out.
  2. Create a HistogramLevels struct from the raw histogram data.
  3. Return the histogram data.

This HistogramLevels struct is defined inside HistogramView.swift in the project. It provides an array for each channel of the image. Feel free to open the file to see how the struct is defined.
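If you'd rather not switch files, the struct's shape can be inferred from how getHistogram(_:) constructs it. This is a hedged guess at the definition, which may differ in detail from the starter project's:

```swift
import Accelerate

// A hedged guess at HistogramLevels; the real definition lives in
// HistogramView.swift. The property names match the initializer call
// in getHistogram(_:): one 256-bin array per channel.
struct HistogramLevels {
  var red: [vImagePixelCount]
  var green: [vImagePixelCount]
  var blue: [vImagePixelCount]
  var alpha: [vImagePixelCount]
}
```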

Visualizing the Histogram Data

Now, to visualize the histogram data, open ContentView.swift. Add the following to the list of properties at the top of the struct:

@State private var originalHistogram: HistogramLevels?
@State private var processedHistogram: HistogramLevels?

Then, you need to pass the optional histogram to the view. Change the first ImageView call to:

ImageView(
  title: "Original",
  image: originalImage,
  histogram: originalHistogram)

and the second to:

ImageView(
  title: "Processed",
  image: image,
  histogram: processedHistogram)
.padding(3)

Next, inside the sheet(isPresented:onDismiss:content:) call, replace the contents of the onDismiss closure with the following:

let imageWrapper = VImageWrapper(uiImage: originalImage)
processedImage = nil
processedHistogram = nil
originalHistogram = imageWrapper.getHistogram(.original)

Finally, add the following code at the end of both the Equalize Histogram and Reflect buttons’ action closures.

processedHistogram = imageWrapper.getHistogram(.processed)

Build and run. Select a photo like before, then tap it and you’ll see that doing so toggles the histogram display on and off. Tap the Equalize Histogram button, and you’ll see the changes made to the histogram for the processed image. It’s quite a dramatic change for most photos.

Original and processed waterfall with histograms

As you might expect, Reflect doesn’t change the histogram because it only changes the orientation of the photo and not the tones of the image.

Similar histograms for original waterfall and reflected waterfall

Where to Go From Here?

To see the final project, click the Download Materials button at the top or bottom of this tutorial.

You've learned the Accelerate framework's basics by exploring image processing with the vImage library. The framework offers so much functionality that this tutorial barely scratches the surface of what it provides. Still, it should give you a strong foundation for understanding the challenges you'll run into when working with this partially Swift-friendly framework.

The documentation for Accelerate might be one of the most thorough of any Apple framework. It’s a good starting point to learn what the framework can do. The topics section provides several examples with modern code. It often focuses more on “how” than “why”, but after this tutorial you should better understand the processes behind the framework.

To see how Accelerate was adapted for Swift, watch Introducing Accelerate for Swift from WWDC 2019. If you're interested in signal processing, you'll find useful information in Using Accelerate and SIMD from WWDC 2018, even though much of its code predates the 2019 Swift updates and is a bit outdated.

Good luck vectorizing your applications. If you have questions or comments, leave them below.
