Image Processing in iOS Part 2: Core Graphics, Core Image, and GPUImage

Learn the basics of image processing on iOS via raw bitmap modification, Core Graphics, Core Image, and GPUImage in this 2-part tutorial series. By Jack Wu.



Ultra Super SpookCam, Core Image Edition

There are also several great Core Image tutorials on this site already, such as this one, from the iOS 6 feast. We also have several chapters on Core Image in our iOS by Tutorials series.

In this tutorial, you’ll also see some discussion of how Core Image compares to the other methods covered in this series.

Core Image is Apple’s solution to image processing. It does away with all the low-level pixel manipulation methods and replaces them with high-level filters.

The best part of Core Image is that it has crazy awesome performance when compared to raw pixel manipulation or Core Graphics. The library uses a mix of CPU and GPU processing to provide near-real-time performance.

Apple also provides a huge selection of pre-made filters. On OSX, you can even create your own filters by using Core Image Kernel Language, which is very similar to GLSL, the language for shaders in OpenGL. Note that at the time of writing this tutorial, you cannot write your own Core Image filters on iOS (only Mac OS X).

There are still some effects that are better to do with Core Graphics. As you’ll see in the code, you get the most out of Core Image by using Core Graphics alongside it.

Add this new method to ImageProcessor.m:

- (UIImage *)processUsingCoreImage:(UIImage*)input {
  CIImage * inputCIImage = [[CIImage alloc] initWithImage:input];
  
  // 1. Create a grayscale filter
  CIFilter * grayFilter = [CIFilter filterWithName:@"CIColorControls"];
  [grayFilter setValue:@(0) forKeyPath:@"inputSaturation"];
  
  // 2. Create your ghost filter
  
  // Use Core Graphics for this
  UIImage * ghostImage = [self createPaddedGhostImageWithSize:input.size];
  CIImage * ghostCIImage = [[CIImage alloc] initWithImage:ghostImage];

  // 3. Apply alpha to Ghosty
  CIFilter * alphaFilter = [CIFilter filterWithName:@"CIColorMatrix"];
  CIVector * alphaVector = [CIVector vectorWithX:0 Y:0 Z:0.5 W:0];
  [alphaFilter setValue:alphaVector forKeyPath:@"inputAVector"];
  
  // 4. Alpha blend filter
  CIFilter * blendFilter = [CIFilter filterWithName:@"CISourceAtopCompositing"];
  
  // 5. Apply your filters
  [alphaFilter setValue:ghostCIImage forKeyPath:@"inputImage"];
  ghostCIImage = [alphaFilter outputImage];

  [blendFilter setValue:ghostCIImage forKeyPath:@"inputImage"];
  [blendFilter setValue:inputCIImage forKeyPath:@"inputBackgroundImage"];
  CIImage * blendOutput = [blendFilter outputImage];
  
  [grayFilter setValue:blendOutput forKeyPath:@"inputImage"];
  CIImage * outputCIImage = [grayFilter outputImage];
  
  // 6. Render your output image
  CIContext * context = [CIContext contextWithOptions:nil];
  CGImageRef outputCGImage = [context createCGImage:outputCIImage fromRect:[outputCIImage extent]];
  UIImage * outputImage = [UIImage imageWithCGImage:outputCGImage];
  CGImageRelease(outputCGImage);
  
  return outputImage;
}

Notice how different this code looks compared to the previous methods.

With Core Image, you set up a variety of filters to process your images. Here you use a CIColorControls filter for grayscale, a CIColorMatrix filter to apply alpha to Ghosty, and a CISourceAtopCompositing filter for blending, and then you chain them all together.

Now, take a walk through this function to learn more about each step.

  1. Create a CIColorControls filter and set its inputSaturation to 0. As you might recall, saturation is a channel in HSV color space that determines how much color there is. Here a value of 0 indicates grayscale.
  2. Create a padded ghost image that is the same size as the input image.
  3. Create a CIColorMatrix filter with its inputAVector set to [0 0 0.5 0]. CIColorMatrix recomputes each pixel’s alpha as the dot product of its RGBA values with this vector, which for the white, premultiplied ghost effectively multiplies Ghosty’s alpha by 0.5.
  4. Create a CISourceAtopCompositing filter to perform alpha blending.
  5. Chain up your filters to process the image.
  6. Render the output CIImage to a CGImage and create the final UIImage. Remember to free your memory afterwards; see the note on the CIContext just after this list.
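
One note on step 6: creating a CIContext is relatively expensive, so if you plan to process many images it’s worth creating the context once and reusing it. Here’s a minimal sketch of the idea; the sharedContext variable is just for illustration and isn’t part of the sample project:

// Reuse a single CIContext across calls instead of recreating it every time.
static CIContext * sharedContext = nil;
if (!sharedContext) {
  sharedContext = [CIContext contextWithOptions:nil];
}
CGImageRef outputCGImage = [sharedContext createCGImage:outputCIImage fromRect:[outputCIImage extent]];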

This method uses a helper function called -createPaddedGhostImageWithSize:, which uses Core Graphics to draw a version of Ghosty scaled to 25% of the input image’s width and padded out to the full input size. Can you implement this function yourself?

Give it a shot. If you are stuck, here is the solution:

[spoiler title="Solution"]

- (UIImage *)createPaddedGhostImageWithSize:(CGSize)inputSize {
  UIImage * ghostImage = [UIImage imageNamed:@"ghost.png"];
  CGFloat ghostImageAspectRatio = ghostImage.size.width / ghostImage.size.height;
  
  // Scale Ghosty to 25% of the input's width, preserving his aspect ratio.
  NSInteger targetGhostWidth = inputSize.width * 0.25;
  CGSize ghostSize = CGSizeMake(targetGhostWidth, targetGhostWidth / ghostImageAspectRatio);
  CGPoint ghostOrigin = CGPointMake(inputSize.width * 0.5, inputSize.height * 0.2);
  
  CGRect ghostRect = {ghostOrigin, ghostSize};
  
  // Create a context the size of the input image and clear it to transparent.
  UIGraphicsBeginImageContext(inputSize);
  CGContextRef context = UIGraphicsGetCurrentContext();

  CGRect inputRect = {CGPointZero, inputSize};
  CGContextClearRect(context, inputRect);

  // Core Graphics puts the origin at the bottom-left while UIKit puts it at the
  // top-left, so flip and shift the context before drawing to keep Ghosty upright.
  CGAffineTransform flip = CGAffineTransformMakeScale(1.0, -1.0);
  CGAffineTransform flipThenShift = CGAffineTransformTranslate(flip, 0, -inputSize.height);
  CGContextConcatCTM(context, flipThenShift);
  CGRect transformedGhostRect = CGRectApplyAffineTransform(ghostRect, flipThenShift);
  CGContextDrawImage(context, transformedGhostRect, [ghostImage CGImage]);
  
  UIImage * paddedGhost = UIGraphicsGetImageFromCurrentImageContext();
  UIGraphicsEndImageContext();
  
  return paddedGhost;
}

[/spoiler]

Finally, replace the first line in processImage: to call your new method:

UIImage * outputImage = [self processUsingCoreImage:inputImage];

Now build and run. Again, you should see the same spooky image.


You can download a project with all the code in this section here.

Core Image provides a large number of filters that you can use to create almost any effect you want. It’s a good friend to have when you’re processing images.
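
If you’re curious about what’s available, you can ask Core Image to list its built-in filters at runtime. Here’s a quick sketch; the loop is just for exploring in the debugger and isn’t part of SpookCam:

// Log every built-in Core Image filter along with its attributes (inputs, ranges, defaults).
for (NSString * filterName in [CIFilter filterNamesInCategory:kCICategoryBuiltIn]) {
  CIFilter * filter = [CIFilter filterWithName:filterName];
  NSLog(@"%@: %@", filterName, [filter attributes]);
}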

Now onto the last solution, which is incidentally the only third-party option explored in this tutorial: GPUImage.

Mega Ultra Super SpookCam, GPUImage Edition

GPUImage is a well-maintained library for GPU-based image processing on iOS. It won a place in the top 10 best iOS libraries on this site!

GPUImage hides all of the complex code required for using OpenGL ES on iOS, and presents you with an extremely simple interface for processing images at blazing speeds. The performance of GPUImage even beats Core Image on many occasions, but Core Image still wins for a few functions.
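
If you’d like to compare the approaches on your own device, one rough-and-ready option is to time each processing call with CACurrentMediaTime() from QuartzCore. This timing snippet is just an aside, not part of the sample project:

#import <QuartzCore/QuartzCore.h> // for CACurrentMediaTime()

// Time one processing pass; swap in whichever process method you want to measure.
CFTimeInterval start = CACurrentMediaTime();
UIImage * result = [self processUsingCoreImage:inputImage];
NSLog(@"Processing took %.3f seconds", CACurrentMediaTime() - start);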

To start with GPUImage, you’ll need to include it in your project. This can be done using CocoaPods, by building the static library, or by embedding the source code directly into your project.

The project app already contains a static framework, which was built externally. If you’d like to build it yourself, here’s how:

Instructions:
Run build.sh at the command line. The resulting library and header files will be placed in build/Release-iphone.

You may also change the version of the iOS SDK by changing the IOSSDK_VER variable in build.sh (all available versions can be found using xcodebuild -showsdks).

To embed the source into your project instead, follow these instructions from the GitHub repo:

Instructions:

GPUImage needs a few other frameworks to link into your application, so you’ll need to add the following as linked libraries in your application target:

  • CoreMedia
  • CoreVideo
  • OpenGLES
  • AVFoundation
  • QuartzCore

Then you need to find the framework headers. Within your project’s build settings, set the Header Search Paths to the relative path from your application to the framework/ subdirectory within the GPUImage source directory, and make this header search path recursive.

  • Drag the GPUImage.xcodeproj file into your application’s Xcode project to embed the framework in your project.
  • Next, go to your application’s target and add GPUImage as a Target Dependency.
  • Drag the libGPUImage.a library from the GPUImage framework’s Products folder to the Link Binary With Libraries build phase in your application’s target.

After you add GPUImage to your project, make sure to include the header file in ImageProcessor.m.

If you included the static framework, use #import <GPUImage/GPUImage.h>. If you included the project directly, use #import "GPUImage.h" instead.

Add the new processing function to ImageProcessor.m:

- (UIImage *)processUsingGPUImage:(UIImage*)input {
  
  // 1. Create the GPUImagePictures
  GPUImagePicture * inputGPUImage = [[GPUImagePicture alloc] initWithImage:input];
  
  UIImage * ghostImage = [self createPaddedGhostImageWithSize:input.size];
  GPUImagePicture * ghostGPUImage = [[GPUImagePicture alloc] initWithImage:ghostImage];

  // 2. Set up the filter chain
  GPUImageAlphaBlendFilter * alphaBlendFilter = [[GPUImageAlphaBlendFilter alloc] init];
  alphaBlendFilter.mix = 0.5;
  
  [inputGPUImage addTarget:alphaBlendFilter atTextureLocation:0];
  [ghostGPUImage addTarget:alphaBlendFilter atTextureLocation:1];
  
  GPUImageGrayscaleFilter * grayscaleFilter = [[GPUImageGrayscaleFilter alloc] init];
  
  [alphaBlendFilter addTarget:grayscaleFilter];
  
  // 3. Process & grab output image
  [grayscaleFilter useNextFrameForImageCapture];
  [inputGPUImage processImage];
  [ghostGPUImage processImage];
  
  UIImage * output = [grayscaleFilter imageFromCurrentFramebuffer];
  
  return output;
}

Hey! That looks straightforward. Here’s what’s going on:

[The GPUImage filter chain: inputGPUImage and ghostGPUImage feed the GPUImageAlphaBlendFilter, whose output feeds the GPUImageGrayscaleFilter.]

GPUImageAlphaBlendFilter takes two inputs, in this case the top image and the bottom image, so the texture locations matter. -addTarget:atTextureLocation: wires each picture to the correct input.

  1. Create the GPUImagePicture objects; use -createPaddedGhostImageWithSize: as a helper again. GPUImage uploads the images into the GPU memory as textures when you do this.
  2. Create and chain the filters you’re going to use. This kind of chaining is quite different from Core Image filter chaining and resembles a pipeline. After you’re finished, it looks like the filter chain pictured above.
  3. Call -useNextFrameForImageCapture on the last filter in the chain, and then -processImage on both inputs. This makes sure the filter knows you want to grab a still image from it, so it retains the framebuffer for you.
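
As an aside, when a single filter is all you need and there’s no blending involved, GPUImage also offers a one-line convenience method, -imageByFilteringImage:, which handles the picture creation, processing and capture steps for you. A minimal sketch (grayscale only, so no Ghosty here):

// One-filter shortcut: no explicit GPUImagePicture, targets or capture calls needed.
GPUImageGrayscaleFilter * grayscaleOnly = [[GPUImageGrayscaleFilter alloc] init];
UIImage * grayOutput = [grayscaleOnly imageByFilteringImage:inputImage];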

Finally, replace the first line in processImage: to call this new method:

UIImage * outputImage = [self processUsingGPUImage:inputImage];

And that’s it. Build and run. Ghosty is looking as good as ever!


As you can see, GPUImage is easy to work with. You can also create your own filters by writing your own shaders in GLSL. Check out the documentation for GPUImage here for more on how to use this framework.
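
To give you a taste, here’s a hedged sketch of a custom filter that wraps a simple color-inverting fragment shader in GPUImage’s SHADER_STRING macro and passes it to -initWithFragmentShaderFromString:. The shader body is our own example; textureCoordinate and inputImageTexture are the standard inputs GPUImage supplies to every fragment shader:

// A trivial custom fragment shader that inverts the image's colors.
NSString * const kInvertFragmentShader = SHADER_STRING
(
 varying highp vec2 textureCoordinate;
 uniform sampler2D inputImageTexture;

 void main()
 {
   lowp vec4 color = texture2D(inputImageTexture, textureCoordinate);
   gl_FragColor = vec4(1.0 - color.rgb, color.a);
 }
);

// Build a filter from the shader and run an image through it.
GPUImageFilter * invertFilter = [[GPUImageFilter alloc] initWithFragmentShaderFromString:kInvertFragmentShader];
UIImage * invertedImage = [invertFilter imageByFilteringImage:inputImage];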

Download a version of the project with all the code in this section here.
