Image Processing in iOS Part 2: Core Graphics, Core Image, and GPUImage

Learn the basics of image processing on iOS via raw bitmap modification, Core Graphics, Core Image, and GPUImage in this 2-part tutorial series. By Jack Wu.


Welcome to part two of this tutorial series about image processing in iOS!

In the first part of the series, you learned how to access and modify the raw pixel values of an image.

In this second and final part of the series, you’ll learn how to perform this same task by using other libraries: Core Graphics, Core Image and GPUImage to be specific. You’ll learn about the pros and cons of each, so that you can make the best choice for your situation.

This tutorial picks up where the previous one left off. If you don’t have the project already, you can download it here.

If you fared well in part one, you’re going to thoroughly enjoy this part! Now that you understand the principles, you’ll fully appreciate how simple these libraries make image processing.

Super SpookCam, Core Graphics Edition

Core Graphics is Apple’s API for drawing based on the Quartz 2D drawing engine. It provides a low-level API that may look familiar if you’re acquainted with OpenGL.

If you’ve ever overridden the -drawRect: method for a view, you’ve interacted with Core Graphics, which provides several functions to draw objects, gradients and other cool stuff to your view.

There are tons of Core Graphics tutorials on this site already, such as this one and this one. So, in this tutorial, you’re going to focus on how to use Core Graphics to do some basic image processing.

Before you get started, you need to get familiar with the concept of a Graphics Context.

Concept: Graphics Contexts are common to most types of rendering and are a core concept in OpenGL and Core Graphics. Think of a context as a global state object that holds all the information needed for drawing.

In terms of Core Graphics, this includes the current fill color, stroke color, transforms, masks, where to draw and much more. In iOS, there are also other types of contexts, such as PDF contexts, which let you draw to a PDF file.

In this tutorial you’re only going to use a Bitmap context, which draws to a bitmap.

Inside the -drawRect: method, you'll find a context that is ready for use. This is why you can call UIGraphicsGetCurrentContext() and draw to it directly. The system has set this up so that you're drawing directly to the view to be rendered.

Outside of the -drawRect: method, there is usually no graphics context available. You can create one as you did in the first project using CGBitmapContextCreate(), or you can use UIGraphicsBeginImageContext() and grab the created context using UIGraphicsGetCurrentContext().

This is called offscreen-rendering, as the graphics you’re drawing are not directly presented anywhere. Instead, they render to an off-screen buffer.

In Core Graphics, you can then get a UIImage of the context's contents and show it on screen. With OpenGL, you can swap this buffer with the one currently rendered to the screen and display it directly.

Image processing using Core Graphics takes advantage of this off-screen rendering to render your image into a buffer, apply any effects you want and grab the image from the context once you’re done.
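To get a feel for the pattern before diving in, here is a minimal sketch of offscreen rendering; the 100×100 size and the red fill are just placeholders:

// Create an offscreen bitmap context to draw into.
UIGraphicsBeginImageContext(CGSizeMake(100, 100));
CGContextRef context = UIGraphicsGetCurrentContext();

// Any Core Graphics drawing here renders into the offscreen buffer.
CGContextSetFillColorWithColor(context, [UIColor redColor].CGColor);
CGContextFillRect(context, CGRectMake(0, 0, 100, 100));

// Pull the rendered pixels out as a UIImage, then clean up.
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();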

All right, enough concepts! Now it's time to make some magic with code. Add the following new method to ImageProcessor.m:

- (UIImage *)processUsingCoreGraphics:(UIImage*)input {
  CGRect imageRect = {CGPointZero,input.size};
  NSInteger inputWidth = CGRectGetWidth(imageRect);
  NSInteger inputHeight = CGRectGetHeight(imageRect);
  
  // 1) Calculate the location of Ghosty
  UIImage * ghostImage = [UIImage imageNamed:@"ghost.png"];
  CGFloat ghostImageAspectRatio = ghostImage.size.width / ghostImage.size.height;
  
  NSInteger targetGhostWidth = inputWidth * 0.25;
  CGSize ghostSize = CGSizeMake(targetGhostWidth, targetGhostWidth / ghostImageAspectRatio);
  CGPoint ghostOrigin = CGPointMake(inputWidth * 0.5, inputHeight * 0.2);
  
  CGRect ghostRect = {ghostOrigin, ghostSize};
  
  // 2) Draw your image into the context.
  UIGraphicsBeginImageContext(input.size);
  CGContextRef context = UIGraphicsGetCurrentContext();

  CGAffineTransform flip = CGAffineTransformMakeScale(1.0, -1.0);
  CGAffineTransform flipThenShift = CGAffineTransformTranslate(flip,0,-inputHeight);
  CGContextConcatCTM(context, flipThenShift);
  
  CGContextDrawImage(context, imageRect, [input CGImage]);
  
  CGContextSetBlendMode(context, kCGBlendModeSourceAtop);
  CGContextSetAlpha(context,0.5);
  CGRect transformedGhostRect = CGRectApplyAffineTransform(ghostRect, flipThenShift);
  CGContextDrawImage(context, transformedGhostRect, [ghostImage CGImage]);
  
  // 3) Retrieve your processed image
  UIImage * imageWithGhost = UIGraphicsGetImageFromCurrentImageContext();
  UIGraphicsEndImageContext();

  // 4) Draw your image into a grayscale context   
  CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
  context = CGBitmapContextCreate(nil, inputWidth, inputHeight,
                           8, 0, colorSpace, (CGBitmapInfo)kCGImageAlphaNone);
  
  CGContextDrawImage(context, imageRect, [imageWithGhost CGImage]);
  
  CGImageRef imageRef = CGBitmapContextCreateImage(context);
  UIImage * finalImage = [UIImage imageWithCGImage:imageRef];
  
  // 5) Cleanup
  CGColorSpaceRelease(colorSpace);
  CGContextRelease(context);
  CFRelease(imageRef);
  
  return finalImage;
}

That’s quite a bit of stuff. Let’s go over it section by section.

1) Calculate the location of Ghosty

UIImage * ghostImage = [UIImage imageNamed:@"ghost.png"];
CGFloat ghostImageAspectRatio = ghostImage.size.width / ghostImage.size.height;
  
NSInteger targetGhostWidth = inputWidth * 0.25;
CGSize ghostSize = CGSizeMake(targetGhostWidth, targetGhostWidth / ghostImageAspectRatio);
CGPoint ghostOrigin = CGPointMake(inputWidth * 0.5, inputHeight * 0.2);

CGRect ghostRect = {ghostOrigin, ghostSize};

This section calculates a rect for Ghosty: his width is set to 25% of the input image's width, his height preserves his aspect ratio, and his origin places him relative to the input image's size.

Next, you create a new CGContext.

As discussed before, this creates an “off-screen” context. Remember how the coordinate system for a CGContext uses the bottom-left corner as the origin, as opposed to UIImage, which uses the top-left?

Interestingly, if you use UIGraphicsBeginImageContext() to create a context, the system flips the coordinates for you, resulting in the origin being at the top-left. Thus, you need to apply a transformation to your context to flip it back so your CGImage will draw properly.

If you were drawing a UIImage directly into this context, you wouldn't need this transformation, as the coordinate systems would match up. Note that setting a transform on the context applies it to all the drawing you do afterwards.

2) Draw your image into the context.

UIGraphicsBeginImageContext(input.size);
CGContextRef context = UIGraphicsGetCurrentContext();

CGAffineTransform flip = CGAffineTransformMakeScale(1.0, -1.0);
CGAffineTransform flipThenShift = CGAffineTransformTranslate(flip,0,-inputHeight);
CGContextConcatCTM(context, flipThenShift);
  
CGContextDrawImage(context, imageRect, [input CGImage]);
 
CGContextSetBlendMode(context, kCGBlendModeSourceAtop);
CGContextSetAlpha(context,0.5);
CGRect transformedGhostRect = CGRectApplyAffineTransform(ghostRect, flipThenShift);
CGContextDrawImage(context, transformedGhostRect, [ghostImage CGImage]);

After drawing the image, you set the alpha of your context to 0.5. This only affects future drawing, so the input image draws at full alpha.

You also need to set the blend mode to kCGBlendModeSourceAtop.

This sets up the context to use the same alpha blending formula you used before:

NewColor = TopColor * TopColor.Alpha + BottomColor * (1 - TopColor.Alpha)

After setting up these parameters, you flip Ghosty's rect and draw him (it?) into the image.

3) Retrieve your processed image

UIImage * imageWithGhost = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

To convert your image to black and white, you're going to create a new CGContext that uses a grayscale colorspace. This converts anything you draw in the context into grayscale.

Since you’re using CGBitmapContextCreate() to create this context, the coordinate system has the origin in the bottom-left corner, and you don’t need to flip it to draw your CGImage.

4) Draw your image into a grayscale context.

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
context = CGBitmapContextCreate(nil, inputWidth, inputHeight,
                         8, 0, colorSpace, (CGBitmapInfo)kCGImageAlphaNone);
  
CGContextDrawImage(context, imageRect, [imageWithGhost CGImage]);
 
CGImageRef imageRef = CGBitmapContextCreateImage(context);
UIImage * finalImage = [UIImage imageWithCGImage:imageRef];

Then, retrieve your final image.

See how you can’t use UIGraphicsGetImageFromCurrentImageContext() because you never set this grayscale context as the current graphics context?

Instead, you created it yourself. Thus you’ll need to use CGBitmapContextCreateImage() to render the image from this context.

5) Cleanup.

CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
CFRelease(imageRef);
  
return finalImage;

At the end, you have to release everything you created. And that’s it – you’re done!

Memory Usage: When performing image processing, pay close attention to memory usage. As discussed in part one, an 8-megapixel image takes a whopping 32 megabytes of memory (8 million pixels × 4 bytes per RGBA pixel). Try to avoid keeping several copies of the same image in memory at once.
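If you process many images in a row, one way to keep autoreleased intermediates from piling up is to wrap each iteration in an autorelease pool. This is just a sketch; the images array and the saveImage: helper are hypothetical:

for (UIImage *image in images) {
  @autoreleasepool {
    // Intermediates created here are freed at the end of each pass.
    UIImage *processed = [self processUsingCoreGraphics:image];
    [self saveImage:processed]; // hypothetical helper
  }
}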

Notice how you need to release the context the second time, but not the first? In the first case, you got your context using UIGraphicsGetCurrentContext(). The key word here is ‘get’.

‘Get’ means that you’re getting a reference to the current context, but you don’t own it.

In the second case, you called CGBitmapContextCreate(), and ‘Create’ means that you own the object and have to manage its lifetime. This is also why you need to release imageRef: you created it using CGBitmapContextCreateImage().
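This Get/Create naming convention makes ownership easy to spot at a glance. A quick sketch of the two cases:

// 'Create' (or 'Copy') in the name: you own it and must release it.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
// ... use the color space ...
CGColorSpaceRelease(colorSpace);

// 'Get' in the name: a borrowed reference, so don't release it.
CGContextRef context = UIGraphicsGetCurrentContext();
// ... draw with context, but never call CGContextRelease() on it ...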

Good job! Now, replace the first line in processImage: so it calls this new method instead of processUsingPixels::

UIImage * outputImage = [self processUsingCoreGraphics:inputImage];

Build and run. You should see the exact same output as before.


Such spookiness! You can download a complete project with the code described in this section here.

In this simple example, using Core Graphics doesn't seem much easier than manipulating the pixels directly.

However, imagine performing a more complex operation, such as rotating an image. On raw pixels, that would require some rather complicated math.

With Core Graphics, you simply apply a rotation transform to the context before drawing the image. Hence, the more complicated your processing becomes, the more time Core Graphics saves.
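To make that concrete, here is a rough sketch of such a rotation; the 45-degree angle and the centering math are illustrative, not part of the SpookCam project:

- (UIImage *)rotateImage:(UIImage *)input {
  CGSize size = input.size;
  UIGraphicsBeginImageContext(size);
  CGContextRef context = UIGraphicsGetCurrentContext();

  // Move the origin to the center so the rotation pivots there,
  // then rotate the coordinate system instead of the pixels.
  CGContextTranslateCTM(context, size.width * 0.5, size.height * 0.5);
  CGContextRotateCTM(context, M_PI_4); // 45 degrees

  // Drawing a UIImage (rather than a CGImage) keeps the
  // top-left coordinate system, so no flip is needed.
  [input drawInRect:CGRectMake(-size.width * 0.5, -size.height * 0.5,
                               size.width, size.height)];

  UIImage *rotated = UIGraphicsGetImageFromCurrentImageContext();
  UIGraphicsEndImageContext();
  return rotated;
}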

Two methods down, and two to go. Next up: Core Image!
