Image Processing in iOS Part 1: Raw Bitmap Modification

Learn the basics of image processing on iOS via raw bitmap modification, Core Graphics, Core Image, and GPUImage in this 2-part tutorial series. By Jack Wu.


Coordinate Systems

Since an image is a 2D map of pixels, its origin needs to be specified. Usually it's either the top-left corner of the image, with the y-axis pointing downward, or the bottom-left corner, with the y-axis pointing upward.

There’s no “correct” coordinate system, and Apple uses both in different places.

Currently, UIImage and UIView use the top-left corner as the origin and Core Image and Core Graphics use the bottom-left. This is important to remember so you know where to find the bug when Core Image returns an “upside down” image.
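
For example, drawing a CGImage with CGContextDrawImage into a UIKit-style (top-left origin) context produces a vertically flipped result. One common fix is to flip the context before drawing. The snippet below is just a sketch of that technique, not code from the starter project, and the helper name is made up:

// Sketch: render a CGImage into a UIKit-style context without it
// coming out upside down.
UIImage *DrawImageRightSideUp(UIImage *image) {
  UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
  CGContextRef context = UIGraphicsGetCurrentContext();

  // UIGraphics contexts use a top-left origin with y pointing down.
  // CGContextDrawImage expects Core Graphics' bottom-left origin, so
  // translate and flip the y-axis before drawing.
  CGContextTranslateCTM(context, 0, image.size.height);
  CGContextScaleCTM(context, 1.0, -1.0);

  CGContextDrawImage(context, CGRectMake(0, 0, image.size.width, image.size.height), image.CGImage);

  UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
  UIGraphicsEndImageContext();
  return result;
}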

Image Compression

This is the last concept to discuss before coding! With raw images, each pixel is stored individually in memory.

If you do the math on an 8 megapixel image, it would take 8 * 10^6 pixels * 4 bytes/pixel = 32 Megabytes to store! Talk about a data hog!

This is where JPEG, PNG and other image formats come into play. These are compression formats for images.

When GPUs render images, they decompress images to their original size, which can take a lot of memory. If your app takes up too much memory, it could be terminated by the OS (which looks to the user like a crash). So be sure to test your app with large images!
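
If you want to see how much memory a particular image needs once decoded, you can estimate it from the bitmap's dimensions. Here's a minimal sketch; the asset name is just a placeholder for whatever image you load:

// Rough estimate of an image's decompressed size in memory.
UIImage *image = [UIImage imageNamed:@"Ghosty"]; // placeholder asset name
CGImageRef cgImage = image.CGImage;

// bytesPerRow * height is the size of the decoded bitmap buffer.
size_t bytesPerRow = CGImageGetBytesPerRow(cgImage);
size_t height = CGImageGetHeight(cgImage);
size_t decompressedBytes = bytesPerRow * height;

NSLog(@"Decompressed size: ~%.1f MB", decompressedBytes / 1000000.0);

For an 8-megapixel RGBA image, this works out to roughly the 32 megabytes computed above.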

I’m dying for some action

Looking at Pixels

Now that you have a basic understanding of the inner workings of images, you’re ready to dive into coding. Today you’re going to work through developing a selfie-revolutionizing app called SpookCam, the app that puts a little Ghosty in your selfie!

Download the starter kit, open the project in Xcode and build and run. On your phone, you should see tiny Ghosty:

[Screenshot: tiny Ghosty displayed in the app]

In the console, you should see an output like this:

[Screenshot: per-pixel brightness output in the console]

Currently the app is loading the tiny version of Ghosty from the bundle, converting it into a pixel buffer and printing out the brightness of each pixel to the log.

What’s the brightness? It’s simply the average of the red, green and blue components.
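
For example, a pure white pixel (255, 255, 255) has a brightness of 255, a black pixel (0, 0, 0) has a brightness of 0, and a fully saturated red pixel (255, 0, 0) has a brightness of 85.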

Pretty neat. Notice how the outer pixels have a brightness of 0, which means they should be black. However, since their alpha value is 0, they are actually transparent. To verify this, try setting the image view's background color to red, then build and run again.
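
Assuming the starter project exposes the image view as a property named imageView (the exact name may differ in your copy), that's a one-liner in ViewController.m:

// Give the image view a red background so transparent pixels show up as red.
self.imageView.backgroundColor = [UIColor redColor];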

[Screenshot: Ghosty on a red background]

Now take a quick glance through the code. You’ll notice ViewController.m uses UIImagePickerController to pick images from the album or to take pictures with the camera.
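
If you're curious what that typically looks like, presenting a picker takes only a few lines. This is a generic sketch, not the starter project's exact code:

// Present a picker for the photo library; use UIImagePickerControllerSourceTypeCamera
// to take a photo instead. The presenting view controller must conform to
// UIImagePickerControllerDelegate and UINavigationControllerDelegate.
UIImagePickerController *picker = [[UIImagePickerController alloc] init];
picker.sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
picker.delegate = self;
[self presentViewController:picker animated:YES completion:nil];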

After an image is selected, the view controller calls -setupWithImage:, which in this case outputs the brightness of each pixel to the log. Locate -logPixelsOfImage: inside ViewController.m and review the first part of the method:

// 1.
CGImageRef inputCGImage = [image CGImage];
NSUInteger width = CGImageGetWidth(inputCGImage);
NSUInteger height = CGImageGetHeight(inputCGImage);

// 2.
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;

UInt32 * pixels;
pixels = (UInt32 *) calloc(height * width, sizeof(UInt32));

// 3.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pixels, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

// 4.
CGContextDrawImage(context, CGRectMake(0, 0, width, height), inputCGImage);

// 5. Cleanup
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);

Now, a section-by-section recap:

  1. Section 1: Convert the UIImage to a CGImage object, which is needed for the Core Graphics calls. Also, get the image’s width and height.
  2. Section 2: For the 32-bit RGBA color space you’re working in, you hardcode the parameters bytesPerPixel and bitsPerComponent, then calculate bytesPerRow of the image. Finally, you allocate an array pixels to store the pixel data.
  3. Section 3: Create an RGB CGColorSpace and a CGBitmapContext, passing in the pixels pointer as the buffer to store the pixel data this context holds. You’ll explore Core Graphics in more depth in a section below.
  4. Section 4: Draw the input image into the context. This populates pixels with the pixel data of image in the format you specified when creating context.
  5. Section 5: Clean up colorSpace and context.

Note: When you display an image, the device’s GPU decodes the encoding to display it on the screen. To access the data locally, you need to obtain a copy of the pixels, just like you’re doing here.

At this point, pixels holds the raw pixel data of image. The next few lines iterate through pixels and print out the brightness:

// 1.
#define Mask8(x) ( (x) & 0xFF )
#define R(x) ( Mask8(x) )
#define G(x) ( Mask8( (x) >> 8 ) )
#define B(x) ( Mask8( (x) >> 16 ) )
  
NSLog(@"Brightness of image:");
// 2.
UInt32 * currentPixel = pixels;
for (NSUInteger j = 0; j < height; j++) {
  for (NSUInteger i = 0; i < width; i++) {
    // 3.
    UInt32 color = *currentPixel;
    printf("%3.0f ", (R(color)+G(color)+B(color))/3.0);
    // 4.
    currentPixel++;
  }
  printf("\n");
}

Here's what's going on:

  1. Define some macros to simplify working with 32-bit pixels. To get the red component, you mask the pixel value so only the lowest 8 bits remain. To get the other components, you shift right and then apply the same mask.
  2. Get a pointer to the first pixel and start two for loops to iterate through the pixels. This could also be done with a single for loop running from 0 to width * height, but it's easier to reason about an image that has two dimensions.
  3. Get the color of the current pixel by dereferencing currentPixel, then log the brightness of the pixel.
  4. Increment currentPixel to move on to the next pixel. If you're rusty on pointer arithmetic, just remember this: since currentPixel is a pointer to UInt32, adding 1 to it moves the pointer forward by 4 bytes (32 bits), which brings you to the next pixel.

Note: An alternative approach is to declare currentPixel as a pointer to an 8-bit type (such as UInt8). Then each time you increment it, you move to the next component of the image, and dereferencing it gives you the 8-bit value of that component.
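
Here's a rough sketch of that alternative, reusing the pixels, width and height variables from the code above:

// Walk the buffer one 8-bit component at a time instead of one pixel at a time.
// With the byte order used above, each pixel is stored as R, G, B, A in memory.
UInt8 *currentComponent = (UInt8 *)pixels;
for (NSUInteger j = 0; j < height; j++) {
  for (NSUInteger i = 0; i < width; i++) {
    UInt8 red   = *currentComponent++;
    UInt8 green = *currentComponent++;
    UInt8 blue  = *currentComponent++;
    currentComponent++; // skip the alpha component
    printf("%3.0f ", (red + green + blue) / 3.0);
  }
  printf("\n");
}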

At this point, the starter project is simply logging raw image data, but not modifying anything yet. That's your job for the rest of the tutorial!
