iOS 7 Blur Effects with GPUImage

iOS 7 embodies deference, clarity and depth. Using GPUImage, this tutorial looks at one of the most appealing aspects of depth: the iOS 7 blur effect. By Mike Jaoudi.



Among the many visual changes of iOS 7, one of the more appealing is the subtle use of blurs throughout the OS. Many third-party apps have already adopted this design detail, and are using it in all sorts of wonderful and creative ways.

This tutorial will walk you through several different techniques for implementing iOS 7 blur effects, all with the assistance of a framework called GPUImage.

Created by Brad Larson, GPUImage is a framework that takes advantage of the GPU to make it incredibly easy to apply different effects and filters to both images and videos, whilst maintaining performance that often beats the built-in methods provided by Apple's APIs.
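If you haven't used GPUImage before, the snippet below gives a rough idea of what working with it looks like. This is just an illustrative sketch, not part of the starter project: it assumes GPUImage has already been added to the project and that originalImage is a UIImage you want to blur. GPUImageiOSBlurFilter is the filter GPUImage provides to mimic the iOS 7 system blur, and imageByFilteringImage: runs a still image through a filter on the GPU:

#import "GPUImage.h"

// Illustrative sketch; not from the starter project.
// Create the filter that approximates the iOS 7 system blur.
GPUImageiOSBlurFilter *blurFilter = [[GPUImageiOSBlurFilter alloc] init];
blurFilter.blurRadiusInPixels = 4.0; // larger values give a softer blur

// originalImage is assumed to be an existing UIImage.
// The filter renders it on the GPU and hands back a new, blurred UIImage.
UIImage *blurredImage = [blurFilter imageByFilteringImage:originalImage];

You'll see this same pattern later in the tutorial, once you have an image of the screen to feed into it.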

Note: This tutorial requires a physical device to build the sample project; it won’t work in the simulator. You’ll also need an iOS Developer account. If you don’t have a developer account, you can sign up for one here. There are a bunch of awesome perks of being a registered developer, such as being able to develop using physical devices, early access to betas, and a ton of developer resources.

Getting Started

Download the starter project here and extract it to a convenient location on your drive.

Open Video Blurring.xcodeproj in Xcode and run it on your device. It will look similar to the following:

[Image: The starter app running on a device]

Tap the menu button in the upper-left corner of the screen (the icon with three horizontal stripes). You're presented with two options: record a new video or play back an existing video.

Notice how all of the user interface elements have a gray backdrop; that’s rather dull. You’ll be replacing those dull gray backgrounds with some nice iOS 7 blur effects instead.

Why Use Blurs?

Beyond looking cool, blurs communicate three important concepts to the users of your apps: depth, context and focus.

Depth

Depth provides cues to the user that the interface is layered, and helps them understand how to navigate your app. Previous iOS versions communicated depth with three-dimensional bevels and glossy buttons reflecting an emulated light source, but iOS 7 communicates depth using blurs and parallax.

The parallax effect is evident when you tilt your iOS 7 device from side-to-side. You’ll notice the icons appear to move independently from the background. This provides cues to the user that the interface is composed of different layers, and that important elements sit on top of other less important interface elements — which leads into the next concept: context.

Context

Context allows a user to get a sense of bearing within your app. Animated transitions provide excellent context; instead of having a new view instantly appear when you tap a button, animating between the views gives the user a moment to understand where the new view originates from, and how they may get back to the previous one.

Blurs allow you to show the previous view in the background, albeit out of focus, to give the user even more context as to where they were a moment ago. The Notification Center is a great example of this; when you pull it down, you can still see the original view in the background whilst you work on another task in the foreground.

Focus

Focusing on selective items removes clutter and lets the user navigate quickly through the interface. Users will instinctively ignore elements that are blurred, focusing instead on the more important, in-focus elements in the view.

You will implement two different types of blurs in this tutorial: Static Blur and Dynamic Blur. Static blurs represent a snapshot in time and do not reflect changes in the content below them. In most cases, a static blur works perfectly fine. Dynamic blurs, in contrast, update as the content behind them changes.

It’s much more exciting to see things in action than to talk about them, so head right into the next section to get started on adding some iOS 7 blur effects!

Adding Static Blur

The first step in creating a static blur is converting the current on-screen view into an image. Once that’s done, you simply blur that image to create a static blur. Apple provides some wonderful APIs to convert any view into an image — and there are some new ones in iOS 7 to do it even faster.

These methods are part of Apple's snapshot APIs, introduced in iOS 7. The snapshot APIs give you the ability to capture not just a single view, but the entire view hierarchy. That means if you instruct them to capture a view, they will also capture all the buttons, labels, switches and other views placed on top of it.
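As a quick aside, the simplest member of this family is snapshotViewAfterScreenUpdates:, which returns a lightweight UIView copy of the hierarchy rather than an image. The one-liner below is just for illustration; it isn't used in this tutorial:

// Returns a UIView that draws the receiver's current contents,
// including every subview in its hierarchy.
UIView *snapshot = [self.view snapshotViewAfterScreenUpdates:YES];

Since GPUImage's filters need actual pixel data in a UIImage, you'll use drawViewHierarchyInRect:afterScreenUpdates: instead, as the category below demonstrates.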

You’ll implement this capture logic as a category for UIView. That way, you can quickly and easily convert any view and its contained view hierarchy into an image — and get some code reuse to boot!

Creating your Screenshot Category

Go to File/New/File… and select the iOS/Cocoa Touch/Objective-C category, like so:

[Image: The New File dialog with the Objective-C category template selected]

Name the category Screenshot and make it a category on UIView, as shown below:

[Image: The category options with Screenshot as the name and UIView as the class]

Add the following method declaration to UIView+Screenshot.h:

-(UIImage *)convertViewToImage;

Next, add the following method to UIView+Screenshot.m:

-(UIImage *)convertViewToImage
{
    UIGraphicsBeginImageContext(self.bounds.size);
    [self drawViewHierarchyInRect:self.bounds afterScreenUpdates:YES];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    
    return image;
}

The method above starts with a call to UIGraphicsBeginImageContext() and ends with UIGraphicsEndImageContext(). These two lines are bookends for what is known as the image context. A context can be one of several things; it can be the screen, or as in this case, an image. The net effect of these two lines is an off-screen canvas on which to draw the view hierarchy.

drawViewHierarchyInRect:afterScreenUpdates: takes the view hierarchy and draws it onto the current context.

Finally, UIGraphicsGetImageFromCurrentImageContext() retrieves the generated UIImage from the image context; that image is what the method returns.

Now that you have a category to hold this logic, you’ll need to import it in order to use it.

Add the following import to the top of DropDownMenuController.m, just below the other import statements:

#import "UIView+Screenshot.h"

Add the following method to the end of the same file:

-(void)updateBlur
{
    UIImage *image = [self.view.superview convertViewToImage];
}

Here you ensure you capture not just the view but its superview as well. Otherwise, you’d just capture the menu alone.
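Right now the captured image isn't used for anything yet. To give you a feel for where this is heading, here's a rough sketch of how that image could be run through GPUImage's GPUImageiOSBlurFilter and displayed behind the menu. The blurredBackgroundImageView property is hypothetical; it simply stands in for whatever image view serves as the menu's backdrop, and the code you'll actually write later in the tutorial may differ:

-(void)updateBlur
{
    // (You'd also add #import "GPUImage.h" at the top of the file.)
    UIImage *image = [self.view.superview convertViewToImage];

    // Sketch only: blur the snapshot on the GPU. GPUImageiOSBlurFilter
    // approximates the translucent, desaturated look of the iOS 7 blurs.
    GPUImageiOSBlurFilter *blurFilter = [[GPUImageiOSBlurFilter alloc] init];
    UIImage *blurredImage = [blurFilter imageByFilteringImage:image];

    // blurredBackgroundImageView is a hypothetical UIImageView sitting
    // behind the menu's controls; the project's actual outlet may differ.
    self.blurredBackgroundImageView.image = blurredImage;
}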
