Honing Your Mobile Apps With Savvy User Research

You’ve spent months developing your app. You’ve managed to deal with the intricacies of MVC and emerged victorious with a clean, flexible and scalable architecture. On top of that, your app has some amazing features incorporating the newest iOS technologies that will definitely increase your user base. It’s as close to perfect as possible.

By Lea Marolt Sonnenschein.


Passive User Research Methods

Rather than specifically initiating a user research session, you can implement mechanisms in your app that automatically gather user data over time, or let users initiate feedback.

Passive user research:

  • Is user-driven or user-initiated.
  • Requires data collection for a non-trivial amount of time across many users.
  • Is automated and doesn’t require your active participation.
  • Takes place in the app itself.

Some examples of passive user research are pixel logging, experiments, and automated feedback mechanisms.

Pixel Logging

“Pixel logging,” “tracking pixels,” and “pixel tags” are terms describing the same concept: tracking the user’s actions across your product with the goal of uncovering roadblocks and understanding user behavior.

This approach tracks almost every user action, like button taps and pages visited, in any app or website. More aggressive setups also log scroll and swipe actions. All of these events are posted to a remote tracking server for later analysis.

Some use cases for pixel logging are:

  • Understanding long term trends and making predictions based on historical data.
  • Debugging broken experiences by tracking a funnel of actions that leads to a “wrong” state.
  • Identifying pain points and brainstorming solutions for issues in your app.

There are platforms like Adjust, Amplitude, Braze, Branch, Google Analytics and others that make it easy to collect and analyze logs.

Check out our post on Getting Started with Mobile Analytics to better understand which actions to track and how to analyze them.

Remember that collecting and posting tons of analytics from your app can have a significant impact on your users’ data plans, so be mindful of what you collect and how often!
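One common way to keep that overhead down is to buffer events locally and send them in batches, rather than making a network call for every tap. Here’s a minimal sketch of that idea in Swift; the event names, batch size and the custom logger itself are illustrative, since in practice you’d usually rely on one of the SDKs mentioned above:

```swift
import Foundation

// A logged user action. These fields are a minimal illustration;
// real SDKs attach far more context (session ID, screen name, etc.).
struct AnalyticsEvent {
    let name: String
    let timestamp: Date
}

// Buffers events and only "sends" them once a full batch accumulates,
// so the radio isn't woken up for every single interaction.
final class EventLogger {
    private var buffer: [AnalyticsEvent] = []
    private let batchSize: Int
    private(set) var flushedBatches = 0

    init(batchSize: Int = 20) {
        self.batchSize = batchSize
    }

    func log(_ name: String) {
        buffer.append(AnalyticsEvent(name: name, timestamp: Date()))
        if buffer.count >= batchSize {
            flush()
        }
    }

    func flush() {
        guard !buffer.isEmpty else { return }
        // In a real app, POST the batch to your tracking server here.
        flushedBatches += 1
        buffer.removeAll()
    }

    var pendingCount: Int { buffer.count }
}
```

A production logger would also persist the buffer across launches and flush when the app moves to the background, so events aren’t lost.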

Case Study: Navigation

At Rent the Runway, we wanted to better understand the performance of our main navigation on the product listing page. Users tap the navigation bar to select a new category, and any applicable subcategories will appear underneath.

We had reason to believe that our users had trouble finding this interaction and understanding how to use it well, so we “put a pixel on it.”

After several weeks of data, our suspicions turned out to be correct! Users weren’t navigating through the main category switcher, but rather through the curated content links from the homepage. This gave us the incentive to dig deeper. Our research on this flow is still underway, as it turns out that navigation is a complex UX problem, especially when it comes to mobile and iOS app development.

In the meantime, we made a few small changes that dramatically increased the usability of the navigation: a “Tap to change” label and a display of the number of categories.

While we’re still collecting evidence on this change, we’ve also started running some experiments.

Experiments

Experiments, the bread and butter of product development, are often referred to as A/B tests. The name is a bit misleading, because there can easily be more than two variants. The downside is that experiments significantly increase development time — by about 50% in our experience. This is due to the fact that you have to build two different versions of a feature, and you need to remove one of them when the test is over. Nevertheless, experiments are a great way to test different variations on user interfaces and interactions. The most important factors behind a successful experiment are a clear hypothesis and a clear vision of which data will prove or disprove that hypothesis.
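One practical detail worth calling out: for the results to be trustworthy, each user should see the same variant every time they open the app. A common way to achieve that, sketched here under the assumption that you have a stable user identifier, is deterministic hash-based bucketing (the variant names are made up):

```swift
import Foundation

// Assigns a user to one of the given variants deterministically.
// FNV-1a is used because Swift's built-in hashValue is randomly
// seeded per process and would reshuffle users on every launch.
func variant(forUserID userID: String, variants: [String]) -> String {
    precondition(!variants.isEmpty, "Need at least one variant")
    var hash: UInt64 = 0xcbf29ce484222325
    for byte in userID.utf8 {
        hash ^= UInt64(byte)
        hash = hash &* 0x100000001b3
    }
    return variants[Int(hash % UInt64(variants.count))]
}
```

Because the assignment depends only on the user ID, you can recompute it on any device or server and always land in the same bucket.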

A recent study from Qubit analyzed over 6,700 e-commerce experiments to better understand which types of experiments make sense and are worth your while. Their research showed that to increase conversion, the rate at which users who land on your site go on to make a purchase, the experiment categories that provide the biggest uplifts are:

  • Scarcity: Point out limited product quantities by saying “Only 2 left in your size!”
  • Social proof: Feature elements such as user reviews and recommendations like “Users who liked X also viewed Y.”
  • Urgency: Include a time limit and countdown timers on actions, such as “You have 22 minutes to redeem this offer.”
  • Abandonment: Persuade users to come back and complete the purchase of items already in the shopping cart.

Some third party tools that can help you set up and analyze experiments are Apptimize, Flurry, Optimizely and Taplytics.

If you’re interested in rolling your own testing platform, check out our Firebase Tutorial for iOS A/B Testing.

Experiments can lead you to implement relevant improvements to your app, but there are also simpler and cheaper methods to spot issues in your product, like automated feedback.

Automated Feedback Mechanisms

User-initiated feedback is fairly straightforward to set up, and every app should have at least:

  • A contact form: to enable users to contact you directly through the app.
  • An App Store review prompt: to ask users to review your app at appropriate times.

These might seem like small features to include but, over time, this type of feedback can prove to be incredibly valuable in two ways:

  • Revealing trends about what users want improved in your app.
  • Diagnosing problems and helping you nip them in the bud.

For contact forms in particular, try capturing as much information as you can about the device, such as the app version number, iOS version and device model. This will help you, especially when you’re debugging or trying to reproduce an issue.
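Here’s one way to gather that metadata in Swift. `Bundle` and, on iOS, `UIDevice` are the standard APIs for this; the non-UIKit fallback below is only there to keep the sketch self-contained:

```swift
import Foundation
#if canImport(UIKit)
import UIKit
#endif

// Collects basic app/device context to attach to a feedback message.
func feedbackMetadata() -> [String: String] {
    var info: [String: String] = [:]
    // App version from the bundle's Info.plist (may be absent in tests).
    info["appVersion"] = Bundle.main
        .infoDictionary?["CFBundleShortVersionString"] as? String ?? "unknown"
    #if canImport(UIKit)
    info["osVersion"] = UIDevice.current.systemVersion  // e.g. "17.4"
    info["deviceModel"] = UIDevice.current.model        // e.g. "iPhone"
    #else
    // Fallback for non-iOS platforms so the sketch still compiles.
    info["osVersion"] = ProcessInfo.processInfo.operatingSystemVersionString
    info["deviceModel"] = "unknown"
    #endif
    return info
}
```

Attaching this dictionary to every contact-form submission costs the user nothing and can save a round of back-and-forth emails when you start debugging.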

Case Study: Automated Feedback

At Rent the Runway, we receive around 20 feedback emails per day. We built a simple Google Script that collects and categorizes the messages in a spreadsheet.

The image above is an excerpt of a very long list of user feedback about product reviews in the app. We used this feedback to improve our customer reviews feature. The small tweaks that we iteratively implemented accounted for a large increase in our customer satisfaction. Automated feedback was definitely worth it in our specific case.

Planning Your Research Approach

There’s a user-research method fit for your situation, no matter which stage of product development you’re in. Here is a handy cheatsheet you can use to decide which method is best for your scenario.

Each approach may be more suited to a specific case, but they all share a few common patterns, related to three phases: before, during and after the research.

Before picking any approach, you should truly understand what your goals are, and make those goals specific enough to warrant further research. Here’s an example:

Bad: I want to know if my users like using the reviews feature in my app.
Good: I want to understand how important product reviews are to my users, how they use them in their decision-making process, and how I can improve that process.

After you’ve clarified your goals, it’s time for questions.

Examples: Does the user know how to find reviews? What are her goals and motivations when looking at reviews? Are there pain points? How can I alleviate them?

Questions go hand in hand with your hypothesis.

Example: Product reviews are critical to understanding decision-making. By making reviews more accessible and easier to find, we will improve customer confidence in a given item and increase the likelihood of a purchase.

The more specific you are in defining your hypothesis, the easier it will be to answer your questions and implement an effective improvement in your app.

The next phase is deciding what data to collect to prove your hypothesis.

Example: I want to know how many users go from the product page to the reviews page. I want to know how many reviews they look at, which reviews they read in detail, and how that correlates with their profiles. I also want to know whether the number of reviews has an impact on the purchase of a product.

Once you have questions, hypothesis and data collection in place, the final step is defining success. Without a way to measure success, you can’t determine the effectiveness of a feature in your application.

Example: Adding a featured review below the product image should result in a 20% lift in the transition from the Product Page to the Reviews List. With all other elements equal, this should increase the Product Page to Purchase conversion rate by 5%, and in turn, increase overall conversion rates by 0.5%.
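To make the arithmetic concrete, here’s a quick sketch of how those targets compose. The baseline rates are made up for illustration; your real baselines would come from your own analytics:

```swift
// Hypothetical baselines (not from the article):
let productToReviews = 0.10    // 10% of product-page views reach reviews
let productToPurchase = 0.040  // 4% of product-page views end in a purchase
let overallConversion = 0.020  // 2% of all sessions convert

// The targets from the success definition above:
let liftedProductToReviews = productToReviews * 1.20   // 20% relative lift -> 12%
let liftedProductToPurchase = productToPurchase * 1.05 // 5% relative lift -> 4.2%
let liftedOverall = overallConversion + 0.005          // +0.5 points -> 2.5%
```

Writing the targets out like this forces you to decide up front whether each number is a relative lift or an absolute change in percentage points, a common source of confusion when reading experiment results.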

In the case of active user research methods there are four additional aspects to consider:

Before starting a test, go through the whole flow yourself. Take the happy path through your app, and then try to break it. This is especially important for online user testing, where you can’t help the user recover from a failed navigation path.

  1. Prototypes: There are plenty of prototyping tools. A combo we successfully used at Rent the Runway is Sketch for creating mockups, and InVision to create interactions and deploy the prototype to an actual device. If you’re using an observation room, Reflector is a handy tool to watch the user’s screen.
  2. Scripts: There’s no improvisation in a user test. A script forces you to present the tasks in the same way to every tester. The job of the script is to help you translate your goals into tasks and open-ended questions. If you’re conducting an online test, be as specific as possible: you won’t be able to intervene, and any miscommunication will result in an unusable session. Practice the script with a couple of friends or coworkers to iron out any potential flaws!
  3. Recruit users: Brainstorm your target audience and figure out how to incentivize them. Social media can be a powerful tool, and gift cards can be a good motivator. If you have an established user base, reach out to the most engaged customers via email. As a last resort, you can ask your family and friends to give you some help.
  4. Consent forms: Most user tests require some sort of a consent form. Consider whether you need one, and look at some examples here for ideas on what consent you should collect from your testers.

Once your plan is clear, it’s time to execute!