Honing Your Mobile Apps With Savvy User Research

Lea Marolt Sonnenschein

You’ve spent months developing your app. You’ve managed to deal with the intricacies of MVC and emerged victorious with a clean, flexible and scalable architecture.

On top of that, your app has some amazing features incorporating the newest iOS technologies that will definitely increase your user base. It’s as close to perfect as possible. It’s certain to be a hit!

You release the app and wait for the flood of downloads.

You wait a day, then a week, and the download count barely budges. You think, “It’s the users; they just don’t get it!”

What if it’s you? What if you don’t get your users?

In this article you’ll learn what user research is and how it can help you improve your product and iOS app development skills. We’ll introduce and compare common user research methods, point out several tools you can use, and present real-world examples of how user research has informed new releases of the Rent the Runway iOS app.

What is User Research and Why Should You Do It?

To create a product that people love, you first have to figure out what problem you’re trying to solve. The problem has to come before the product. Before diving head-first into development, ask yourself:

  • What problem does this product actually solve?
  • Do people need this?
  • Does it work in a way that they can understand?
  • Will this benefit their day-to-day lives?

Conducting user research can help you answer these questions in three ways:

  1. It will help you uncover and understand common problems.
  2. Once you’ve brainstormed some potential solutions, user research will help you validate how usable and desirable those solutions actually are.
  3. Continuous user research can save precious design and development time.

User research has an extremely high return on investment. For example, you can uncover around 85% of your product’s usability issues by testing your ideas with just five people.
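
That 85% figure traces back to Nielsen and Landauer’s problem-discovery model. As a rough sketch, if each tester uncovers a share L of the issues (about 0.31 on average in their data), then n testers uncover

    1 - (1 - L)^n,   which for n = 5 gives 1 - 0.69^5 ≈ 0.84, or roughly 85%.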

User research methods are traditionally divided into two categories: qualitative and quantitative. I find that separation a little confusing, because every research method has aspects of both qualitative and quantitative reasoning and analysis. Instead, I tend to think of the different user research methods in terms of active and passive.

Active User Research Methods

Active user research:

  • Is always initiated by the developer.
  • Requires your active involvement throughout the process.
  • Happens in sessions that take a finite, relatively short amount of time.
  • Produces results that can be analyzed immediately.
  • Usually takes place outside of the app.

Some examples of active user research are in-person user testing, online user testing, surveys & forms, and focus groups.

In-Person User Testing

In-person user testing means asking people to accomplish different tasks on your prototype and observing what they do and say.

This type of user research includes anything from asking your friends to try out your app, to fully-fledged research scenarios that involve scripts, two-way mirrors and gift cards. The more formal the setup, the more difficult and expensive the test becomes to run. However, in-person testing is also the most dynamic and flexible method, letting you adapt on the fly.

Your familiarity with the product makes you prone to intervene while the user is exploring your prototype. This can heavily bias the tester’s opinion and behavior, leading to useless results. It’s better to come up with a list of questions you’d like answered and hand them off to a coworker who will lead the session.

Finding participants for in-person testing can be tricky, especially if you’re just starting out. If you have a well-established product, you can reach out to your existing customers, offer them a gift card, and they’ll come running. If you’re rolling solo, you can talk to people you meet in coffee shops. They’re normally eager to offer their opinion, and you might even make some friends in the process. :]

Online User Testing

When you lack the time and resources to test in-person, online user testing is the next best thing. Services like Loop11, UserTesting, Usability Hub and several others will facilitate your remote testing.

Using these services, you can quickly reach a large number of users that fit specific criteria. For example, you can specify the demographics, habits and salary range of the people that you’d like to test your product. The users will go through your prototypes and tasks, and when the testing is over, you’ll end up with a lot of feedback!

There’s a small downside to using these remote tools. On most sites, the users testing your product are rated on their performance. This rating incentivizes them to improve their testing skills, which ultimately influences their payout. At times, the rating system can backfire: testers are primed to be alert and vigilant, and will potentially notice many more details than a user in the wild would. Unlike face-to-face testing at the coffee shop, online user testing makes it harder to control the amount and level of feedback you gather, so you might need to filter out some data before analysis.

Surveys and Forms

Surveys and forms let you gather a lot of data very quickly with minimal effort. This can be crucial when you’re just starting to build your product. Whether freeform or multiple choice, this type of research is best when your questions are broad and your product not yet defined in detail.

Some popular survey tools are SurveyMonkey, TypeForm, SurveyGizmo and Google Forms.

Tip: before you invest in a tool, make sure that you can export the data in a format you can work with. Research can quickly become a nightmare of data transformations if you don’t pay attention to formats.

Focus Groups

Focus groups are another great way to gather a general reaction to your product, as they can provide more granular feedback than online surveys, and are less expensive than in-person user interviews.

While they can be effective, focus groups also suffer from some drawbacks. The most significant is groupthink, which can skew your perspective on the product toward the opinion of the most vocal person, or toward whatever common view the group tends to converge on.

Active user research methods are very effective, but sometimes they’re too expensive. No worries: you can fall back on passive methods. :]

Passive User Research Methods

Rather than specifically initiating a user research session, you can implement mechanisms in your app that automatically gather user data over time, or let users initiate feedback.

Passive user research:

  • Is user-driven or user-initiated.
  • Requires data collection for a non-trivial amount of time across many users.
  • Is automated and doesn’t require your active participation.
  • Takes place in the app itself.

Some examples of passive user research are pixel logging, experiments, and automated feedback mechanisms.

Pixel Logging

“Pixel logging,” “tracking pixels,” and “pixel tags” are terms describing the same concept: tracking the user’s actions across your product with the goal of uncovering roadblocks and understanding user behavior.

This approach tracks almost every user action, such as button taps and pages visited, in any app or on any website. Tracking can be as aggressive as logging every scroll and swipe, with all events posted to a remote tracking server for later analysis.

Some use cases for pixel logging are:

  • Understanding long term trends and making predictions based on historical data.
  • Debugging broken experiences by tracking a funnel of actions that leads to a “wrong” state.
  • Identifying pain points and brainstorming solutions for issues in your app.

There are platforms like Adjust, Amplitude, Braze, Branch, Google Analytics and others that make it easy to collect and analyze logs.

Check out our post on Getting Started with Mobile Analytics to better understand which actions to track and how to analyze them.

Remember that collecting and posting tons of analytics from your app can have a significant impact on your users’ data plans, so be mindful of what you collect and how often!
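
To make the idea concrete, here’s a minimal sketch of an event logger. The endpoint URL and event names are hypothetical placeholders; in practice you’d call into your analytics provider’s SDK (Amplitude, Google Analytics and friends) instead of posting JSON by hand.

import Foundation

/// A minimal event-logger sketch. The endpoint and event names are placeholders,
/// not a real tracking API; swap in your analytics provider's SDK in a real app.
final class AnalyticsTracker {
    static let shared = AnalyticsTracker()

    // Hypothetical collection endpoint; replace with your own backend or SDK call.
    private let endpoint = URL(string: "https://example.com/track")!

    /// Logs a named event with optional properties, e.g. which screen the tap happened on.
    func log(event name: String, properties: [String: String] = [:]) {
        var payload: [String: Any] = [
            "event": name,
            "timestamp": Date().timeIntervalSince1970
        ]
        payload["properties"] = properties

        guard let body = try? JSONSerialization.data(withJSONObject: payload) else { return }

        var request = URLRequest(url: endpoint)
        request.httpMethod = "POST"
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        request.httpBody = body

        // Fire and forget; a production tracker would batch events to spare the user's data plan.
        URLSession.shared.dataTask(with: request).resume()
    }
}

// Usage: log a tap on the category switcher from a view controller.
// AnalyticsTracker.shared.log(event: "tap_category_switcher",
//                             properties: ["screen": "product_list"])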

Case Study: Navigation

At Rent the Runway, we wanted to better understand the performance of our main navigation on the product listing page. Users tap the navigation bar to select a new category, and any applicable subcategories will appear underneath.

We had reason to believe that our users had trouble finding this interaction and understanding how to use it well, so we “put a pixel on it.”

After several weeks of data, our suspicions turned out to be correct! Users weren’t navigating through the main category switcher, but rather through the curated content links from the homepage. This gave us the incentive to dig deeper. Our research on this flow is still underway, as it turns out that navigation is a complex UX problem, especially when it comes to mobile and iOS app development.

In the meantime, we made a few small changes that dramatically increased the usability of the navigation: a “Tap to change” label and a visible count of the available categories.

While we’re still collecting evidence on this front, we’ve also started running some experiments.

Experiments

Experiments, the bread and butter of product development, are often referred to as A/B tests. The name is a bit misleading, because there can easily be more than two variants. Experiments are a great way to test different variations of user interfaces and interactions, but they come at a cost: in our experience, they increase development time by about 50%, because you have to build two different versions of a feature and then remove one of them when the test is over. The most important ingredients of a successful experiment are a clear hypothesis and a clear idea of which data will prove or disprove that hypothesis.
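
As a rough sketch of the mechanics, here’s one way to assign a user to a variant and keep that assignment stable between launches. The experiment and variant names are hypothetical, and in practice you’d lean on one of the platforms listed below rather than rolling your own.

import Foundation

/// A minimal variant-assignment sketch for a hypothetical "featured review" experiment.
enum ReviewsExperiment: String, CaseIterable {
    case control          // current product page
    case featuredReview   // product page with a featured review below the image
}

struct ExperimentAssigner {
    private let storageKey = "experiment.featured_review.variant"

    /// Returns the variant for this user, assigning one at random on first call
    /// and persisting it so the user sees a consistent experience afterwards.
    func variant() -> ReviewsExperiment {
        let defaults = UserDefaults.standard
        if let stored = defaults.string(forKey: storageKey),
           let existing = ReviewsExperiment(rawValue: stored) {
            return existing
        }
        let assigned = ReviewsExperiment.allCases.randomElement() ?? .control
        defaults.set(assigned.rawValue, forKey: storageKey)
        return assigned
    }
}

// Usage: branch the UI on the assigned variant, and log the exposure (for example
// with the logger sketched earlier) so you can slice your conversion data by variant.
// let variant = ExperimentAssigner().variant()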

A recent study from Qubit analyzed over 6,700 e-commerce experiments to better understand which types of experiments make sense and are worth your while. Their research showed that, to increase conversion (the proportion of users who go from landing on your site to making a purchase), the experiment categories that provide the biggest uplifts are:

  • Scarcity: Point out limited product quantities by saying “Only 2 left in your size!”
  • Social proof: Feature elements such as user reviews and “Users who liked X also viewed Y” recommendations.
  • Urgency: Include a time limit and countdown timers on actions, such as “You have 22 minutes to redeem this offer.”
  • Abandonment: Persuade users to come back and complete the purchase of items already in the shopping cart.

Some third party tools that can help you set up and analyze experiments are Apptimize, Flurry, Optimizely and Taplytics.

If you’re interested in rolling your own testing platform, check out our Firebase Tutorial for iOS A/B Testing.

Experiments can lead you to implement relevant improvements to your app, but there are also simpler and cheaper methods to spot issues in your product, like automated feedback.

Automated Feedback Mechanisms

User-initiated feedback is fairly straightforward to set up, and every app should have at least:

  • A contact form: to enable users to contact you directly through the app.
  • An App Store review prompt: to ask users to review your app at appropriate times.

These might seem like small features to include but, over time, this type of feedback can prove to be incredibly valuable in two ways:

  • Revealing trends about what users want improved in your app.
  • Diagnosing problems and helping you nip them in the bud.

For contact forms in particular, try capturing as much information as you can about the device, such as the app version number, iOS version and device model. This context is invaluable when you’re debugging or trying to reproduce an issue.
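
Here’s a small sketch of both mechanisms. How you attach the device context to your contact form is up to you; the review prompt uses Apple’s SKStoreReviewController, which decides on its own whether and when to actually show the dialog.

import UIKit
import StoreKit

enum FeedbackHelper {
    /// Collects device context to attach to a contact-form submission,
    /// so you can reproduce issues without asking the user for details.
    static func deviceContext() -> [String: String] {
        let device = UIDevice.current
        let info = Bundle.main.infoDictionary
        return [
            "app_version": info?["CFBundleShortVersionString"] as? String ?? "unknown",
            "build": info?["CFBundleVersion"] as? String ?? "unknown",
            "ios_version": device.systemVersion,
            "device_model": device.model
        ]
    }

    /// Asks for an App Store review at an appropriate moment, for example right
    /// after a successful checkout. iOS limits how often the prompt appears.
    static func requestReviewIfAppropriate() {
        if #available(iOS 10.3, *) {
            SKStoreReviewController.requestReview()
        }
    }
}

// Usage: attach the context to your contact form's message body or metadata.
// let metadata = FeedbackHelper.deviceContext()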

Case Study: Automated Feedback

At Rent the Runway, we receive around 20 feedback emails per day. We built a simple Google Script that collects and categorizes the messages in a spreadsheet.

The image above is an excerpt of a very long list of user feedback about product reviews in the app. We used this feedback to improve our customer reviews feature, and the small tweaks we iteratively implemented accounted for a large increase in customer satisfaction. Automated feedback was definitely worth it in our case.

Planning Your Research Approach

There’s a user-research method fit for your situation, no matter which stage of product development you’re in. Here is a handy cheatsheet you can use to decide which method is best for your scenario.

Each approach may be more suited to a specific case, but they all share a few common patterns, related to three phases: before, during and after the research.

Before picking any approach, you should truly understand what your goals are and make those goals specific enough to warrant further research. Here’s an example:

Bad: I want to know if my users like using the reviews feature in my app.
Good: I want to understand how important product reviews are to my users, how they use them in their decision-making process, and how I can improve that process.

After you’ve clarified your goals, it’s time for questions.

Examples: Does the user know how to find reviews? What are her goals and motivations when looking at reviews? Are there pain points? How can I alleviate them?

Questions go hand in hand with your hypothesis.

Example: Product reviews are critical to understanding decision-making. By making reviews more accessible and easier to find, we will improve customer confidence in a given item and increase the likelihood of a purchase.

The more specific you are in defining your hypothesis, the easier it will be to answer your questions and implement an effective improvement in your app.

The next phase is deciding what data to collect to prove your hypothesis.

Example: I want to know how many users go from the product page to the reviews page. I want to know how many reviews they look at, which reviews they read in detail, and how that correlates with their profiles. I also want to know whether the number of reviews has an impact on the purchase of a product.

Once you have questions, hypothesis and data collection in place, the final step is defining success. Without a way to measure success, you can’t determine the effectiveness of a feature in your application.

Example: Adding a featured review below the product image should result in a 20% lift in the transition from the Product Page to the Reviews List. With all other elements equal, this should increase the Product Page to Purchase conversion rate by 5%, and in turn, increase overall conversion rates by 0.5%.

In the case of active user research methods there are four additional aspects to consider:

  1. Prototypes: There are plenty of prototyping tools. A combo we successfully used at Rent the Runway is Sketch for creating mockups, and InVision to create interactions and deploy the prototype to an actual device. If you’re using an observation room, Reflector is a handy tool to watch the user’s screen.

    Before starting a test, go through the whole flow yourself. Take the happy path through your app, and then try to break it. This is especially important for online user testing, where you can’t help the user recover from a failed navigation path.

  2. Scripts: There’s no room for improvisation in a user test. A script forces you to present the tasks in the same way to every tester, and its job is to help you translate your goals into tasks and open-ended questions. If you’re conducting an online test, be as specific as possible: you won’t be able to intervene, and any miscommunication will render the results unusable. Practice the script with a couple of friends or coworkers to iron out any potential flaws!
  3. Recruit users: Brainstorm your target audience and figure out how to incentivize them. Social media can be a powerful tool, and gift cards can be a good motivator. If you have an established user base, reach out to the most engaged customers via email. As a last resort, you can ask your family and friends to give you some help.
  4. Consent forms: Most user tests require some sort of a consent form. Consider whether you need one, and look at some examples here for ideas on what consent you should collect from your testers.

Once your plan is clear, it’s time to execute!

Executing Your Research Plan

If you’re conducting passive user research, watch your data closely for the first few days. It’s important to verify that pixels are being tracked and data is being stored correctly. After that, just sit back and relax until you’ve collected enough evidence. How much evidence is enough? That depends heavily on your hypothesis and the type of experiment, but beware of calling test results too early and making erroneous decisions by extrapolating conclusions from scant data.

Managing In-Person Tests

If you’re moderating an in-person user test, you need to be very engaged with your test subjects:

  • Make them feel comfortable: Introduce yourself and the test, let them understand they can stop at any time, and ask them some background questions about their day to help them relax.
  • Ease them into the flow with a scenario: As they relax, gently transition the casual chat into their first task with a scenario like: “Imagine you’re at home, browsing for clothes, and you stumble upon this …”
  • Ask open-ended questions: Never ask your subjects a question that can be answered with a simple “yes” or “no”. Instead, ask questions such as “How does this make you feel?”, “What do you expect to find?”, “Describe what you think will happen if …”
  • Listen: Don’t interrupt the user. The more they talk, the more data and evidence you will collect.
  • Record: With the subject’s consent, record the session so you can rewatch it later. At the same time, have an observer take notes. Do not tempt fate and rely on your memory after the fact.

Analyzing your Research Results

Once you have collected enough data, it’s time to draw some conclusions, figure out the next steps, and do it all over again!

Experiments

If the experiment ran for an appropriate amount of time and one of the options is the clear winner, give yourself a pat on the back and ship it!

If the evidence is muddled, don’t give up. Think about what could have gone wrong and verify it. Question your assumptions. Make sure that there’s nothing functionally wrong with the app or with your test, and verify that the data is reliable. If the results are still inconclusive, shut down the experiment and move on to the next one.

In-Person Tests

Gather all the notes and recordings from yourself and the other observers while the impressions are still fresh in your mind. Go through each task and each user to check how many subjects completed a task, find commonalities, and condense your findings.

Your goal is to convert these findings into actionable results and propose further iterations. Don’t forget to send your test subjects a thank you note after their session. It makes a difference. :]

Case Study: Results Analysis

At Rent the Runway, we recently ran a series of user tests and experiments to better understand how our customers use reviews in their decision-making process. Using automated feedback forms, pixel tracking and online user tests, we were able to identify several opportunities to improve the display of reviews, which also led to an increase in sales.

The views involved in the tests were the product page, the review list and the review photos. On the product page, we spotted the following issues:

  1. Only a limited amount of rating information was exposed; users wanted more “at a glance”.
  2. The review picture UI was confusing, because users expected to be taken to a specific review rather than to a list.

The review list suffered from the following:

  1. No sorting mechanisms. One of the biggest email complaints we got was: “Why can’t I sort the reviews?”

Finally, the review photos had the following problems:

  1. The photo was obscured by the textual information (even when there was very little of it), and users got annoyed by having to swipe up and down.
  2. The text was small and the typography nearly illegible, making it hard to take in at a glance.

Users who paged through reviews converted at much higher rates than users who didn’t. Seeing how a dress fit others with similar proportions was a top factor in the decision to buy. Our goal was therefore to make it easier for users to find reviews, and to let them quickly find reviews posted by women of a similar body size. These pain points and goals guided several iterations of improvements. In the end, we settled on the following:

  1. Display a featured review in the last image, with a call to action to read more reviews.
  2. Highlight data about fit on the product page.
  3. Introduce different criteria for sorting reviews.
  4. Improve the layout and text of review photos.

After implementing these changes we noticed an uplift in the conversion from product page to purchase via reviews, which ultimately led to more sales. See how powerful user research is?

Where to Go From Here?

In this article, I discussed many techniques that you can adopt to guide your product development and usability testing! Here are the key takeaways:

  • User research should be a constant and consistent part of your product development process.
  • Define your goals clearly before starting any kind of test.
  • Every method has pros and cons, so choose the ones that make sense within your constraints.
  • If you are the developer of the app, don’t moderate user tests yourself.
  • Write very detailed scripts for online user testing to avoid miscommunication.
  • Ask open-ended questions, and encourage users to think out loud. Don’t interrupt them.
  • Let experiments run for an appropriate amount of time before declaring a winner.
  • Add a rating prompt and contact form to your app to collect feedback.

If you have any comments or questions about this article, or testing in general, please join the discussion below!

Additional Reading

To learn more about active user research, the NNGroup has a fantastic collection of articles on research methods, and in particular user testing.

An Introduction to In-App A/B Testing is an excellent starting point for A/B Testing, but before you run your first experiment read when not to run tests.

Check out these great tips for moderating an in-person user test.

To learn more about usability in general, Steve Krug’s Don’t Make Me Think or Rocket Surgery Made Easy, and Don Norman’s The Design of Everyday Things are key books.

This article by WordStream weighs the pros and cons of seven of the more popular survey tools. PCMagazine has an interesting chart showing the capabilities and limitations of many of these platforms, to help you make the best choice for your needs.

Solving complex UX problems is easier when you understand the basic patterns for mobile navigation. That link will take you to an excellent primer on the matter.

When you’re done with research and it’s time to start making some (re)design decisions, check out our article on UX Design Patterns for Mobile Apps. It’s a great starting point for the next iteration of your app!

To learn more about translating your goals into tasks and open-ended questions for a script, read this article about turning user goals into tasks for usability testing.

This excellent template by Steve Krug is a good starting point for a script.

Learn how easy it is to call test results too early. A must-read if you regularly touch A/B tests.

Team

Each tutorial at www.raywenderlich.com is created by a team of dedicated developers so that it meets our high quality standards. The team members who worked on this tutorial are:

Lea Marolt Sonnenschein

Lea is a Product Manager for mobile at Rent the Runway. She writes about iOS, UX and UI, teaches iOS classes at GA and volunteers for Girls Who Code. In her free time she plays piano, and tries to use code and technology to make art. Or she just doodles and listens to MIKA.
