Every product designer or developer needs A/B testing in their toolkit, including those who design mobile experiences. For app creators and product teams, it’s one of the most impactful methods for making iterative changes and optimizing user experiences. We’ve put together this handy resource on A/B testing mobile apps and other mobile experiences to help your team make data-informed decisions while creating a truly user-centric digital product.
A/B testing for mobile apps is a method teams use to compare two app versions to determine which one performs better. The process involves creating two variations of the app, typically differing in one aspect like design, features, or user interface (UI) elements. The two versions then launch simultaneously, each to a different segment of the app’s user base or target audience.
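To make the split concrete, here is a minimal sketch of how a team might assign users deterministically to one of two variants. The hashing scheme, experiment name, and 50/50 split are illustrative assumptions, not a prescribed implementation; most A/B testing platforms handle assignment for you through their SDKs.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "onboarding_test") -> str:
    """Deterministically assign a user to variant A or B.

    Hashing the user ID together with the experiment name keeps each user in
    the same variant across sessions and keeps separate experiments independent.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100          # map the hash to a bucket from 0 to 99
    return "A" if bucket < 50 else "B"      # 50/50 split

# The same user always lands in the same variant
print(assign_variant("user-12345"))   # e.g. "B"
print(assign_variant("user-12345"))   # same result on every call
```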
The A/B testing process for mobile apps generally works like this:
Related reading: Demystifying UX statistics: What is p and what does p < 0.05 mean?
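For a rough sense of the significance check behind that related reading, here’s a minimal sketch of a two-proportion z-test comparing conversion rates between two variants. The sample numbers are made up for illustration, and in practice your testing platform’s statistics engine will usually do this math for you.

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error of the difference
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))    # two-sided p-value
    return z, p_value

# Illustrative numbers: 480/10,000 conversions for variant A vs. 540/10,000 for B
z, p = two_proportion_z_test(480, 10_000, 540, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")   # p < 0.05 would suggest a real difference
```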
A/B testing allows app developers to make data-driven decisions about mobile app UX and UI, including changes to design and functionality. By comparing different app versions, product teams can determine which design, feature, or interface changes lead to better outcomes, improving user experience (UX), engagement, and overall app performance.
Of course, it’s always a good idea to follow standard mobile app UX best practices during the design process. Some of these include:
Conducting A/B testing during your app design process offers your UX team several advantages, such as:
Consider conducting a mobile app evaluation and a mobile app comparison using our comprehensive Testing template gallery.
Check out our Testing mobile experiences guide to see how user feedback from mobile testing can guide the conception, design, development, and implementation of your team’s mobile experiences.
Testing a mobile app for quality assurance (QA) means making sure the app functions correctly, meets user expectations, and delivers a positive user experience. Here’s a general overview of the QA testing process for a mobile app:
Outline your goals for the QA testing process. Determine what aspects of the app need to be tested, including functionality, usability, performance, compatibility, and security.
Develop test cases that cover all of the app’s features and functionalities. These should include steps to reproduce specific scenarios, expected results, and acceptance criteria.
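As an illustration, here’s a minimal sketch of how test cases could be captured as structured data so they’re easy to review and automate later. The field names and the login scenario are hypothetical, not a required format.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """A single QA test case: steps to reproduce, expected result, acceptance criteria."""
    case_id: str
    feature: str
    steps: list[str]
    expected_result: str
    acceptance_criteria: list[str] = field(default_factory=list)

login_case = TestCase(
    case_id="TC-001",
    feature="Login",
    steps=[
        "Launch the app on a clean install",
        "Enter a valid email and password",
        "Tap the 'Sign in' button",
    ],
    expected_result="User lands on the home screen within 2 seconds",
    acceptance_criteria=["No error dialogs", "Session persists after app restart"],
)
```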
Perform manual testing by following your test cases on various devices, operating systems, and network conditions. Test different aspects of the app, including navigation, UI elements, input validation, error handling, and data integrity.
Using testing frameworks and tools, implement automated testing to streamline repetitive tests and get consistent results. You can automate regression tests, UI tests, integration tests, and performance tests to validate your app’s functionality across different environments.
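Here’s a minimal sketch of what an automated, repeatable check might look like using pytest. The `validate_email` helper is a hypothetical stand-in for whatever app logic your tests exercise; the point is that parametrized cases can be rerun on every build with consistent results.

```python
import re
import pytest

def validate_email(address: str) -> bool:
    """Hypothetical input-validation helper standing in for the app logic under test."""
    return bool(re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", address))

@pytest.mark.parametrize(
    "address, expected",
    [
        ("user@example.com", True),
        ("user@example", False),     # missing top-level domain
        ("not-an-email", False),
        ("", False),
    ],
)
def test_email_validation(address, expected):
    # Rerunning this suite on every build catches regressions early
    assert validate_email(address) is expected
```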
Verify that all of your app’s features and functionalities work as intended by testing these elements:
With real users, evaluate the app’s usability to gather feedback about the app’s UI, navigation structure, accessibility, and overall UX.
Assess your app’s performance under various conditions, including different device configurations, network speeds, and user loads. Test for:
Your app should function correctly on different devices, screen sizes, resolutions, and operating systems. Test for compatibility with popular devices, platforms, and browser versions to reach a wider audience.
Within the app, identify and address security risks and vulnerabilities by testing for common security threats. These include data breaches, unauthorized access, injection attacks, and encryption weaknesses.
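As one small illustration, a security-focused test might probe a free-text input handler with injection-style payloads. The `sanitize_search_query` helper below is a made-up stand-in; real security testing should also cover the server side and use dedicated tooling.

```python
import pytest

INJECTION_PAYLOADS = [
    "'; DROP TABLE users; --",        # SQL-injection-style payload
    "<script>alert('xss')</script>",  # script-injection-style payload
    '" OR "1"="1',
]

def sanitize_search_query(raw: str) -> str:
    """Hypothetical stand-in for the app's input sanitization."""
    cleaned = raw
    for token in ["'", '"', ";", "<", ">", "--"]:
        cleaned = cleaned.replace(token, "")
    return cleaned

@pytest.mark.parametrize("payload", INJECTION_PAYLOADS)
def test_injection_payloads_are_neutralized(payload):
    cleaned = sanitize_search_query(payload)
    # No quoting, statement terminators, or tag characters should survive
    assert not any(token in cleaned for token in ["'", '"', ";", "<", ">", "--"])
```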
Regression testing can help you verify that recent code changes or updates haven’t introduced new defects or regressions. At this point, it’s prudent to rerun your test cases and validate your app’s behavior to maintain its stability and reliability.
During the testing process, document any issues, bugs, or defects you discover. Use bug-tracking tools to log detailed descriptions, screenshots, and steps to reproduce the issues. Prioritize issues and assign them for resolution based on their severity and impact.
Based on feedback, test results, and evolving requirements, continuously iterate and refine your QA testing process. Incorporate user feedback, performance metrics, and market insights to improve the app’s quality and effectiveness over time.
Conducting A/B tests for app store listings involves experimenting with different elements of your app’s listing on platforms like the Apple App Store or the Google Play Store to determine which variations lead to higher downloads, conversions, and engagement. Some common elements you can test are:
For this section, we held a Q&A session with Nancy Hua, the CEO of Apptimize. Apptimize is a California-based organization that enables mobile app design teams to conduct effective A/B tests, implement new features, and create personalized user experiences.
One of the most common problems we see in mobile apps is poor user onboarding. On mobile, users make decisions about your app very quickly. First impressions matter. According to research, 25% of apps are abandoned after first-time use. A lot of teams out there are focused on making a whole bunch of awesome features for their users, but if a huge chunk of your users are deleting your app after only a few minutes, all that effort goes to waste.
If you don’t nail onboarding, your developers may as well have been drinking beers instead of building those features that no one saw. It’s not a contest of who has the most features. The apps that succeed are the ones that convey their value proposition to the users from the get-go. Instead of focusing on building out more cool features, app teams should focus on how to showcase them properly.
The first step is to drive users toward the aha moment, the instant when users realize what value the app will provide in their lives. This can be done using onboarding tutorials, good clean UI design, and quickly driving users toward core functionality. Once your users reach the aha moment, they won’t delete your app. Then, you can start showing off all your other features.
The aha moment is reached relatively early on in use, so the next key is to show the user what else they can do in an app, and get them deeper into your ecosystem.
Another common mistake is to hide additional features in an app drawer, the most common of which is the hamburger menu. The hamburger menu is problematic. It signals to users that features tucked away in the menu are not important. Since screen space is incredibly limited, only the most valuable features are immediately viewable. Even if your feature is amazing, the impression that users get is that it’s neglected, cast aside in the same drawer as user settings, share buttons, legal, and other non-essentials. Users don’t explore as much as we think they do, so showing features they’ll value front and center is vital.
Make it easy for users to find your best features.
One of our clients, a social fashion app, increased retention rates by 18% by simply moving a few key features from the hamburger menu into a tabbed menu at the bottom of the screen. We also recently talked to Slickdeals and they shared with us a similar problem: users were only using what was immediately displayed on the home screen. Their other features that are very popular on the web were neglected in the side drawer, so now they’re working on a redesign to better showcase their top features.
The point is that users generally don’t spontaneously discover all your features. If you have something your users will love, display it prominently to make sure that it’s being seen and utilized. But of course, you can’t overdo it either and bombard your users with ten thousand different features at once. This is where user testing and A/B testing come in to determine your top features and the ideal layout.
One of the best things that app developers can do is to A/B test changes before deploying them to all of their users. Because app marketplace reviews are so slow, iterating fast and understanding how your users are reacting to your feature changes can be incredibly difficult. Without these methods, any mistake costs precious time and resources.
With A/B testing, teams can deploy new features to a small percentage of their users and analyze how those features affect user behavior. If a change is beneficial, they can then roll that feature out to 100% of their users instantly, without having to resubmit to the app store.
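Under the hood, percentage-based rollouts like this are often driven by a stable hash of the user ID compared against a remotely configured threshold. The sketch below is illustrative only, with made-up feature names; commercial A/B testing platforms typically expose this through their SDKs rather than hand-written code.

```python
import hashlib

def in_rollout(user_id: str, feature: str, rollout_percent: int) -> bool:
    """Return True if this user falls inside the current rollout percentage.

    The hash is stable, so a user who gets the feature at 5% keeps it as the
    rollout ramps toward 100%.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < rollout_percent

# Ship the new checkout flow to 5% of users first...
print(in_rollout("user-12345", "new_checkout", 5))
# ...then ramp the remotely configured percentage to 100 once metrics look good
print(in_rollout("user-12345", "new_checkout", 100))   # always True
```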
App teams can also do user testing to get qualitative feedback from their users and hear them describe their experience in their own words. It’s good for pinpointing exactly what is confusing—or delightful—to the user.
The traditional app cycle is incredibly long, drawn-out, and rigid. The typical release cycle looks somewhat like this, with dev cycles ranging anywhere from a few weeks to a few months:
A/B testing plays two key roles in testing and optimizing an app.
The first is using instant A/B testing to shorten the QA process and bypass app store approvals. With these tools, you can make changes and roll them out to your users instantly, without waiting for an app store review.
The second place A/B testing plays a key role is in getting better, more actionable data and analytics after launch. In a typical mobile release, teams roll out a bunch of different changes all at once, ranging from bug fixes to feature additions, new UI elements, and so on. If your metrics are affected, it’s very difficult to figure out why.
If they go up, great! But you don’t know what specifically contributed to those increases. Was it the bug fixes? Did users love the new feature? Or was it the easier-to-use UI? This is where running A/B tests on individual changes comes in: by isolating each change, you can see which one actually moved the metric.
User testing and A/B testing go hand in hand. While user testing provides the qualitative feedback, A/B testing takes care of the quantitative. User testing is great for macro opinions: users will give you early feedback on big-picture issues with your design, and it helps you understand why users behave in a certain way and what they think and feel about your app.
A/B testing, on the other hand, allows you to experiment with more detailed aspects of your app that users may not know they’re responding to. For example, users can tell you when a checkout flow is downright confusing, but they might not know whether cutting out a step in that flow would help them buy more things. The two are different approaches, so you can attack a problem from different angles.
Ultimately, both are essential tools that complement each other well. User testing results will give you ideas on what to A/B test. A/B testing results will give you ideas on what to ask your users.
The most common mistake mobile app creators make is assuming that the changes they’re building are going to delight users. More broadly, they assume they’re good at predicting what their users want and how their users will behave and feel. And hindsight bias allows us to feel like we knew the answer all along. But really, you need to ask your users and experiment in a data-rigorous way to know how they will act and react to your decisions.
Don't make assumptions about what will work. Talk to your users, and constantly experiment.
Just because people ask for a change doesn’t mean they really want it. A great example of this was when the digital magazine The Next Web was inundated with requests and pleas to build an Android app in addition to their popular iOS app. So they did. It turned out people weren’t downloading and using it.
"We had gotten enough requests for it and had gotten the impression there were thousands of anxious Android tablet owners holding their breath for an Android version of our magazine. Unfortunately, we’ve found out that although Android users are very vocal they aren’t very active when it comes to downloading and reading magazines."
—Boris Veldhuijzen van Zanten, Co-Founder of The Next Web
Without hard data, it’s very difficult to tell the difference between what people say they want and what they really want. Figure out ways to validate your features before deploying them to your entire user base.
The most successful apps in the field are staying on top of new technologies and trying out cutting-edge methods of development. They’re the ones that are always learning from their peers, keeping their ears to the ground, and aren’t afraid to make some mistakes.
The App Store almost discourages experimentation. Along with the arduous processes required to make any changes, every time you release a new version, you lose all your ratings. The best apps don’t let these types of things stop them. Instead, they learn to validate their hypotheses and incrementally compound their successes.
The best ones are also extremely user-focused. Rather than guessing and assuming they know their users, they ask them and test out many hypotheses. They constantly question what they can do to improve the user experience.
All. The. Time.
“We don’t have time to test right now because we’re working on this big release that’s coming out in three months.”
“We don’t have the resources to support a nice-to-have.”
At Apptimize, we hear this a lot. But the truth is that your release cycle shouldn’t be three months. While you’re working through your three-month waterfall release, your competitors are staying agile, talking to customers, learning from customer behavior, and improving their product 10x with six smaller iterations in the time it takes you to do one big release. We’re not in the ’90s anymore. Mobile isn’t boxed enterprise software.
Staying agile and user-focused is critical to staying alive in a space where customers have a lot of choices, switching costs are low, and expectations are high.
If you aren't testing, you'll get left behind.
This is why top apps like Facebook and Netflix built their own user testing and A/B testing processes and platforms before anyone else in the space even thought about doing these things, and it’s why these companies have found so much continued success. They spent a lot of time and resources building these capabilities at a time when building your own was the only way to do it. Now any app can buy the same functionality for a hundredth of the cost of building it in-house. Not doing so would be tantamount to getting left behind.
We actually interviewed Lacy Rhodes, iOS Engineer at Etsy, a while ago on this very topic. Essentially, there is no one silver bullet. It’s about incrementally showing value and showing how positive learnings and results from testing compound into huge gains.
Small early wins definitely help to get buy-in from a larger team. Ultimately, testing is about proving value. Both user testing and A/B testing will help you prove the value of your ideas to the rest of the team and get everyone really excited about knowing what’s actually working and what’s not. It helps everyone waste less time, be more focused, and be heard.
For more tips on testing and optimizing your app, check out UserTesting's usability testing templates and checklists.
Mobile user testing plays a critical role in creating winning experiences. Michael Mace compiled a series of answers to the top questions we hear time and time again when helping our clients run their UserTesting mobile studies. Michael, the VP of Market Strategy at UserTesting and a 35-year tech industry veteran, has occupied marketing and strategy roles at Apple and Palm, co-founded two startups, and consulted for multiple tech companies.
First, Michael answers our most commonly asked questions about mobile website user testing.
It depends on your customer base. If you’re sure that none of them ever use mobile devices, you probably don’t have anything to worry about. In a far cry from the early 2000s, most U.S. adults today say they use the internet (95%), have a smartphone (90%) or subscribe to high-speed internet at home (80%), according to a Pew Research Center survey conducted May 19 to Sept. 5, 2023. Ask yourself, if you’re not competing on mobile, are you leaving yourself vulnerable to competitors who are?
Most companies should, at a minimum, test their websites on mobile to make sure they work properly and meet user expectations. And you should seriously consider either designing a mobile site from the ground up or modifying your current one for smartphones and tablets. That involves rethinking not just how the site works, but what tasks users will want to do on mobile.
The leading mobile web platforms in the U.S. are iPhone, Android phones and tablets, and iPad. So you should definitely test on at least those three platforms.
You should do user tests throughout the development process, so you can fix problems before they get too deeply embedded in your site. You can start testing as soon as you have anything to show to users, even if it’s just conceptual sketches.
It’s better to run frequent tests with a few testers each than to save up your tests and do them all at once at the end of development.
When people hear “user testing,” they tend to assume that means only usability testing. That is, of course, one of the things you should do with user testing: Have users go through the main functions of your site, make sure they’re intuitive, and identify questions or hesitations users might have. This is especially important if you have a purchasing process on your site.
But it’s also important to test for emotional engagement. In other words, how do people feel about your site? Can they quickly accomplish what they want to do? Do they feel rewarded by using it? Mobile users are notoriously impatient and easily distracted. Even if your site is easy to use, people may not stick around unless they feel engaged.
You should also plan different tests for smartphones versus tablets. Smartphones are used most often for quick access to info while people are on the go. For example, on shopping sites, people frequently use smartphones to do product and pricing research, even though they may not be as likely to make the final purchase during their visit. You should test to make sure it’s easy for people to find product information, pictures, and price information on your site.
In contrast, tablets are much more likely to be used for long browsing sessions. So you should make sure that the tablet shopping experience is rich, engaging, and easy to use.
If you’re struggling with a specific design issue, user tests can be a terrific way to end the argument quickly. You’ve heard the old saying that “a picture is worth a thousand words”? In our experience, a user video is worth a thousand hours of debate. If your team is having an argument about a feature, you can use UserTesting to get quick video of some real users reacting to the proposed solutions. We find that those videos can be far more persuasive than a roomful of opinions.
No single answer is right for every organization.
We’re finding that many commerce companies choose to do both. The website is aimed at casual visitors, while the app is aimed at their most loyal customers (the people who are most likely to download an app). So, you use the website for prospecting and the app for deepening the relationship.
Conversely, many major brands are using mobile apps as marketing tools to help spread awareness and affinity.
Related video: Here’s how Burberry increased app engagement 200%
The most important thing is to understand what your mobile strategy is. How does your presence on smartphones and tablets fit with all of the other ways you engage with customers, and what are you hoping to accomplish in mobile? We’re long past the days when you could create something on mobile and expect users to respond just because it’s trendy.
Here, Michael answers some frequently asked questions about mobile app user testing.
Any testing is good, but as beta testers get to know your app, it gets more challenging to spot usability problems because they no longer have fresh eyes. Also, friends and family (the usual source of beta testers) are not necessarily a good proxy for typical users because they are too emotionally invested in your product or aren’t representative of your customer base.
UserTesting gives you feedback from typical users who don’t already know your app and have no emotional investment. User tests help you understand the needs and reactions of normal users, who will write your reviews in the app store.
We strongly recommend testing apps before release and during development. It’s much easier to fix a problem in the early stages of development than after the product is finished (not to mention after you’ve received a slug of bad reviews in the app store).
We have special processes to simplify the testing of iOS apps. We manage UDIDs and don’t deplete your allocation of UDIDs. Just fill out the test form online, give us a link to your app, and we’ll take care of the rest. (Note: due to the extra logistics involved, tests of unreleased iOS apps take an average of three business days to complete after we receive your .ipa file.)
Whenever you have something you can show users, it’s a good idea to get feedback immediately. You can even test prototypes and preliminary wireframes (anything you can display in a browser or in a user test). Generally, the sooner you identify problems, the easier it is to fix them. We’ve seen tragic examples of companies that tested at the end of development and identified problems but released anyway because it was too late to make changes.
People who hear “user testing” assume it means only usability testing. Of course, one of the things you should do with user testing is have users go through the main functions of your app, make sure they’re intuitive, and identify questions or hesitations users might have.
Many apps include icons and controls that the developer custom-created for the app. Although artistic creativity is great, one of the most common causes of confusion in user tests is buttons and controls that users can’t easily understand. It’s essential to test these features.
However, there are also three other essential tasks for user testing. The first is to look for emotional engagement. In other words, how do people feel about your app? Can they quickly accomplish what they want to do? Do they feel rewarded by using it? Mobile users are notoriously impatient and easily distracted. If your app doesn’t engage people, they may move on to something else and never return.
The second task is to gain a deeper understanding of your customers' thinking. The better you understand them, the better you can make decisions on their behalf. User tests are like mini-focus groups, but you can organize them on a day’s notice and with far less expense and hassle.
The third use is settling internal arguments. Because mobile devices are highly personal, you may find that people on your team can be extremely passionate about a dispute over a feature or UI element. Rather than having a knock-down argument on the subject, letting the actual users give you a ruling is often easier and faster. User tests make it easy to bring in that voice of the customer.
If your app is designed to run on both tablets and smartphones, you should test on both of them. A screen layout that looks good on a smartphone can look uninviting on a tablet, and vice versa. And user behaviors are subtly different on tablets and smartphones.
This post was updated April 15, 2024.
What's next