
User Testing a Mobile App Prototype: Essential Checklist

By 2018, app downloads are set to reach 270 billion worldwide. So it’s fair to say that the app marketplace is pretty crowded. Any app that gives users a less-than-5-star user experience is doomed to failure.

And yet despite this cut-throat environment, some apps still hit the marketplace riddled with usability issues thanks to sub-par user testing. In fact, according to a report by Perfecto Mobile, 44% of app errors are found not by testers but by users after the app is launched. It’s not only unfair to make the unsuspecting user act as a usability test guinea pig, it’s also bad business: shoddy app usability leads to abandonment, loss of investment and time wasted.

It’s not only the little guy app developers who fall foul of bad usability; even the big guys can get caught out. Take Google Wave, which launched with a hugely complex feature set and commensurately low usability levels, and predictably flopped. That’s not to say that Google didn’t run usability tests on Google Wave before launch – of course they did. But it’s possible they didn’t run them adequately.

Inadequate pre-launch testing of mobile apps happens because, put simply, usability testing on apps is hard to get right. Creating a good test environment, working across operating systems, multiple-device testing and so on are challenges that don’t come up in web or browser-based usability testing. Despite this complexity, thorough usability testing is even more important for an app than for browser-based software: as Raluca Budiu of the Nielsen Norman Group points out, “usability requirements increase as the platform shrinks. Smaller screens equal bigger needs to test your design with real users, because there are more ways for users to fail.”

That’s why Justinmind and Loop11 have combined forces to come up with this Essential Checklist for User Testing a Mobile App Prototype. Follow these 7 steps and pro-tips for more effective usability testing and, in the long run, engaging, addictive, usable apps.

1 – Set your objectives

Be strict about the purpose of each and every user test; ill-defined objectives will produce inconclusive results. To define objectives clearly, it helps to break down testing goals into categories (e.g. functional and ‘feeling’ goals), and to decide what kind of data will help you discover this information. Strong objectives include validating design decisions and uncovering functional usability problems. And most importantly, keep objectives simple! Loop11 will start to build this out for you with just a little information.

And of course, decide whether you’re interested in testing the app on all operating systems and devices, or whether you want to narrow your focus.

2 – Select your target users

With clearly defined objectives in place, the rest of the testing process should rapidly start to take shape, including tester demographic. It’s important to consider basic factors when selecting user testers, such as age, gender and income; other factors will depend on the product and the test objectives, like whether they need to be familiar with the product already. When testing mobile apps, it’s important to work with users who are already familiar with their device – problems managing an unfamiliar OS will be indistinguishable from app usability problems, which will skew results.

3 – Stage out your testing & prototypes

Obviously, the term ‘usability testing’ is pretty wide in scope, and each stage of testing requires its own methodology and tools. Are you going to be testing information architecture basics with paper prototypes, interactive user journeys with high fidelity prototypes, or wireframed beta apps, for example?

If you’re testing with a prototyping tool like Justinmind, for example, you can create the prototype from scratch, publish it in your Justinmind online account and then seamlessly test it with remote users.

4 – Establish the tasks

This is where well-defined objectives really show their worth: defining the tasks to be carried out during tests will be a lot easier with clear objectives. If your objective is to focus on one particular feature, set specific, closed-ended tasks around that feature; open-ended instructions should be used if you want to test the ‘feeling’ and UX of your app. You might want users to voice their feelings and thoughts out loud as they navigate the test, or to complete some scaled 1-10 questions about the experience after the test.
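Those scaled 1-10 answers are easy to summarize once the sessions are done. As a minimal sketch (the question labels and scores below are invented for illustration, not real test data), a few lines of Python give you the average and spread per question:

```python
from statistics import mean, stdev

# Hypothetical post-test ratings (1-10) from five participants.
ratings = {
    "Ease of checkout": [8, 7, 9, 6, 8],
    "Clarity of navigation": [4, 5, 3, 6, 4],
}

# A low mean flags a problem area; a high stdev flags disagreement
# between participants, which is worth digging into on its own.
for question, scores in ratings.items():
    print(f"{question}: mean={mean(scores):.1f}, stdev={stdev(scores):.1f}")
```

A large spread on one question often means two groups of users experienced the same screen very differently – worth cross-checking against the session recordings.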

5 – Set up the space

If you’re not doing remote testing with Loop11, set up the user testing zone. As Raluca Budiu of the Nielsen Norman Group points out, when testing mobile devices you need to pay special attention to “equipment, the right testing room, and the right users.” How many cameras do you need and where will they go? Will you need webcams, document cams, device cams, or perhaps a combination of all three? Loop11 captures screen-recording footage, which can be combined with video recordings to track finger movements. You can watch the footage in real time, or replay it later for those in-depth analysis moments.

And of course, you need to have good mobile signal in the room.

6 – Do a rehearsal and revise the test

All too often this step gets left out, but doing a rehearsal of the test will reduce on-the-day stresses and slip-ups. You can do a dry run with a reduced number of participants (even one will do), and across all the different device types you plan to test. Don’t be afraid to make changes to the test based on this rehearsal. Loop11 allows you to customize tests based on both design and content.

After the rehearsal, you’re ready to launch your usability test into the real world. Then comes the fun part…

7 – Get analytical

OK, the results are in. But what about the insights? Gleaning useful information from the data gathered is actually one of the most challenging parts of any usability test, as is acting upon that data. First, look for trends and organize the findings into categories – findings per screen or per task, for example – and according to severity. Did a user stumble slightly during a task, or were they completely clueless about how to proceed?
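That per-task, per-severity grouping can start as a simple tally. Here is a minimal sketch in Python – the task names and severity labels are invented for illustration, and a real study would pull these rows from your session notes or test-tool export:

```python
from collections import Counter

# Hypothetical usability findings: (task, severity) pairs logged during sessions.
findings = [
    ("sign-up", "minor"),      # slight hesitation over a field label
    ("sign-up", "critical"),   # could not locate the submit button
    ("checkout", "critical"),
    ("checkout", "critical"),
    ("search", "minor"),
]

# Tally total issues per task, and critical issues separately,
# so the worst-performing flows surface first.
per_task = Counter(task for task, _ in findings)
critical = Counter(task for task, sev in findings if sev == "critical")

for task, total in per_task.most_common():
    print(f"{task}: {total} issues ({critical[task]} critical)")
```

Sorting by critical count rather than total count is a reasonable variation when you need to prioritize fixes before launch.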

The Nielsen Norman Group has some great advice on how to present the test findings to maximize actionable follow-up.

Pro tips
  • Be ruthless with the scope of your study. Focus on 2-3 essential insights, cut back all the deadwood and non-essentials: a tight study will deliver better results than an unfocused study.

  • Select users who are familiar with their devices.

  • Choose the right prototype for each test.

  • Go automated! Automated testing means you can test more users, more consistently, and with less effort.
