“What a waste of time,” the researcher says, throwing their hands in the air.
This scene is more common than we, as UX professionals, would like to admit. One of the biggest frustrations for user researchers is participants either not turning up or dropping out midway through a remote user testing session.
At Loop11, we often get support tickets from customers desperate to improve their completion rates and looking for tips on making their studies as efficient as possible.
In response, we decided to dig into the data and see whether any commonalities were consistently associated with high-performing tests and low dropout rates.
For this task we pulled the 1,000 most recent user tests in which at least 10 participants completed the study. We then cut the data every way we could think of to draw out insights that will help you run better user tests.
As a point of reference, across these 1,000 usability tests the average completion rate was 59% and the median was 63%. The average number of participants in a study was 103 and the median was 31. Last but not least, the average time a participant took to complete a user test was 23 minutes.
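If you want to benchmark your own tests against these numbers, a few lines of analysis are enough. The sketch below is purely illustrative, assuming a CSV export with one row per test; the file name and column names are placeholders, not a Loop11 export format.

```python
# Illustrative sketch: compute summary statistics like those quoted above
# from a hypothetical CSV export with one row per user test.
# "user_tests_export.csv" and its column names are assumptions, not a real
# Loop11 file format.
import pandas as pd

tests = pd.read_csv("user_tests_export.csv")

summary = {
    "avg_completion_rate": tests["completion_rate"].mean(),       # e.g. ~59%
    "median_completion_rate": tests["completion_rate"].median(),  # e.g. ~63%
    "avg_participants": tests["participants"].mean(),             # e.g. ~103
    "median_participants": tests["participants"].median(),        # e.g. ~31
    "avg_duration_minutes": tests["duration_minutes"].mean(),     # e.g. ~23
}
print(summary)
```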
So, without further ado, here are the top three tips for ensuring that the majority of participants who begin your test actually finish it, giving you those valuable insights.
Time! Time! Time!
The data point with the strongest correlation was study duration. If you kept your study under ten minutes, you were projected to have a completion rate of about 60%. If you could keep it under seven minutes, you were almost guaranteed a completion rate above 80%.
Tasks vs Questions
The first and most interesting insight we pulled out was that there was no meaningful correlation between the total number of tasks and questions and completion rates. Rather, getting the ratio of tasks to questions right was what mattered.
The ideal ratio was 1 task to 1.75 questions; hit this and you stood a great chance of achieving a completion rate above 70%.
The median counts in our data set were 4 tasks and 7 questions per test, which sits right on that 1:1.75 ratio. The best-performing studies (above 80% completion) had 3–4 tasks and 5–6 questions.
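If you want a quick way to apply that ratio when planning a study, the helper below is a hypothetical sketch (not part of any Loop11 tooling) that simply rounds the 1:1.75 task-to-question ratio reported above.

```python
# Hypothetical helper: suggest a question count for a given number of tasks,
# based on the 1 : 1.75 task-to-question ratio described above.
def suggested_question_count(task_count: int) -> int:
    return round(task_count * 1.75)

for tasks in (3, 4, 5):
    print(f"{tasks} tasks -> about {suggested_question_count(tasks)} questions")
# 4 tasks -> about 7 questions, which matches the median study in the data set.
```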
The other interesting finding was that higher question counts had a negative impact on completion rates, while higher task counts did not. An example of this might be a repeating question matrix or a set of SUS questions asked after each task. This pattern can exasperate participants and should be carefully considered before being added to a study design.
The Trade-Off With 3rd-Party Recruitment Panels
Studies that recruited through the researchers’ own channels (social media, email lists, website intercepts, etc.) were significantly more likely to have higher completion rates than studies that recruited through panels.
It’s worth drawing a distinction between types of recruitment panels at this point. Some panels have thousands, if not millions, of members and make their money on scale; we’ll call them ‘Bulk Panels’. Their participants are generally paid less than $10 per hour, often a lot less, and have little reason to complete a challenging study. In comparison, other panels can target hard-to-reach professionals and often pay participants between $50 and $200 per hour. We’ll call these ‘Targeted Panels’.
The Bulk Panel participants showed a lack of willingness to use an app or browser extension during the study. They were also more likely to drop out at a stage where they were asked to grant permission to record their screen or audio during the study.
For Targeted Panels, and participants recruited via a researcher’s own user list, these considerations were not a problem. This highlights that both trust and remuneration are key factors in motivating a participant to complete a study.
So how do you increase the chances of having a great completion rate in your user testing?
- Keep your study under 10 minutes in length. The duration of a study is often dictated by the complexity of the tasks. If you’re on a tight timeline and can’t afford any dropouts, aim for under seven minutes, where completion rates above 80% are almost guaranteed.
- Have fewer tasks than questions: ideally 3 or 4 tasks and no more than 7 questions. This point is closely related to the first. If you start needing 6 or more tasks and 10 or more questions, it may be worth splitting the study into two separate UX studies.
- If you’re going to pay participants, don’t skimp. Pay for quality and expect to get what you paid for. Remember, these are real people and you’re asking them to spend time on your study; it has to be worth their while. Secondly, if you have the option, try to recruit through your own channels. Unless you are Trump-like in your approval ratings, it is most likely in your users’ interests to help improve your product, so not only will they participate, they’ll also share valuable truths that only come from your target audience.