
Top 10 Questions When Recruiting Participants For User Tests

7 min read

Written by Ben Newton, 4 June 2018

Some of the most common questions we receive at Loop11 from customers revolve around participant recruitment. Since Loop11 allows you to use your own lists, integrate with 3rd party panels, or recruit via pop-up intercepts on your website, there are a lot of choices.

Freedom is great but can sometimes lead to indecision. We’ve decided to put together the top 10 questions we receive relating to recruitment of participants and share our take on the answers.

For the purpose of this article, the term ‘panel’ refers to a third-party service anyone can use to recruit participants for their study. Generally speaking, you provide your demographic requirements and the panel company provides you with a per-participant quote.

1. Should I use a panel or my own list?

This is probably the most common consideration for a user researcher when recruiting participants for their studies. If your product is new or not yet launched, you probably don’t have a list of participants, so the decision is made for you.

The vast majority of Loop11 customers use their own lists and a major reason we see for this is that they are optimizing an existing product and it helps that the users have some baseline knowledge.

You might want to use a panel over your own list when you’re testing an onboarding flow and want participants who are not familiar with your product. Another example is benchmarking performance or A/B testing, where it can be valuable to have participants who are all at the same level of experience. Recruiting from your own list can sometimes skew towards power users, which may give unrealistic benchmarking results.

2. If I use my own customers, should they be limited to active users?

Sometimes your most valuable insights come from users who are no longer active in your product. This may be because they are disgruntled, didn’t find what they were looking for, or completed their task and no longer need your product. Either way, each of these users can provide valuable insights and whether or not to use them depends on the context of your study.

Are you trying to discover pain points? Are you testing a new UI or product flow? Do you have a new feature in BETA? All of these necessitate a different kind of test and also a different kind of participant demographic.

3. How many participants is enough?

This is one of those questions that can elicit an annoying answer like ‘it depends’. But it’s true, it does depend. Are you in the discovery phase, learning about your users and their behavior? Or are you in the validation phase, having already done a bunch of research and design? Are you running moderated testing and planning to consume video and audio of participants? Or are you looking at collecting large sets of data based on task success metrics, click streams, heatmaps and survey results?

All of these lend themselves to certain recruitment sizes. Our recommendation generally is as follows:

  • If you are aiming at fast, discovery-based insights then running 5–10 participants through a study is reasonable. Watching the videos of this many participants is manageable and you’ll often uncover a lot of what you need.
  • Are you wanting to obtain statistically significant data from your study? If so, then ideally you’ll be running 150 or more participants through your study. You may be able to get away with as few as 50 participants; however, the lower the numbers, the greater the chance of unreliable data (see the sketch below).
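
To make the ‘unreliable data’ point concrete, here is a minimal TypeScript sketch (plain JavaScript once the type annotation is stripped) showing how the 95% margin of error on a task completion rate shrinks as the sample grows. The 70% completion rate is made up for the example.

    // Rough illustration: the 95% margin of error on a task completion rate
    // shrinks as the sample grows. The 70% completion rate is made up.
    function marginOfError(successRate: number, n: number): number {
      // Normal approximation to the binomial; fine for a ballpark figure.
      return 1.96 * Math.sqrt((successRate * (1 - successRate)) / n);
    }

    const observedRate = 0.7; // e.g. 70% of participants completed the task

    for (const n of [5, 50, 150]) {
      const moe = marginOfError(observedRate, n);
      console.log(`n=${n}: 70% ± ${(moe * 100).toFixed(1)} points`);
    }
    // n=5:   70% ± 40.2 points (little more than a directional signal)
    // n=50:  70% ± 12.7 points
    // n=150: 70% ± 7.3 points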

It’s unfortunate to say, but often the biggest determining factor here is the trade-off between your budget and the cost per participant. You can only run as many participants as you can afford.

4. Should I offer an incentive?

In almost every scenario the answer to this is ‘yes’. The better question might be: “If I were to offer an incentive, what should it be?”

The answer comes down to who your participants are. If you are recruiting from your own list then you should look at how you can leverage discounts within your own product suite. Often you can offer something like a $50 voucher that really only costs you $25, which is a big win.

If you are recruiting hard-to-reach people, like tech executives for example, then you’ll need to offer them something worth more than their hourly rate. Ultimately the best way to think about this is to ask: “What would motivate my participant to leave what they are doing and spend time participating in my study?”

5. Are all panels the same?

No, they are not. Some panels promote having millions of participants from around the world waiting to join your study. In these instances the participants are probably ‘professional testers’: low-income earners looking to make some extra money. This can be fine, but it depends on your requirements for the study. If your product is geared at executives and you need them to think aloud in an unmoderated study, then that type of panel will probably give you poor results.

On the flip side, there are many panels that don’t have millions of participants, but they do have tools to recruit hard-to-reach participants. These panels may cost you up to 50 times more per participant, but for targeted studies the quality of the results can make that worthwhile.

6. Should I recruit via an intercept pop-up?

The first question to answer here is: can you get a snippet of JavaScript into your website? This is the first hurdle that needs to be crossed if you want to have an intercept pop-up invite on your website. Adding JavaScript is easy and can often be done via tools like Google Tag Manager (we wrote an article to help with this), but we realise that many UX professionals are blocked by IT when attempting to do so.
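
To give a sense of what such a snippet might look like, here is a minimal sketch of an intercept invite in TypeScript (plain JavaScript once the type annotation is stripped). The study URL, cookie name, target page and delay are all placeholders, and Loop11 provides its own intercept snippet, so treat this purely as an illustration of the idea.

    // Minimal intercept invite, e.g. pasted into a Custom HTML tag in Google
    // Tag Manager. All names and values below are placeholders.
    const STUDY_URL = 'https://example.com/your-study-invite'; // hypothetical
    const COOKIE = 'ux_invite_seen';
    const DELAY_MS = 10000; // wait 10 seconds before interrupting the visitor

    function showInvite(): void {
      if (document.cookie.includes(`${COOKIE}=1`)) return; // already invited
      document.cookie = `${COOKIE}=1; path=/; max-age=${60 * 60 * 24 * 30}`;

      const banner = document.createElement('div');
      banner.style.cssText =
        'position:fixed;bottom:1rem;right:1rem;padding:1rem;background:#fff;' +
        'border:1px solid #ccc;box-shadow:0 2px 8px rgba(0,0,0,.2);z-index:9999;';
      banner.innerHTML =
        '<p>Help us improve – would you take part in a short study?</p>' +
        `<a href="${STUDY_URL}" target="_blank" rel="noopener">Start the study</a> ` +
        '<button type="button">No thanks</button>';
      banner.querySelector('button')?.addEventListener('click', () => banner.remove());
      document.body.appendChild(banner);
    }

    // Only interrupt visitors on a specific page, and only after they've settled in.
    if (location.pathname === '/pricing') {
      window.setTimeout(showInvite, DELAY_MS);
    }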

If you’ve been given the green light to add the JavaScript then the next questions are simple:

  • Do I want to interrupt a visitor while they are navigating my product?
  • If so, what am I hoping to gain from an active visitor as opposed to an email list member?
  • Where on my product/website should the pop-up appear? All pages, one page, or after a specific action is taken by the visitor?

Depending on your traffic and the mindset of your visitors, intercept recruitment can take a bit longer than other methods but can yield great results.

7. Should I recruit via social media?

This shares some commonalities with intercept recruitment via your website or app. It depends greatly on the type of study you are doing. Also, if you do recruit via social media, will you be:

  • broadcasting a public post, or
  • messaging a closed community, or
  • messaging individual fans, or
  • using social media advertising features?

Depending on your audience, as well as the potential for trolls, this can be a great way to recruit quickly and at low cost, but it needs a considered strategy. Of all the recruitment options, this may be the one where it’s most important to use a screener before the study begins.

8. Should I expect my participants to be trained?

Never expect a participant to possess any skills relevant to performing in a UX study unless you’ve specifically recruited for them. The counter to this is that some panels do promote that their participants have received some kind of training.

As a general rule, unless you are sure that your participants have received relevant training, ensure that your introductory messaging clearly spells out what you want them to do. You can even go so far as stipulating that payment won’t be given if certain tasks (such as thinking aloud) are not executed properly.

9. How do I stop cheaters and speeders?

People who speed through tests or otherwise cheat are an unfortunate element of user testing. Although it’s generally a small percentage, if your participant numbers are large enough you’ll come across some who aren’t doing their best.

In Loop11 you can filter out participants who are clearly speeding through the study, or conversely are sitting on one screen and not navigating through the product. We’re also working on a feature that can call out cheaters, essentially catching them in the act.
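
If you also export raw results, a quick post-hoc check is to compare each participant’s total time against the median. Here is a minimal TypeScript sketch, assuming a hypothetical export with per-participant durations; the field names and the one-third threshold are assumptions to tune for your study’s length.

    // Flag anyone who finished in under a third of the median time as a
    // likely speeder. Field names and the threshold are assumptions.
    interface Session {
      participantId: string;
      totalSeconds: number; // time from study start to completion
    }

    function flagSpeeders(sessions: Session[], factor = 3): Session[] {
      const times = sessions.map(s => s.totalSeconds).sort((a, b) => a - b);
      const median = times[Math.floor(times.length / 2)];
      return sessions.filter(s => s.totalSeconds < median / factor);
    }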

All that said, most cheaters are cheating because they are trying to earn money from as many tests as possible. If you specify in your recruitment or study introduction that speeding and cheating will result in no payment, then it should reduce the chances of it happening to almost nil.

10. How do I get a representative sample?

Ensuring you get a representative sample of the varying cohorts you wish to participate in your test can be approached from two angles:

A. Place screeners at the front of your study, which take care of the participant distribution for you.

B. Ask questions during the study which identify the participants.

Option B doesn’t really help you obtain a representative sample; rather, it allows you to apply filtering and/or weighting after the study closes.
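
As a small illustration of option B, if you know the cohort split you’re aiming for, post-study weights can be computed so each cohort counts in proportion to its share of the population. The field names and target shares in this TypeScript sketch are hypothetical.

    // Weight each participant so the sample matches known population shares.
    // Field names and the example target shares are hypothetical.
    interface Response {
      participantId: string;
      cohort: string; // e.g. 'new user' or 'power user', asked during the study
    }

    function computeWeights(
      responses: Response[],
      targetShare: Record<string, number> // e.g. { 'new user': 0.6, 'power user': 0.4 }
    ): Map<string, number> {
      const counts = new Map<string, number>();
      for (const r of responses) {
        counts.set(r.cohort, (counts.get(r.cohort) ?? 0) + 1);
      }
      const weights = new Map<string, number>();
      for (const r of responses) {
        const sampleShare = (counts.get(r.cohort) ?? 0) / responses.length;
        weights.set(r.participantId, (targetShare[r.cohort] ?? 0) / sampleShare);
      }
      return weights;
    }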

So as you can see, participant recruitment requires a lot of thought and planning. It’s probably the most important part of your user testing. The saying “junk in, junk out” is particularly applicable here: your study results will rise to the level of your participants. If you don’t prioritize recruitment, you might as well ignore your results.
