Prototype Usability Testing


Now that you have created your prototype, it’s time to devise an effective test plan.

Setting appropriate tasks

Earlier we figured out what the project objectives were; now we need to formulate task scenarios that are appropriate for testing. A task scenario is the action that you ask the participant to take on the product you are testing. When formulating tasks, try to mimic the real world as much as possible.

To complement this, it’s critical that your target users are recruited for the sessions and that each task scenario:

  1. is realistic and typical for how people actually use the system;
  2. is actionable and encourages users to interact with the interface; and
  3. doesn’t give away the answer.

Ok, let’s look at each of those points in greater detail.

Make the task realistic

It’s important that you set tasks which reflect activities that are appropriate for your target audience. The goal is to have participants feel they own the task and are setting about doing something they would normally do. If a participant is asked to perform a task which is not relevant to them, it is likely that either a better task or wording could have been chosen, or they aren’t in fact the target audience and should not be participating in the test.

Let’s look at a specific example.

The user goal is: to explore whether participants could find information about the sustainable manufacturing process
A poorly written task: As an environmental physicist, you want to check the anatomical compound structure of fibres used in EverWear garments.
A better written task: You are interested in the natural environment and would like to discover more about the EverWear manufacturing processes. See if you can find further information about this.

Make the task actionable

The task should be designed to allow for the participant’s natural interaction with your product, revealing realistic points of frustration or ease of use. To achieve this, outline an instructive task with an end goal in mind, and ask participants to perform an action rather than ask how they would go about the task.

Let’s look at another example.

The user goal is: Find a suitable item of clothing and make a purchase
A poorly written task: You would like to purchase a new item of clothing, starting on the homepage, where would you click next?
A better written task: You would like to purchase a sweatshirt, with a hood, to wear when exercising. See if you can find a suitable garment and add it to your shopping cart.

Don’t give away the answer

When writing tasks, be careful you are not providing participants with hidden clues or signposts which unwittingly assist them in succeeding at the task. Common instances of this are tasks which refer to navigation or link labels that steer participants in a certain direction. It’s just as important to ensure you don’t overcorrect by using alternatives which are not natural or intuitive to your target audience. It’s about finding the balance between providing all the information needed to complete the task and doing so without any form of explicit direction.

An example of this would be:

The user goal is: Locate an on-sale item within the ‘Specials’ content area.
A poorly written task: See if there are any items on sale within the ‘Specials’ category which you would consider purchasing.
A better written task: Are you able to find any T-shirts that have been discounted to less than $20 which you would also consider purchasing?

Question selection and writing

Well-written survey questions are key to gathering reliable responses with which to achieve your research goals. When asked immediately after a task, they can also offer additional insights not readily available in task completion data.

The first choice you need to make is what type of question to use at different points in your user test. Loop11 offers both open-ended questions that ask respondents to write comments and closed-ended questions that give respondents a fixed set of options to choose from. These closed-ended response choices can be simple single-answer (yes/no) options, multiple choice options, rating scales, Net Promoter Score (NPS) and System Usability Scale (SUS).

Your choice of question type and format will depend on what information you are trying to elicit from users.
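
For reference, two of these closed-ended formats have standard, published scoring rules, and it helps to know roughly how the numbers you see in reports are derived. The short sketch below (plain Python, written for this course rather than taken from Loop11) shows the conventional NPS and SUS calculations; the sample response lists are made up.

```python
# A minimal sketch (not Loop11's implementation) of how two common
# closed-ended question types are conventionally scored.

def nps(scores):
    """Net Promoter Score from 0-10 ratings: % promoters minus % detractors."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

def sus(responses):
    """System Usability Scale score from one participant's ten 1-5 ratings.

    Odd-numbered items contribute (response - 1), even-numbered items
    contribute (5 - response); the total is scaled to a 0-100 range.
    """
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

print(nps([10, 9, 8, 6, 10, 7, 3]))          # -> 14
print(sus([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))   # -> 85.0
```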

In terms of formulating the wording for your questions, we suggest you consider the following:

  1. Ensure the language is relatable: It is important to speak with people at their level, which is not necessarily the same level of understanding found within your organisation. Avoid industry jargon or technical concepts wherever possible. Where this isn’t possible, make sure an adequate explanation of the concept is provided.
  2. Ask one question at a time: If you try to cover more than one question at a time, participants will find it difficult to respond and the question will be interpreted inconsistently.
  3. Avoid answer bias: Be careful not to word a question in such a way that it directs users to a particular response, or encourages them to be more favourable or critical than they would normally be. Try to keep the question balanced to derive the participant’s ‘true’ attitudes instead of what they think you want to hear.

Revisiting an earlier example of a ‘better written task’, all three of these criteria are met:

‘You are interested in the natural environment and would like to discover more about the EverWear manufacturing processes. See if you can find further information about this.’

Determining optimal session length

We are often asked how many tasks and questions are optimal for testing, or what an optimal session duration is. In reality, a number of factors influence the ideal session length. Some key influencing factors include:

  • How invested participants are in the platform being tested
    If you know the testing is being performed by a passionate user base, it’s safe to say they’ll be more engaged and willing to spend longer with your tests.
  • The cognitive load of tasks to perform, questions asked and stimulus involved
    What we mean here is that if each task or question requires a lot of action or thought from the participant, they are more likely to tire quickly.
  • Whether participants are incentivised adequately for their involvement
    It’s commonplace for a company to either pay participants or offer in-kind incentives to those who complete their test. This isn’t rocket science: if there is something in it for them, they’ll persevere through longer studies.
  • How participants personally relate to the tasks and questions posed
    Even if your participants love your product and want to help, you can still alienate them to the point they drop out if you ask questions which clearly don’t relate to them. For example, if your test relates to women’s clothing on an otherwise unisex online store, it would pay to ensure that your participant recruitment process filters out any male users who would find it difficult to answer your questions.
  • Finally, how broad or narrow the research objectives are for the study
    We discourage clients from trying to cram too many elements into a single study. Rather, small tests run frequently will often yield better completion and engagement results than longer, more involved tests.

Generally speaking, three or four tasks per test is plenty, especially if each of your tasks has two or three trailing questions.

Analysing findings

Once you have completed a user test, you of course need to analyse the findings. We’ll start by reviewing the dashboard of our completed prototype test within Loop11. The average task completion rates reported here aggregate responses for all tasks set up for this project, in this case three in total.

This breakdown reveals that an average of 86% of participants successfully completed the set tasks, whilst 8% failed and 6% abandoned. These scores are determined by two factors:

  1. Whether ‘task complete’ or ‘task abandon’ was clicked during the task, and then,
  2. If ‘task complete’ was clicked, the page the user was on at the time of completion, which is verified against any success URLs defined for the task (illustrated in the sketch below).

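To make that logic concrete, here is a minimal sketch of how a single response could be classified and the results rolled up into completed, failed and abandoned percentages. The field names (button_clicked, end_url) and the sample data are hypothetical, invented for illustration; this is not Loop11’s API or implementation.

```python
# Hypothetical sketch of the completion logic described above.
# Field names and sample data are illustrative, not Loop11's API.

def task_outcome(response, success_urls):
    """Classify one participant's attempt at a task."""
    if response["button_clicked"] == "task abandon":
        return "abandoned"
    # 'task complete' was clicked: verify the page they finished on
    # against the success URLs defined for the task.
    if response["end_url"] in success_urls:
        return "completed"
    return "failed"

def aggregate(responses, success_urls):
    """Percentage of completed / failed / abandoned attempts for a task."""
    outcomes = [task_outcome(r, success_urls) for r in responses]
    return {o: round(100 * outcomes.count(o) / len(outcomes))
            for o in ("completed", "failed", "abandoned")}

success_urls = {"https://example.com/checkout/confirmation"}
responses = [
    {"button_clicked": "task complete", "end_url": "https://example.com/checkout/confirmation"},
    {"button_clicked": "task complete", "end_url": "https://example.com/cart"},
    {"button_clicked": "task abandon",  "end_url": "https://example.com/home"},
]
print(aggregate(responses, success_urls))
# -> {'completed': 33, 'failed': 33, 'abandoned': 33}
```
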
Additionally, an overview is available for each of the three individual tasks, along with averages for ‘pageviews’ and ‘time spent’.

Moving deeper into the report, we can view a detailed breakdown of user behaviour within tasks, including page views, clickstreams, heatmaps, user videos and individual participant information.

Each of these data points combines to paint a picture of how users are interacting with our prototype. Using the quantitative data, such as task success rates and clickstreams, we can start to see where problems exist; then, using the videos, we can start to piece together why people are having them.
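
As a simple illustration of that quantitative step, the sketch below shows one way clickstream data could be mined for the page where participants first stray from an expected path. The page paths, expected path and data structure are invented for the example and are not a Loop11 export format.

```python
# Illustrative only: clickstreams represented as lists of page paths.
# The expected path and sample data are invented for the example.

from collections import Counter

expected_path = ["/home", "/mens", "/mens/hoodies", "/product/123", "/cart"]

clickstreams = [
    ["/home", "/mens", "/mens/hoodies", "/product/123", "/cart"],
    ["/home", "/sale", "/mens", "/mens/hoodies", "/product/123", "/cart"],
    ["/home", "/womens", "/womens/tops"],
]

# Count the first off-path page each participant visited, which points
# at the step in the journey where people are being led astray.
first_detours = Counter()
for stream in clickstreams:
    detour = next((page for page in stream if page not in expected_path), None)
    if detour:
        first_detours[detour] += 1

print(first_detours.most_common())  # -> [('/sale', 1), ('/womens', 1)]
```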

Looking at our third task, on the surface there is a reasonable success rate of 88%. However, when we dive deeper into the reporting we can see that 33% of participants are misled and go to the wrong page. It’s comforting to know most of them eventually find the correct page, but it shows that our architecture could be improved to make the process easier for users.

With these types of insights we can either amend the prototype or roll the changes into the live website. Because of what we’ve discovered in this round of testing, in the next chapter we are going to revisit our information architecture to see where we can improve.
