Now that you have created your prototype, it’s time to devise an effective test plan.
Earlier we figured out what the project objectives were; now we need to formulate task scenarios that are appropriate for testing. A task scenario is the action that you ask the participant to take on the product you are testing. When formulating tasks, try to mimic the real world as much as possible.
To complement this, it’s critical that your target users are recruited for the sessions and that each task scenario:

- reflects an activity that is relevant to your target audience
- asks the participant to perform an action, with a clear end goal in mind
- avoids hidden clues or signposts which unwittingly steer participants towards success
Ok, let’s look at each of those points in greater detail.
It’s super important that you set tasks which reflect activities that are appropriate for your target audience. The goal here is to have participants feel they own the task and are setting about doing something they would normally do. If a participant is asked to perform a task which is not relevant to them, it’s likely that either a better task or wording could have been chosen, or they aren’t in fact part of the target audience and should not be participating in the test.
Let’s look at a specific example.
The user goal is: Find information about the sustainable manufacturing process.
A poorly written task: As an environmental physicist, you want to check the anatomical compound structure of fibres used in EverWear garments.
A better written task: You are interested in the natural environment and would like to discover more about the EverWear manufacturing processes. See if you can find further information about this.
The task should be designed to allow for the participant’s natural interaction with your product, revealing realistic points of frustration or ease of use. To achieve this, it’s best to outline an instructive task with an end goal in mind: ask participants to perform an action rather than asking how they would go about it.
Let’s look at another example.
The user goal is: Find a suitable item of clothing and make a purchase.
A poorly written task: You would like to purchase a new item of clothing. Starting on the homepage, where would you click next?
A better written task: You would like to purchase a sweatshirt, with a hood, to wear when exercising. See if you can find a suitable garment and add it to your shopping cart.
When writing tasks, be careful not to provide participants with hidden clues or signposts which unwittingly assist them in succeeding at the task. Common instances of this are tasks which refer to navigation or link labels, steering participants in a certain direction. It’s just as important, however, not to over-correct by using alternative wording which is not natural or intuitive to your target audience. It’s about finding the balance between providing all the information needed to complete the task and doing so without any form of explicit direction.
An example of this would be:
The user goal is: Locate an on-sale item within the ‘Specials’ content area.
A poorly written task: See if there are any items on sale within the ‘Specials’ category which you would consider purchasing.
A better written task: Are you able to find any T-shirts that have been discounted to less than $20 which you would also consider purchasing?
Well-written survey questions are key to gathering reliable responses with which to achieve your research goals. Also, when asked immediately after a task, they can offer additional insights not readily available in task completion data.
The first choice you need to make is what type of question to use at different points in your user test. Loop11 offers open-ended questions, which ask respondents to write comments, and closed-ended questions, which give respondents a fixed set of options to choose from. Closed-ended formats include simple single-answer (yes/no) options, multiple choice, rating scales, Net Promoter Score (NPS) and the System Usability Scale (SUS).
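Both NPS and SUS reduce to simple arithmetic, which is worth understanding when reading your results. As a quick illustration, here is a sketch of the standard published formulas in Python; this is not Loop11’s internal implementation, and the sample responses are made up:

```python
# Standard scoring formulas for NPS and SUS; the sample responses below
# are hypothetical, purely for illustration.

def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6),
    on a 0-10 'how likely are you to recommend' scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def sus(responses):
    """System Usability Scale: ten items rated 1-5. Odd-numbered items
    contribute (score - 1), even-numbered items (5 - score); the sum is
    multiplied by 2.5 to give a 0-100 score."""
    total = sum((s - 1) if i % 2 == 1 else (5 - s)
                for i, s in enumerate(responses, start=1))
    return total * 2.5

print(nps([10, 9, 8, 6, 10, 7, 3]))         # ~14.3
print(sus([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # 85.0
```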
Your choice of question type and format will depend on what information you are trying to elicit from users.
In terms of formulating the wording for your questions, we suggest you consider the following:

- use plain language your audience will understand, free from jargon
- avoid leading or loaded wording that nudges participants towards a particular answer
- provide enough context for the question to stand on its own
Revisiting an earlier example of a ‘better written task’, all three of these criteria are met:
‘You are interested in the natural environment and would like to discover more about the EverWear manufacturing processes. See if you can find further information about this.’
We are often asked how many tasks and questions are optimal for testing, or what an ideal session duration is. In reality, a number of factors influence the ideal session length. Some key influencing factors include:
Generally speaking, three or four tasks per test is plenty, especially if each of your tasks has two or three trailing questions.
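As a back-of-the-envelope check on why that guideline works (the per-task and per-question timings below are assumptions, not Loop11 benchmarks), four tasks with three trailing questions each keeps an unmoderated session under twenty minutes:

```python
# Rough session-length estimate; the timings are assumed for illustration.
tasks = 4
questions_per_task = 3
minutes_per_task = 3.0        # assumed average time on task
minutes_per_question = 0.5    # assumed average time per survey question

total = tasks * (minutes_per_task + questions_per_task * minutes_per_question)
print(f"Estimated session length: {total:.0f} minutes")  # 18 minutes
```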
Once you have completed a user test you of course need to analyse the findings. We’ll start by reviewing the dashboard of our completed prototype test within Loop11. The average task completion rates reported here aggregate responses across all of the tasks set up for this project, in this case three in total.
This breakdown reveals that an average of 86% of participants successfully completed the set tasks, whilst 8% failed and 6% abandoned. These scores are determined by two factors: the success page(s) nominated when each task was set up, and the participant’s own action of flagging the task as complete or abandoning it.
Additionally, an overview is available for each of the three individual tasks, along with averages for ‘pageviews’ and ‘time spent’.
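To make the aggregation concrete, here is a minimal sketch of how per-task outcomes roll up into dashboard averages. The counts are hypothetical and this is not Loop11’s export format; it simply mirrors the arithmetic behind the reported figures:

```python
# Hypothetical outcome counts for the three tasks in this project;
# Loop11 computes these for you, this just illustrates the roll-up.
tasks = [
    {"success": 30, "fail": 2, "abandon": 2},
    {"success": 28, "fail": 4, "abandon": 2},
    {"success": 30, "fail": 2, "abandon": 3},
]

def rate(task, outcome):
    return task[outcome] / sum(task.values())

for outcome in ("success", "fail", "abandon"):
    average = sum(rate(t, outcome) for t in tasks) / len(tasks)
    print(f"Average {outcome} rate: {average:.0%}")
```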
Moving deeper into the report, we can drill into a detailed breakdown of user behaviour within tasks, including page views, clickstreams, heatmaps, user videos and individual participant information.
Together, these data points paint a picture of how users are interacting with our prototype. Using the quantitative data, such as task success rates and clickstreams, we can start to see where problems exist; then, using the videos, we can piece together why people are having those problems.
Looking at our third task, on the surface there is a reasonable success rate of 88%. However, when we dive deeper into the reporting we can see that 33% of participants are misled and go to the wrong page. It’s comforting that most of them eventually find the correct page, but it shows that our information architecture could be improved to make the process easier for users.
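As a rough illustration of how such a detour shows up in clickstream data (the page paths and sessions below are hypothetical), a misled participant is simply one whose path passes through an unrelated page before reaching the success page:

```python
# Hypothetical clickstreams for one task; real paths would come from
# Loop11's clickstream reporting.
SUCCESS_PAGE = "/specials"
START_PAGE = "/home"

clickstreams = [
    ["/home", "/specials"],
    ["/home", "/specials"],
    ["/home", "/mens", "/specials"],  # misled: detour via /mens
]

def was_misled(stream):
    """True if the participant hit any page other than the start page
    before reaching the success page."""
    before_success = stream[:stream.index(SUCCESS_PAGE)]
    return any(page != START_PAGE for page in before_success)

misled = sum(was_misled(s) for s in clickstreams)
print(f"{misled / len(clickstreams):.0%} of participants were misled")  # 33%
```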
With these types of insights we can either amend the prototype or roll the changes into the live website. Because of what we’ve discovered in this round of testing, in the next chapter we are going to revisit our information architecture to see where we can improve.