To get insightful and actionable feedback from your user testing, you first need to be clear on your objectives. In other words, have you or your team already identified what you want to learn from the exercise?
Before you get stuck straight into writing your test script, ask yourself: what are the most important activities visitors must be able to accomplish on your site?
For example, EverWear visitors typically set out to accomplish a number of activities including:
With these activities in mind, your tasks should measure how effectively the site’s current design and functionality helps users complete them. A common failing is trying to cover too many activities in a single project.
Whilst this often stems from a desire to save time and money, keeping to two to four tasks per test helps raise completion rates and keeps insights focussed and actionable.
In short, you are usually better off running multiple smaller tests than one large study. Depending on your project objectives, you can set tasks either to test the end-to-end customer experience or to delve deeply into a key area of focus. In the case of EverWear, we’d run three separate tests.
Each of these tests focusses on a separate participant motivation: the first on finding information, the second on purchasing, and the third on specific functionality we’re interested in. By keeping this level of focus and separation, the tests should produce results that are clear and actionable.
Are users routinely leaving your website at a specific point? Do you get a lot of support queries about the same functionality or information? If so, then your testing should initially be geared around discovering what users are finding difficult, and then subsequently testing alternate designs or functionality based upon the learnings from the initial tests.
If you’re still in the early development phase, the information you’re after might be more general. For example, what design elements are enhancing or inhibiting product discovery?
Alternatively, you could be forming benchmarks of your platform’s performance against behavioural metrics and user opinions for future comparison. Simple examples are the Net Promoter Score (NPS) and System Usability Scale (SUS) questions.
When tracked over time, these scores help indicate the health of a product or brand based on users’ experience.
Without a clear baseline formed from benchmarking studies, it can sometimes be unclear how to interpret future results and measure improvement.
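Both of these benchmark scores are computed with well-established formulas: NPS subtracts the percentage of detractors (0–6) from the percentage of promoters (9–10), while SUS rescales ten 1–5 responses onto a 0–100 range. A minimal Python sketch (the sample responses are illustrative, not real study data):

```python
def nps(scores):
    """Net Promoter Score from 0-10 responses: % promoters (9-10)
    minus % detractors (0-6), giving a value from -100 to +100."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def sus(responses):
    """System Usability Scale from ten 1-5 responses.
    Odd-numbered items (agreement is positive): score - 1.
    Even-numbered items (agreement is negative): 5 - score.
    The sum (0-40) is multiplied by 2.5 to give a 0-100 score."""
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Hypothetical example: five NPS responses, one participant's SUS answers
print(nps([10, 9, 8, 6, 10]))               # 40.0
print(sus([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # 85.0
```

Running either metric on every study with the same question wording is what makes later comparisons meaningful.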
Let’s combine what we’ve learnt so far and apply it to our example website.
At EverWear, we found that we were not only receiving a lot of queries about product sizing but also suffering a high bounce rate on product pages and an increasing rate of product returns. We therefore prioritised testing around product sizing and selection. First, we formed an internal hypothesis: the product display and sizing information was not adequately meeting user needs.
At this point we decided to launch a two step testing process aimed at addressing our product sizing problems:
With a clear goal in mind, we can empirically assess how effectively user needs are being met and lock in an iterative program of design and testing to help ensure continual improvement.
Frequent experimentation and testing go hand in hand with successful product design. The right testing methods depend on the stage of your product’s development and your testing objectives.
Rapid online testing can be conducted at each stage of the development lifecycle and complements an agile workflow. For the purposes of this training course, we will focus on how testing can guide design and development during the most critical phases: prototype development and testing, information architecture development, and live site user testing. We will explain more about each of these in the chapters to follow.