Why you should always prototype & user test multiple designs

5,127. That’s the number of prototypes that James Dyson claims to have created trying to perfect his bagless vacuum cleaner. Five thousand, one hundred and twenty-seven. You see, designing stuff is a messy business. Some ideas work out, some don’t. It’s only through a certain amount of trial and error (or in James Dyson’s case, a lot of trial and error) that you end up with a great design. This is why it’s so important to always, always prototype and user test multiple designs.

Why prototype and user test multiple designs?

Here are 10 (yes 10!) good reasons why you can’t afford not to prototype and user test multiple designs.

1.     You (and every other designer on this planet) never get it right first time around

Sorry to burst your bubble but, like James Dyson and his endless vacuum cleaner related tinkering, you never get a design right first time around. It doesn’t happen. It’s about as likely as the Qatar football team holding the FIFA World Cup aloft after winning their home tournament (they’re currently the 109th best team in the world!). By testing multiple designs you can continue that tinkering for that bit longer.

2.     You can test and keep alive alternative design ideas

Invariably lots of your great UX design ideas will have been rejected, and consigned to the great idea graveyard in the sky. Testing multiple prototypes allows some of these ideas to be kept alive that little bit longer. You never know, that idea that you weren’t sure would work might just turn out to be a belter!

3.     You can evaluate one design against another

Rather than just saying, “Yep, it tested well”, you can say, “This design tested better than that one”. If all the designs bomb (which sadly sometimes happens), at least you should know which one sucks the least.

4.     You can spread your bets

Like a punter betting on a number of horses at the Grand National, prototyping and testing multiple designs helps to spread your bets. You don’t have to stake everything on just the one design.

5.     You can demonstrate more designs in context

Clients and users alike have to really see and interact with a design in context before they can truly evaluate it. That’s of course why prototyping is so important. Demonstrating a wireframe or sketch just isn’t the same as creating a living, breathing design (even if it’s all smoke and mirrors). Prototyping and testing multiple designs gives you the opportunity to demonstrate more than just one design in context.

6.     Users (and clients) have something to compare against

As I’ve said before in my introduction to pairwise comparison article, people find it much easier to evaluate something when they have something else to compare it against. Multiple prototypes give users and clients that something else to compare against – namely an alternative design.

7.     You can gather more objective data

Testing multiple designs allows you to gather more objective data, because you can compare feedback across designs rather than evaluating a single design in isolation. This is especially important when you want to demonstrate the case for a particular design, and perhaps try to convince a particularly reluctant stakeholder.

8.     It’s not much more work than prototyping and testing one design

Now it might seem that prototyping and testing two designs is twice the work of one design, but this actually isn’t the case. You should be reusing a lot of the same framework for the prototypes, and often you can cover multiple designs in the same user test (more about this below), so actually the extra workload is not that great.

9.     It’s more fun

OK, so this might not be as important as some of the other points, but prototyping and testing multiple designs is more fun (at least I find it more fun). You get to explore more designs, and if you’re a prototyping junkie like me, you get to create even more funky prototypes to play with.

10. It’s what other design disciplines do

Visual designers, industrial designers, architects and so on will all typically create and trial multiple prototypes, so why should UX designers be any different?

How best to test multiple designs?

Ok, so hopefully I’ve now convinced you that prototyping and testing multiple designs is a really good idea. But what’s the best way to test multiple designs? Well, you basically have two options.

1.  Comparative user testing

2.  Split user testing

Comparative user testing

Comparative user testing involves getting users to use multiple designs (usually just two), and then asking them to compare and contrast them. For example, a user might carry out some tasks with design A, some tasks with design B, and then provide feedback on which he or she found easiest to use. Of course, we all know that it’s what users do, not what they say, which is most important, but this way you can observe users actually using the different designs (what they do), and get their feedback as well (what they say).

Comparative user testing is useful because you get lots of feedback, and users have a point of comparison (i.e. the different designs). However, testing multiple designs invariably makes the sessions a little more complex to run (unmoderated comparative user testing is probably best avoided) and limits the number of tasks you can cover across the different designs. You also have the potential for some bias as participants are being exposed to one design before the other(s). This is why it’s always a good idea to vary the order in which designs are tested across sessions. For example, get half of the users to use design A first, and half to use design B first. It’s also a good idea to try to cover different tasks across the different designs. This allows you to cover more ground, and mitigates the issue of participants being exposed to the same task (and therefore learning an approach) on a prior design.
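The order rotation described above can be sketched in a few lines of Python. This is a minimal illustration only (the participant IDs and function name are my own assumptions, not part of any testing tool), assuming all you want is to balance exposure order across sessions:

```python
import itertools

def counterbalance(participants, designs=("A", "B")):
    """Rotate through every ordering of the designs so that exposure
    order is balanced across sessions (half see A first, half B first)."""
    orders = list(itertools.permutations(designs))
    return {p: orders[i % len(orders)] for i, p in enumerate(participants)}

schedule = counterbalance(["P1", "P2", "P3", "P4"])
# P1 and P3 see design A then B; P2 and P4 see design B then A
```

With more than two designs the same rotation cycles through all the orderings, though in practice (as noted below) you rarely want users comparing more than two or three.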

Split user testing

With split user testing you test different designs separately, so users will only ever see one design. Typically you’ll want to test the same (or at least very similar) tasks across the different designs, usually in the same order. This allows you to make accurate ‘Top Trumps’ style, like-for-like comparisons across designs.

Split user testing allows you to cover more tasks and eliminates the potential for bias, as of course users will only ever see the one design. It also avoids the complication of having to ask users to switch between designs. On the negative side, split user testing doesn’t capture comparative feedback from users and you’ll need to run more tests, as each session will only cover the one design. Because you’ll need to run more tests, split user testing is a particularly good candidate for unmoderated user testing (i.e. self-service user testing), for example using a service such as Loop11.
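For split testing, each participant needs to be assigned to exactly one design, with the group sizes kept balanced. Here’s a minimal sketch of one way to do that (the participant IDs, seed and function name are illustrative assumptions): shuffle once, then deal participants round-robin across the designs.

```python
import random

def split_assign(participants, designs=("A", "B"), seed=1):
    """Randomly assign each participant to exactly one design,
    dealing round-robin after a shuffle so group sizes stay balanced."""
    shuffled = list(participants)
    random.Random(seed).shuffle(shuffled)
    return {p: designs[i % len(designs)] for i, p in enumerate(shuffled)}

groups = split_assign(["P1", "P2", "P3", "P4"])
# two participants end up on each design, regardless of shuffle order
```

A fixed seed makes the assignment reproducible, which is handy when you need to re-run or audit the study plan.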

Any other advice?

Before you run off and start frantically prototyping and user testing, here are some further hints and tips that you’ll hopefully find useful.

Plan to prototype and user test multiple designs from the start

Ok, so I know that I said that prototyping and user testing multiple designs is not that much more work than prototyping and user testing just the one design, but you’ll still need to plan for it. Certainly don’t suddenly spring it on an unsuspecting client or project manager at the last minute (although this can be fun, if anything just to see the look of horror on their face). By planning in advance you can properly think about the best user testing method to use, plan the additional time you’ll need to create multiple prototypes and help to set stakeholder expectations from the start.

Don’t spend too long crafting prototypes

This advice is as true for prototyping and testing one design, as multiple designs. However, the effects of over crafting a prototype are amplified when you need to create multiple prototypes, so it’s worth re-iterating. Don’t spend too long crafting, refining and honing the prototypes. After all, you’re only going to throw them away (otherwise, they’re not really prototypes). Prototypes should be like a Pot Noodle (an instant noodle snack in the UK) – quick, a little bit dirty, but just about enough to get the job done.

Don’t user test too many variations

Brilliant, our brainstorm came up with 6 possible design directions. Let’s test them all to see which is best… It can be tempting to test lots and lots of different designs, but resist that temptation, because like drunkenly eating a greasy burger at 3am, it’s generally a bad idea. You don’t want to have to create lots and lots of different prototypes, run a ridiculous number of user testing sessions, or present users with a bewildering number of different design options. Instead, whittle the designs down to 2, certainly no more than 3 designs, before you even start thinking about prototyping and user testing.

Don’t ask users to compare lots of different designs

Related to the last piece of advice, try to avoid asking users to compare any more than 2 different designs. At a push you could possibly ask them to compare 3, but that’s really the limit. If you really must test more than that, then you’ll want to focus on split user testing (or split comparative user testing, but that’s just confusing for all), because asking people to compare and contrast more than 3 different designs will make their head explode. It’s also a good idea to visually show the different designs when asking users to compare them, otherwise they have to remember which design was which, and that will also make their head explode.

User test divergent designs

There is little point testing two designs which are virtually identical, as it’ll invariably be a case of ‘spot the difference’. Instead try to test the Arnold Schwarzenegger and Danny DeVito of your designs. Namely, designs that are related but quite different (for those that didn’t get the reference, Schwarzenegger and DeVito played twins in the movie Twins). For example, you might want to test two quite different navigation methods, such as mega menus vs left hand navigation. Of course the designs don’t need to be wildly divergent, but if they are too similar, it kind of defeats the object of testing multiple designs.

Avoid creating Frankenstein designs

I said that you should be user testing around 2 to 3 divergent designs. Easy, so you take a bit from this design, a bit from that one, another bit from this one and, horror of horrors – you’ve created a Frankenstein-esque design monstrosity! Like Dr Frankenstein and his monster (who incidentally in the original novel had no name, and certainly wasn’t called Frankenstein), you can’t just lump different design ideas together and expect them to work. Make sure that the different designs are coherent and have at least some design consistency. You can certainly test different design elements within the different designs, such as navigation method 1 and footer 2 in design A, and navigation method 2 and footer 1 in design B; just ensure that the designs work as a whole.

This is a guest post by Neil Turner. Neil is a UK-based UX designer, researcher and trainer. When he’s not trying to make the world a slightly better place he likes to share UX ideas, tips, tools and techniques on his UX blog – UX for the masses.

The First Rule of Usability Testing: Test the Right Users

OK, so the real first rule of usability testing is, “do it.” But we can assume you already know that usability testing is important, and that you need to be doing it in order to make sure your apps and software are creating value for your users—and thus for you—as efficiently as possible.

Given that, the most important consideration when it comes to usability testing is making sure your testers can give you the information you need.

The Best Usability Results Come From Your Users

No one can provide you more information about your app’s user experience than your app’s actual users. Of course, usability testing means a whole lot more than simply surveying users, and it isn’t always feasible or advisable to ask your current customers and clients to fully test your app.

That’s why you need testers who are just like your users to give you real, usable information.

Ask a bunch of software developers how much they like your app aimed at accountants, and you’re going to get information on all the wrong things. Even the most carefully and accurately designed usability test won’t yield the results you’re looking for if you don’t have the right people taking the test.

The more specialized and narrowly focused your target niche is, the more important it is to find representative usability testers—a banking app aimed at the average consumer can be adequately tested by a wider range of people than a professional-level graphic design program. There’s always a target audience, though, and making sure your usability test group is as close to the target as possible is essential.

Stay tuned for future articles explaining how to figure out exactly who your testers should be and how you can get them to help you out. Until then, we’ll do our best to keep you in the loop!

The Power of Words in UX Research

It’s said that a picture paints a thousand words. But it’s also worth considering that a single word can evoke a powerful image. Consider, for example, what comes to mind when you encounter the word ‘gambling.’ Alternatively, consider the word ‘gaming.’ Likely, different mental images are triggered by each word. Every day, during the course of verbal and written discourse, people have a choice of words to pick from. And it turns out that which words are used can have a significant impact on how the message is received, or the question is interpreted.

Consider, for example, the following pairs of words:

  • Liberal v. progressive
  • Liquor v. spirits
  • Used v. pre-owned

Although the words in each pair are similar to each other, I suspect that each word in the pair brings to mind a slightly different mental image, along with a slightly different emotional response, as well.

Research on the power of words

In a classic study, researchers Elizabeth Loftus and John Palmer set out to learn about the effect of wording, and how different ways of asking the same question might affect judgments. They showed research participants a video of a car accident and then asked them to estimate the speed of the car that had initiated the collision.

But individuals in each of two groups who had viewed the video were asked a slightly different question:

  • Group 1: At what speed did the first car contact the second car?
  • Group 2: At what speed did the first car smash into the second car?

The difference between these two versions, of course, is that the first example uses the word ‘contact,’ while the second example uses the words ‘smash into.’ Would this difference in wording affect people’s estimates of speed? It turns out that:

  • When the word ‘contact’ was used, people estimated the car to be going 31.8 mph, on average
  • When the words ‘smash into’ were used, people estimated the car to be going 40.5 mph, on average

When asked to recall what had been shown from the video, those who had encountered the words ‘smash into’ also claimed they had seen broken glass, even though in reality, no glass had been broken. The words themselves, then, had a powerful influence both on how people remembered the incident as well as how fast they judged the car to be traveling.

The power of words in UX

But why is all of this important for UX designers and researchers? Well, not surprisingly, the choice of words used when interacting with research participants and with business partners matters. As we’ve seen, words are powerful drivers of what happens in the mind of the receiver.

As UX researchers, we ask a lot of questions. And how we ask those questions – the actual words we use – can have a significant influence on how participants ‘hear’ and interpret the question and consequently, on how they make judgments and respond. So how do we manage such a situation effectively?

My recommendation is to think carefully about the various ways a question can be phrased, and pay attention to how responses might be influenced. You could also ‘usability test’ the wording prior to your actual study to glean how your questions are being received and interpreted by a potential participant. Using language that is more neutral can also work to your benefit, as stronger language tends to drive mental perceptions and images that are more vivid and ‘extreme.’

Another option is to take a step back and ask a broader question. For example, let’s say you’re trying to learn how research participants feel about a specific aspect of the thing being tested, and you’re particularly interested in any negative emotions they express. The question could be phrased such that any of a variety of terms could be used:

  • Is there anything about this [thing being tested] that makes you feel [aggravated / annoyed / confused / frustrated / irritated]?

Likely, each of these terms brings to mind a distinct mental image and type of emotion. So here, it might be better to take a step back and ask a more open-ended question, such as: Tell me about how this [thing] makes you feel. The broader nature of this question leaves room for the participant to express positive and/or negative feelings, rather than being directed to think more narrowly about a specific type of feeling or emotion.

In summary

How we communicate is a vital aspect of life in general, but even more so for those who facilitate and moderate research studies. Never underestimate the power of words. Although it’s true that a picture does paint a thousand words, a single word can also paint a very vivid picture in the mind of your research participant, thereby driving their behavior and responses, and consequently, your research outcomes.

This is a post by Colleen Roller.  Colleen is forever fascinated with the workings of the human mind, and with the art and science of designing for it. She has written extensively on this topic, authoring columns for UXmatters and UX Magazine. On the personal side, Colleen is a classically trained musician and enjoys performing on alto and soprano recorder. She also designs jewelry, and you can find her necklaces in a popular shop in West Concord, MA.
