How to Run an Effective True Intent Study

An effective way to assess the user experience your website offers is to understand who your users are and what tasks or goals they are trying to accomplish there. A True Intent Study helps you do just that.

What’s a true intent study?

As the name suggests, a true intent study aims at understanding a user’s objective as they browse your site. What are they there to do? And are they able to achieve it?

A true intent study helps you to:

  • Determine what your visitors intend to do and how they behave on your website
  • Determine the demographic makeup of visitors coming to your site
  • Determine whether your visitors were able to successfully accomplish their tasks/goals
  • Discover flaws in your website that might inhibit users from completing their intended task
  • Analyze the overall experience of your visitors

One advantage a true intent study has over a run-of-the-mill usability test is that, because you’re asking completely open-ended questions that make no assumptions about the tasks or goals your users aim to accomplish, you might learn something surprising about the site. In other words, a true intent study helps you gather data you probably wouldn’t have gathered if you were relying strictly on a highly controlled test with specific tasks or scenarios.

How does a true intent study work?

The process is pretty simple.  Site visitors are intercepted at random and their subsequent behavior is tracked. A lightbox poses questions like, “Why are you visiting the site today?” and “Were you able to successfully accomplish your goal(s)?”

We’ll illustrate this via screenshots in the next section.

How to run a true intent study with Loop11:

1.  Choose a website you want to test.

2.  Enter the details of your user test.

[Screenshot: entering the details of your user test]

3.  Insert a multiple choice question that aims to understand what the visitor came to the site to do.

[Screenshot: inserting the multiple choice question]

4.  Provide a range of response options covering the most common reasons people visit your website. Randomize the order of the responses, make the question mandatory, and offer an “Other, please specify” option at the end. The example below shows the response options from a sample true intent study.

[Screenshot: the response options]

5.  Insert an open task that starts on the homepage of your website.

[Screenshot: inserting the open task]

6.  Insert the following three questions:

[Screenshot: the three follow-up questions]

7.  Enter the “User Test Options,” specifying the maximum number of participants you’d like in the study, the “thank you” text and other details. It’s possible to include only a percentage of your overall site visitors in the true intent study.

[Screenshot: the user test options]

8.  Choose “Create a pop-up invitation for your own website” as the method of inviting participants:

[Screenshot: choosing the pop-up invitation]

9.  At this point, Loop11 will provide you with a code snippet. Place the code on the page(s) of your website where you want the pop-up invitation to appear, launch the test and sit back whilst the data comes in.
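The exact snippet is generated by Loop11 for your specific test, but embedding it generally amounts to pasting a small script tag into your page markup. The sketch below is purely illustrative; the URL and file name are placeholders, not Loop11’s actual code:

```html
<body>
  <!-- ...your page content... -->

  <!-- Illustrative placeholder only: paste the actual snippet Loop11
       generates for your test, which references your own test ID.
       Placing it just before the closing body tag (with async) keeps it
       from blocking the rest of the page from loading. -->
  <script src="https://example.com/your-test-snippet.js" async></script>
</body>
```

Which pages you paste the snippet into determines where the invitation can appear.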

A true intent study can be a valuable resource for discovering flaws in your website that make it difficult for users to complete their intended tasks. It sometimes casts a wider net than ordinary usability testing, as it dives into the goals of real users navigating your website and looks at how effectively they are able to meet those goals. It’s an easy test to set up, and website owners can learn a lot from the exercise!

To learn more about true intent studies and how to run your first one, contact us.


This blog post was a collaboration between Jacob Young and Arijit Banerjee, who assisted with vital research.


Why you should always prototype & user test multiple designs

5,127. That’s the number of prototypes that James Dyson claims to have created trying to perfect his bagless vacuum cleaner. Five thousand, one hundred and twenty-seven. You see, designing stuff is a messy business. Some ideas work out, some don’t. It’s only through a certain amount of trial and error (or in James Dyson’s case, a lot of trial and error) that you end up with a great design. This is why it’s so important to always, always prototype and user test multiple designs.

Why prototype and user test multiple designs?

Here are 10 (yes 10!) good reasons why you can’t afford not to prototype and user test multiple designs.

1.     You (and every other designer on this planet) never get it right first time around

Sorry to burst your bubble, but like James Dyson and his endless vacuum-cleaner tinkering, you never get a design right first time around. It doesn’t happen. It’s about as likely as the Qatar football team holding the FIFA World Cup aloft after winning their home tournament (they’re currently the 109th best team in the world!). By testing multiple designs you can continue that tinkering for that bit longer.

2.     You can test and keep alive alternative design ideas

Invariably, lots of your great UX design ideas will have been rejected and consigned to the great idea graveyard in the sky. Testing multiple prototypes allows some of these ideas to be kept alive that little bit longer. You never know, that idea you weren’t sure would work might just turn out to be a belter!

3.     You can evaluate one design against another

Rather than just saying, “Yep, it tested well,” you can say, “This design tested better than that one.” If all the designs bomb (which sadly sometimes happens), at least you should know which one sucks the least.

4.     You can spread your bets

Like a punter betting on a number of horses at the Grand National, prototyping and testing multiple designs helps to spread your bets. You don’t have to stake everything on just the one design.

5.     You can demonstrate more designs in context

Clients and users alike have to really see and interact with a design in context before they can truly evaluate it. That’s of course why prototyping is so important. Demonstrating a wireframe or sketch just isn’t the same as presenting a living, breathing design (even if it’s all smoke and mirrors). Prototyping and testing multiple designs gives you the opportunity to demonstrate more than just one design in context.

6.     Users (and clients) have something to compare against

As I’ve said before in my introduction to pairwise comparison article, people find it much easier to evaluate something when they have something else to compare it against. Multiple prototypes give users and clients that something else to compare against – namely an alternative design.

7.     You can gather more objective data

Testing multiple designs allows you to gather more objective data, as you can get feedback for multiple designs. This is especially important when you want to demonstrate the case for a particular design, and perhaps try to convince a particularly reluctant stakeholder.

8.     It’s not much more work than prototyping and testing one design

Now it might seem that prototyping and testing two designs is twice the work of one design, but this actually isn’t the case. You should be utilising a lot of the same framework for the prototypes, and often you can cover multiple designs in the same user test (more about this below), so the extra workload is not that great.

9.     It’s more fun

OK, so this might not be as important as some of the other points, but prototyping and testing multiple designs is more fun (at least I find it more fun). You get to explore more designs, and if you’re a prototyping junkie like me, you get to create even more funky prototypes to play with.

10. It’s what other design disciplines do

Visual designers, industrial designers, architects and so on will all typically create and trial multiple prototypes, so why should UX designers be any different?

How best to test multiple designs?

OK, so hopefully I’ve now convinced you that prototyping and testing multiple designs is a really good idea. But what’s the best way to test multiple designs? Well, you basically have two options.

1.  Comparative user testing

2.  Split user testing

Comparative user testing

Comparative user testing involves getting users to use multiple designs (usually just two) and then asking them to compare and contrast them. For example, a user might carry out some tasks with design A, some tasks with design B, and then provide feedback on which they found easier to use. Of course, we all know that it’s what users do, not what they say, that is most important, but this way you can observe users actually using the different designs (what they do) and get their feedback as well (what they say).

Comparative user testing is useful because you get lots of feedback, and users have a point of comparison (i.e. the different designs). However, testing multiple designs invariably makes the sessions a little more complex to run (unmoderated comparative user testing is probably best avoided) and limits the number of tasks you can cover across the different designs. You also have the potential for some bias, as participants are exposed to one design before the other(s). This is why it’s always a good idea to vary the order in which designs are tested across sessions. For example, get half of the users to use design A first, and half to use design B first. It’s also a good idea to try to cover different tasks across the different designs. This allows you to cover more ground, and mitigates the issue of participants being exposed to the same task (and therefore learning an approach) on a prior design.
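To make the counterbalancing concrete, here is a minimal sketch in JavaScript (the function and variable names are illustrative, not from any particular tool) of alternating which design each session sees first:

```javascript
// Alternate the presentation order of two designs across sessions,
// so half the participants see design A first and half see design B first.
// "sessionIndex" is assumed to be a sequential session number (0, 1, 2, ...).
function designOrder(sessionIndex, designs) {
  return sessionIndex % 2 === 0
    ? [...designs]            // even-numbered sessions: original order
    : [...designs].reverse(); // odd-numbered sessions: reversed order
}
```

For example, session 0 would run design A then design B, session 1 would run B then A, and so on, keeping the two orders evenly balanced.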

Split user testing

With split user testing you test different designs separately, so users will only ever see one design. Typically you’ll want to test the same (or at least very similar) tasks across the different designs, usually in the same order. This allows you to make accurate, ‘Top Trumps’-style, like-for-like comparisons across designs.

Split user testing allows you to cover more tasks and eliminates the potential for bias, as of course users will only ever see the one design. It also avoids the complication of having to ask users to switch between designs. On the negative side, split user testing doesn’t capture comparative feedback from users, and you’ll need to run more tests, as each session will only cover the one design. Because you’ll need to run more tests, split user testing is a particularly good candidate for unmoderated (i.e. self-service) user testing, for example using a service such as Loop11.
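As a concrete sketch of how a split test keeps each visitor on a single design, here’s a small JavaScript function (the names are illustrative, not from any particular tool) that hashes a stable visitor identifier, such as a cookie value, so a returning visitor is always assigned the same design:

```javascript
// Deterministically assign a visitor to one design. Hashing a stable
// visitor id (e.g. a cookie value) means the same visitor always gets
// the same design, which is what split testing requires.
function assignDesign(visitorId, designs) {
  let hash = 0;
  for (const ch of visitorId) {
    // Simple 31-based rolling hash, kept as an unsigned 32-bit integer.
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return designs[hash % designs.length];
}
```

The same idea extends to more than two designs: the modulo spreads visitors roughly evenly across however many designs are in the array.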

Any other advice?

Before you run off and start frantically prototyping and user testing, here are some further hints and tips that you’ll hopefully find useful.

Plan to prototype and user test multiple designs from the start

Ok, so I know that I said that prototyping and user testing multiple designs is not that much more work than prototyping and user testing just the one design, but you’ll still need to plan for it. Certainly don’t suddenly spring it on an unsuspecting client or project manager at the last minute (although this can be fun, if anything just to see the look of horror on their face). By planning in advance you can properly think about the best user testing method to use, plan the additional time you’ll need to create multiple prototypes and help to set stakeholder expectations from the start.

Don’t spend too long crafting prototypes

This advice is as true for prototyping and testing one design as for multiple designs. However, the effects of over-crafting a prototype are amplified when you need to create multiple prototypes, so it’s worth reiterating. Don’t spend too long crafting, refining and honing the prototypes. After all, you’re only going to throw them away (otherwise, they’re not really prototypes). Prototypes should be like a Pot Noodle (an instant noodle snack in the UK): quick, a little bit dirty, but just about enough to get the job done.

Don’t user test too many variations

Brilliant, our brainstorm came up with 6 possible design directions. Let’s test them all to see which is best… It can be tempting to test lots and lots of different designs, but resist that temptation, because like drunkenly eating a greasy burger at 3:00am, it’s generally a bad idea. You don’t want to have to create lots and lots of different prototypes, run a ridiculous number of user testing sessions, or present users with a bewildering number of different design options. Instead, whittle the designs down to 2, and certainly no more than 3, designs before you even start thinking about prototyping and user testing.

Don’t ask users to compare lots of different designs

Related to the last piece of advice, try to avoid asking users to compare any more than 2 different designs. At a push you could possibly ask them to compare 3, but that’s really the limit. If you really must test more than that, then you’ll want to focus on split user testing (or split comparative user testing, but that’s just confusing for all), because asking people to compare and contrast more than 3 different designs will make their head explode. It’s also a good idea to visually show the different designs when asking users to compare them, otherwise they have to remember which design was which, and that will also make their head explode.

User test divergent designs

There is little point testing two designs which are virtually identical, as it’ll invariably be a case of ‘spot the difference’. Instead, try to test the Arnold Schwarzenegger and Danny DeVito of your designs. Namely, designs that are related but quite different (for those who didn’t get the reference, Schwarzenegger and DeVito played twins in the movie Twins). For example, you might want to test two quite different navigation methods, such as mega menus vs left-hand navigation. Of course the designs don’t need to be wildly divergent, but if they are too similar, it kind of defeats the object of testing multiple designs.

Avoid creating Frankenstein designs

I said that you should be user testing around 2 to 3 divergent designs. Easy: you take a bit from this design, a bit from that one, another bit from a third and, horror of horrors, you’ve created a Frankenstein-esque design monstrosity! Like Dr Frankenstein and his monster (who incidentally had no name in the original novel; he certainly wasn’t called Frankenstein), you can’t just lump different design ideas together and expect them to work. Make sure that the different designs are coherent and have at least some design consistency. You can certainly test different design elements within the different designs, such as navigation method 1 and footer 2 in design A, and navigation method 2 and footer 1 in design B; just ensure that the designs work as a whole.

This is a guest post by Neil Turner. Neil is a UK-based UX designer, researcher and trainer. When he’s not trying to make the world a slightly better place he likes to share UX ideas, tips, tools and techniques on his UX blog, UX for the masses.

The First Rule of Usability Testing: Test the Right Users

OK, so the real first rule of usability testing is, “do it.” But we can assume you already know that usability testing is important, and that you need to be doing it in order to make sure your apps and software are creating value for your users—and thus for you—as efficiently as possible.

Given that, the most important consideration when it comes to usability testing is making sure your testers can give you the information you need.

The Best Usability Results Come From Your Users

No one can provide you more information about your app’s user experience than your app’s actual users. Of course, usability testing means a whole lot more than simply surveying users, and it isn’t always feasible or advisable to ask your current customers and clients to fully test your app.

That’s why you need testers who are just like your users to give you real, usable information.

Ask a bunch of software developers how much they like your app aimed at accountants, and you’re going to get information on all the wrong things. Even the most carefully and accurately designed usability test won’t yield the results you’re looking for if you don’t have the right people taking the test.

The more specialized and narrowly focused your target niche is, the more important it is to find representative usability testers. A banking app aimed at the average consumer can be adequately tested by a wider range of people than a professional-level graphic design program. There’s always a target audience, though, and making sure your usability test group is as close to the target as possible is essential.

Stay tuned for future articles explaining how to figure out exactly who your testers should be and how you can get them to help you out. Until then, we’ll do our best to keep you in the loop!
