Can People with Disabilities Use Your Website?

How do you feel when you have trouble using a website or application? When you can’t find the information you need because navigation is inconsistent or unclear? When you click a link and it doesn’t go where you expected? When you fill out a form and lose all of your data when you submit due to a minor error?

What if you clicked on that big BUY NOW button and nothing happened? In fact, what if clicking your mouse on all of the links and buttons and other interactive elements on a site didn’t work? Pretty bad website, huh? Chances are the company that wants you to BUY NOW would be all over it and a fix would come quickly.

Now imagine you are blind…or paralyzed…or imagine you broke your wrist and can’t use your mouse. People who rely on a screen reader or who can only navigate a site using the keyboard often face these challenges because many sites aren’t designed and coded to be fully accessible.

So what does it mean to have an accessible website? How do you know? Why should you care?

One in every seven people in the world has some kind of physical or cognitive disability. That's the official count. When you factor in temporary disabilities like broken bones, age-related conditions like presbyopia, and cognitive differences (some people are visual learners and others are verbal), practically everyone will experience some sort of disability in their lifetime.

Because of this, you would think that websites would be designed to accommodate a range of abilities. The W3C’s Web Accessibility Initiative has developed guidelines to help developers understand and meet accessibility needs. In the fast-moving world of web and application development, however, the guidelines are too often overlooked and millions of people are left out of online experiences. As a result, advocacy groups have taken to the courts. Courts in Australia, Canada, Japan, the European Union and the United States are increasingly interpreting technology access as a basic human right. Legal rights to participate in online commerce, education, employment and social opportunities are explicit in the UN Convention on the Rights of Persons with Disabilities and supported by legislation and court decisions by governments all over the world.

So what should you do? What can you do? Well, it depends on your role in your organization, but some steps you can take include encouraging your management to develop an accessibility policy, helping your designers, developers and QA people come up to speed on their roles in building accessible products, and (since you’re reading this, you likely understand usability testing) running a test yourself.

Doing an in-person usability test with people with disabilities is very much like any other usability test…except you need to determine which disabilities to include, recruit people with those disabilities who also match your personas in terms of goals, skills and experience, make sure your testing environment is accessible and that your participants can get there, and factor in assistive technology (AT). You also need to understand enough about the various assistive technologies to follow along with what the participant is doing.

It’s not hard but it can be challenging. You might want to get started by doing some remote testing first. This can help you know if your site has accessibility issues and broadly what those issues are.

Loop11 has recently linked up with AccessWorks, a panel of people with disabilities who have signed up to be usability test participants. AccessWorks is maintained by Knowbility, Inc., a nonprofit organization that, among other things, advocates for accessible technology.

Go to Access-Works.com to read more about it, then jump over to Loop11 to define your test. When you get to Step 4, Invite Participants, select “Recruit participants with disabilities for accessibility testing.” This will give you a dialog where you can specify the types of disability and assistive technology to include. Specify the number of participants in each category and continue to the payment page. Then sit back and wait for your test results to come in.

 

This is a post by Jayne Schurick, a fan of AccessWorks and Loop11.

How to Run an Effective True Intent Study

An effective way to assess the user experience your website offers is to understand who your users are and what tasks or goals they are trying to accomplish there. A True Intent Study helps you do just that.

What’s a true intent study?

As the name suggests, a true intent study aims at understanding a user’s objective as they browse your site. What are they there to do? And are they able to achieve it?

A true intent study helps you to:

  • Determine what your visitors intend to do and how they behave on your website
  • Determine the demographic makeup of visitors coming to your site
  • Determine whether your visitors were able to successfully accomplish their tasks/goals
  • Discover flaws in your website that might inhibit users from completing their intended task
  • Analyze the overall experience of your visitors

One advantage a true intent study has over a run-of-the-mill usability test is that, because you’re asking completely open-ended questions that make no assumptions about the tasks or goals your users aim to accomplish, you might learn something surprising about the site. In other words, a true intent study helps you gather data that you probably wouldn’t have gathered if you were relying strictly on a highly controlled test with specific tasks or scenarios.

How does a true intent study work?

The process is pretty simple.  Site visitors are intercepted at random and their subsequent behavior is tracked. A lightbox poses questions like, “Why are you visiting the site today?” and “Were you able to successfully accomplish your goal(s)?”
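
To make the mechanics a little more concrete, here is a minimal sketch (TypeScript, running in the browser) of what a random-intercept lightbox can look like. It is illustrative only: the 10% sample rate, the localStorage key and the /api/intent-survey endpoint are placeholder assumptions, and a service like Loop11 handles the interception, behavior tracking and reporting for you.

```typescript
// Illustrative only: a bare-bones intercept that samples a fraction of
// visitors and asks the open "why are you here?" question.

const SAMPLE_RATE = 0.1; // intercept roughly 10% of visitors (assumed rate)

function maybeShowIntercept(): void {
  // Only intercept a random subset, and never the same visitor twice.
  if (Math.random() > SAMPLE_RATE) return;
  if (localStorage.getItem("intent-survey-seen")) return;
  localStorage.setItem("intent-survey-seen", "yes");

  // Build a minimal lightbox containing the true-intent question.
  const overlay = document.createElement("div");
  overlay.style.cssText =
    "position:fixed;inset:0;background:rgba(0,0,0,.5);display:flex;" +
    "align-items:center;justify-content:center;z-index:9999";
  overlay.innerHTML = `
    <form style="background:#fff;padding:24px;max-width:400px">
      <p>Why are you visiting the site today?</p>
      <textarea name="intent" rows="3" style="width:100%"></textarea>
      <button type="submit">Send</button>
    </form>`;

  overlay.querySelector("form")!.addEventListener("submit", (e) => {
    e.preventDefault();
    const answer = new FormData(e.target as HTMLFormElement).get("intent");
    // Send the answer to your own endpoint (placeholder URL).
    void fetch("/api/intent-survey", {
      method: "POST",
      body: JSON.stringify({ answer }),
      headers: { "Content-Type": "application/json" },
    });
    overlay.remove();
  });

  document.body.appendChild(overlay);
}

maybeShowIntercept();
```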

We’ll illustrate this via screenshots in the next section.

How to run a true intent study with Loop11:

1.  Choose a website you want to test.

2.  Enter the details of your user test.

[Screenshot: entering the user test details]

3.  Insert a multiple choice question that aims to understand what the visitor came to the site to do.

[Screenshot: adding the multiple choice intent question]

4.  Provide a range of response options covering the most common reasons people visit your website. Randomize the order of the responses, make the question mandatory and offer an “Other, please specify” response at the end. In the example below, we ran a true intent study on www.cleverzebo.com.

[Screenshot: response options for the intent question]

5.  Insert an open task that starts on the homepage of your website.

[Screenshot: the open task starting on the homepage]

6.  Insert the following three questions:

[Screenshot: the three follow-up questions]

7.  Enter the “User Test Options,” spelling out the max number of participants you’d like in the study, the “thank you” text and other details. It’s possible to include only a percentage of your overall site visitors in the true intent study.

[Screenshot: the User Test Options settings]

8.  Choose “Create a pop-up invitation for your own website” as the method of inviting participants:

[Screenshot: selecting the pop-up invitation option]

9.  At this point, Loop11 will provide you with a code snippet. Place the code on the page(s) of your website where you want the pop-up invitation to appear, launch the test and sit back whilst the data comes in.
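
If you’re curious what “placing the code” can amount to, here is a hedged TypeScript sketch of loading an invitation snippet on selected pages only, and for a fraction of visitors (echoing the percentage option in step 7). The page paths, rate and /assets/loop11-invite.js file name are made-up placeholders; the real snippet comes from Loop11, and its own settings already let you cap participation.

```typescript
// Illustrative only: load the invitation snippet (provided by Loop11 at
// step 9) on selected pages, for a subset of visitors.

const INVITE_PAGES = ["/pricing", "/checkout"]; // pages you want to test
const INVITE_RATE = 0.25;                       // invite ~25% of visitors

function loadInvitation(): void {
  if (!INVITE_PAGES.includes(window.location.pathname)) return;
  if (Math.random() > INVITE_RATE) return;

  // Inject the snippet you copied from Loop11 (stand-in file name here).
  const script = document.createElement("script");
  script.src = "/assets/loop11-invite.js";
  script.async = true;
  document.head.appendChild(script);
}

loadInvitation();
```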

A true intent study can be a valuable resource for discovering flaws in your website that make it difficult for users to complete their intended task. It sometimes casts a wider net than ordinary usability testing, as it dives into the goals of real users navigating your website and looks at how effectively they are able to meet those goals. It’s an easy test to set up, and website owners can learn a lot from the exercise!

To learn more about true intent studies and how to run your first one, contact us at support@Loop11.com.

 

This blog post was a collaboration between Jacob Young and Arijit Banerjee, who assisted with vital research.

 

Why you should always prototype & user test multiple designs

5,127. That’s the number of prototypes that James Dyson claims to have created trying to perfect his bagless vacuum cleaner. Five thousand, one hundred and twenty-seven. You see, designing stuff is a messy business. Some ideas work out, some don’t. It’s only through a certain amount of trial and error (or in James Dyson’s case, a lot of trial and error) that you end up with a great design. This is why it’s so important to always, always prototype and user test multiple designs.

Why prototype and user test multiple designs?

Here are 10 (yes 10!) good reasons why you can’t afford not to prototype and user test multiple designs.

1.     You (and every other designer on this planet) never get it right the first time around

Sorry to burst your bubble, but like James Dyson and his endless vacuum-cleaner tinkering, you never get a design right the first time around. It doesn’t happen. It’s about as likely as the Qatar football team holding the FIFA World Cup aloft after winning their home tournament (they’re currently the 109th best team in the world!). By testing multiple designs you can continue that tinkering for that bit longer.

2.     You can test and keep alive alternative design ideas

Invariably lots of your great UX design ideas will have been rejected, and consigned to the great idea graveyard in the sky. Testing multiple prototypes allows some of these ideas to be kept alive that little bit longer. You never know, that idea that you weren’t sure would work might just turn out to be a belter!

3.     You can evaluate one design against another

Rather than just saying, “Yep, it tested well,” you can say, “This design tested better than that one.” If all the designs bomb (which sadly sometimes happens), at least you should know which one sucks the least.

4.     You can spread your bets

Like a punter betting on a number of horses at the Grand National, prototyping and testing multiple designs helps to spread your bets. You don’t have to stake everything on just the one design.

5.     You can demonstrate more designs in context

Clients and users alike have to really see and interact with a design in context before they can truly evaluate it. That’s of course why prototyping is so important. Demonstrating a wireframe or sketch just isn’t the same as creating a living, breathing design (even if it’s all smoke and mirrors). Prototyping and testing multiple designs gives you the opportunity to demonstrate more than just one design in context.

6.     Users (and clients) have something to compare against

As I’ve said before in my introduction to pairwise comparison article, people find it much easier to evaluate something when they have something else to compare it against. Multiple prototypes give users and clients that something else to compare against – namely an alternative design.

7.     You can gather more objective data

Testing multiple designs allows you to gather more objective data, as you can compare feedback across designs rather than judging a single design in isolation. This is especially important when you want to make the case for a particular design, and perhaps try to convince a particularly reluctant stakeholder.

8.     It’s not much more work than prototyping and testing one design

Now it might seem that prototyping and testing two designs is twice the work of testing one design, but this actually isn’t the case. You should be utilising a lot of the same framework for the prototypes, and often you can cover multiple designs in the same user test (more about this below), so the extra workload is not that great.

9.     It’s more fun

OK, so this might not be as important as some of the other points, but prototyping and testing multiple designs is more fun (at least I find it more fun). You get to explore more designs, and if you’re a prototyping junkie like me, you get to create even more funky prototypes to play with.

10. It’s what other design disciplines do

Visual designers, industrial designers, architects and so on will all typically create and trial multiple prototypes, so why should UX designers be any different?

How best to test multiple designs?

OK, so hopefully I’ve now convinced you that prototyping and testing multiple designs is a really good idea. But what’s the best way to test multiple designs? Well, you basically have two options.

1.  Comparative user testing

2.  Split user testing

Comparative user testing

Comparative user testing involves getting users to use multiple designs (usually just two) and then asking them to compare and contrast them. For example, a user might carry out some tasks with design A, some tasks with design B, and then provide feedback on which they found easier to use. Of course, we all know that it’s what users do, not what they say, that is most important, but this way you can observe users actually using the different designs (what they do) and get their feedback as well (what they say).

Comparative user testing is useful because you get lots of feedback, and users have a point of comparison (i.e. the different designs). However, testing multiple designs invariably makes the sessions a little more complex to run (unmoderated comparative user testing is probably best avoided) and limits the number of tasks you can cover across the different designs. You also have the potential for some bias, as participants are being exposed to one design before the other(s). This is why it’s always a good idea to vary the order in which designs are tested across sessions. For example, get half of the users to use design A first, and half to use design B first. It’s also a good idea to try to cover different tasks across the different designs. This allows you to cover more ground and mitigates the issue of participants being exposed to the same task (and therefore learning an approach) on a prior design.
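
Here is a tiny sketch of that counterbalancing idea in TypeScript, assuming two designs labelled A and B and an arbitrary list of participants; the names and types are purely illustrative.

```typescript
// Alternate which design each participant sees first, so order effects
// are spread evenly across the comparative sessions.

type DesignOrder = ["A", "B"] | ["B", "A"];

interface Session {
  participant: string;
  order: DesignOrder;
}

function counterbalance(participants: string[]): Session[] {
  const aFirst: DesignOrder = ["A", "B"];
  const bFirst: DesignOrder = ["B", "A"];
  return participants.map((participant, i) => ({
    participant,
    // Even-numbered participants start on design A, odd-numbered on B.
    order: i % 2 === 0 ? aFirst : bFirst,
  }));
}

// Example: with four participants, two start on A and two start on B.
console.log(counterbalance(["P1", "P2", "P3", "P4"]));
```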

Split user testing

With split user testing you test different designs separately, so users will only ever see one design. Typically you’ll want to test the same (or at least very similar) tasks across the different designs, usually in the same order. This allows you to make accurate, ‘Top Trumps’-style, like-for-like comparisons across designs.

Split user testing allows you to cover more tasks and eliminates the potential for bias, as users will only ever see the one design. It also avoids the complication of having to ask users to switch between designs. On the negative side, split user testing doesn’t capture comparative feedback from users, and you’ll need to run more tests, as each session will only cover the one design. Because you’ll need to run more tests, split user testing is a particularly good candidate for unmoderated user testing (i.e. self-service user testing), for example using a service such as Loop11.
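
If you’re wondering how to make sure each user only ever sees one design, a common trick is a stable, deterministic assignment. A minimal TypeScript sketch, assuming you have some persistent participant identifier; the design names and hash are illustrative, and in a real study the recruitment or testing service would typically manage who sees which design.

```typescript
// Deterministically bucket each participant into a single design, so the
// same person always sees the same design, even across repeat sessions.

const DESIGNS = ["design-a", "design-b"] as const;

function assignDesign(participantId: string): (typeof DESIGNS)[number] {
  // Simple, stable string hash; any deterministic hash would do.
  let hash = 0;
  for (const char of participantId) {
    hash = (hash * 31 + char.charCodeAt(0)) >>> 0;
  }
  return DESIGNS[hash % DESIGNS.length];
}

// Example: "user-42" is always routed to the same design.
console.log(assignDesign("user-42"));
```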

Any other advice?

Before you run off and start frantically prototyping and user testing, here are some further hints and tips that you’ll hopefully find useful.

Plan to prototype and user test multiple designs from the start

Ok, so I know that I said that prototyping and user testing multiple designs is not that much more work than prototyping and user testing just the one design, but you’ll still need to plan for it. Certainly don’t suddenly spring it on an unsuspecting client or project manager at the last minute (although this can be fun, if anything just to see the look of horror on their face). By planning in advance you can properly think about the best user testing method to use, plan the additional time you’ll need to create multiple prototypes and help to set stakeholder expectations from the start.

Don’t spend too long crafting prototypes

This advice is as true for prototyping and testing one design as it is for multiple designs. However, the effects of over-crafting a prototype are amplified when you need to create multiple prototypes, so it’s worth reiterating: don’t spend too long crafting, refining and honing the prototypes. After all, you’re only going to throw them away (otherwise, they’re not really prototypes). Prototypes should be like a Pot Noodle (an instant noodle snack in the UK): quick, a little bit dirty, but just about enough to get the job done.

Don’t user test too many variations

Brilliant, our brainstorm came up with six possible design directions. Let’s test them all to see which is best… It can be tempting to test lots and lots of different designs, but resist that temptation, because like drunkenly eating a greasy burger at 3:00 in the morning, it’s generally a bad idea. You don’t want to have to create lots and lots of different prototypes, run a ridiculous number of user testing sessions, or present users with a bewildering number of different design options. Instead, whittle the designs down to two, certainly no more than three, before you even start thinking about prototyping and user testing.

Don’t ask users to compare lots of different designs

Related to the last piece of advice, try to avoid asking users to compare any more than two different designs. At a push you could possibly ask them to compare three, but that’s really the limit. If you really must test more than that, then you’ll want to focus on split user testing (or split comparative user testing, but that’s just confusing for everyone), because asking people to compare and contrast more than three different designs will make their heads explode. It’s also a good idea to visually show the different designs when asking users to compare them; otherwise they have to remember which design was which, and that will also make their heads explode.

User test divergent designs

There is little point testing two designs which are virtually identical, as it’ll invariably be a case of ‘spot the difference’. Instead, try to test the Arnold Schwarzenegger and Danny DeVito of your designs: designs that are related but quite different (for those that didn’t get the reference, Schwarzenegger and DeVito played twins in the movie Twins). For example, you might want to test two quite different navigation methods, such as mega menus vs left-hand navigation. Of course the designs don’t need to be wildly divergent, but if they are too similar, it kind of defeats the object of testing multiple designs.

Avoid creating Frankenstein designs

I said that you should be user testing around two to three divergent designs. Easy: you take a bit from this design, a bit from that one, another bit from another and, horror of horrors, you’ve created a Frankenstein-esque design monstrosity! Like Dr Frankenstein and his monster (who, incidentally, had no name in the original novel; he certainly wasn’t called Frankenstein), you can’t just lump different design ideas together and expect them to work. Make sure that the different designs are coherent and have at least some design consistency. You can certainly test different design elements within the different designs, such as navigation method 1 and footer 2 in design A, and navigation method 2 and footer 1 in design B; just ensure that the designs work as a whole.

This is a guest post by Neil Turner. Neil is a UK-based UX designer, researcher and trainer. When he’s not trying to make the world a slightly better place he likes to share UX ideas, tips, tools and techniques on his UX blog, UX for the masses.
