Usability vs. A/B testing – which one should you use?

These days, I see a lot of people arguing over which type of testing they should use: usability testing or A/B testing. Honestly, it isn't much of an argument at all, because the two serve completely different purposes. What we really need is a clear understanding of when each one is required. Let's start by looking at what each method is and when you should reach for it.

A/B testing is what you should perform when you have two different designs (A and B), each with certain benefits, and you need to know which one has the edge, or which one has the higher conversion rate. Put it this way: if you're looking for answers to questions like "Which design results in the most click-throughs?", "Which layout results in more sales?" or "Which email campaign performs better?", an A/B test is what you should go for. A/B testing is an effective way of measuring how design changes (small or big) to an existing product affect your returns. The main advantage of running such a test is that you can compare two versions of the same product, where the difference in design elements can be as minor as the color of a particular CTA button or so large that the two versions look nothing alike.

On the other hand, if you're looking for answers to questions like "Can users successfully complete the given task?", "Is the navigation smooth as butter?" or "Do certain elements distract users from their end goal?" (basically anything related to ease of use), usability testing is what you should go for. Unlike A/B testing, usability testing provides insight into how users perform and why, rather than establishing which design (A or B) is better. The coolest thing about usability testing is that you can perform it on designs of any fidelity, from rough wireframes to high-fidelity mockups, or even the finished product. It's completely up to you.

A/B testing is quantitative in nature, i.e., it focuses on the "how many", whereas usability testing is qualitative in nature, i.e., it focuses on the "why".

While usability testing requires you to recruit participants, script your tasks and questions, analyze your findings and make design recommendations, A/B testing requires no such legwork: it lets you test your designs against real, live traffic. The test runs on a live site, where an equal share of users is directed to each design, and the click-throughs and successful conversions from each are recorded. The results are then analyzed to determine which design comes out on top.
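To make that analysis step concrete, here's a minimal sketch (not from the article, and with made-up numbers) of how the two variants might be compared once the traffic has been split: work out each design's conversion rate, then apply a standard two-proportion z-test to judge whether the gap is likely to be real rather than noise. The `ab_summary` helper and the figures are purely illustrative.

```python
# Illustrative sketch: compare conversion rates for designs A and B
# using a two-proportion z-test (standard library only).
import math

def ab_summary(visitors_a, conversions_a, visitors_b, conversions_b):
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    # Pooled conversion rate under the assumption of "no real difference"
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / se
    # Two-sided p-value from the normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return rate_a, rate_b, z, p_value

# Hypothetical example: 5,000 visitors sent to each variant
rate_a, rate_b, z, p = ab_summary(5000, 400, 5000, 465)
print(f"A: {rate_a:.2%}  B: {rate_b:.2%}  z = {z:.2f}  p = {p:.3f}")
```

In practice an A/B testing tool will run this calculation for you; the point is simply that the verdict comes from the recorded numbers rather than from opinion.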

To conclude, it's very important to understand which type of test is required when; not knowing can cost you a lot of money, time and effort. It's also often recommended to pair one with the other. Why? Because the two complement each other well: you can run usability tests to collect qualitative input from users, then use A/B tests to learn which design alternatives are worth pursuing and which one performs best. Running a usability test first also takes the guesswork out of creating each design alternative, so the options you put into an A/B test are far more likely to hold up.

Arijit Banerjee is a UI and UX enthusiast. Although a power systems engineer by education, he has always found himself inclined toward the world of UX. He has been associated with several firms and has helped define experiences across a wide range of products. Apart from that, he's a terrible singer, a dog lover, and an out-and-out foodie with decent culinary skills. You can visit his website or follow him on Twitter.

Win the ultimate UX & marketing toolkit

Our friends at Optimal Workshop are launching a competition that’s sure to bring joy to aspiring marketers and usability experts. The giveaway: $30,000 worth of cutting-edge UX and marketing tools.

This competition takes place as part of World Usability Day 2014 on November 13.

You have 21 days from today to get involved. To enter, you simply have to:

1. Take a photo of something that engaged you today.
2. Upload it and explain why it’s engaging.
3. Share it with the world!

Enter the contest

We encourage you to share your entry, check out the other entries, and vote on your favorites. The ten entries with the highest number of votes will go to the grand final to be judged by UX Mastery and the CEO of Optimal Workshop.

Twitter handle: @optimalworkshop

Competition hashtag: #marketingUXgiveaway

Some highlights from the $30,000 tools package include:

1. Annual subscription to Moz, a leading SEO & analytics tool ($1,150 value)

2. Lifetime license to use heatmap and overlay tools from Crazy Egg ($7,000+ value)

3. Annual license for usability testing with Loop11 ($9,900 value)

There are many other fantastic marketing and UX tools in the giveaway. Enter now to win.

We’ll see you at World Usability Day 2014!

The importance of planning in guerilla testing

When it comes to user testing in UX circles, there's formal user testing, and then there's guerilla testing. Formal user testing is the subject of many research papers and studies, but there's relatively little guidance on how guerilla testing should be conducted. Although many UX purists frown on the practice, guerilla testing has become a useful tool for the everyday UX practitioner. Whether they're struggling with tight UX budgets or looming project deadlines, guerilla testing can help fast-track the research and testing phases of the UX design cycle. In fact, some UX practitioners might even refer to the practice as the art of guerilla testing.

In essence, guerilla testing is user research done using a lean and agile approach. While this means keeping the testing simple, short and relatively flexible, it doesn't mean going about it in a totally unstructured, undocumented and unplanned fashion. Formal testing avoids haphazard research precisely because of the risk it carries: potentially costly design changes that bring no benefit to the end user.

The danger of guerilla testing, then, comes from poorly planned and executed tests whose results aren't reliable, consistent or meaningful. In this post, we'll explore some pitfalls of guerilla testing "in the wild," a few tactics for avoiding or minimizing the methodology's weaknesses, and some tips for improving planning across all research and testing.

Plan B – catering for the uncontrollable

The first and most important difference between guerilla testing and standard user testing is the lack of a controlled environment. You may have carefully picked out a time and location and worked out the groups of people you want to survey, but when you turn up, things may not go according to plan. What will you do if some unexpected event happens that week, if the location becomes unavailable or too crowded, noisy or distracting, or if you simply don't run into the type of people you want to speak to? So the first thing to remember when carrying out guerilla testing is to always have at least one backup plan, or better still, plans B and C.

Consistency – stay true to your Q’s

Another common problem with guerilla testing is the temptation to go with the flow when querying the user. Often a particular question or comment triggers interesting insights or unexpected findings, and you become fixated on getting to the bottom of it. This causes a few different issues: you can't compare results because you haven't asked everyone the same questions, the questions were asked in ways that produced varying answers, or you've introduced new variables and behaviour triggers that weren't present for other users. In the end you find yourself unable to reconcile the findings and draw a clear conclusion. So the takeaway here is to keep your focus in mind, stick to the main questions and resist the temptation to chase loose ends. If you must, circle back at the end of the session to dig deeper.

Beyond paper – rich data capture

One detail that often gets overlooked in guerilla testing is capturing information accurately and reliably. Guerilla testing doesn't mean you're restricted to pencil and paper. Although you can get a lot out of paper wireframes, it isn't easy for testers to capture all the feedback on the paper itself or on sticky notes. There's no shame in taking a PowerPoint presentation or even a semi-interactive prototype into the field. You might even consider a tablet running a usability testing web application if you have that luxury. That way, all your data and results are captured and stored neatly in one place for you to review later, and there's no more deciphering handwritten notes scribbled down while your mind was focused on what the user was telling you.

Guerilla testing – what’s it to you?

Last but not least, consider what guerilla testing means for you in the grand scheme of things. Not every organization needs a research and testing framework document to formalize and standardize the process, but chances are, if you find guerilla testing a useful research tool, you'll want to know that you can extract reliable and consistent results from your test subjects. So do spend a bit more time thinking about how to eliminate the environmental variables that might affect your users (for example, testing a weather app on a hot day versus a cold day might affect people's mood), consider when and how you approach people to conduct the test, and keep the test items simple, with a clear focus on the answers you want.

With all of these planning details in mind, you should have better luck finding the right balance between flexibility and consistency for your guerilla tests.
