6 common challenges in managing UX projects and how to overcome them

As of today, I've been involved in web and UX design for 12 years. The last three of them have been spent with my partners at Continuum, leading UX projects for a wide range of clients. It's impossible not to notice certain patterns in how user experience projects wind up, regardless of the scope, the team size, or even the deadline and the budget.

These patterns, and the continuous iteration on how to better deal with them, have led us to identify some principles that pave the way to a better experience design. We'll talk about them in a future post. But first, let's talk a bit about the most common problems we have faced when doing UX:

Read More

Usability vs. A/B testing – which one should you use?

These days, I see a lot of people arguing over which type of testing procedure they should follow: usability testing or A/B testing. Honestly, I find this argument quite vague. In fact, it's not an argument at all, because they serve completely different purposes. We need to clearly understand which one is required when. Let's begin by diving into what each one is and when you should go for it.

A/B Testing is what you should perform when you have two different designs (A and B), both having certain benefits, but you need to know which one has an edge over the other, or which one has a higher conversion rate. Let's put it this way: if you are looking for answers to questions like "Which design results in the most click-throughs by users?", "Which design layout results in more sales?" or "Which e-mail campaign performs better?", an A/B test is what you should go for. A/B Testing is an effective way of testing how certain design changes (small or big) in an existing product impact your returns. The main advantage of performing such a test is that you can compare two versions of the same product, where the difference in design elements can be as nominal as the color of a particular CTA button or as massive as two completely different layouts.

On the other hand, if you're looking for answers to questions like "Can users successfully complete the given task?", "Is the navigation smooth as butter?" or "Do certain elements distract the user from their end goal?" – basically anything related to ease of use – usability testing is what you should go for. Unlike A/B Testing, Usability Testing provides insights into user performance rather than establishing which design (A or B) is better. The best thing about usability testing is that you can perform it on designs of any fidelity, ranging from wireframes to high-fidelity mockups, or even the actual finished product. It's completely up to you.

A/B Testing is quantitative in nature, i.e., it focuses on the "how many", whereas Usability Testing is qualitative in nature, i.e., it focuses on the "why".

While usability testing may require you to recruit participants, script your tasks and questions, and analyze and make design recommendations based on your findings, A/B Testing requires no such effort. A/B Testing lets you test your designs with real, live traffic. This sort of test is performed on a live site, where an equal percentage of users is directed toward each design, and the number of click-throughs and successful conversions from each design is recorded. The results are then analyzed to determine which design outperforms the other.
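To make the mechanics concrete, here's a minimal sketch in Python of what a split-and-measure flow might look like. The user IDs, visitor counts and conversion numbers are all hypothetical, and a real experiment would use a proper significance test rather than a naive comparison:

```python
import hashlib

def assign_variant(user_id: str) -> str:
    """Deterministically bucket a visitor into design A or B (50/50 split)."""
    # A stable hash keeps each visitor in the same variant across visits.
    digest = hashlib.sha256(user_id.encode("utf-8")).digest()
    return "A" if digest[0] % 2 == 0 else "B"

# Hypothetical recorded results: visitors and conversions per variant.
results = {
    "A": {"visitors": 1000, "conversions": 42},
    "B": {"visitors": 1000, "conversions": 57},
}

def conversion_rate(variant: str) -> float:
    r = results[variant]
    return r["conversions"] / r["visitors"]

winner = max(results, key=conversion_rate)
print(f"A: {conversion_rate('A'):.1%}, B: {conversion_rate('B'):.1%}, winner: {winner}")
```

In practice an A/B testing tool handles the bucketing and the statistics for you; the point is simply that the comparison is a head count, not an interview.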

To conclude, it is very important to understand which type of test is required when. Not knowing can cost you a lot of money, time and effort. It is often recommended to pair one up with the other. Why? Well, the benefits have no limits. You can conduct usability tests to collect qualitative input from users, and use A/B tests to get insights into what your possible design alternatives could be, or which alternative performs best. Conducting a usability test also ensures that there is no guesswork involved in designing each of the alternatives, and hence the designs tend to be bulletproof.

Arijit Banerjee is a UI & UX Enthusiast.  Although a power systems engineer by education, he has always found himself inclined toward the world of UX.  He has been associated with several firms and has helped define experiences across a wide range of products.  Apart from that, he’s a terrible singer, a dog lover, and an out and out foodie with decent culinary skills.  You can visit his website or follow him on Twitter.

Win the ultimate UX & marketing toolkit

Our friends at Optimal Workshop are launching a competition that’s sure to bring joy to aspiring marketers and usability experts. The giveaway: $30,000 worth of cutting-edge UX and marketing tools.

This competition takes place as part of World Usability Day 2014 on November 13.

You have 21 days from today to get involved. To enter, you simply have to:

1. Take a photo of something that engaged you today.
2. Upload it and explain why it’s engaging.
3. Share it with the world!

Enter the contest

We encourage you to share your entry, check out the other entries, and vote on your favorites. The ten entries with the highest number of votes will go to the grand final to be judged by UX Mastery and the CEO of Optimal Workshop.

Twitter handle: @optimalworkshop

Competition hashtag: #marketingUXgiveaway

Some highlights from the $30,000 tools package include:

1. Annual subscription to Moz, a leading SEO & analytics tool ($1,150 value)

2. Lifetime license to use heatmap and overlay tools from Crazy Egg ($7,000+ value)

3. Annual license for usability testing with Loop11 ($9,900 value)

There are many other fantastic marketing and UX tools in the giveaway. Enter now to win.

We’ll see you at World Usability Day 2014!

The importance of planning in guerilla testing

When talking about user testing in UX circles, there's formal user testing, and then there's guerilla testing. Formal user testing is the subject of many research papers and studies, but there is relatively little guidance on how guerilla testing should be conducted. Although many UX purists frown on the practice, guerilla testing has become a useful tool for the everyday UX practitioner. Whether they are struggling with tight UX budgets or looming project deadlines, guerilla testing can help fast-track the research and testing phases of their UX design cycle. In fact, some UX practitioners might even refer to the practice as the art of guerilla testing.

In essence, guerilla testing is user research done using a lean and agile approach. While this means making the user testing simple, short and relatively flexible, it doesn't mean going about it in a totally unstructured, undocumented and unplanned fashion. Formal testing avoids haphazard research precisely because haphazard research risks introducing potentially costly design changes that may not benefit the end user at all.

Yet, the danger of guerilla testing comes from poorly planned and executed tests that are not reliable, consistent or meaningful. In this post, we’ll explore some pitfalls of guerilla testing “in the wild,” a few tactics to avoid or minimize the methodology’s weaknesses and tips to improve planning for all research and testing.

Plan B – catering for the uncontrollable

The first and most important difference between guerilla testing and other standard user testing processes is the lack of a controlled environment. You may have carefully picked out a time and location, and worked out the groups of people that you want to survey, but when you turn up, things may not go according to plan. What will you do if some unexpected event happens that week, if the location becomes unavailable or too crowded, noisy or distracting, or if you don't end up running into the type of people you want to speak to? So the first thing to remember when carrying out guerilla testing is to always have at least one backup plan, or even better, a plan B and a plan C.

Consistency – stay true to your Q’s

Another common problem with guerilla testing is the temptation to go with the flow when querying the user. Often a particular question or comment triggers interesting insights or unexpected findings, and you become fixated on getting to the bottom of it. This can cause a few different issues: you can't compare results because you haven't asked everyone the same questions, the questions were asked in ways that produced varying results, or you've introduced new variables and behaviour triggers that were not present for other users. In the end you find yourself unable to reconcile all the findings and draw a neat conclusion. So the takeaway here is to have a focus in mind, stick to the main questions, and resist the temptation to chase loose ends. If you must, circle back at the end to dig deeper.

Beyond paper – rich data capture

One of the details that often gets overlooked in guerilla testing is capturing information accurately and reliably. Guerilla testing doesn't mean you are restricted to pencil and paper. Although you can get a lot out of paper wireframes, it is not easy for testers to capture all the feedback on the paper itself or on sticky notes. There's no shame in taking a PowerPoint presentation or even a semi-interactive prototype into the field. You might even consider a tablet running a usability testing web application if you have the luxury to do so. This way, all your data and results are captured and stored neatly in one place for you to review later. It also means no more deciphering handwritten notes scribbled down while your mind was focused on what the user was telling you.

Guerilla testing – what’s it to you?

Last but not least, you should consider what the guerilla test means for you in the grand scheme of things. Not every organization is going to need a research and testing framework document to formalize and standardize the process, but chances are, if you find this to be a useful research tool, you'll want to know that you can extract reliable and consistent results from your test subjects. So do spend a bit more time thinking about how to eliminate the environmental variables that might affect your users (e.g. testing a weather app on a hot versus a cold day might affect people's mood), consider when and how you approach people to conduct the test, and try to keep the test items simple with a clear focus on the answers you want to get.

With all of these planning details in mind, you should have better luck finding the right balance between flexibility and consistency for your guerilla tests.

Create a more intuitive website structure with this 5-star card sorting course (get 50% off)

Good information architecture is key to creating engaging, easy-to-use and intuitive websites. We invite you to take this useful online course, brought to you by U1 Group (a UX consultancy firm) and comprising easy-to-follow lectures that will walk you through the card sorting process.

What is card sorting?

Card sorting is a professional exercise undertaken with participants that determines the best navigational structure (formally called ‘information architecture’) for your site. Card sorting can be carried out face-to-face, or online using a software program called OptimalSort. In this course, you will learn how to master both methods.

How to get 50% off the online course

Loop11 fans get a 50% discount on this course by using the code “Loop11” (the code is automatically applied when you follow any of the links in this email).

Why you should check it out:

  • Lifetime access to 14 lectures
  • Join a community of 150+ students
  • 30 day money back guarantee
  • Mobile accessibility
  • Certificate of completion

Carrying out card sorting prior to web design and development enables you to:

  • See how users map out relationships between your content
  • Figure out where users think content should (or shouldn’t) sit
  • Observe what navigation paths don’t make sense to users and why
  • Learn the language visitors use to categorize information

Check out the free preview to see how this course will provide you with valuable insights into creating and testing intuitive website structures.

 

How to use Tree Testing to Test the Information Architecture of Your Website or App

Importance of testing IA

Back when I was a kid, I was taught that raw, unorganized facts that need to be processed are called data, and that when data is processed and organized into something sensible, or presented in a given context so as to make it useful, it is called information. What I wasn't taught as a kid was that even information needs some sort of organization, so that we can achieve consistency in task flow.

One of the biggest challenges faced while building a website or an app is the organization of content. If your content is not findable or accessible, then no matter how pretty or full of bells and whistles your website or app is, your users are going to run away, and conversion rates will come down. Testing the organization of content (or information architecture, as we call it) thus becomes very necessary at the early stages of the product development lifecycle.

Hello Tree Testing

There are several different ways in which the information architecture (IA) of a particular website/app can be tested. What if I told you there is a simple yet bulletproof technique to carry out such a test? Yes, we are talking about Tree Testing – one of the simplest ways to test the IA of an application.

So what is a 'tree'? Typically, every website that has more than a few pages translates into a structure that categorizes pages into groups and sub-groups, forming some sort of hierarchy of content. This hierarchy of content, or 'tree', can be formed by the usual IA/user research techniques (read: card sorting). Once this tree has been formed, it needs to be cross-checked to make sure that everything is perfect. This is where tree testing comes into play.

Tree testing is an effective way to assess the findability, labeling & organization of your website’s/app’s structure. Unlike traditional usability testing, tree testing is not done on the website itself; instead, a simplified text version of the site’s structure is used. The prime focus is to test the navigation system of the website.

The questions to be answered are – “Can users find what they are looking for?”, “Does the navigation system make sense to users?”, “Can they choose between menu items, without having to think too much?”, etc. Factors like visual design, motion design, etc. are taken out of the picture.

Typically, a tree test is conducted prior to building a prototype to make sure that users are able to navigate easily through the ‘tree’ (hierarchy of content).

 

Why you should do it

There are various advantages to adopting this process. A few of them are listed below:

  • It allows you to visually test the navigation and findability of your website/app.
  • It allows you to identify navigational issues prior to building a prototype or a dynamic website.
  • It allows you to analyze all attempts where users had trouble navigating before you go live.
  • It allows you to gauge how well users can find items in the ‘tree’.
  • It allows you to determine the ease with which users/participants complete the given tasks successfully.

How to go about performing a Tree Test

Here's a short guide to performing a tree test. Let's consider a hypothetical situation where you want to test the information architecture of an e-commerce website that sells hair and skin care products. Let's assume that you've already performed a card sort and have come up with a navigation system that seems appropriate. The next step is to cross-check and make sure everything is perfect. Here's what you do.

  • Give your users/participants a “find it” task (Example: “Look for American Crew Daily Shampoo”).
  • Show them a text version of the top tier of the menu items of your website.
  • Once they choose a menu item, show them the list of items under that particular category (This is the next tier in your tree).
  • Let them continue to move down through the tree, backtracking if necessary – until they successfully complete the given task or until they give up.
  • Give them several tasks in this manner, every time starting back at the top of the tree.
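The walk down the tree described above can even be mocked up in a few lines of code. The sketch below (plain Python, with an invented product tree for our hypothetical store) simply follows a participant's clicks down nested categories and reports whether they landed on the target item:

```python
# Hypothetical content tree for the hair and skin care store.
TREE = {
    "Hair Care": {
        "Shampoo": ["American Crew Daily Shampoo", "Herbal Essences Shampoo"],
        "Conditioner": ["Dove Intense Repair Conditioner"],
    },
    "Skin Care": {
        "Moisturizer": ["Nivea Soft Cream"],
    },
}

def run_task(tree, clicks, target):
    """Walk a participant's clicks down the tree.

    Each click on a category name descends one tier; a click on a
    leaf item ends the task. Returns (success, path_taken).
    """
    node, path = tree, []
    for click in clicks:
        path.append(click)
        if isinstance(node, dict) and click in node:
            node = node[click]            # descend to the next tier
        elif isinstance(node, list) and click in node:
            return click == target, path  # reached a leaf item
        else:
            return False, path            # clicked something not on this tier
    return False, path                    # gave up before reaching a leaf

success, path = run_task(
    TREE,
    ["Hair Care", "Shampoo", "American Crew Daily Shampoo"],
    target="American Crew Daily Shampoo",
)
print(success, path)
```

A real tree testing tool records the same things – the path taken and whether the destination was correct – just across many participants and tasks at once.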

 


Finally, analyze and implement the findings/results.

Conclusion

Proper analysis of the findings will answer the following questions:

  • Did the users/participants succeed in completing the given task(s)?
  • Did they backtrack? If so, where and how many times?
  • How fast did they click?
  • Which sections need a rework?
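Once a few sessions are recorded, the analysis is mostly counting. Here's a small sketch, with invented per-participant numbers, of the kind of metrics those questions boil down to:

```python
# Hypothetical per-participant results for one "find it" task.
sessions = [
    {"success": True,  "backtracks": 0, "seconds": 14},
    {"success": True,  "backtracks": 2, "seconds": 31},
    {"success": False, "backtracks": 3, "seconds": 55},
    {"success": True,  "backtracks": 0, "seconds": 12},
]

n = len(sessions)
success_rate = sum(s["success"] for s in sessions) / n       # completed the task
avg_backtracks = sum(s["backtracks"] for s in sessions) / n  # how often users went back
directness = sum(s["success"] and s["backtracks"] == 0
                 for s in sessions) / n                      # succeeded without backtracking
avg_seconds = sum(s["seconds"] for s in sessions) / n        # how fast they clicked

print(f"success {success_rate:.0%}, directness {directness:.0%}, "
      f"avg backtracks {avg_backtracks:.1f}, avg time {avg_seconds:.1f}s")
```

A section with a low success rate or lots of backtracking is your candidate for a rework.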

The next and most important task is to implement these findings. Redesign the structure of the content using them and perform the test once again. If user interaction is found to be smooth and error-free, you are good to go.

Tree testing might seem like overkill to some of us, but it does reveal major flaws in your website's or app's structure, and it lets you define a more reliable site structure and navigation by validating the results derived from IA techniques like card sorting.

 

Interested in learning more?  Get a 50% discount on this 5-star course on Developing an Information Architecture with Card Sorting.  Join a community of 100+ students & receive lifetime access to 14 easy-to-follow lectures from the user experience experts U1 Group (a UX consultancy firm).

 


World Usability Day: attend and win the $10,000 Prize Package

We're excited to be sponsoring another great usability event, World Usability Day (WUD), taking place on November 13, 2014.

WUD is a single day of events occurring in over 25 countries that brings together communities of professional, industrial, educational, citizen, and government groups for our common objective: to ensure that the services and products important to life are easier to access and simpler to use.

The topic of this year’s event? Engagement.

We’ll be exploring critical questions like:

  • How can you engage people to use technology products and services?
  • What kind of design thinking needs to be incorporated to keep people engaged?
  • How can you engage those outside our field to understand the importance of a good user experience?

Find a World Usability Day event near you >>

Because we believe in this event and want to support the discussion around increasing engagement, Loop11 is sponsoring WUD by contributing 1 free user test, a $350 value.

The World Usability Package will include leading tools for Usability, Marketing & Project Management professionals, with a total value of more than $10,000 in prizes.

To enter the contest for a chance to win, you simply have to upload a photo of the most engaging thing you’ve seen today (online or offline) and then write a short explanation of why the subject is engaging.

Make sure to check out this post for more details on when to enter the contest and follow @optimalworkshop so you can catch the competition start date.

Small words make a big difference: how to ask incisive usability questions for richer results

When it comes to usability research, it’s not really about the method we employ to collect insights. Nor is it about the design we’re testing, although that plays a major part. It’s about the questions. Simply put, questions are the secret sauce to any research dish.

But how do we ask the “right” questions to garner results? There’s a lot to consider, like the environment we’re in (physical or digital?), the manner of the research (formal or informal?), and the relationship we have with our subjects (i.e. the power dynamics).

While it's difficult to judge what a "right" question is, there are certain ways to improve the impact of our queries. I personally like to use a mnemonic device, abbreviated "ASK", which helps me focus on crafting constructive questions. Here it is:

  • A. Avoid starting with words like "Are", "Do", and "Have". Questions that start with these types of verbs are a surefire way to nip insights in the bud. They can lead to what's called a closed question, i.e. something that can literally close a conversation with a "Yes" or "No" answer. While it may be useful to gather this sort of data at times, try instead to open it up. Using open questions, as Changing Minds notes, gives subjects time to think, reflect, and provide opinions.
  • S. Start with W. The 5 W’s – i.e. who, what, when, where, and why – are the building blocks for information-gathering. It’s a tool from rhetoric, historically attributed to the Greeks and Romans. Essentially, the 5 W’s help us pull out the particulars. The magic behind them is that none of them can be answered with just a “yes” or “no”, so we’re always going to get a bit more of an expressive answer from subjects.
  • K. Keep it short. As researchers, we can often let curiosity get the best of us. Excited, we may list out a string of questions, asking more than necessary. By asking more than one question at a time, we ruin the focus of a conversation. We should try to keep our questions short and sweet, so that they may be digested more appropriately.

That’s it. ASK: a simple shorthand for asking incisive usability questions.


If you want to learn more about questions, check out NN/g's "Talking with Participants During a Usability Test" for more basic talking techniques. I also highly recommend David Sherwin's "A Five-Step Process For Conducting User Research", for more on how to choose the right "W" at the right time.

 

David Peter Simon is a consultant at ThoughtWorks, an agile design and engineering firm. Talk with him on Twitter @davidpetersimon.
