My gut feel is that many of you who will read this title, and the article, might be confused as to why this needs to be written. The delightful thing about many UXers is their high level of empathy and general emotional intelligence. It’s what makes us good at what we do. However, the truth of the matter is not everyone running user research thinks, or cares, about best practice.
Many of the considerations I’ll discuss below are red lines we intuitively know not to cross, and more than likely have never even considered breaching. However, as more and more professionals with differing skillsets pile into the world of product development, we’re starting to see more user tests run by people who don’t know what they’re doing, raising ethical concerns in the process.
Further down in this article I have included an example of gobsmackingly poor ethics within a user test, but before I get there I’ll cover off some of the things we think about as makers of user testing software.
For context: most of what I’m discussing below is looking through the frame of an unmoderated user test, however, it is also relevant for moderated testing.
At Loop11, our customers are UX designers, researchers, product managers and business strategists but the vast majority of folks that encounter our user experience software are the participants. Every day thousands of well-intentioned people are joining Loop11 studies to participate in usability tests and earn some extra coin.
At the risk of being called ‘Captain Obvious’: we need to ensure they don’t suffer any negative consequences as a result of our software.
This includes thinking about things like:
- complying with GDPR regulations for EU participants
- ensuring our browser extension and mobile apps are unobtrusive during studies and lie entirely dormant at all other times
- ensuring our messaging around recording permissions is clear and concise, and achieves these goals in over two dozen languages
- providing a pathway for participants to contact us directly if they have an issue with a user test run using Loop11
These are just a few examples. We’re not perfect, so we continually look for ways to improve the experience for participants and proactively source feedback from all stakeholders.
Preparing User Test Participants
We regularly receive questions about how best to prepare participants for a study. Generally speaking, I don’t believe there is a one-size-fits-all template, as user tests vary in methodology, context and technology. However, I’ve taken a stab at some key pillars that all tests should include.
I’d love to hear your thoughts in the comments. What would you include? What have I got wrong?
The Duh Pillars
1. Never aim to mislead your participants. In fact, double and triple check your study to ensure you are not accidentally opening the possibility of misleading anyone, stakeholders included, not just participants. Have someone with zero context read through your study plan, or better yet, run them through a preview of your study. This way you’ll hopefully catch issues you might otherwise be blind to.
2. Be very clear on what you are testing, why you are testing it, and what data you expect the study to produce. This will also help with the above point. Anything that is not crucial to your goals should be removed from the study. A participant’s time is a valuable resource: respect it, and don’t waste any of it on tasks and questions that aren’t essential.
3. Preview your study in multiple ways, especially using the path you anticipate participants will follow when accessing your study. For example, if you are running a mobile usability study, don’t preview the test on your desktop, use a mobile device. This seems simple, but you’d be surprised how often people make this mistake.
4. Be explicit with participants about what you are asking them to take part in. This includes whether they are using a live website, a staging environment or a prototype. This context is valuable for the participant and may inform how they choose to act within the study. It may even affect whether they choose to begin the study at all. The story below is a dramatic example of what happens when context is not properly thought through or explained.
When Research Goes Bad
As teased above, below is a Twitter thread from UXer Geoff Wilson. He is an experienced UX professional but also acts as a participant at times. In the thread he details how he was misled and ultimately put in a very uncomfortable situation while participating in a usability study for Freelancer.com.
I have a personal story to tell of how NOT to do #UX / #usabilitytesting. Seriously, my most recent experience on the participant side of the table left me a bit ethically shaken actually.
Last night I completed a #usability test for the @freelancer website with @usertesting 1/
— Geoff Wilson (@geoffwilsonUX) September 3, 2018
As you’ll no doubt read, Geoff began the study under the premise that he was not testing a live website. When he was asked to create an account and then post a job on the Freelancer platform, he was not expecting that real people would be tricked into responding to his job post. However, that’s exactly what happened.
It’s unclear who at Freelancer created this study and whether the deception was intentional. At best, it was a well-intentioned staff member who did not think the user test through thoroughly before launching it. This lack of forethought produced an array of negative fallout.
Geoff felt terrible, as though he had misled job-seeking professionals. Those professionals wasted their time. And Freelancer has been made to look both careless and amateurish. Hopefully someone from their team sees Geoff’s thread and not only apologises to him but also launches a review into their UX practices.
I’d love to hear about your best practices not only for UX ethics, but study creation in general.