Release Notes: 5 Second & First Click Tests, Image Testing & More…

Following on from our recent release of moderated user testing, we’re happy to announce some significant additions to Loop11’s testing suite:

  • 5 Second Tests
  • First Click Tests
  • Image Testing
  • The Lostness Metric
  • Safari Testing Update

(more…)

How Lost Are Your Users? Lostness Metric to the Rescue!

Last week we released auto calculations for Net Promoter Score (NPS) and System Usability Scale (SUS) questions. This week we’re happy to announce we’ve released auto calculations of the Lostness metric.

What’s that you say? Not sure what the Lostness metric is?

Don’t worry, you are not alone. Here’s a definition, borrowing from Tomer Sharon in this article:

‘The lostness metric is a measure of efficiency using a digital product or service. It tells you how lost people are when they use the product. Lostness scores range from zero to one.’
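
For concreteness, here’s a minimal sketch of the calculation as it’s commonly formulated (the three counts follow the description in Sharon’s article; the function name and worked numbers below are purely illustrative, not drawn from a Loop11 study):

    from math import sqrt

    def lostness(unique_pages: int, total_pages: int, optimal_pages: int) -> float:
        """Lostness score: 0 means perfectly efficient navigation; values near 1 mean hopelessly lost.

        unique_pages  (N) -- number of different pages visited during the task
        total_pages   (S) -- total pages visited, counting revisits
        optimal_pages (R) -- minimum number of pages needed to complete the task
        """
        n, s, r = unique_pages, total_pages, optimal_pages
        return sqrt((n / s - 1) ** 2 + (r / n - 1) ** 2)

    # Illustrative example: the task can be completed in 4 pages, but the
    # participant viewed 11 pages in total across 7 distinct pages.
    print(round(lostness(unique_pages=7, total_pages=11, optimal_pages=4), 2))  # -> 0.56

A score of 0 means the participant took an optimal path; scores much above roughly 0.4 are commonly read as a sign the participant was genuinely lost.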

So that gives some insight, but how does it relate to a Loop11 study?

I’m glad you asked!
(more…)

Moderated User Testing Release Notes

Loop11 is proud to announce the launch of a new moderated user testing feature within our existing suite of tools.

This new release will complement our existing unmoderated usability testing features, giving customers an unparalleled ability to run a wide array of UX studies, both qualitative and quantitative. (more…)

Ethics in UX - Yes, you do need to think about this

My gut feeling is that many of you who read this title, and the article, might be confused as to why it needs to be written. The delightful thing about many UXers is their high level of empathy and general emotional intelligence. It’s what makes us good at what we do. However, the truth of the matter is that not everyone running user research thinks, or cares, about best practice.

Many of the considerations I’ll discuss below are red lines we intuitively know not to cross and, more than likely, have never even considered breaching. However, as more and more professionals with differing skill sets pile into the world of product development, we’re starting to see more user tests run by people who don’t know what they’re doing, raising ethical concerns in the process.

Further down in this article I’ve included an example of gobsmackingly poor ethics within a user test, but before I get there I’ll cover some of the things we think about as makers of user testing software. (more…)

Analyzing User Test Data - The Devil is in the Detail

At Loop11 we like to practice dogfooding.

Recently we ran two usability studies to gather comparative benchmarking data on an existing design and compare it to a new one.

Since the new design was yet to be released, we created two InVision prototypes, one for each design. We then created 4 tasks and 5 questions and generated two identical studies, one for each prototype.

Next, we set about running 100 participants through the prototypes, 50 on each, to see if our new design delivered a better overall experience.

Disappointment Meets Confusion

An hour after launch we had the results back from the two studies, so I set about digesting the reports. It didn’t take me long to see that the average page views per task were higher for our new design.

As I’m sure many of you can attest, it hurts a little when something you’ve put a lot of effort into and believe is better proves to be worse. But in the interest of creating a better piece of software, I swallowed my pride and took a look at some of the highest page count participants to see how we’d failed.

I focused on two participants who were large outliers, each having recorded roughly three times as many page views as the next nearest participant. These two participants alone were enough to elevate the page-count averages to the point where the new design was outperformed by the old one.
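
To illustrate with made-up numbers (these are not the actual study figures), a couple of extreme outliers can drag a task’s mean page count past the competing design’s even when the typical participant does better, which is exactly why the median is worth checking too:

    # Hypothetical page-view counts for one task -- illustrative only, not the real study data.
    new_design = [5, 6, 4, 7, 5, 6, 5, 4, 21, 24]   # two outliers, each ~3x the next highest count
    old_design = [8, 7, 9, 8, 7, 8, 9, 7, 8, 9]

    def mean(xs):
        return sum(xs) / len(xs)

    def median(xs):
        s = sorted(xs)
        mid = len(s) // 2
        return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

    print(mean(new_design), median(new_design))   # 8.7 vs 5.5 -- the mean hides the typical session
    print(mean(old_design), median(old_design))   # 8.0 vs 8.0

On the mean alone the new design looks worse, yet the median participant actually viewed fewer pages; the outlier sessions deserve a closer look before drawing any conclusions.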

As I watched the videos of these two participants I only became more confused. (more…)

How to use UX Testing to Level Up Your SEO

Search Engine Optimization (SEO) has long been viewed as one of the most important factors in determining the success of online companies and their websites. Leading tools such as Moz, with their suites of products, help SEO professionals dive into the depths of search engine results and pull out valuable nuggets of SEO gold that can then be applied to a website to climb the search rankings.

Traditionally, SEO work has involved looking at large sets of impersonal data, usually mined from Google Analytics and/or general search rankings. The problem: this information often lacks color. You can see the ‘what’, but it’s not always clear why users are thinking and acting the way they are.

SEFUS… ah… gesundheit?

This is where Search Engine Findability User Studies (SEFUS) come in and save the day. (more…)

The One Line of Code That Will Kill Your User Tests

Code is a developer’s problem… right? Mmmm, maybe, but when it goes wrong it can sure derail your user testing in a hurry.

There is one inconspicuous line of code that can ruin your UX testing and, as a UX professional, it’s your responsibility, not your developer’s, to get it right.

What am I referring to? (more…)

The 3 Little Known Factors That Dictate Successful User Tests

“What a waste of time,” the researcher says, throwing their hands in the air.

This scene is more common than we, as UX professionals, would like to admit. One of the biggest frustrations for user researchers revolves around participants either not turning up or dropping out midway through a remote user testing session.

At Loop11, we often get support tickets from customers desperate to improve their completion rates and looking for tips to make their studies as efficient as possible.

In response, we decided to dig into the details and see if we could pull out any commonalities consistently associated with high-performing tests and low dropout rates.

For this task we pulled the most recent 1,000 user tests in which at least 10 participants completed the study. We then cut the data every which way we could think of to draw out tidbits that will help you run better user tests.

As a point of reference, across these 1,000 usability tests the average completion rate was 59% and the median was 63%. The average number of participants in a study was 103 and the median was 31. Last but not least, the average time a participant took to complete a user test was 23 minutes.

So without further ado, here are the top 3 tips for ensuring the majority of participants who begin your testing process successfully finish, giving you those valuable insights. (more…)
