Following on from our recent release of moderated user testing, we’re happy to announce some significant additions to Loop11’s testing suite:
- 5 Second Tests
- First Click Tests
- Image Testing
- The Lostness Metric
- Safari Testing Update
Last week we released auto calculations for Net Promoter Score (NPS) and System Usability Scale (SUS) questions. This week we’re happy to announce we’ve released auto calculations of the Lostness metric.
What’s that you say? Not sure what the Lostness metric is?
Don’t worry, you are not alone. Here’s a definition, borrowing from Tomer Sharon in this article:
‘The lostness metric is a measure of efficiency using a digital product or service. It tells you how lost people are when they use the product. Lostness scores range from zero to one.’
So that gives some insight, but how does it relate to a Loop11 study?
I’m glad you asked!
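For readers who want to see the metric in action, here is a minimal sketch of Smith’s (1996) lostness formula, the calculation Tomer Sharon’s article is based on. The function name and the example numbers are our own illustration, not Loop11’s internal implementation: N is the number of unique pages visited, S the total pages visited, and R the minimum number of pages the task actually requires.

```python
from math import sqrt

def lostness(unique_pages: int, total_pages: int, optimal_pages: int) -> float:
    """Smith's (1996) lostness score.

    unique_pages  (N): distinct pages the participant visited
    total_pages   (S): total page views, including revisits
    optimal_pages (R): minimum pages needed to complete the task
    """
    n, s, r = unique_pages, total_pages, optimal_pages
    return sqrt((n / s - 1) ** 2 + (r / n - 1) ** 2)

# A participant who needed 3 pages but viewed 7 (5 of them unique):
print(round(lostness(unique_pages=5, total_pages=7, optimal_pages=3), 2))  # 0.49

# Perfect navigation scores zero:
print(lostness(unique_pages=3, total_pages=3, optimal_pages=3))  # 0.0
```

A score of 0 means the participant took the optimal path; the closer the score gets to 1, the more lost they were. Sharon suggests that scores above roughly 0.4 indicate a participant who is clearly lost.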
Loop11 is proud to announce the launch of a new moderated user testing feature within our existing suite of tools.
This new release complements our existing unmoderated usability testing features, giving customers an unparalleled ability to run a wide array of UX studies, both qualitative and quantitative. (more…)
My gut feeling is that many of you reading this title, and the article, may be confused as to why it needs to be written. The delightful thing about many UXers is their high level of empathy and general emotional intelligence. It’s what makes us good at what we do. However, the truth of the matter is that not everyone running user research thinks, or cares, about best practice.
Many of the considerations I’ll discuss below are red lines we intuitively know not to cross, and more than likely have never even considered breaching. However, as more and more professionals pile into the world of product development, with differing skillsets, we’re starting to see more user tests run by people who don’t know what they’re doing, and who in the process raise ethical concerns.
Further down in this article I have included an example of gobsmackingly poor ethics within a user test, but before I get there I’ll cover some of the things we think about as makers of user testing software. (more…)
Recently we ran two usability studies to gather comparative benchmarking data: one on an existing design, and one on a new design to compare against it.
Since the new design was yet to be released, we created two InVision prototypes, one for each design. We then created 4 tasks and 5 questions and generated two identical studies, one for each prototype.
Next, we set about running 100 participants through the prototypes, 50 on each, to see if our new design had created a better overall experience for participants.
An hour after launch we had the results back from the two studies so I set about consuming the reports. It didn’t take me long to see that the average page views per task were higher for our new design.
As I’m sure many of you can attest, it hurts a little when something you’ve put a lot of effort into and believe is better proves to be worse. But in the interest of creating a better piece of software, I swallowed my pride and took a look at some of the highest page count participants to see how we’d failed.
I focused on two participants who were large outliers, each having recorded roughly three times as many page views as the next-nearest participant. These two participants alone were enough to elevate the page count averages to the point where the new design was outperformed by the old design.
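This is a classic case of a couple of extreme values dragging an average around. As a quick illustration (with made-up page-view counts, not our actual study data), compare the mean against the median for a task where two participants record roughly three times the views of everyone else:

```python
from statistics import mean, median

# Hypothetical page views per participant for one task; the last two
# values are outliers at roughly 3x the next-nearest participant.
page_views = [5, 6, 6, 7, 7, 8, 21, 24]

print(mean(page_views))    # 10.5 -- pulled well above the typical participant
print(median(page_views))  # 7.0  -- closer to what most participants experienced
```

When comparing designs on per-task page views, it’s worth checking the median (or inspecting outliers individually, as we did) before concluding that one design performed worse.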
As I watched the videos of these two participants I only became more confused. (more…)
Search Engine Optimization (SEO) has long been viewed as one of the most important factors in determining the success of online companies and their websites. Leading tools, such as MOZ, and their suite of products, would facilitate SEO professionals diving into the depths of search engine results and pulling out valuable nuggets of SEO gold that could then be applied to their website in order to climb up the search rankings.
Traditionally, SEO work has involved looking at large sets of impersonal data, usually mined from Google Analytics and/or its general rankings. The problem: this information often lacks color. You can see the ‘what’, but it’s not always clear why users are thinking and acting the way they are.
This is where Search Engine Findability User Studies (SEFUS) come in and save the day. (more…)
Code is a developer’s problem… right? Mmmm, maybe, but when it goes wrong it can sure derail your user testing in a hurry.
There is one inconspicuous line of code that can ruin your UX testing and, as a UX professional, it’s your responsibility, not your developer’s, to get it right.
What am I referring to? (more…)