How to test your myBalsamiq prototypes with Loop11

The guys at Balsamiq asked me recently to write a brief article for their customers on how to conduct usability testing with Loop11 on prototypes and wireframes built with Balsamiq. My article below shows the steps involved in putting a simple Loop11 project together to test a prototype of the Kayak website.


Happy testing!

Clickstream analysis has arrived!

We’re excited to announce the first of a number of new features for Loop11 in 2012 – Clickstream Analysis. Clickstream analysis will replace the ‘Most common navigation path’ report, allowing you to analyze task navigation graphically and instantly understand how visitors move through your website during a task.

The clickstream report provides a graphical representation of participants’ navigation through the website so you can see their journey, as well as the path they took before abandoning or failing a task.

You’ll notice that we made the visualization highly interactive: you can highlight different pathways and see detailed information about specific pages. For example, if you want to dive deeper into your pages, hover over a node to see more information at a glance.

Below you can see an example of a task that performed well in usability testing (in this instance with a task completion rate of 92%). A quick look at the analysis shows that 90% of participants went directly to the success page from the homepage. The orange lines are a visual indication of the magnitude of participants who failed the task at different points in their journey through the website. Navigation through the website is clean and uncomplicated, which should be the case when the participant has a clear direction.

By contrast, the clickstream below is for a task that was performed comparatively poorly, where the task completion rate was only 32%. In this example, there is clearly confusion as to where participants should navigate from the home page to complete the task.

This is our first step in tackling clickstream analysis and we look forward to hearing your feedback as you begin to use the report in the coming weeks. We’re excited to bring you the first of a number of new features for the year, so stay tuned for more!

As always, we welcome your input on how we can make the clickstream analysis more useful for you, so let us know in the comments below.

How many participants should be used for online, quantitative usability testing?

Qualitative usability testing has traditionally been based on small sample sizes of 5-20 participants, and many experts agree that for qualitative, lab-based testing a sample of this size is sufficient. However, the growth of online testing tools and quantitative usability research is changing the game. Online user testing has created a new wave of analysis such as benchmarking, A/B testing, competitor comparison, validation and much more. These kinds of quantitative analyses require larger numbers of participants to validate the data.

For example: let’s say your company is testing two different versions of a wireframe so management can approve one to implement and allocate resources to. It would not be good practice to use 10 participants and base the decision on a 60%-40% split in success rates: with so few participants, the difference could easily be due to chance. However, if 500-1,000 participants were used, that data would be far more reliable, and management would have valid evidence to approve the findings.
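To make the intuition above concrete, here is a minimal sketch of a standard two-proportion z-test applied to the hypothetical numbers in the example (6/10 vs 4/10 successes, then the same 60%-40% split at 500 participants per version). The function name and the specific counts are illustrative, not part of any Loop11 feature; a |z| above roughly 1.96 indicates a difference significant at the 95% level.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-test: how many standard errors apart
    are the two observed success rates?"""
    p_a = success_a / n_a
    p_b = success_b / n_b
    # Pool the samples to estimate the standard error under
    # the null hypothesis that both versions perform equally.
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# 60% vs 40% with 10 participants per version: z ≈ 0.89,
# well below 1.96 - the difference could easily be chance.
print(round(two_proportion_z(6, 10, 4, 10), 2))

# The same split with 500 participants per version: z ≈ 6.32,
# a clearly significant difference.
print(round(two_proportion_z(300, 500, 200, 500), 2))
```

The same 20-point gap goes from statistically meaningless to overwhelming evidence purely because the sample size grew, which is exactly why quantitative comparisons need larger panels than lab studies.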

Some specialists, such as Usability Sciences, recommend up to several thousand participants for higher-level quantitative testing, such as clickstream analysis or multiple cross-tabulations. They have broken down the margin of error associated with different numbers of participants. Read the full article here.
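The relationship between sample size and margin of error can be sketched with the standard formula for a proportion at 95% confidence. This is a generic statistical illustration, not Usability Sciences’ own breakdown; p = 0.5 is used as the worst case (it maximizes the margin).

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for an observed proportion p
    measured with n participants."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst-case (p = 0.5) margin of error at different sample sizes:
for n in (100, 500, 1000, 2000):
    print(f"n={n}: ±{margin_of_error(0.5, n) * 100:.1f}%")
```

Roughly, 100 participants gives about a ±10% margin, 1,000 brings it down to about ±3%, and 2,000 to about ±2% – and because the margin shrinks with the square root of n, each further halving of the error requires quadrupling the sample.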

Happy Testing!
