Analyzing usability test results
Conducting usability tests can feel like a bit of a maze – there’s a lot of data to sift through, and it can be difficult to know where to start. One of the biggest risks when trying to solve usability problems is suggesting solutions that don’t actually address the problems you identify. This can happen due to cognitive biases, not understanding the users’ needs and behavior, a lack of creativity and innovation, or decision fatigue, which can all cloud our judgment. So, what’s the solution? Knowing how to analyze data to identify common patterns, prioritize issues, and identify and propose relevant solutions. In this chapter, we’ll walk you through these steps and offer tips and best practices so that you can be confident making data-informed recommendations.
Categorize and organize your data
Let’s take a look at the steps you can take to categorize, sort, and organize the data you gather during a usability study.
1. Categorize your data
Begin by carefully looking through your data to identify any patterns or trends. Keep an eye out for any issues your participants encountered while performing tasks, the actions they took, the comments they made (both positive and negative), and how many times these problems occurred across your participant group.
Next, add categories or tags so that you can sort and filter your data later. These categories will likely line up with the tasks you asked participants to complete during testing, such as signing up for an account, finding a specific product, adding an item to the cart, or using the search function.
It can be helpful to use a tool like Airtable or Google Sheets so that you can easily tag and move this data around. This is also easy to do in Lyssna, where you can create and apply tags to qualitative data, and then filter to drill down. Quantitative data like goal completion, time to complete, and total clicks and misclicks is already analyzed for you, with downloadable data visualizations like heatmaps, click maps, and common paths.
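If you prefer working in code rather than a spreadsheet, tagging and filtering can be as simple as a list of records. Here's a minimal sketch – the field names, tags, and sample observations are illustrative assumptions, not any specific tool's data model:

```python
# Each observation records who said or did what, on which task, with tags
# attached for later filtering. All sample data here is made up.
observations = [
    {"participant": "P1", "task": "sign_up", "note": "Missed the CTA button", "tags": ["navigation", "negative"]},
    {"participant": "P2", "task": "search", "note": "Found the search bar easily", "tags": ["positive"]},
    {"participant": "P3", "task": "sign_up", "note": "Confused by password rules", "tags": ["forms", "negative"]},
]

def filter_by_tag(observations, tag):
    """Return only the observations carrying the given tag."""
    return [obs for obs in observations if tag in obs["tags"]]

negative = filter_by_tag(observations, "negative")
print(len(negative))  # 2 observations tagged 'negative'
```

The same structure lets you filter by task or participant instead of tag, which is handy when you want to see every note about, say, the sign-up flow in one place.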
2. Clean and organize your data
Once you’ve categorized the data, it’s time to clean and organize it. This involves removing any irrelevant or duplicate data, using a consistent naming convention, and checking for any errors or inconsistencies.
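The cleaning step above can also be scripted. This sketch normalizes naming (case and spacing) and drops exact duplicates and empty entries – the sample notes are made up for illustration:

```python
def clean_notes(notes):
    """Normalize naming and drop empty entries and duplicates, preserving order."""
    seen, cleaned = set(), []
    for note in notes:
        # Consistent naming: trim, lowercase, and collapse repeated spaces.
        normalized = " ".join(note.strip().lower().split())
        if normalized and normalized not in seen:
            seen.add(normalized)
            cleaned.append(normalized)
    return cleaned

raw = ["Missed the CTA  button", "missed the cta button", "", "Confused by filters"]
print(clean_notes(raw))  # ['missed the cta button', 'confused by filters']
```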
3. Ensure data integrity and accuracy
It’s also important to make sure your data is accurate. You can do this by checking for any outliers or anomalies.
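One simple way to spot outliers in numeric data, like time on task, is to flag values far from the mean. This is a rough sketch (the two-standard-deviation threshold and the sample timings are illustrative choices, and an outlier isn't automatically an error – it may just warrant a closer look):

```python
import statistics

def flag_outliers(times, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(times)
    stdev = statistics.stdev(times)
    return [t for t in times if abs(t - mean) > threshold * stdev]

task_times = [32, 41, 38, 29, 35, 240]  # seconds; 240 looks anomalous
print(flag_outliers(task_times))  # [240]
```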
Finally, it's always a good idea to have a colleague review your work to ensure accuracy and consistency. If you’re using Lyssna, you can use the comments feature to easily tag a team member and ask them to review the test results.
Prioritize usability issues
Now that you have a clean dataset to work from, here are some tips for how to prioritize usability issues.
1. Rank issues by severity
When reviewing the data collected during usability testing, identify the most important issues by considering how global the problem is and how serious it is. Not all issues will be equally serious, so you can use a scale to classify them. For example:
Critical: If the problem isn’t fixed, users won’t be able to complete tasks or achieve their goals. Critical issues will impact the business and the user if they aren’t fixed.
Serious: Many users will be frustrated if the problem isn’t fixed, and may give up on completing their task. It could also harm your brand reputation.
Minor: Users might be annoyed, but this won’t prevent them from achieving their goals.
Suggestion: These are suggestions from participants on things to improve.
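One rough way to turn a severity scale like this into an ordering is to weight each issue's severity by how many participants it affected. The weights and sample issues below are illustrative assumptions, not a standard:

```python
# Map the severity scale above to numeric weights (illustrative values).
SEVERITY_WEIGHT = {"critical": 4, "serious": 3, "minor": 2, "suggestion": 1}

def priority_score(severity, affected, total_participants):
    """Weight the severity rating by the share of participants affected."""
    return SEVERITY_WEIGHT[severity] * (affected / total_participants)

# (issue, severity, number of participants affected) – made-up examples.
issues = [
    ("search icon hard to find", "serious", 3),
    ("typo on pricing page", "minor", 5),
    ("checkout fails on mobile", "critical", 3),
]
ranked = sorted(issues, key=lambda i: priority_score(i[1], i[2], 5), reverse=True)
for name, severity, affected in ranked:
    print(f"{name}: {priority_score(severity, affected, 5):.1f}")
```

Note that weighting by frequency means a minor issue that affects every participant can outrank a serious one that affects only a few – adjust the weights to suit how your team trades off severity against reach.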
Read more: Check out the Nielsen Norman Group’s advice on severity ratings for usability problems.
2. Consider issue frequency and impact
In addition to severity, consider how frequently the issue occurred during testing and how much it affected users’ ability to complete a task. You can calculate the frequency of an issue by taking the number of occurrences and dividing it by the total number of participants.
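The frequency calculation described above is straightforward to express in code:

```python
def issue_frequency(occurrences, total_participants):
    """Frequency = number of times the issue occurred / total participants."""
    return occurrences / total_participants

# E.g. an issue hit by 4 of 8 participants (sample numbers for illustration):
print(issue_frequency(4, 8))  # 0.5
```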
3. Use qualitative and quantitative analysis
Use both qualitative and quantitative analysis to assess your data. For quantitative findings, make calculations such as success rates, time on task, error rate, and satisfaction rating. You can also add demographic data so that you can sort the data to see if any demographic variables come into play.
For qualitative findings, group according to observations about the pathways participants took, the problems they experienced, comments or recommendations they shared, and answers to open-ended questions.
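To make the quantitative calculations concrete, here's a small sketch. The result fields and sample numbers are made up for illustration:

```python
import statistics

# Per-participant results for one task: completed?, time in seconds, error count.
results = [
    {"completed": True, "time": 42, "errors": 0},
    {"completed": True, "time": 55, "errors": 1},
    {"completed": False, "time": 90, "errors": 3},
    {"completed": True, "time": 38, "errors": 0},
]

success_rate = sum(r["completed"] for r in results) / len(results)
avg_time = statistics.mean(r["time"] for r in results)
error_rate = sum(r["errors"] for r in results) / len(results)

print(f"Success rate: {success_rate:.0%}")           # 75%
print(f"Avg time on task: {avg_time:.2f}s")          # 56.25s
print(f"Errors per participant: {error_rate:.2f}")   # 1.00
```

Adding a demographic field (e.g. age group or device type) to each record lets you group these same calculations by segment and check whether any demographic variables come into play.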
Read more: This article from Think With Google explores the importance of usability testing in improving brand perception and user experience for mobile apps, with case studies from H&M, HelloFresh, VanMoof, and Louvre Hotels Group.
Make recommendations
Once you’ve analyzed the data and prioritized usability issues, it’s time to identify potential solutions and, if needed, prepare a report.
1. Identify solutions
Sometimes solutions are simple, like adjusting the font size of a text block for better readability. But identifying solutions is trickier when they aren’t so obvious – improving the user onboarding process for a mobile app, for example.
To reduce the risk of making the wrong decision, it’s worth generating multiple solution ideas for each issue. When proposing each solution, be as specific as possible so that it’s easier to evaluate them. For example, if you identify that users had difficulty locating the search function on your website, instead of suggesting “make the search function easier to find”, a specific solution could be “add a search icon in the top right corner and make it a distinct color to increase visibility”.
2. Prepare a usability test report
In some situations, you might be able to implement a proposed solution yourself (like adjusting the font size in that text block). In others, you might need to make a case to your team or the decision makers in your organization. When this happens, you’ll need to prepare a usability test report.
Your report should include:
Summary: Explain what you tested, where and when the test was held, the equipment you used, how many participants you tested, and a brief description of the problems you identified.
Methodology: Describe the test sessions, the interfaces tested, metrics collected, and an overview of the task scenarios. You can also include demographic information about your participants.
Results: Provide a summary of your findings, such as the number of participants who completed each task, average time taken to complete each task, satisfaction ratings, etc. You can measure the success of each task against the evaluation criteria you identified during the planning phase.
Recommendations: List your findings and recommendations. Each recommendation you propose should be supported by data. Although the focus is on identifying and solving problems, it’s also useful to include any positive findings you identified and describe what’s working well.
In your report, it’s good practice to include visuals to illustrate specific points, such as tables, graphs, charts, screenshots, and short video clips. Tailor the report to your audience: use clear language, avoid jargon, and be concise.
Read more: Check out the UX research report chapter in our UX research guide for more information on writing and presenting reports.
Make changes based on recommendations
Once you’ve identified usability issues, it’s time to develop a plan for making changes to improve the user experience. Here’s a summary of what to consider.
Determine how you’ll make changes
Based on the usability testing report recommendations, determine how each change will be made and whether it requires changes to the user interface, content, or functionality.
Establish a timeline for making changes
Set a realistic timeline for making the changes. Consider the complexity of the changes, the resources required, and any dependencies on other projects. Communicate the timeline to your stakeholders to make sure everyone is on the same page.
Assign roles and responsibilities
Determine who will be responsible for making each change. Consider creating a cross-functional team to make sure all aspects of the changes are addressed.
Determine how changes will be tested and evaluated
Develop a plan for testing the changes to make sure they have the desired effect on the user experience. Establish metrics for evaluating the effectiveness of the changes, such as user engagement, task completion rates, or customer satisfaction scores.
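A simple before-and-after comparison is often enough to show whether a change moved a metric. This sketch compares task completion rates across two test rounds – the outcome lists are made-up examples:

```python
def completion_rate(outcomes):
    """Share of participants who completed the task (True = completed)."""
    return sum(outcomes) / len(outcomes)

before = [True, False, True, False, False]  # round 1: 2 of 5 completed
after = [True, True, True, False, True]     # round 2 (after changes): 4 of 5

change = completion_rate(after) - completion_rate(before)
print(f"Task completion improved by {change:.0%}")  # 40%
```

With small usability samples, treat a shift like this as a directional signal rather than statistical proof, and keep iterating.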
Remember that implementing changes is an ongoing and iterative process. It’s likely that you’ll need to make further changes based on the results of testing, and that’s okay. The key is to keep iterating until you have a product that’s easy to use and meets the needs of your users.