Tree testing guide
In this chapter:

1. Organizing responses
2. Identifying patterns and trends in navigation choices
3. Interpreting the findings to inform design decisions

    Analyzing tree testing results

    Creating and sending your test to participants is just the beginning of the process; the fun stuff happens when you receive your results and can start analyzing them for insights. In this chapter, we'll cover how to analyze your results and use them to inform design decisions.


    Organizing responses

    If your tree test is moderated and in-person, you'll need a system to capture both the quantitative and qualitative data and a way to organize the data for analysis. We'd recommend organizing the different data types separately and bringing them together afterward for comparison and to build a narrative.

    Luckily, if you're using a platform like Lyssna for remote unmoderated usability testing, your data should be organized for you.

    Additionally, various tree testing software can help streamline data analysis by providing automated reports, visualization of navigation paths, and success rate tracking.
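If you're working from a raw export instead, the grouping step can be sketched in a few lines of Python. The record fields below (`task`, `participant`, `outcome`, `seconds`) are illustrative assumptions, not an actual Lyssna export format:

```python
from collections import defaultdict

# One record per participant per task (field names are illustrative only).
responses = [
    {"task": "Find pricing", "participant": "p1", "outcome": "direct_success", "seconds": 14},
    {"task": "Find pricing", "participant": "p2", "outcome": "indirect_success", "seconds": 41},
    {"task": "Find pricing", "participant": "p3", "outcome": "failure", "seconds": 62},
    {"task": "Contact support", "participant": "p1", "outcome": "direct_success", "seconds": 9},
]

# Group quantitative records by task so each task can be analyzed on its own;
# qualitative notes can live in a parallel structure and be joined in afterward.
by_task = defaultdict(list)
for record in responses:
    by_task[record["task"]].append(record)

for task, records in by_task.items():
    print(task, len(records))
```

From here, each per-task list can feed the metric calculations covered in the next section.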

    Identifying patterns and trends in navigation choices

    With your results organized in a meaningful way, you can start identifying patterns and trends in the data. The metrics you'll likely have at hand include success rate, directness, time to completion, and the common paths your participants took.


(Image: An example of tree testing results in Lyssna)

    Success rate

Your success rate tells you the percentage of respondents who found the correct answer. A high success rate indicates fewer (or no) severe issues. According to research by Bill Albert and Tom Tullis, a ‘good’ success rate falls in the range of 61–80%, a ‘very good’ rate in the range of 80–90%, and anything above 90% can be considered ‘excellent’.

However, as the Nielsen Norman Group highlights when discussing these benchmarks, the best frame of reference is your own previous data. You should also consider the complexity of the task you’re asking participants to complete – for mission-critical or revenue-generating tasks, aim for a very good or excellent success rate.
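As a minimal sketch of how those bands could be applied in practice (the `outcome` labels here are illustrative, not a real Lyssna field):

```python
def success_rate(outcomes):
    """Percentage of participants who found the correct answer, directly or not."""
    successes = sum(1 for o in outcomes if o in ("direct_success", "indirect_success"))
    return 100 * successes / len(outcomes)

def success_band(rate):
    """Rough bands based on the Albert & Tullis ranges cited above."""
    if rate > 90:
        return "excellent"
    if rate > 80:
        return "very good"
    if rate >= 61:
        return "good"
    return "needs attention"

outcomes = ["direct_success", "indirect_success", "failure", "direct_success"]
rate = success_rate(outcomes)
print(rate, success_band(rate))  # 75.0 good
```

Remember that the bands are only a starting point; your own historical data is the better benchmark.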

    Directness

    Directness measures the efficiency of participants in reaching the correct answer without backtracking. It reflects the percentage of users who navigate directly to the correct destination within the tree structure. A high directness percentage signifies a smoother user experience, indicating that users can easily find what they're looking for without unnecessary detours or confusion.

    Benchmarking directness

Understanding what constitutes a ‘good’ directness score helps you evaluate the effectiveness of your tree structure. While less standardized than success rate benchmarks, here are some guidelines:

• Target benchmark: Aim for a directness rate of at least 75% for good navigational clarity.

• Relative evaluation: Compare directness to your success rate – ideally, the two should be close.

• Track over time: Establish your own baseline and monitor improvements across iterations.

    Warning signs

    Several patterns in your directness metrics can signal potential issues with your information architecture:

    • A high success rate paired with low directness suggests inefficient navigation paths.

    • Participants backtracking or exploring multiple options signals potential usability issues.

    • Multiple participants selecting incorrect answers without backtracking can indicate structural flaws in your tree.

    For the most meaningful results, use directness metrics to identify specific pain points in your information architecture. A significant gap between success and directness metrics suggests users can eventually find what they need but are experiencing a non-intuitive journey.
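That success-versus-directness comparison can be expressed in a few lines. This is a sketch assuming each participant's result is recorded as one of three illustrative outcome labels:

```python
def success_and_directness(outcomes):
    """Return (success %, directness %) for one task's outcomes."""
    n = len(outcomes)
    success = 100 * sum(o in ("direct_success", "indirect_success") for o in outcomes) / n
    direct = 100 * sum(o == "direct_success" for o in outcomes) / n
    return success, direct

outcomes = ["direct_success", "indirect_success", "indirect_success", "failure"]
success, direct = success_and_directness(outcomes)
# success = 75.0, direct = 25.0: most participants get there eventually,
# but not directly - the "non-intuitive journey" pattern described above.
print(success, direct, success - direct)
```

A gap this large (50 points) would be a prompt to inspect which incorrect branches participants explored first.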

    Time to completion

    Time to completion measures the amount of time it takes for participants to successfully navigate through the tree structure and complete a given task.

Low completion times suggest that users can quickly and effortlessly find the information they need, reflecting clear labeling, logical content organization, and easy navigation within the tree – the hallmarks of an intuitive IA design.

High completion times can indicate usability issues within the IA, such as confusing labeling, unclear hierarchy, or non-intuitive navigation paths. Participants may take longer to find information or encounter obstacles that make it hard to complete their task, leading to frustration or abandonment.
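When summarizing completion times, the median is often a safer headline number than the mean, since a few participants who wandered for a long time can skew the average (the sample times below are invented):

```python
from statistics import mean, median

# Completion times in seconds for one task's successful attempts (sample data).
times_s = [14, 22, 9, 41, 18, 120]

# The 120 s outlier pulls the mean well above what most participants experienced;
# the median stays representative of the typical attempt.
print(round(mean(times_s), 1))  # 37.3
print(median(times_s))          # 20.0
```

Reporting both, or the median alongside the spread, gives a fairer picture of typical task difficulty.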

    Paths

    Analyzing the paths your participants take helps you identify both patterns and unexpected choices.

    Identifying common paths allows you to recognize prevalent user behaviors and preferences, highlighting areas of the IA that are intuitive and well-structured. 

    Uncovering unexpected paths can reveal potential areas for optimization within the IA. If a significant number of participants deviate from the expected navigation paths or encounter obstacles during their journey, it suggests usability issues or points of confusion within the IA design.
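Counting full click paths surfaces both the common and the unexpected routes at once. The node labels below are hypothetical examples:

```python
from collections import Counter

# Each path is the ordered sequence of tree nodes a participant clicked.
paths = [
    ("Home", "Products", "Pricing"),
    ("Home", "Products", "Pricing"),
    ("Home", "Resources", "Pricing"),
    ("Home", "Products", "Pricing"),
]
expected = ("Home", "Products", "Pricing")

counts = Counter(paths)
most_common_path, count = counts.most_common(1)[0]
# Any path that differs from the expected one is worth a closer look.
unexpected = {path: c for path, c in counts.items() if path != expected}

print(most_common_path, count)  # ('Home', 'Products', 'Pricing') 3
print(unexpected)               # {('Home', 'Resources', 'Pricing'): 1}
```

If an unexpected path attracts a large share of participants, that is a strong hint the IA's labels point them somewhere other than where you intended.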


Interpreting the findings to inform design decisions

    Analyzing the findings from a tree test involves more than just examining numerical metrics; it requires interpreting both quantitative data and qualitative insights gathered from follow-up questions. By combining numerical data with qualitative feedback, you can gain a deeper understanding of user behavior and preferences.

    Once you’ve identified patterns and trends in the data, you can draw insights to inform design recommendations. For instance, if your tree test reveals that a significant number of participants consistently took an unexpected path instead of the correct one, this could prompt you to recommend revising the labels or structure of the IA to better align with user expectations. 

    Similarly, if follow-up questions reveal confusion or frustration related to specific labels, you can use this qualitative feedback to refine the wording or organization of information within the IA.

The key is to use numerical data as a guide, but also to rely on qualitative insights to provide context and depth to the findings. By triangulating both types of data, you can develop actionable recommendations that address usability issues and enhance the overall user experience. This iterative approach ensures that design decisions are grounded in evidence and directly address the needs and preferences of your target audience.

    See Lyssna's tree testing solutions.

    © 2025 Lyssna.