UX research methods
In this chapter, you’ll learn about essential UX research methods, including user interviews, usability testing, surveys, A/B testing, card sorting, tree testing, focus groups, diary studies, and contextual inquiry, with practical examples provided for each method. Gaining insights into when and how to use these techniques will enhance your user-centered design processes and improve the overall user experience. By the end of this chapter, you’ll have an understanding of how to select the most appropriate research method for your studies, along with strategies for effectively planning and conducting each method to uncover valuable user insights.
A user interview is a qualitative research method used to gather insights and understand the experiences, needs, and behaviors of users. It involves conducting one-on-one conversations with individuals (either in-person or remotely) who represent your product or service’s target audience.
In a user interview, you ask open-ended questions to prompt participants to share their thoughts, opinions, and experiences related to your product or service. The goal is to understand their perspectives, motivations, pain points, and preferences to inform design decisions and improve the user experience.
User interviews are typically conducted in a conversational and semi-structured format, allowing participants to express their thoughts in their own words. The questions you ask during a user interview can cover a range of topics, such as goals, tasks, frustrations, expectations, and perceptions of the product or service. You might also show your participants prototypes, wireframes, or existing designs to gather feedback and more specific insights.
The insights you gather from user interviews are valuable for identifying user needs, validating design assumptions, uncovering usability issues, and generating ideas for product improvements. The findings from user interviews can be synthesized, analyzed, and used to guide the design process, iterate on prototypes, and create user-centered solutions.
Read more: Check out our example user interview questions.
When would you conduct user interviews?
User interviews can be conducted at any stage of the design and product development process. Below are some key points at which you might conduct user interviews.
Discovery and exploration
User interviews are often conducted at the beginning of a study to gain a deep understanding of your target audience. They help uncover user needs, motivations, and pain points, which inform the design strategy and product goals. User interviews in this phase can also help identify potential opportunities for innovation and problem-solving.
Requirements gathering
User interviews are valuable for gathering requirements directly from users. They provide insights into users’ expectations, desired features, and goals, which can be used to define the scope of the project and prioritize design and development efforts.
Concept validation
User interviews can be used to generate ideas and validate design concepts. By presenting participants with early-stage prototypes or design concepts, you can gather feedback and insights to refine and iterate on the designs. User interviews help ensure that the design aligns with user expectations and needs.
Usability testing and iteration
User interviews are commonly conducted as part of usability testing. You observe participants interacting with a product or prototype and ask questions to understand their thought process, challenges, and satisfaction. Conducting user interviews during usability testing helps to identify usability issues, gather feedback for improvement, and validate design decisions.
Post-launch feedback
User interviews can be held after a product or feature is launched to gather user feedback, understand user satisfaction, and identify areas for improvement. Post-launch interviews provide insights into real-world usage and help prioritize future updates and enhancements.
What’s an example of a user interview used for UX research?
Let’s say your team is developing a mobile banking app. You’re in the early stages of designing the app and want to ensure the concept will meet the needs and preferences of your target users – first-time homeowners.
You decide to conduct user interviews to gather insights and validate your assumptions about what first-time homeowners are looking for in a banking app. You target existing banking customers and other first-time homeowners who have expressed an interest in mobile banking services.
You focus the interviews on understanding how users manage their mortgage payments, budget and track expenses, plan for renovation costs, and whether they use educational resources. You ask open-ended questions to encourage participants to share their experiences and provide detailed feedback.
During the interviews, participants might discuss topics such as their preferred app features, their concerns about security and privacy, how often they use banking apps, and any challenges they face while managing their finances online.
The insights you gather will help the product team identify key priorities, features, and design considerations for the app. For example, the interviews might reveal a strong demand for a seamless and secure login process, the need for easy navigation and quick access to account information, and preferences for personalized financial insights and notifications.
Based on your recommendations from the user interviews, the team can refine the design concepts, prioritize features, and make informed decisions about the user interface, information architecture, and overall user experience. This helps to ensure that the product is user-centered, meets the needs of the target audience, and provides a seamless banking experience on different mobile devices.
Usability testing is a method used to evaluate the usability of a product or design. It involves observing users while they perform specific tasks and collecting data on their actions, feedback, and overall user experience. The goal of usability testing is to identify usability issues, uncover user frustrations, and gather feedback for improving the design.
Before you conduct a usability test, it’s important to define clear objectives. This helps focus the testing activities and ensures meaningful results. The design of your test is another important aspect, which includes selecting participants who match the target user demographic, creating realistic and relevant tasks, and deciding whether to use moderated or unmoderated testing approaches.
Once you’ve collected data from usability testing, you can analyze the results to identify patterns, common issues, and areas for improvement. You can synthesize qualitative and quantitative data to gain a comprehensive understanding of the user experience, and use those insights to make informed design decisions and iterate on the product.
When would you conduct usability testing?
Usability testing can be conducted at various stages of the design and product development process to evaluate the usability and effectiveness of a product or design. The timing of your tests will depend on your project timeline, resources, and specific goals, but ideally usability testing should be conducted early and iteratively throughout the design process so you can catch and address usability issues proactively.
Here are some common points at which usability testing is usually conducted.
Early design stage
You can run a usability test during the early stages of a design to gather feedback on initial concepts, wireframes, or low-fidelity prototypes. This helps you identify usability issues early, understand user needs, and make informed decisions before investing significant time and resources.
Iterative design
It’s useful to run usability testing during the iterative design process to evaluate and refine design decisions. Testing throughout helps uncover any usability problems that may have been introduced, validate changes, and gather feedback to inform further iterations.
Pre-launch testing
Usability testing is often conducted before launching a product or a new feature. It helps ensure that the final design is user-friendly, intuitive, and aligns with user expectations. This testing phase can catch any last-minute usability issues and provide valuable insights for making final improvements.
Post-launch evaluation
Usability testing can also be conducted after the product or feature has been launched. This allows you to gather user feedback, identify any unforeseen usability issues, and make ongoing improvements.
Redesigns or major updates
If you’re planning to do a significant redesign or make a major update, you can conduct usability testing to assess the impact of the changes on usability and user satisfaction.
Continuous improvement
Usability testing can be an ongoing process throughout the product life cycle. By regularly testing, you can gather ongoing feedback, track improvements, and identify emerging usability issues or user needs. This iterative approach allows for continuous improvement and optimization of the user experience.
What’s an example of usability testing used for UX research?
Let’s say your company is developing a language learning app that offers interactive lessons, vocabulary exercises, and language practice for learners at different proficiency levels. You’re in the pre-launch stage and want to make sure the app is intuitive, engaging, and effective in helping users learn a new language.
To evaluate the usability of the app, you run remote unmoderated usability testing with a group of ten participants. You recruit participants with different language backgrounds and proficiency levels to capture a diverse range of perspectives.
During the usability test, you give participants specific tasks that align with the app’s features and learning objectives – for example, asking participants to navigate to a specific Spanish language lesson, complete a vocabulary exercise, and record their pronunciation. You include some follow-up questions, asking participants to share their thoughts on the design and describe any challenges they encountered while using the app.
This leads to some valuable insights. Some participants found it challenging to locate the Spanish lesson and struggled with the recording feature. Based on this feedback, you can identify areas for improvement – refining the app’s navigation structure to make it easier to find lessons and making it easier to use the recording feature. Once these changes have been made, you can conduct another round of usability testing to ensure these enhancements have improved the usability of your app.
Surveys are a method of gathering data and feedback from users. Because they can be conducted online and are fairly easy to set up, they’re a particularly useful way to collect data from a large number of participants.
In UX research, surveys are used to gather both qualitative and quantitative data. Qualitative data is gained via open-ended questions that encourage participants to provide detailed explanations, opinions, or suggestions. It can offer deep insights into users’ thoughts, behaviors, motivations, or pain points.
Quantitative data is gathered through close-ended questions with predefined response options. This helps you to analyze and measure trends, frequencies, percentages, or ratings, and provide insights into user preferences, satisfaction levels, and demographics.
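As a minimal sketch of what this quantitative analysis can look like, the Python snippet below summarizes hypothetical responses to a single 1–5 rating question. The function name, scale, and data are illustrative assumptions, not part of any particular survey tool:

```python
from collections import Counter

def rating_summary(responses, scale=range(1, 6)):
    """Frequency distribution and mean for a closed-ended rating question."""
    counts = Counter(responses)
    n = len(responses)
    # Share of responses at each scale point (0.0–1.0)
    distribution = {point: counts.get(point, 0) / n for point in scale}
    mean = sum(responses) / n
    return distribution, mean

# Hypothetical answers to "Rate the app's ease of navigation (1-5)"
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]
dist, mean = rating_summary(responses)
print(f"mean = {mean:.1f}")                    # average rating
print(f"top-2-box = {dist[4] + dist[5]:.0%}")  # share of 4s and 5s
```

Summaries like the "top-2-box" score (the share of participants choosing the top two scale points) are a common way to report satisfaction ratings alongside the raw mean.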
The focus of your survey will depend on your research goals. For example, you might be looking to find out about user satisfaction, product usability, feature preferences, brand perception, or demographic information.
When designing surveys, it’s important to write questions to ensure clarity, avoid bias, and gather meaningful data. You should consider your target audience, research objectives, and the specific information you want to gather.
Read more: Check out our UX survey questions article to learn how to gather valuable insights about your product, with example questions and best practices for improving user experience.
When would you conduct a survey?
Surveys can be conducted at different stages of the design process, from initial user research to post-launch evaluations. Below are some common scenarios where surveys are used in UX research.
Assessing user needs
Surveys can be used to understand the needs, preferences, and pain points of your potential users. This helps to identify target user demographics, their goals, and the features they’d expect to see in your product or service.
Validating designs and assessing usability
Surveys can be used to validate design concepts and assess usability. By presenting design variations or specific features to users and collecting their feedback, surveys help to assess user preferences and guide design decisions.
They can also accompany a usability test to gather quantitative and qualitative data on the usability of a product or UI. They can capture a user’s experience, satisfaction level, and any usability issues they encountered during the testing process.
In Lyssna, for example, you can run a prototype test or preference test and incorporate a follow-up survey asking users what they think about the design, how they found the task, why they liked or disliked a particular design, and so on.
User feedback and feature requests
Surveys can be used to ask participants for their feedback and suggestions for new features, and to share their opinions on existing functionalities. This can help you to understand user expectations and prioritize future enhancements.
What’s an example of a survey used for UX research?
Continuing with our language app concept, here’s an example of a survey that could be sent to active users (e.g. users who have spent 20+ hours on the app) post-launch to gather user feedback and feature requests.
It would begin with an introduction thanking participants, letting them know what to expect from the survey and how long it will take, and asking some demographic questions, followed by:
Language learning goals:
What is your primary motivation for using the app?
Which language(s) are you currently learning?
How often do you use the app to practice language skills?
App features evaluation:
On a scale of 1 to 5, how would you rate the app’s user interface in terms of intuitiveness and ease of navigation?
Which features of the app do you find most useful for language learning?
Are there any features that you find difficult to use or understand?
How satisfied are you with the variety and quality of lessons provided?
Are there any specific topics or language skills that you would like to see?
Do you find the content level appropriate for your language proficiency?
How motivated do you feel to continue using the app regularly?
What factors contribute to your motivation?
Would you recommend the app to others? Why/why not?
Is there anything else you would like to share about your experience using the app?
Do you have any suggestions for improving the app’s user experience?
Do you have any suggestions on features you’d like to see added?
The survey could include a mix of question types, for example open text (short or long answer), single select (e.g. yes/no), multiple choice, and rating scales.
Top tip: In Lyssna, you can use a variety of question types when designing your surveys, including short text, long text, single and multiple choice, linear scale, and ranking questions.
Also known as split testing, A/B testing is a research method that involves creating two versions of a design – such as a website or landing page – and directing traffic to each version to see which one performs better. It’s used to evaluate and optimize user experiences, conversion rates, and engagement metrics.
During an A/B test, users are randomly assigned to one of two groups, and each group is shown a different variation of the design or feature. Data is collected on user interactions, behaviors, and metrics, such as click-through rates, conversion rates, and time spent on page.
After the test period, the data you collect is analyzed using statistical analysis to find out if the differences in user behavior between the two variations are statistically significant. This helps you to identify which variation performs better to achieve the desired outcomes. These insights can then be used to optimize and improve the user experience.
When would you conduct A/B testing?
A/B testing is usually conducted in the later stages of the UX research process, specifically during the evaluation and optimization phase. After initial research, user feedback, and design iterations have been incorporated into a product or feature, A/B testing can be used to assess the performance and impact of different variations.
Once the design or feature is relatively final, A/B testing can be conducted to compare how effective different options are. This allows you to evaluate specific design elements or variations to work out which one performs better.
A/B testing can also be used iteratively throughout the design process. For example, after initial testing and feedback, modifications can be made to the design and then tested again using A/B testing to validate the improvements and gather more insights.
It’s important to note that A/B testing shouldn’t be your only UX research method. It’s most effective when it’s used in conjunction with other research methods, such as usability testing or surveys, to gain a comprehensive understanding of user needs and behaviors.
What’s an example of an A/B test used for UX research?
Say you’re conducting UX research for a project management tool that helps teams collaborate, track tasks, and manage projects. The company wants to increase the conversion rate for users signing up for its product, and has identified some usability issues with its sign-up form.
You want to find out if modifying the form layout will improve the conversion rate, and decide to test:
Variation 1: All fields in a single column.
Variation 2: Fields divided into multiple sections for improved visual flow.
The method includes:
Randomly assigning website visitors into two groups: Group A and Group B.
Group A sees the sign-up form with all fields in a single column.
Group B sees the sign-up form with the fields divided into multiple sections.
Monitoring and recording the conversion rate – the percentage of visitors who complete and submit the form – over a two-week period.
Analyzing the data and comparing the conversion rates between Group A and Group B.
Based on the data, you find that Variation 2 achieves a higher conversion rate compared to Variation 1. This insight helps the company make an informed decision to modify the design across their sign-up process, leading to a higher number of conversions and customer acquisitions.
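To make the "statistically significant" step concrete, here’s a minimal Python sketch of a two-sided two-proportion z-test using made-up conversion numbers. The counts, group sizes, and function name are all hypothetical – in practice you’d often rely on a statistics library or your A/B testing tool’s built-in analysis:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing conversion rates of two groups."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical results: Group A (single column) vs. Group B (sectioned form)
z, p = two_proportion_z_test(conv_a=120, n_a=2000, conv_b=156, n_b=2000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these invented numbers (6.0% vs. 7.8% conversion), the p-value falls below the conventional 0.05 threshold, suggesting the difference is unlikely to be due to chance alone.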
Card sorting is a UX research method used to understand how users organize and categorize information. It involves presenting participants with a set of labeled cards representing different elements, such as navigation labels, features, or content items, and asking them to group and organize these cards into meaningful categories.
The goal of card sorting is to gain insights into users’ mental models and understand how they perceive the relationships between different elements. You can use it to help inform information architecture, content organization, and navigation design decisions.
There are two main types of card sorting: open card sorting and closed card sorting. In open card sorting, you ask participants to create their own categories and group cards based on their own understanding and logic. This method is useful when exploring new or unfamiliar domains and when you want to understand users’ natural categorization patterns.
In closed card sorting, you give participants predefined categories and ask them to sort cards into these categories. This method is useful when evaluating existing category structures, testing the effectiveness of predefined labels, or comparing different categorization options.
The data you collect can be analyzed to identify patterns, similarities, and discrepancies in how participants organize the cards. The results inform the design and organization of information, aiding in creating intuitive and user-friendly structures for websites, applications, or other digital products.
Read more: Discover the 11 best card sorting tools, their key features, pricing options, and how to use them to elevate the structure of your website.
When would you conduct card sorting?
Card sorting is usually conducted during the early stages of the design process to understand how users naturally group and categorize information. It can help you gain insights into users’ mental models and organization preferences, which informs the creation of user-friendly information architecture.
Card sorting is useful in different situations, such as exploratory research, information architecture design, website or app redesign, content organization, and comparative analysis. By involving your target audience, card sorting ensures that the way you organize and label information aligns with their expectations.
What’s an example of card sorting used for UX research?
Let’s say you’re working for a careers portal aimed at job seekers and recruiters. It’s in the early design stages, and in order to ensure its usability and navigation align with your users’ expectations, you decide to conduct an open card sorting exercise.
You recruit participants who represent the two target user groups, with job seekers and recruiters from various industries and backgrounds. During the test, you provide participants with a set of labeled cards representing different features, functionalities, and categories that could be included on the job portal. These cards could include options like: job search, resume upload, company profiles, job recommendations, application tracking, salary information, interview tips, networking events, and so on.
You ask participants to conduct an open card sort by grouping the cards into categories that make sense to them based on how they would expect to find these features on a job portal, and label each category.
During the analysis phase, you identify common patterns and groupings among participants. For example, you may find that most participants create categories like search and filters, profile management, saved jobs, application history, and company insights. This information reveals the mental models and expectations of users when it comes to organizing and accessing job-related information.
Based on the results of the card sorting exercise, you can refine the information architecture and navigation of the job portal. You can ensure that important features are prominently displayed and easily accessible, optimizing the user flow for both job seekers and recruiters. For example, if participants consistently group ‘job search’ and ‘filters’ together, it suggests the importance of providing robust search functionality with advanced filtering options.
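One common way to analyze open card sort data like this is a co-occurrence matrix: for each pair of cards, the fraction of participants who placed them in the same group. The Python sketch below illustrates the idea with three made-up participants sorting four of the job portal cards – all names and data are hypothetical:

```python
from itertools import combinations
from collections import defaultdict

def co_occurrence(sorts):
    """Fraction of participants who placed each pair of cards together.

    `sorts` holds one card sort per participant: each sort is a list of
    groups, and each group is a list of card labels.
    """
    pair_counts = defaultdict(int)
    for groups in sorts:
        for group in groups:
            # Sort labels so each pair has one canonical key
            for a, b in combinations(sorted(group), 2):
                pair_counts[(a, b)] += 1
    n = len(sorts)
    return {pair: count / n for pair, count in pair_counts.items()}

# Hypothetical sorts from three participants
sorts = [
    [["job search", "filters"], ["resume upload", "company profiles"]],
    [["job search", "filters", "company profiles"], ["resume upload"]],
    [["job search", "filters"], ["resume upload"], ["company profiles"]],
]
matrix = co_occurrence(sorts)
print(matrix[("filters", "job search")])  # 1.0 - every participant paired them
```

Pairs with high co-occurrence (like ‘job search’ and ‘filters’ here) are strong candidates to sit together in the information architecture; dedicated card sorting tools compute this kind of matrix for you.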
Tree testing is used to evaluate the organization and labeling of information architecture and navigation structure. It’s a useful method to follow on from card sorting, as it focuses on the organization and hierarchy of content elements rather than the visual design, with a goal of assessing how well users can find and locate specific information within a given hierarchical structure.
In tree testing, you present participants with a text version of your site structure showing the main categories and subcategories of content. You give participants specific tasks or scenarios to complete, such as finding a particular piece of information or navigating to a specific section of the site.
You ask participants to navigate through the tree structure by selecting the categories and subcategories they believe would lead them to the desired information. They can choose to explore multiple paths or backtrack if they feel they’ve made an incorrect choice. The goal is to understand how well the structure aligns with their mental model and whether they can effectively locate the information without getting confused or frustrated.
Through tree testing you can gather quantitative data, such as success rates and time on task. The findings can help identify potential issues with the information architecture, such as ambiguous labels, unclear categorization, or navigation pathways that don’t lead anywhere. This feedback can then be used to refine and optimize the structure of your website or app, improving its usability and user experience.
When would you conduct tree testing?
You’d typically conduct tree testing during the early stages of a website or app design or redesign, when the information architecture is being decided or refined. It’s a useful method for evaluating the effectiveness of the proposed navigation and to make sure that your users can easily find the information they need.
What’s an example of tree testing in UX research?
Let’s consider an example of a large ecommerce website selling a variety of electronic products. The product team is planning a redesign to improve the navigation and overall user experience. You want to ensure that customers can easily find the products they’re looking for and that the information architecture is intuitive.
You decide to conduct tree testing and create a simplified text-based diagram representing the proposed information architecture. The tree includes main categories, subcategories, and product pages.
You recruit participants who fit the target audience – including both frequent online shoppers and those less experienced with the website. Each participant is given a set of tasks, such as:
Locate an Xbox console
Find the DSLR cameras category
Locate a soundbar
Locate a Garmin smartwatch
The participants interact with the text-based tree, clicking through the categories and subcategories to complete the tasks. From there, you can see how well each participant completes the tasks, the time it takes them to find the correct items, and any issues or confusion they experienced. You analyze the results to identify any patterns and trends, and make informed recommendations about how to optimize the information architecture and navigation.
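The quantitative side of that analysis is straightforward to sketch. The Python snippet below computes three metrics commonly reported for a tree test task – success rate, directness (success without backtracking), and median time on task – from hypothetical participant results; the data and function name are illustrative only:

```python
from statistics import median

def summarize_task(results):
    """Summarize tree test results for one task.

    `results` is a list of (success, seconds, direct) tuples: whether the
    participant reached the right node, how long it took, and whether
    they got there without backtracking.
    """
    n = len(results)
    return {
        "success_rate": sum(1 for ok, _, _ in results if ok) / n,
        "directness": sum(1 for _, _, direct in results if direct) / n,
        "median_time_s": median(t for _, t, _ in results),
    }

# Hypothetical results for the "Locate a soundbar" task, five participants
results = [(True, 14.2, True), (True, 22.5, False), (False, 41.0, False),
           (True, 9.8, True), (True, 30.1, False)]
print(summarize_task(results))
```

A task with a high success rate but low directness is a typical signal that the destination is findable, but the labels along the way are sending people down the wrong branch first.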
Focus groups are used to gather insights from participants about a specific topic, product, or service. To conduct a focus group, you bring together a group of around 6-10 participants who share common characteristics or experiences relevant to your research objective.
During a focus group session, a moderator guides the discussion by asking open-ended questions and encouraging participants to share their thoughts, feelings, and experiences related to the topic.
Focus groups offer a number of benefits in UX research, such as:
Rich insights: Through open discussion and group dynamics, focus groups can gather in-depth insights that may not come from individual interviews or surveys alone.
Group dynamics: Participants can influence each other’s perspectives and generate new ideas, providing a more comprehensive understanding of user perspectives.
Interactions in real-time: Observing participants’ body language, facial expressions, and emotions during a focus group can provide context for understanding their attitudes and reactions.
Efficiency: A single focus group can gather input from multiple perspectives simultaneously, which can make it a time-efficient method.
While these aspects can be positive, they have downsides too. Group dynamics can lead to deep discussions, but some participants may dominate the conversation while others hesitate to voice their opinions, which can bias your insights. Social desirability bias is another risk: some participants may give socially acceptable answers rather than sharing their honest opinions.
It can also be difficult to find participants who fit the target criteria and are available to attend a session, which can lead to a less diverse or representative group. Organizing the sessions can also be time-consuming and costly.
As with any research method, proper planning, recruiting diverse participants, and skillful moderation are important to ensure the success and validity of the insights you gather from focus groups.
When would you conduct a focus group?
Focus groups are useful in UX research when you want to gather in-depth qualitative insights and understand the motivations, attitudes, and preferences of your users or target audience.
They can be useful in the following scenarios:
Conducting exploratory research: Focus groups can be useful in the early stages of a project, when you want to explore perspectives and expectations related to a new product or service. They can help generate ideas and uncover potential design directions.
Gathering feedback on new concepts or designs: When you have a prototype or design concept that you want to evaluate, focus groups can be a good way to gather feedback from potential users.
Understanding expectations: Similar to the above, when you’re developing a new product or service, it’s important to align with user expectations. Focus groups can provide insights into what users expect in terms of usability, functionality, and experience.
Understanding user perspectives: If you want to gain a deeper understanding of how users interact with your product or service, focus groups can be a good way to gather diverse opinions.
It’s worth remembering that focus groups aren’t suitable for every research question – they work best in conjunction with other methods, such as usability testing, surveys, or user interviews.
What’s an example of a focus group in UX research?
Say you’re a UX researcher working for a financial institution developing a mobile app designed to help young adults manage their finances and budget effectively.
You conduct a focus group to gather feedback on the app concept, with a group of eight participants aged 18 to 30 with diverse financial backgrounds and levels of financial literacy.
During the session, discussion topics include current budgeting practices, financial goals, and experiences with budgeting apps. You ask the group to provide feedback on the concept and features of the new app, discussing their interest, potential use cases, and concerns related to financial data security. Participants explore prototypes of the app screens, offering insights on the user interface, ease of navigation, and features.
The focus group provides valuable qualitative insights, including understanding the financial management needs and preferences of young adults, identifying potential usability issues, and suggestions for improvements to the prototype designs. This information can now be used to make recommendations that will guide the design of the app to meet the needs of the target audience.
Diary studies are what the name implies – a UX research method that involves participants maintaining a diary to record their experiences, behaviors, and thoughts over a period of time, ranging from a few days to several weeks.
In these studies, participants are active observers, documenting their interactions with your product or service in real-time and in their natural environment.
The number of participants you recruit will vary depending on your research goals, the complexity of your research questions, and the resources available. Because of their longitudinal, labor-intensive nature, diary studies tend to involve fewer participants than methods like surveys or usability testing. The important thing is to ensure the sample is large enough to yield valuable insights.
In a diary study, you provide participants with specific prompts or questions to guide their diary entries, which may be written, photographic, video, or audio recordings.
Diary studies allow you to gain deep insights into participants' experiences and behavior over an extended period of time, capturing both the highs and lows of their user journey. This approach allows you to understand how their interactions evolve over time and in different contexts.
Analyzing diary study data requires careful review, looking for patterns, trends, and insights that can inform design decisions and improve the user experience.
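One lightweight way to approach this analysis is to tag each diary entry during review and then tally those tags over the study timeline. The sketch below is purely illustrative – the entries, tag names, and week numbers are hypothetical, not from any real study – but it shows how simple counts can surface overall patterns and week-by-week shifts.

```python
from collections import Counter, defaultdict

# Hypothetical diary entries: (study week, tags applied during review).
entries = [
    (1, ["onboarding", "confusion"]),
    (1, ["lesson-complete"]),
    (2, ["lesson-complete", "motivation-high"]),
    (3, ["grammar", "frustration"]),
    (3, ["lesson-complete"]),
    (4, ["speaking-practice", "motivation-high"]),
]

# Overall tag frequency highlights the most common experiences.
overall = Counter(tag for _, tags in entries for tag in tags)

# Grouping by week shows how experiences shift over the study period.
by_week = defaultdict(Counter)
for week, tags in entries:
    by_week[week].update(tags)

print(overall.most_common(3))
for week in sorted(by_week):
    print(week, dict(by_week[week]))
```

In practice, researchers often do this kind of tagging in a spreadsheet or a qualitative analysis tool rather than in code, but the underlying idea is the same: consistent codes plus frequency over time.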
When would you conduct a diary study?
Diary studies tend to be conducted during the exploratory or discovery phase. They’re particularly useful for:
Understanding long-term usage patterns: Diary studies are useful when you want to understand how users interact with your product or service over an extended period. For example, if you’re developing a language learning app, you might want to observe how users practice and progress over several weeks.
Capturing natural behaviors: Diary studies allow you to capture users’ experiences and behaviors in their natural environment, which can lead to authentic insights. For example, you might study how families use a meal planning app in their day-to-day lives.
Exploring complex or emotional experiences: Diary studies can be insightful when dealing with complex tasks or experiences with an emotional component. For example, for a financial budgeting app, you might want to understand how users manage their finances and the emotions that come into play when making financial decisions.
Tracking changes or progress: If a service undergoes significant changes or updates, diary studies can help to assess the impact on users’ experiences over time.
What’s an example of a diary study in UX research?
Let’s use the language learning app mentioned above. Say your objective is to conduct a diary study to gain insights into how users practice and progress in their language learning journey over a six-week period. The goal is to learn about user motivations, study habits, challenges, and successes as they interact with the app each day.
You recruit a diverse group of six participants with varying levels of language proficiency and learning goals. Each person is provided with access to the app and asked to use it regularly for at least 30 minutes each day during the study period.
To capture their experiences, you ask each user to maintain a digital diary within the app, where they’re prompted to record their daily interactions, including the lessons they completed, vocabulary learned, speaking exercises, and any difficulties or breakthroughs they encountered.
After the six-week period, your team is able to analyze the diary entries, looking for patterns and trends, such as preferred learning modules, challenges, and any shifts in motivation or engagement. The findings reveal interesting insights into users’ language learning experiences. Some participants reported feeling motivated when they achieved language milestones, while others found difficult grammar concepts frustrating. You also find that interactive exercises, like pronunciation practice and speaking challenges, received positive feedback.
The research findings provide valuable input. Based on your recommendations, the design team decides to further enhance the interactive exercises and gamify progress tracking to boost motivation. The team also plans to introduce adaptive learning features that cater to individual learning styles and pace.
Contextual inquiry involves observing users in their natural environment while they perform tasks or interact with a product or service. The goal is to understand their needs, behaviors, and motivations in real-world contexts, so you can gain deeper insights that might not be apparent through traditional lab-based interviews.
Some of the benefits of conducting contextual inquiries include:
Rich contextual insights: Contextual inquiries provide authentic insights into users’ experiences, needs, and challenges.
User empathy: You can gain a deep understanding of users’ perspectives and develop empathy for their needs and goals.
Identifying pain points: Contextual inquiries reveal pain points and areas of improvement that users might not have expressed in an interview.
Opportunities for iterative improvements: By observing users in their natural environments, you can identify potential design improvements and iterate the product to make it better.
There are of course some challenges too, like:
Resource- and time-intensive: Onsite visits and interviews demand significant time and budget, especially when working with geographically dispersed participants.
Logistical challenges: Similar to the above, coordinating visits, ensuring availability, and obtaining consent can present some logistical challenges.
Observer bias: Your presence might influence participants’ behavior or responses.
Despite its challenges, contextual inquiry can be valuable in gathering in-depth insights for user-centered design, especially when you’re seeking a deep understanding of user behavior in real-world contexts.
When would you conduct a contextual inquiry?
Contextual inquiries can be conducted during the early stages of product development to uncover user needs and shape the initial design and features. In this scenario, they’re particularly useful for identifying usability issues and challenges that users face when using a product, offering direct observations of potential barriers.
It can also be useful to run a contextual inquiry when you want to evaluate the overall user experience of a product or service and make iterative improvements. By observing how users integrate it into their routines, you can understand its impact on their daily lives. This method is especially well suited to products or services with complex or long-term user journeys, allowing you to follow participants over time and understand their evolving needs.
What’s an example of a contextual inquiry in UX research?
Let’s say your company is looking to evolve its podcast recording and editing software into an all-in-one solution for recording, editing, and managing episodes. You want to understand how podcasters currently use the software in a studio environment.
The goal of the contextual inquiry is to gain insights into podcasters’ workflows, pain points, and needs, and to observe how they interact with the technology during the recording and editing process.
You recruit a group of professional podcast producers and hosts who regularly use the software in a studio setting, and visit them onsite to observe their recording and editing sessions, paying close attention to their actions, interactions with the technology, and any challenges they encounter. Where possible, you also ask follow-up questions to gain deeper insights into their experiences and thoughts about the software.
Throughout the contextual inquiry, you’re able to gain insights about how podcasters use the software and how it impacts their recording and editing processes. You also observe how different teams collaborate and their preferences for specific features.
Analyzing the data, you look for recurring themes and patterns related to user experiences and pain points. With a better understanding of how podcasters use the software in a real-world studio environment, your design team can make informed improvements to the user interface and address the technical issues that stand in the way of a smooth podcasting experience.
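When coding field notes like these, one common signal of a theme’s importance is how many distinct participants raised it, not just how often it appears. The sketch below is a hypothetical illustration – the participant IDs and theme codes are invented for this example – of ranking themes by the number of participants who mentioned them.

```python
from collections import defaultdict

# Hypothetical coded field notes: (participant, theme code).
observations = [
    ("P1", "export-workflow-slow"),
    ("P2", "export-workflow-slow"),
    ("P1", "mic-setup-confusing"),
    ("P3", "export-workflow-slow"),
    ("P3", "collaboration-feature-request"),
]

# Count distinct participants per theme: a theme raised by several
# participants is a stronger candidate for design attention than one
# raised repeatedly by a single person.
participants_per_theme = defaultdict(set)
for participant, theme in observations:
    participants_per_theme[theme].add(participant)

ranked = sorted(participants_per_theme.items(),
                key=lambda item: len(item[1]), reverse=True)
for theme, people in ranked:
    print(theme, len(people))
```

The same idea underpins affinity diagramming: clustering individual observations, then weighing clusters by how broadly they recur across participants.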
Exploring UX research methods
In this chapter, we’ve explored various UX research methods. From usability testing that uncovers pain points to focus groups that tap into user perspectives, each method offers a unique lens through which to understand user behavior.
Here are some key takeaways:
Diverse toolkit: Building a rich understanding of user needs requires a versatile toolkit. Combining methods like usability testing, surveys, and contextual inquiries can offer comprehensive insights.
Early and ongoing: Research isn’t a one-time task. Methods like usability testing and A/B testing can be used iteratively, both during initial product development and throughout its lifecycle.
User empathy: Throughout these methods, empathy for users should remain a focus. By truly understanding their perspectives, you can create products that genuinely resonate with your audience.
As you navigate your UX research studies, remember that these methods aren’t one-size-fits-all. Tailor your approach to your specific research goals and user base, and you’ll be on your way to crafting user experiences that stand out in today’s competitive landscape.