Preference testing guide
Read on to find out everything you need to know about preference testing – including presenting design options, crafting effective questions, and analyzing results – so you can gather valuable insights into user preferences and make informed design decisions.
Unlocking insights with preference testing
When taking part in a preference test, a participant is shown a number of design options and asked to choose one. These tests are commonly used to measure aesthetic appeal, but participants can also be instructed to judge designs based on their trustworthiness, or how well they communicate a specific message or idea.
You can ask participants to compare videos, logos, color palettes, icons, website designs, sound files, mock-ups, copy, packaging designs – up to six design options can be shown at the same time for feedback.
A benefit of preference testing is that it can be done before a product or design is completed, meaning you can gain feedback early in the process and adjust your designs as needed. Preference testing produces both quantitative and qualitative feedback, which allows you to understand the what as well as the why.
In this example, participants are asked to choose which font they prefer.
How to run a preference test
When you run a preference test, you include three sections:
The question
The design options
The follow-up questions
1. Ask the right questions
You may be tempted to run a preference test simply to settle an argument about two versions. Asking a question like, “Which design is best?” could give you this answer. The problem is, although the crowd is wise, you’re asking them to give you a design opinion here – and they may not be qualified to do that.
Instead, we suggest asking a question that has a more specific lens, like:
Which design communicates the concept of “human-centered” the best?
Which design is easier to understand?
Which do you find easier to read?
This level of specificity allows you to evaluate your hypothesis with more precision. However, if you do wish to ask a more general question, we recommend the phrasing “Which design do you prefer?”, which asks the participant to reflect on their own preference rather than on which design may be objectively better. Participants are much more qualified to tell you about their preferences than about what makes good design.
2. Include design options
When participants are viewing your design options, they are shown an overview of all of them side-by-side, and are required to view each one individually before they make a decision.
Preference tests are great for different versions of the same design, but remember that your participants will be playing “spot the difference”, so if your design options are too similar, they may struggle to identify what they’re being asked to judge. Don’t be afraid to crop a full-page design to focus on a particular area of variance.
Preference tests are compatible with a variety of asset formats, including JPEGs, GIFs, PNGs, MP3s, and MP4s. You can even mix and match asset formats within one preference test if that’s what you need to do.
Your preference test design options don’t need to be the same size or shape, either. Don’t stress about your assets being the exact same height and width.
You can test up to six different options in a preference test, but remember that your participants must review each one before making their choice.
In this example, participants are asked to choose which sign-up form they prefer.
3. Ask follow-up questions
Follow-up questions are where you can extract qualitative data from a preference test selection. You can use any question structure for follow-ups (including multiple choice and rating scales), but we recommend using a free-text entry field so participants can explain why they made their choice.
Follow-up questions are shown alongside the design option that the participant chose, allowing them to consider it while they give you further feedback.
Great follow-up questions can be deceptively simple, but can elicit detailed feedback from your participants. They might include:
Why did you choose that design?
What did you like about that design?
What stands out the most to you about this design?
Once you’ve got your follow-up answers, you can categorize this feedback into groups to get a high-level view of how many participants had similar feedback. You can also filter by which preference was chosen to see only the follow-up responses from people who preferred each design option, as sketched below.
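As an illustration, here’s a minimal Python sketch of that grouping and filtering step. It assumes you’ve exported your results as a list of records; the field names (choice, category) are hypothetical, not an actual Lyssna export format.

```python
from collections import Counter, defaultdict

# Hypothetical export: one record per participant, with their chosen design
# option and the category you've assigned to their free-text follow-up answer.
responses = [
    {"choice": "Design A", "category": "easier to read"},
    {"choice": "Design A", "category": "more trustworthy"},
    {"choice": "Design B", "category": "easier to read"},
    {"choice": "Design A", "category": "easier to read"},
]

# High-level view: how many participants gave similar feedback overall?
print(Counter(r["category"] for r in responses))

# Filtered view: follow-up themes broken down by preferred design option.
themes_by_choice = defaultdict(Counter)
for r in responses:
    themes_by_choice[r["choice"]][r["category"]] += 1

for choice, themes in themes_by_choice.items():
    print(choice, dict(themes))
```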
4. Analyze the results
After running a preference test, you’ll be shown the number of participants who preferred each design on the results page.
We will also indicate whether you have a statistically significant leader among your design options. Statistical significance here reflects how likely it is that the best-performing design is genuinely the favorite, rather than outperforming the other designs by random chance.
The level of significance you can obtain will vary depending on your sample size, with larger samples giving you greater significance. It will also depend on the degree of difference between the designs’ performance: the bigger the gap, the greater the significance.
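If you’d like to sanity-check a result yourself, one common approach – not necessarily the exact method Lyssna uses – is a chi-square goodness-of-fit test against the null hypothesis that every option is equally likely to be chosen. A minimal sketch with made-up counts:

```python
from scipy.stats import chisquare

# Observed preference counts for three design options (example data).
counts = [54, 28, 18]  # 100 participants in total

# Null hypothesis: each option is equally likely to be chosen,
# i.e. an expected uniform split of ~33.3 votes per option.
result = chisquare(counts)

print(f"chi-square = {result.statistic:.2f}, p = {result.pvalue:.4f}")
if result.pvalue < 0.05:
    print("The leader is unlikely to be ahead by random chance.")
else:
    print("No statistically significant leader at the 5% level.")
```

Both levers described above show up here: adding participants or widening the gap between the counts shrinks the p-value.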
Using preference test choices with logic
Preference test choices can be used as conditions with logic. This means that you can hide or show a section or question after the preference test, based on how the participant answered. This also applies to the follow-up question in your preference test.
For example, a preference test at the start could help you understand whether your participants prefer option A or option B. Later on, you might want to show a Five second test to participants who preferred A and a different Five second test to participants who preferred B. Or you may want to tweak your follow-up questions based on whether they chose A or B – for example, “Why did you choose A?” and “Why did you choose B?”.
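Conceptually, this kind of logic is just a branch on the recorded preference. Here’s a minimal illustrative sketch – the data structures and names are hypothetical, not how Lyssna implements logic internally:

```python
# Hypothetical logic rules keyed on the participant's preference test choice.
FOLLOW_UP = {
    "A": "Why did you choose A?",
    "B": "Why did you choose B?",
}

NEXT_SECTION = {
    "A": "five_second_test_for_a",
    "B": "five_second_test_for_b",
}

def route(choice: str) -> tuple[str, str]:
    """Return the follow-up question and next section for a given choice."""
    return FOLLOW_UP[choice], NEXT_SECTION[choice]

question, next_section = route("A")
print(question)      # Why did you choose A?
print(next_section)  # five_second_test_for_a
```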
Different ways to compare designs
Preference tests use comparison as a test structure, but you can compare designs in several different ways using UsabilityHub.
Preference tests are best used when you want participants to look at multiple designs at the same time, but sometimes that isn’t ideal – especially when seeing both could introduce bias into the feedback.
If you wish to get feedback on two or more versions of a design but don’t want your participants to view both at the same time, you may wish to consider a Variation set instead. Variation sets allow you to send Design A to a different set of participants from those who view Design B, which prevents them from seeing both but still gives you powerful and rigorous feedback.
You can also create a test with multiple sections – as many of each section type as you’d like. For example, you can add as many Design questions sections to a test as you want, allowing you to compare one design after another, but not necessarily side by side.