
I have a theory that every product professional, while debating a decision, has thought to themselves: I wish I could wave a magic wand and know which of these versions will bring us closer to our goals. While the product discipline has yet to recruit real magicians, preference testing gives us the tools to make more informed design decisions—whether you’re designing a logo, a landing page, or a complex user experience.

In this article, I'm going to teach you all about how and why you should consider preference testing—one of the easiest, but most telling, usability testing methods that can help you move forward in your UX design process.

What Is Preference Testing?

A preference test is about exploring what users prefer relative to our goals and brand values. Despite its name, it’s not the simple act of putting designs in front of our target audience and asking which one users like better. A preference test asks users to rate different design concepts according to what you want to communicate and to explain their ratings.

Example of Preference Testing

Let’s say that you manage a website for a local hotel where users can learn more about your hotel and book rooms. Your team has decided that you want the website to communicate the fact that the hotel is homey and welcoming, and for the booking page to give a sense of trust.

Your preference test, in this case, will likely show a set of users two different design concepts for your homepage and booking page, then ask each participant to rate each one in terms of trustworthiness and homeyness. Most importantly, they’ll explain their ratings.

Depending on how you conduct your test and how many users you include (don’t worry, more on that soon!), you’ll either get a better sense of which design concept communicates what you intend, or, if you collect quantitative data, you may gain more than just a sense.

Preference Testing vs. A/B Testing

Preference testing may sound a lot like A/B testing by definition, but there are a few important distinctions.

A/B testing offers users two versions of the same thing to see which one is better at driving up your target KPIs. Preference testing is all about understanding why your users choose one option over another.

I'll elaborate more in a moment.


Why Should You Do Preference Testing?

We all love the certainty of A/B testing and user behavior data. But, there are some things that our typical metrics can’t measure that are important to us in the design process nonetheless—whether we’re working on websites, SaaS tools, or mobile apps.

If it’s important to your brand to communicate trustworthiness or a sense of being "high-end," no user behavior metric will be able to tell you much about whether or not you’re succeeding from the perspective of your target audience. Hence, preference testing!

Preference testing gives you a much better idea as to which design options best serve the goals that are not easily measured. When you do A/B testing, you’ll know which version results in more conversions and less drop off—but you won’t know how your users walk away feeling about your brand, and that has real business implications. 

By giving users the ability to compare and contrast design options, your team can make choices rooted in the reality of what each option communicates to your users.

How To Conduct A Preference Test

Whether you’re a user researcher who needs a refresher or a product professional with little to no research experience, you can most definitely execute a preference test and reap the benefits. Here's a walkthrough of the process.

Step 1: Define your goals and research questions

This may sound obvious, but it’s easy to forget that a vague sense of what you want to learn from your test participants isn’t a strong start. The more specific you are about what you hope to gain from your test, the more likely you are to get the insights you need.

In order to zero in on your specific goals, I recommend:

  • Talking to all of the relevant stakeholders—designers, PMs, senior management. What internal debates are they having about potential design choices? What’s holding them back from moving forward? This is equally important whether you’re in the early stages of the design process or the later stages.
  • Narrowing your design choices to the ones that you believe in most.
  • Choosing design options to put in front of users that have significant differences between them. While you may feel like your drop shadows and color palettes heavily influence user perceptions, chances are that you’ll struggle to get meaningful insights unless the options you put in front of users are conceptually different.

After you’ve done your homework, start a new doc and articulate what you want to learn in easy-to-understand bullet points. For example:

  • Which of the design options inspires more trustworthiness?
  • Which of the design options gives a feeling of luxury?
  • What is the range of first impressions that users have for each design choice, and to what extent does that align with our desired brand messaging?

Once you’ve defined what you want to learn and your team is aligned, you can move on to the next step.

Step 2: Write your test

Writing your test should be relatively simple now that you’ve zeroed in on your goals. The basic structure of your test will involve showing users the different design variations and asking them to answer questions about each one.

Here are some tips for writing good preference test questions, which apply to most questionnaires in general:

  • Keep your sentence structure brief, simple, and straightforward—users shouldn’t have to guess what you’re getting at.
  • Don’t ask leading questions or hint that you’re looking for a particular answer. For example, "Does design A give you more of a feeling that this brand is a high-end brand?" hints at an answer. Instead, try this: "Which design option(s) give you the feeling that this brand is a high-end brand? Why?" Open-ended questions allow the user to explore the options on their own and give you a genuine answer.
  • Always ask why. A user’s preferences are interesting, but these follow-up questions are almost always the most useful part of your insights when comparing different design versions.

Once you’ve drafted your questions, you may want to have a colleague look them over to make sure that they’re understandable and aligned with best practices. When you’re confident in your test structure, move on to the next step.

Step 3: Decide whether your test will be quantitative or qualitative 

Simply put, preference testing can be quantitative or qualitative. You have to make your decision before you actually execute your test because it will affect things like your desired sample size. Here is some guidance to help you make this choice:

Quantitative preference testing: This type of test uses a larger sample size in order to reach statistical significance. The advantage is that you end up with more reliable results.

  • Pro: If you do a statistically significant test, you can get "more than a sense" and can use your test results as a serious data point when making your design decisions.
  • Con: The downside of a quantitative test is that it requires more time and money—but if you can swing it, quantitative is the gold standard.

Qualitative preference testing: Even if you don’t have the time or budget to do a statistically significant test, preference testing with a smaller sample size of target users can still be very helpful.

  • Pro: Having qualitative insights on why users gave their specific responses can help guide your user experience design choices.
  • Con: With a smaller sample size, you can’t treat the results as a solid quantitative data point, limiting the usefulness of your findings.

Discuss with your team to determine the best choice for your situation, and then move on to the next step.

Step 4: Determine your sample size

If you choose to go with a quantitative test, your best bet is to speak with someone on your data team about the details of your test and ask them to help you calculate how many responses would enable you to achieve statistically significant results. If you don’t have internal resources or want to work toward significance independently, I recommend checking out this guide.
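To give a feel for the kind of calculation your data team (or an online calculator) would run, here is a rough sketch using only Python’s standard library. The numbers are illustrative assumptions: a two-sided test at α = 0.05 with 80% power, checking whether a preference differs from a 50/50 split.

```python
from math import ceil, sqrt
from statistics import NormalDist

def preference_sample_size(p_alt: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate sample size for detecting that a preference differs
    from a 50/50 split, using a two-sided one-proportion z-test.

    p_alt: the true preference share you want to be able to detect (e.g. 0.6).
    """
    p_null = 0.5
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    numerator = (z_alpha * sqrt(p_null * (1 - p_null))
                 + z_beta * sqrt(p_alt * (1 - p_alt)))
    return ceil((numerator / (p_alt - p_null)) ** 2)

# If 60% of users truly prefer one design, you'd need roughly this many
# participants to detect it reliably:
print(preference_sample_size(0.6))  # ~194 participants
```

Note how sensitive the result is to the effect you want to detect: catching a subtle 55/45 split requires several times more participants than a 60/40 one, which is exactly why significant differences between design options (Step 1) matter.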

If you’re taking a strictly qualitative approach, your sample size is more flexible. However, the larger the sample size, the better—in general, I recommend choosing the largest number of participants that you can afford. You can also check out this guide to qualitative sample sizes to help you brainstorm.

Once you know your sample size, move on to the next step!

Step 5: Choose your methodology and your tools

In terms of methodology, the first choice you have to make is whether you will conduct your preference test in a moderated format (where you are live with each user on a call or in person) or an unmoderated format (where you use a tool to conduct the test and then analyze the data and session recordings later).

If you choose to take a quantitative approach to your user testing, you will almost always choose an unmoderated format, because the large numbers you’ll need in order to achieve statistical significance are generally not feasible if your team has to schedule and conduct live sessions.

Tip

If you’re taking a quant approach, I highly recommend using a platform like UserTesting.com or Maze to source your participants, upload your test, and get results rather quickly. Browse some highly-rated usability testing tools and decide which best fits your needs in terms of functionality and pricing.

If you choose a qualitative approach and are conducting your test just to get a bit of direction, an unmoderated approach using one of the aforementioned tools may still be your best bet, since it saves time.

However, if you feel the pull to actually have real-time interactions with your users and are willing to take the time to do so, you can use a tool like userinterviews.com to source and schedule video calls with test participants.

Once you’ve chosen your methodology and your tools, congrats—you’re ready to start your test!

Step 6: Conduct your preference test

Upload your screener and either schedule testing sessions or upload your test and wait for those recordings! You’ve set your goals and made all of the necessary decisions, so you’re ready to roll.

Tip

At this point, it’s a good idea to update all of the relevant stakeholders at your organization as to your expected timeline. Let them know how long the testing process will take, and when you estimate finishing your analysis and delivering results.

I’m a little biased, but nevertheless—the next step is the fun part! 

Step 7: Analyze your preference test and extract actionable insights

You’ve conducted your test, and now you have stats and/or recordings—woohoo! You’re super close to having valuable insights that will help you shape the user experience and guide your upcoming design choices. 

Your analysis process will vary, but here are some guiding principles to help you extract reliable insights:

  • Don’t pretend that qualitative is quantitative. With a small sample, the difference between 5 users and 7 users making a choice means little. The insight is in the details—who chose what, and why? That’s where your key learnings will be.
  • Don’t only look at statistical results in a quantitative test. Even if you’re sure that a certain design choice was the ‘winner,’ it’s important to listen to and analyze why users made the choices that they did. This increases what your team learns—for example, if a specific image gives users the impression that your team is trustworthy, you can not only make the right changes in the moment, but you can also learn which images inspire trust in your target user.
  • Don’t skip the analysis. Oftentimes, when we’re short on time, it’s tempting to trust our perceptions of what we saw rather than going through an actual analysis process. This is a mistake that leaves us open to all kinds of biases and risks churning out results that aren’t reliable.
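For the quantitative side of the analysis, the question "is this preference real or just chance?" can be answered with an exact binomial test. Here is a self-contained sketch using only the standard library; the participant counts are hypothetical, purely for illustration.

```python
from math import comb

def binomial_p_value(k: int, n: int, p: float = 0.5) -> float:
    """Two-sided exact binomial test: the probability, under the null
    hypothesis of no real preference (p = 0.5), of an outcome at least
    as extreme as k of n users choosing one design."""
    prob = lambda i: comb(n, i) * p**i * (1 - p)**(n - i)
    observed = prob(k)
    # Sum the probability of every outcome no more likely than the observed one.
    return sum(prob(i) for i in range(n + 1) if prob(i) <= observed)

# Hypothetical result: 120 of 194 participants preferred design A.
p_value = binomial_p_value(120, 194)
print(f"p = {p_value:.4f}")  # well below 0.05 -> unlikely to be chance
```

Even when a test like this declares a clear winner, the numbers only tell you *that* users preferred a design; the "why" answers tell you what to keep doing in future designs.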

Once you’ve conducted your analysis and compiled your learnings, it’s time to ask yourself: so what?

What action items do you have for your team based on what you learned from your preference test? Did your test give meaningful direction for your design choices? How?

Tip

Sometimes, one of your takeaways is that you’d like to perform a second test! If your insights raised further questions, don’t hesitate to present your results to your team and then run another test for further insight.

Step 8: Share your results and recommendations with your whole team

You made it! You’ve conducted your preference test, extracted insights, and have thought long and hard about the path forward.

Next, it’s time to create a presentation or a shared document so that all of your team members can learn what you learned and align on what comes next.

Oh Yeah...You Still Need to Get Buy-In

Even after all of this, don't be surprised if not everyone is instantly keen to take action on your user insights. Sharing the bottom line helps, but if you want other stakeholders to share your enthusiasm for your recommended action items, consider showing clips and using user quotes to support your points. It’s always easier to inspire action from UX research when stakeholders see and hear actual users.

Now that you’ve conducted your first preference test, my guess is that you’re yearning for more user research tips and other ways to harness user feedback. Check out our complete guide to usability testing and UX research, and be sure to subscribe to The Product Manager newsletter for more helpful guides like this one.

By Cori Widen

Cori Widen currently leads the UX Research team at Lightricks. She worked in the tech industry for 10 years in various product marketing roles before honing in on her passion for understanding the user and transitioning to research.