As product professionals, we all like to think that we get up every morning and seek the truth in order to best serve our users and meet our KPIs. Often, we use research methodology to help us get to the bottom of things and inform product decisions. Here’s the thing: no matter how hard we try, research bias can creep in and compromise the validity of our results.

It’s a tale as old as time, and whether you’re an academic researcher or a product manager, it’s crucial to understand the ins and outs of the biases that could lead you astray.  Let’s take a look at exactly which biases exist and how to mitigate their effect on your work. 

What Is Research Bias?

Research bias, simply put, is any action or mindset during the research process that affects your ability to get objective results. It’s important to note that research bias is extremely pervasive: it can occur in both qualitative and quantitative research (yes, that’s right—even numbers can be distorted by the stories that we construct around them). 

Bias can seep in anywhere in the research process:

  • Defining research questions
  • Data collection
  • Choosing study participants
  • Data analysis
  • Summarizing and communicating findings

Biases are generally unintentional—we’re not aware that our mindset, or how we’re conducting our study, is influencing the validity of our results. The first step in preventing research bias is to admit to ourselves that regardless of our best intentions, every single person in the universe who is doing research is susceptible to bias. Good intentions are nice, but on their own they won’t protect the validity of our study results.

Read on to understand more about the biases that affect our work and how to limit their effects. 

Types Of Research Bias That Can Derail Your Work

I’m a bit biased (pun only a little bit intended), but this is the fun and interesting part. Now, we’ll explore all the ways in which we can unintentionally generate biased research results thanks to the funny ways in which our brains, and the brains of our study participants, work. (Don’t worry: if you keep reading, you’ll also learn how to avoid them.)

Recall Bias

Is it weird to have a favorite bias? Weird or not, this is mine. It’s my favorite because it alludes to the mysteries of the mind: what we remember from an event or series of events is not always complete or accurate.  

Recall bias refers to our tendency to subconsciously remember past events with certain inaccuracies, such as omitted details. This is more likely to happen in cases where you’re addressing unpleasant events or when you’re addressing things that happened a long time ago—but really, it can happen anytime.  

Recall Bias Example

For example, if you’re conducting a user interview about someone’s fitness habits and asking about an app that they haven’t used in a year—well, their ability to recall exactly what they did within the app and how it all went down is limited. Or, if you’re inquiring about someone’s experience at the doctor’s office after an injury, their mind may have done them a favor by rendering some of the less pleasant details somewhat fuzzy in order to avoid psychological distress.

There are countless examples, but the bottom line is that our brains are storytellers. For any number of reasons, whether it’s related to the passage of time or how we want to perceive certain events vs. how they actually happened, what we remember is not always precisely what happened—and this can affect our research results when our study participants unknowingly suffer from recall bias.

Selection Bias

Lest you think that bias only occurs on the part of your study participants, this one’s on you, the person doing the research. You probably already know that the way you define the people who are eligible to participate in your study greatly affects your results. If you’re researching sleep behaviors and you only interview or survey parents of newborn babies, you’re going to have findings that don’t necessarily apply to the general population (yawn).

In selection bias, certain preconceived notions or attachments to hypotheses lead us to define our research sample in a way that lends itself to the results that we expect or want. 

Selection Bias Example

Let’s say that your company has a marketing automation tool, and you’re trying to understand why your competitors are leaving you in the dust revenue-wise. You, as the product manager, may have a hypothesis that the email automation features, specifically, are what’s driving the growth of your competitors.

This may lead you to focus on recruiting study participants who are heavily focused on email in their roles, and you could miss value props and pain points in other areas of marketing automation simply because you didn’t recruit enough participants who are active in other parts of the product.

Confirmation Bias

If you've ever been part of a project in which whoever was leading the research effort seemed only to be interested in user feedback that confirmed their own assumptions, you've seen confirmation bias in action. When looking at any type of data, qualitative or quantitative, our human tendency is to favor or exaggerate the significance of information that confirms our hypotheses or beliefs. This is called confirmation bias, and it’s pervasive. In fact, I’ve come to believe that the ways in which product teams work make confirmation bias incredibly likely if we don’t take steps to fight it head-on.

In product, we’re often called upon to make huge decisions and take big risks—fast. It’s rare to meet a designer or a product manager who doesn’t have a strong opinion about what feature iterations or product pivots will bring the team closer to their overall goals. For that reason, user research sometimes becomes a box to check, and the results are given in a narrative that is rife with confirmation bias.

How is that possible? In all likelihood, your user research study will include a lot of different insights. When analyzing your data, which is a process that usually has little oversight within companies, you may subconsciously dismiss data points that contradict what you believe and present a narrative that doesn’t reflect reality. Yikes!

Confirmation Bias Example

Imagine that a researcher is conducting user interviews about a product feature that uses AI to generate replies to email messages. In this context, if the researcher were to use leading questions, like "wouldn't you find value in this feature?" or "how great would it be if you saved time on writing emails?", they would be creating confirmation bias because they are influencing the research results with their own views on the value of the feature.

Measurement Bias

Whether you’re doing qualitative research or quantitative research, it’s important that the way you measure your data points makes sense and stays consistent throughout. When there is a fundamental problem with the measurements that we’re using, we introduce potentially false results—this is measurement bias.

Measurement Bias Example

Let’s say that you’re doing a quantitative usability study for a feature that is supposed to decrease drop-off at a certain point in your product’s key workflow. What you’re measuring across hundreds of tests is the ability of a user to successfully complete the flow in the new iteration, and you’re looking for an improvement over the old version of the feature. Your team has to spend some time thinking about what successfully completing the flow actually means: does it mean just making it past the redesigned step? Completing the entire flow? Reaching a satisfactory result?

As you can see, how you define the metric itself can greatly influence your results!
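
To make this concrete, here’s a minimal Python sketch (the session data and field names are hypothetical) showing how the exact same usability sessions yield three different success rates depending on which definition of success you pick:

```python
# Hypothetical session logs from the usability test: each record notes whether
# the user got past the redesigned step, finished the whole flow, and how
# satisfied they were with the outcome (1-5, where 4+ counts as satisfied).
sessions = [
    {"passed_step": True, "finished_flow": True, "satisfaction": 5},
    {"passed_step": True, "finished_flow": True, "satisfaction": 2},
    {"passed_step": True, "finished_flow": False, "satisfaction": 1},
    {"passed_step": False, "finished_flow": False, "satisfaction": 1},
]

def success_rate(sessions, is_success):
    """Share of sessions counted as successful under a given definition."""
    return sum(is_success(s) for s in sessions) / len(sessions)

# Three plausible definitions of "successfully completing the flow":
definitions = {
    "made it past the new step": lambda s: s["passed_step"],
    "completed the entire flow": lambda s: s["finished_flow"],
    "completed and was satisfied": lambda s: s["finished_flow"] and s["satisfaction"] >= 4,
}

for name, rule in definitions.items():
    print(f"{name}: {success_rate(sessions, rule):.0%}")
# -> 75%, 50%, and 25% for the exact same sessions
```

Whichever definition you choose, lock it in before you collect data and apply it consistently; switching definitions mid-analysis is exactly how measurement bias creeps in.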

Interviewer Bias

When conducting user interviews for a qualitative study, the questions that an interviewer asks and how they ask them can create a breeding ground for potential bias.

Interviewer Bias Examples

Here are some examples of how interviewer bias can creep into any research study:

  • The interviewer asks questions based on their assumptions of each participant rather than a uniform set of questions for each person in the study.
  • The interviewer asks leading questions that nudge the participant to respond in a particular way that confirms the researcher’s hypotheses.
  • The interviewer hints at an opinion or value judgment in the way that they phrase the question, which encourages the participant to respond in a way they think will be viewed favorably (also called social desirability bias on the part of the participant).

Publication Bias

What do you, as a product professional, do when your research results don’t reflect the findings that you’d hoped for? Believe it or not, when this happens, many researchers and product professionals alike rationalize and choose not to share their results with their teams. This is called publication bias.

Publication Bias Example

What does this look like? It looks like you or someone on your team saying: “Well, we only tested it with 15 users—I don’t think it means anything. Let’s just leave it for now.”  

The danger here is that you withhold valuable information from your team when it conflicts with your hopes and beliefs. When you do that, the omission of information can send your team speeding off in the wrong direction. 
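
And if the worry really is that 15 users can’t tell you anything, a quick back-of-the-envelope check says otherwise. Here’s a small Python sketch using the standard Wilson score interval (the numbers are made up for illustration):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for a proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - half, center + half

# Suppose 3 of 15 test users completed the task with the new design.
low, high = wilson_ci(3, 15)
print(f"observed 20%, 95% CI roughly {low:.0%} to {high:.0%}")
# -> observed 20%, 95% CI roughly 7% to 45%
```

A small sample widens the interval, but it rarely makes a result meaningless: in this hypothetical, 15 users are already enough to suggest the true completion rate is very unlikely to be above roughly 45%, which is exactly the kind of information your team shouldn’t be deprived of.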

Sampling Bias

Who exactly are you recruiting for your study? Believe it or not, this is often one of the biggest factors in whether or not you’re getting solid, reliable results. Sampling bias occurs when the sample you collect data from isn’t representative of your target population, and the resulting skew can lead you astray.

Sampling Bias Example

Let’s say you’re running a survey about cooking habits to help you ideate new features for your recipe app. If your questionnaire respondents come from a pool that you sourced on a college campus, you’re essentially looking at the cooking habits of young people studying full-time. What about families? Retirees?

One of the most important things that you do in the research design process is to make sure that you’re looking at a representative sample.
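
One lightweight guardrail is to compare your sample’s composition against your target population before trusting the results. Here’s a minimal Python sketch, with made-up group definitions and proportions:

```python
# Hypothetical share of each household type among the recipe app's
# target users, versus a survey sample recruited on a college campus.
target_population = {"students": 0.15, "families": 0.45, "retirees": 0.25, "other": 0.15}
survey_sample = {"students": 0.80, "families": 0.10, "retirees": 0.02, "other": 0.08}

# Flag any group that is badly over- or under-represented in the sample.
for group, target_share in target_population.items():
    sampled_share = survey_sample[group]
    ratio = sampled_share / target_share
    if ratio > 1.5 or ratio < 0.67:
        print(f"{group}: {sampled_share:.0%} of sample vs {target_share:.0%} of target")
```

In this invented example, every group gets flagged: students are wildly over-represented and everyone else barely appears, so any feature decisions based on this survey would really be decisions about students.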

Analysis Bias

After you execute your data collection methods, you enter the analysis phase. Here, you’ll make all kinds of decisions: which data are important? Which are most important? What does all of this mean qualitatively? Statistically? Often, we make these decisions in a way that favors our original hypotheses or desired outcomes.

Analysis Bias Example

An example of analysis bias would be if you find yourself saying things like, “That’s not important. It only came up a few times. These are just outliers.” While that could be true depending on the study, it could also be analysis bias seeping in and causing you to ignore an important nuance in your research results.
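
A simple discipline that helps: before writing off a rare theme as an outlier, tally your coded findings and weigh severity alongside frequency. Here’s a hypothetical Python sketch (the themes and severity tags are invented):

```python
from collections import Counter

# Hypothetical theme tags applied to interview notes, with a severity label.
tagged_findings = [
    ("slow export", "minor"), ("slow export", "minor"),
    ("confusing nav", "minor"), ("confusing nav", "minor"), ("confusing nav", "minor"),
    ("data loss on save", "severe"),
]

theme_counts = Counter(theme for theme, _ in tagged_findings)
severe_themes = {theme for theme, severity in tagged_findings if severity == "severe"}

# A theme that came up only once but is severe deserves a second look,
# not an automatic "it's just an outlier."
for theme, count in theme_counts.items():
    if count <= 1 and theme in severe_themes:
        print(f"'{theme}' appeared only once but is severe: review before dismissing")
```

The point isn’t the code itself; it’s making the dismissal decision explicit and reviewable instead of letting your hypotheses quietly filter the data.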

These are just a few research bias examples—there are truly countless ways that bias manifests in research projects. So, the next question is, what can we do about it?

Avoiding Bias In Research Study Design

By now, you’re probably sweating a little bit, wondering if maybe research is pointless due to our very human tendency to succumb to biased practices. Fortunately, that’s not the case. Now that you’re aware of the potential biases lurking around every well-intended corner, you can take steps to decrease bias in your user research and steer your product team in the right direction. Let’s take a look at some ways to do just that.

Approach your research study design meticulously

This is the user research version of starting off on the right foot.  When you’re designing your study, you make a series of decisions that, if done properly, can decrease the likelihood of potential bias. 

Here are some tips:

  • Define your user groups in the broadest way possible while still addressing your target population
  • Choose your research methods wisely: make sure that the methodology reflects what you’re trying to learn
  • If you’re conducting user interviews, write an interview guide before you begin interviewing and try to ensure that you are asking the same questions of everyone

Bonus Tip

Have a colleague look over your research plan before you actually start collecting data. This will help make sure that you’re not missing anything that could introduce bias.

Avoid conflicts of interest at all stages of user research

That product manager who’s been harping on the need to make a specific pivot all year long? Ideally, that’s not the person who should be leading the research study. While objectivity is still possible in this case, conflicts of interest like this often make it extremely difficult for people to collect data and analyze research results in an unbiased way.

An alternative here would be to have someone else—perhaps a designer or a user researcher on the team—design and lead the study but consult with the aforementioned product manager along the way. Overall, a good rule of thumb is that the less attachment someone has to a desired outcome, the more likely they are to deliver reliable research findings.

Acknowledge that research bias applies to all humans conducting research

I mean this in the most respectful way possible: while you are probably special in a variety of ways, when it comes to the potential for introducing bias in a study—you’re not.

All of the biases that we’ve reviewed are evidence-based tendencies that all people have. Once you adopt the mindset that even you could, in theory, suffer from a variety of biases—you’re well on your way to preventing them!

Always conduct a research retrospective

It’s okay to make mistakes in life, and the same is true when it comes to user research. Each time you do a study, make sure you sit with your team and go over your methodology from start to finish—what could you have done differently? Is it possible that some sneaky biases snuck in? 

Usually, you won’t find that your study is totally unreliable, but you’ll bring to light small tweaks that you could make to your next study in order to increase the reliability of your user research going forward.

Research bias is a thing, but it can be managed.

Now that you’re aware of what can trip you up when it comes to reliable user research, you’re ready to get out there and do some good work. At the risk of sounding somewhat ridiculous, I’ll tell you a parting secret: I have a small post-it that simply says ‘bias!’ on my desk. I find that it keeps everything we’ve discussed top of mind.

Even if an alarming post-it note isn’t your methodology of choice, find a way to keep the elimination of bias top-of-mind when you’re conducting user research, and you’ll be well on your way to delivering insights that help your team meet its goals.

Don't forget to subscribe to our newsletter for more product management resources and guides, plus the latest podcasts, interviews, and other insights from industry leaders and experts.

By Cori Widen

Cori Widen currently leads the UX Research team at Lightricks. She worked in the tech industry for 10 years in various product marketing roles before homing in on her passion for understanding the user and transitioning to research.