
When was the last time customers asked you to build a feature during an interview, only for none of them to use it once it shipped? I am confident that every single PM out there has had at least one round of user interviews end in a colossal flop.

The natural next question is, what went wrong during the interview? More than likely, we encountered one of the many research biases that led to us making the wrong product decisions.

Leading a product based on biased data is dangerous. Fortunately, you can avoid it by learning what bias is, what makes it risky, and learning from these five examples of bias in research that led to undesirable outcomes.

What Is Bias In Product Research, and What Are the Risks?

In product research, bias refers to the set of systematic errors that product and marketing teams make when conducting research. These errors usually produce incorrect analysis, which leads to decisions that do not reflect the true state of the product, market, and users.

A product manager’s decision-making carries a massive impact. A single decision can pivot the entire product line towards a new market or target persona. There are also daily execution-related decisions that impact the effectiveness of the engineering, marketing, AI, and other teams’ efforts.

Therefore, making decisions on biased data is especially dangerous, considering that you can lead people in the wrong direction and fail miserably as a PM.

Luckily, mitigating research bias is not as hard as it might seem. The hard part is to admit that your data is biased. Mitigation itself is relatively easy.

Types Of Research Bias In Quantitative Data Analysis

Let’s begin with the common types of bias that you encounter when working with data. They are usually related to errors in the way you gather, analyze, and act upon quantitative data.

Classical scientific research identifies a wide variety of research biases, including:

  • Publication bias
  • Measurement bias
  • Analysis bias
  • Design bias
  • Observer bias
  • Recall bias
  • Social desirability bias
  • Demographic and cultural bias
  • Response bias, and others.

But these are a bit too theoretical for our purposes. So, I’ll focus on the three types of bias that you will encounter in real life as a product manager.

#1: Sampling And Selection Bias

As the name suggests, sampling bias is about getting incorrect data because of the sample you have selected.

If your sample contains data skewed towards a certain characteristic, then the analysis you make will not be representative of the general population. You will end up incorrectly extrapolating the behavior of that skewed sample and applying it to the entire data set.

Netflix is one of the big tech companies that has been guilty of sampling bias. People in the United States loved its recommendation algorithm as it was able to suggest shows relevant to their interests. The rest of the world, however, hated it. The suggestions were simply terrible.

What was the reason behind it? Apparently, the AI model trained to make these suggestions consumed only U.S. viewer data and suggested things that were relevant to Americans.

How To Mitigate Sampling Bias

In Netflix's case, they simply started feeding international data to their models and retrained them based on a less-skewed study population.

But, if you want a more systematic approach to mitigating this type of bias, I can suggest a bit of an unorthodox approach that has worked for me in the past: running something statisticians call a cluster analysis.

A cluster analysis can find common traits in your sample population and cluster them based on these traits.

[Image: scatter plot of a sample divided into three clusters. Credit: Byjus]

In the image above, you can see that the population is clearly divided into three different clusters based on the traits chosen.

What you do is select a wide variety of traits and create these charts for each of them. Your data is skewed and prone to sampling bias if, for certain traits, you see only a single cluster. In the example of Netflix, the viewer country trait had a single cluster: United States.

Careful, though! Sometimes, it’s natural for your entire population to have a single cluster for a trait if it is describing their persona. For instance, if we ran a cluster analysis for the readers of theproductmanager.com, the majority of them would be product managers. This is natural as PMs are the persona the website is aimed at. So, you should always remove these types of traits from your analysis.
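If you want to try this yourself, here is a minimal sketch of the idea in plain Python. Instead of a full cluster analysis, it uses a simpler proxy: flagging any trait where a single value dominates the sample. The `find_skewed_traits` function, the threshold, and the viewer data are all hypothetical, chosen to echo the Netflix example:

```python
from collections import Counter

def find_skewed_traits(sample, dominance_threshold=0.9):
    """Flag traits where one value dominates the sample.

    A single dominant value is the rough equivalent of seeing only
    one cluster for that trait: the sample may not represent the
    wider population along that axis.
    """
    skewed = {}
    for trait in sample[0]:
        counts = Counter(record[trait] for record in sample)
        value, count = counts.most_common(1)[0]
        share = count / len(sample)
        if share >= dominance_threshold:
            skewed[trait] = (value, round(share, 2))
    return skewed

# Hypothetical viewer sample: the "country" trait collapses
# to a single value, while "genre" is reasonably spread out.
viewers = [
    {"country": "US", "genre": "drama"},
    {"country": "US", "genre": "comedy"},
    {"country": "US", "genre": "sci-fi"},
    {"country": "US", "genre": "drama"},
]
print(find_skewed_traits(viewers))  # {'country': ('US', 1.0)}
```

Remember the caveat above: a trait that defines your persona will naturally form a single cluster, so exclude those traits before running the check.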

#2: Procedural Bias

This type of bias refers to errors in your study design and research process. It can stem from using the wrong tools, incorrect statistical formulas, or even flawed data collection methods.

From my experience, procedural bias is especially prevalent in user surveys. If your survey contains leading questions, respondents will most likely fall victim to your “push” toward a certain type of answer.

Here’s what a typical leading survey question will look like:

How much do you love our product?

Phrased this way, you are nudging users toward a more positive answer, even if they hate what you do.

The more neutral variant of the same question will look like this:

How would you rate our product?

In this case, you clearly indicate that you are okay with negative answers.


How To Mitigate Procedural Bias

The best way to make sure your research is free of procedural bias is to pick the right tools and methodologies for the specific type of analysis you are doing.

Randomize Answers

If you're conducting surveys, apart from avoiding leading questions, you will also need to randomize the question order because there’s a chance that people will get bored midway through the questionnaire and start giving meaningless answers. Without randomization, the last couple of questions will get near zero useful answers.
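As a sketch of that idea, here is one way to randomize question order in plain Python. Seeding the shuffle with a (hypothetical) respondent id keeps each person's order stable across sessions while spreading survey fatigue evenly across questions:

```python
import random

questions = [
    "How would you rate our product?",
    "How often do you use the search feature?",
    "What almost stopped you from signing up?",
    "Which integration matters most to you?",
]

def build_questionnaire(questions, respondent_id):
    """Return a per-respondent question order.

    A dedicated random.Random seeded with the respondent id makes
    the order reproducible for that person without affecting the
    global random state.
    """
    order = questions[:]
    random.Random(respondent_id).shuffle(order)
    return order

# Different respondents see different orders, so no single
# question is always stuck at the fatigued end of the survey.
print(build_questionnaire(questions, respondent_id=1))
print(build_questionnaire(questions, respondent_id=2))
```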

Pay Attention to Statistical Significance

Moreover, remember that surveys are only partially quantitative in nature, so don’t lean too heavily on statistical analysis of survey data. For instance, if you improved a certain feature and your NPS score increased by 2%, that difference is too small to be considered statistically significant. A 2-fold increase, on the other hand, would be reliable.
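For conversion-style metrics, a standard two-proportion z-test gives a quick way to check whether a lift clears the significance bar. A minimal sketch, with made-up sample sizes for illustration:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test statistic: is the difference between
    two conversion rates statistically significant, or just noise?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled proportion under the null hypothesis of no difference.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# A 2% relative bump on 1,000 respondents per group...
z_small = two_proportion_z(100, 1000, 102, 1000)
print(round(z_small, 2))  # well below the ~1.96 threshold for p < 0.05

# ...versus a 2-fold jump, which clears the threshold easily.
z_big = two_proportion_z(100, 1000, 200, 1000)
print(round(z_big, 2))
```

The |z| > 1.96 rule corresponds to the conventional 95% confidence level; with survey-sized samples, small percentage bumps rarely get anywhere near it.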

Choose the Right Tools

Now, let’s talk about selecting the wrong tools. If you are measuring the conversion rate of your checkout page, then using a heatmap tool would create procedural bias, because heatmap data is too unreliable for that kind of measurement.

Heatmap software is better suited to debugging behavioral problems, whose fixes you can then validate with an A/B test. For measuring conversion rates, rely on product analytics tools such as Amplitude or GA4.

#3: Survivorship Bias

Biased research methods have been with us for a long time. There’s an interesting story about American bomber planes in World War 2. Each time the bombers returned to base, engineers would examine the bullet holes from anti-aircraft fire on their hulls and plot them like this:

[Image: diagram of bullet-hole distribution across returning bombers. Credit: Wikipedia]

The natural course of action was to reinforce the spots with most bullet hits with thicker armor plates. But this reinforcement had nearly zero effect on the survivability of American bombers.

After carefully examining the case, engineers realized that they were dealing with only those planes that had managed to survive and come back to the base. So, the parts they were supposed to reinforce were actually the ones where the planes had no bullet holes.

How To Mitigate Survivorship Bias

Let’s shift from WW2 planes to digital products. As a PM, when managing your data collection process, you will most likely deal with these two potential instances where bias sneaks in:

Filtering Data For Analysis

When analyzing user behavior, you can end up filtering out users who have performed certain actions that are not relevant to your analysis.

I faced this case fairly recently. We were trying to improve the activation rate for our influencer marketing product. The data analytics team built a funnel report where the conversion rates from one activation step to another were fairly high (somewhere around 50-70%).

This data baffled us because we were seeing an overall low rate of people engaging with the product compared to the number of signups. After carefully analyzing the data, we understood that the data team had applied a filter and only left the data for our power users.

Of course, power users were most likely the ones who had passed the activation flow with ease. So, the conversion rates on the funnel were not really representative of the real situation. When we looked at the funnels, taking all of our signups into account, the rates dropped significantly.

Lesson learned:

  • Always consider the analysis's context and goal when filtering your data.
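Here is a minimal sketch of how such a filter distorts a funnel. The user records, step names, and the `funnel_rates` helper are hypothetical, but the shape of the problem matches the story above:

```python
def funnel_rates(users, steps):
    """Step-to-step conversion rates for a list of user event sets."""
    rates = []
    reached_prev = users
    for step in steps:
        reached = [u for u in reached_prev if step in u["events"]]
        rates.append(len(reached) / len(reached_prev) if reached_prev else 0.0)
        reached_prev = reached
    return rates

# Hypothetical signups: power users breeze through activation,
# while most other signups drop off right after signing up.
users = (
    [{"power": True, "events": {"signup", "connect", "first_campaign"}}] * 6
    + [{"power": False, "events": {"signup"}}] * 14
)
steps = ["signup", "connect", "first_campaign"]

power_only = [u for u in users if u["power"]]
print(funnel_rates(power_only, steps))  # [1.0, 1.0, 1.0] — looks great
print(funnel_rates(users, steps))      # [1.0, 0.3, 1.0] — the real picture
```

The filtered funnel only ever sees the "survivors" of activation, which is exactly the bomber problem in product-analytics form.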

Analyzing Pre-screened Survey Data

Another interesting case in research design that I had to deal with recently was related to a survey we ran to understand the needs and pains of influencer marketers. Again, the study results were a bit baffling to us because the needs of respondents did not match the insights we had from our user interviews.

The problem here was our pre-screening process for users who took the survey. Apart from asking if they were influencers (to avoid getting responses from irrelevant people), we also asked about the size of their audience/followers.

For some reason, we had disqualified influencers with fewer than 50,000 followers and only considered answers from the relatively larger ones. Our product was supposed to target small influencers as well, so the data we got from the survey was not accurate for us.

Lessons learned:

  • Cross-check your research findings with qualitative data. If there is a big difference between the findings of these two, you might have done something wrong.
  • Be careful with your pre-screening process; you might end up disqualifying relevant users, too.
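One lightweight safeguard is to make the screening step itself report what it throws away, so an over-aggressive cutoff is visible before you analyze anything. A hypothetical sketch (the respondent records and the 50,000-follower cutoff are made up to mirror the case above):

```python
def screen_respondents(respondents, min_followers=0):
    """Split respondents into qualified and disqualified groups,
    recording the reason for each disqualification."""
    qualified, disqualified = [], []
    for r in respondents:
        if not r["is_influencer"]:
            disqualified.append((r["id"], "not an influencer"))
        elif r["followers"] < min_followers:
            disqualified.append((r["id"], "below follower cutoff"))
        else:
            qualified.append(r)
    return qualified, disqualified

respondents = [
    {"id": 1, "is_influencer": True, "followers": 12_000},
    {"id": 2, "is_influencer": True, "followers": 80_000},
    {"id": 3, "is_influencer": False, "followers": 500},
    {"id": 4, "is_influencer": True, "followers": 30_000},
]

# A 50k cutoff silently drops 2 of the 3 relevant influencers.
qualified, disqualified = screen_respondents(respondents, min_followers=50_000)
print([r["id"] for r in qualified])  # [2]
print(disqualified)
```

Reviewing the disqualified list against your target personas catches cases where the screener, not the market, is shaping your results.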

Types Of Bias In Qualitative Research 

With the systematic errors related to quantitative research covered, let’s move on to another important category of research—qualitative studies.

For product managers, the single most important type of qualitative research is interviewing users. So, I’ll focus on the two major biases you usually encounter when talking to your customers and users.

Note: These are also the two types of biases that you can manage and mitigate using a proper product ideation tool.

#4: Confirmation Bias

One of the greatest traits of product managers is strongly believing in something when others don’t. That’s how you create innovative products and revolutionize your industry. However, that same tendency to stick to your opinions is also a major weakness for PMs.

The reason is that PMs often suffer from confirmation bias: actively listening for the parts of a conversation that align with your beliefs and discarding the rest.

[Image: Tom Fonder infographic]

This is quite dangerous for people who then make strategic decisions on what the product will look like and how to take it to market. When confirmation bias is at play, your user interviews become completely meaningless, and you end up making blind decisions (and you can forget about user-centricity).

How To Mitigate Confirmation Bias

I have to admit, I am guilty of “practicing” confirmation bias too. But there is a solution that has helped me mitigate it fairly well.

You can call it the “code review” of user interviews: a process in which product people cross-review some of their peers’ interviews and suggest learnings to add or remove.

If you’re the only product manager in the company (common for small startups), then do this exercise with your founder. Don’t forget that founders are essentially product people too.

#5: Participant Bias

While confirmation bias is a dangerous beast, it is among the easiest ones to fix. The next one I want to talk about, however, is much harder to mitigate.

Participant bias occurs when your respondents say things based on what they think you want to hear instead of what they really think about that particular topic.

For instance, during an interview you can ask research participants whether they would buy your product if you added the feature they have just requested. Can you guess the answer? From my experience, the vast majority will say yes.

Does that mean you should run to the engineering team and ask them to build that feature ASAP? No! The question you asked is flawed and naturally pushes people to say something that will please you.

Granted, the feature may cover a major pain point for some of these interviewees, and they will end up paying for it. But the majority will simply use the feature for free or not use it at all.

Apart from asking flawed research questions, study participants will also give you biased answers if you are paying them for the research study. They usually tend to “earn the paycheck” by giving favorable answers and making you happy without realizing that they are doing more harm than good.

How To Mitigate Participant Bias

As I already mentioned, participant bias is quite hard to mitigate. This is because you don’t really know if the participant is being sincere with you or not.

But, two tactics can help you decrease the chances of getting bad answers.

Asking questions about their past experience

If you ask questions about a theoretical event in the future, interviewees will most likely say things that they will never actually do in real life.

If you ask someone on January 2nd if they would be willing to go to the gym for 12 months in a row, they will say yes. But we all know that most people end up ditching their membership only after a couple of visits.

To avoid this, PMs prefer asking questions about things that have happened in the past. This way, the PM can learn about user behavior based on what users have done and not what they wish they did.

Setting the right expectations

Sometimes, the nature of your question or study does not allow you to ask about your users’ past. For example, you want to know how they would feel if you increased the price of your product.

In this case, you can make it clear that it is okay for you to hear negative answers, and their honest feedback is the only way they can “earn their keep.”

Admitting Your Biases Is 80% Of The Battle

That’s right. The worst part of being biased is that most of us genuinely believe that we are right. So, the single most important step that you can take to mitigate your own biases is to recognize them.

The moment you do this, you will naturally start battling them and eventually reach a bias-free state. (Or at least a "bias-managed" state!)

Don't forget to subscribe to our newsletter for more product management resources and guides, plus the latest podcasts, interviews, and other insights from industry leaders and experts.

By Suren Karapetyan

Suren Karapetyan, MBA, is a senior product manager focused on AI-driven SaaS products. He thrives in the fast-paced world of early stage startups and finds the product-market fit for them. His portfolio is quite diverse, ranging from background noise cancellation tools for work-from-home folks to customs clearance software for government agencies.