In the rapidly evolving field of product management, the blend of qualitative and quantitative research is essential for making informed decisions. The challenge, however, lies in how to effectively combine these two approaches to paint an accurate picture of user needs and behaviors.
In this episode, Hannah Clark is joined by Laura Klein—Principal at Users Know—to talk about the intersection of these research methods and how product teams can best leverage them to build successful products.
Interview Highlights
- Meet Laura Klein: UX Expert and Author [01:05]
- Laura is a UX/product person with a background in engineering.
- She founded Users Know in the early 2010s.
- Laura used to work at a design agency where she learned a lot, but left because she wanted more control over her projects and clients.
- She prefers to work with clients who understand her work style.
- Laura is selective about the projects she takes on with Users Know, focusing on those she can truly deliver value for.
- Challenges in Combining Qualitative and Quantitative Research [03:37]
- Teams struggle with both qualitative and quantitative user research methods.
- Qualitative: People lack the knowledge, skills or confidence to conduct it effectively.
- Quantitative: Difficulty understanding statistics, instrumenting products for data collection, and interpreting messy data.
- Combining poorly done research (qualitative or quantitative) leads to unreliable results.
- Some teams excel at qualitative research but struggle to integrate findings into product decisions.
- Overly “data-driven” product managers might miss the “why” behind the data.
- Understanding the Importance of Good Data [07:04]
- Combining qualitative and quantitative research is most effective when addressing a specific question.
- Qualitative data (user research) helps understand why users behave in a certain way, while quantitative data (analytics) reveals what users are doing.
- Start with qualitative research when you're pre-product or have a limited user base.
- Analyze quantitative data to pinpoint drop-off points in a user journey (e.g., checkout funnel).
- Use qualitative research to understand the reasons behind the drop-off points identified in quantitative data.
- Focus on finding the root cause (why) before jumping to solutions (how).
User research involves really understanding your users: their mental models, their context, what they are trying to do, and the problems they have in general and with your product. All these insights will help you understand why certain things are happening.
Laura Klein
- User Research vs. Jumping to Solutions [14:11]
- Qualitative research is avoided due to:
- Perceived time consumption
- Difficulty in recruiting participants (especially in large organizations)
- Lack of expertise or comfort with user research methods
- Ideating solutions is favored because it’s:
- Easy and enjoyable
- A natural human tendency
- Strong personal opinions about product flaws can bias decision-making.
- The core issue isn’t a lack of features or AI, but rather the product failing to address user problems in a valuable way.
There are problems that you could certainly solve with AI, and some of them might even be solved better with AI. But your problem is not that the product doesn’t have enough features; your problem is that it doesn’t solve the user’s problems in a way that is useful and valuable to them.
Laura Klein
- The Role of Segmentation in Research [17:00]
- Qualitative research might be difficult to conduct in certain situations:
- Enterprise environments with various user groups (buyers, customers, administrators).
- Early stage direct-to-consumer market with limited users.
- Scenarios with user segmentation that makes data analysis unclear.
- Organizations with a small user base.
- When quantitative data (e.g., A/B testing) is unavailable, focus on qualitative research and make informed best guesses.
- Sample Size in User Research [18:51]
- Determining sample size for quantitative research (e.g., A/B testing) is complex and requires a data scientist due to factors like:
- Number of people needed in each variation of the test.
- Presence of outliers (e.g., high-spending users) that skew data.
- Qualitative research also benefits from experts but for different reasons:
- Not aiming for statistical significance but for identifying recurring patterns.
- Traditional view suggests needing 5 participants to find usability issues, but this may not apply across product types or user groups.
- Focus on continual data synthesis to identify patterns in qualitative research.
- Sample size in qualitative research is less important than identifying recurring problems and segmenting users based on their needs and behaviors.
- Segmentation for Effective User Research [24:04]
- Segmentation is crucial in research, especially when dealing with features that cater to different user needs.
- Consider the example of an AI-powered email writing assistant for recruiters:
- The feature benefitted most users who struggle with writing outreach emails.
- However, a small group of expert recruiters with their own tested templates found the AI disruptive.
- Segmenting users based on their needs and expertise is key to avoiding solutions that unintentionally harm a valuable user group.
- Fun in the Research Process [28:40]
- Laura argues that research can be fun, especially the qualitative side.
- Creative research methods include:
- Simulating real-world environments (e.g., fake hospitals) to study user behavior.
- Following users around to observe their work and daily routines.
- Co-design activities like using Legos to brainstorm solutions.
- These methods can be enjoyable because they:
- Allow users to express themselves creatively.
- Offer a glimpse into users’ thought processes.
- Usability testing can be challenging but rewarding, as it helps identify problems and leads to improvements.
Meet Our Guest
Laura fell in love with technology when she saw her first user research session over 20 years ago. Since then, she’s worked as an engineer, user experience designer, and product manager in Silicon Valley for companies of all sizes. She’s written two books for product managers, designers, and entrepreneurs, Build Better Products (Rosenfeld Media ’16) and UX for Lean Startups (O’Reilly Media ’13), and she’s a frequent speaker at tech conferences, including SXSW, Lean Startup Conference, and Mind the Product. She is currently Principal at Users Know, a UX design consultancy, and works as a coach and adviser to product teams and startups.
There is an enjoyable aspect to understanding how people do things. You can set up co-design activities that are really fun.
Laura Klein
Resources From This Episode:
- Subscribe to The Product Manager newsletter
- Connect with Laura on LinkedIn
- Check out Users Know
Related Articles And Podcasts:
- About The Product Manager Podcast
- How To Write UX Research Objectives (with 14 Examples)
- How To Create A UX Research Plan In 6 Steps (with Examples!)
- How To Create More Intuitive Experiences: The Smartest UX Design Research Process
- The 15 Best UX Research Podcasts
- The Real ROI Of UX: How To Convince Leadership To Invest In Users
Read The Transcript:
We’re trying out transcribing our podcasts using a software program. Please forgive any typos as the bot isn’t correct 100% of the time.
Hannah Clark: Are you more of a left brain or right brain thinker? If you don't know what I'm talking about, left brain folks tend to be more analytical and metric focused, whereas right brain folks are more likely to use storytelling to make sense of the world. So when we think about qualitative and quantitative research, you can probably guess which types tend to gravitate to which research methods. And that's great—we need both, after all—but as it turns out, putting the two together is a lot easier said than done. How do you blend numbers and stories to paint an accurate picture? And regardless of research method, how do you ensure that your data is any good?
My guest today is the amazing Laura Klein, Principal at Users Know and the author of Build Better Products and UX for Lean Startups. Over the past 20 years, Laura has worked with a lot of product teams and watched many of them stumble over the same challenges when it comes to conducting and applying user research. She's about to share some of her most useful observations and advice for product teams, particularly later in the episode when we get into the key role of segmentation in research. You're going to need both sides of your brain for this one, so let's jump in.
Welcome back to The Product Manager Podcast.
Laura, thank you so much for making time in your schedule to talk to us.
Laura Klein: Oh, thanks so much for having me.
Hannah Clark: We'll start off talking a little bit about yourself and how you came to found Users Know.
Laura Klein: Yes. So as you said, I'm a UX and product person. I'm a recovering engineer. I have been around tech since the 90s. So it's been a while. I've done a whole bunch of different things. I originally founded Users Know back in the early 2010s. I actually don't remember the date, I used to know for sure. I learned how to do design at a really great boutique design agency, where we would do design and research for clients.
I learned a huge amount there. It was great. This was after I had been an engineer for quite some time, at startups mostly. I finally left, honestly, because I wanted the ability to fire my clients. That is the hundred percent truth. I always joke that when you work for an agency, you've got to make a lot of people happy.
You've got to make your bosses happy, you've got to make your clients happy. And ideally, if you're a UX designer, you also have to make your clients' customers happy. And weirdly, sometimes those things are in conflict. And what I realized was that I did much better when I could work with clients who were very much on the same page with me about how we were going to do the work.
And I've really enjoyed working with people who sort of buy into my way of working. Not that it's particularly didactic, but I have a specific way of doing things, and if you're on board with it, then we get along great and do a lot of good work together.
Hannah Clark: Yeah, I come from an agency background myself, and I can very much relate: things just work so much better if it's a fit. You can't force a fit sometimes.
Laura Klein: Oh, you want this other thing that I didn't negotiate with you, and I'm not sure I can do that for you. And half the time when I actually do biz dev for Users Know, which isn't always, because sometimes I have full-time jobs or sometimes I'm fully booked.
But when I'm actually doing biz dev, I spend a lot of time telling people, oh, I don't do that. You want to talk to so-and-so, somebody I know who does that thing; you're just going to be much happier with them because they're better at it than I am. And so I spend about half my time turning people down.
Because there's only one of you, and you only need a couple of projects. You want to find the ones where you go, oh, that's a thing I could really do well. Right?
Hannah Clark: Yeah, I think that's almost a marker of success, to be in a position where you're able to really choose the projects that are going to get you and your client where you both want to be. So I very much respect that.
So today we're going to be focusing on combining qualitative and quantitative research to make better decisions in a product context. From your own experience, what do you see teams struggling with in terms of combining those two disciplines?
Laura Klein: All of it. So this is one of the ways in which I work that is very specific. Here's the problem: I see people struggle weirdly with both sides of this. A lot of folks really have a hard time doing qualitative user research. A lot of folks have a really hard time doing quantitative user research.
And when it comes down to combining the two, you're unicorn hunting, right? It's just a really hard thing to do, because you've got to get so much right. And yeah, I'm not kidding. They struggle with all of it. A lot of people don't know how to do good qualitative research.
They don't know when to do it. They don't know when to hire all of the wonderful professionals who are out there who do this for a living and who are fantastic at it and who really know what they're doing. On the quantitative side, math is hard, right? And it's not just that the math is hard; statistics are not intuitive for anybody, really.
And beyond that, it's hard to instrument your product so that you can get the right data. So many companies collect all the data, but somehow none of it is correct. It's just a mess on both sides. And then you take badly done qualitative research, which maybe didn't address the real problems or didn't have a good question it was fundamentally answering, and you combine that with data that was haphazardly collected and maybe not even correct.
And then you try to combine the two. You get what you get, and none of it is good. And this is not everybody, right? That's the worst case scenario. I often see companies that maybe have a really great qualitative research team, and they're great at collecting that information.
Maybe not so good at integrating it into the product decision making. Or you see product managers sometimes who are so focused on the data that they're not looking at the qualitative side of things at all. So they don't really understand why anything's happening. They're just looking at numbers all day.
And they're like, well, maybe it's this. Maybe it's this other thing. Maybe it's another thing. That's a really common pattern: the "data-driven" product manager. Which, look, I'm all for data-informed product decision making, but the data don't inform you about why anything's happening. They just tell you what is happening.
And sometimes they don't even do a great job of that. So yeah, all of it's hard. Which is why you shouldn't bother and you should just go with your gut. No, that's not true. I'm sorry.
Hannah Clark: Yeah, absolutely not the takeaway.
Laura Klein: That's absolutely not it. The takeaway is absolutely not to give up.
Hannah Clark: No, we forge on. The show must go on.
Laura Klein: Yes, we acknowledge that what we are doing is difficult, we give ourselves the space to learn new things, and we admit that we might not know it all. I learn stuff all the time about both sides of this. And we work with experts who are good at each side of it, and we do the best we can.
Hannah Clark: I'm excited to dig into that a little bit, and also into some of the ways we can ensure that the data we're collecting is a little more reliable, or provides a better source of insight for what we're trying to achieve.
But in the meantime, let's make the leap of assuming that we have good quality data on both the qualitative and quantitative research sides, just for the purpose of this example.
Laura Klein: Yeah, we're great at both. Yeah.
Hannah Clark: Yeah. Yeah. Well, imagine we have got the best of the best on the job. What are some of the ways that we can start to bring those two types of research together in a product decision making context?
Laura Klein: Right. So let's say that you've got, we won't even say that we have great data.
We'll say that we have the people and the processes in place so that we can go out and get the great data, because the first step is really trying to figure out what question you're trying to answer. That's just as hard as the rest of it. A lot of people say, like I said, oh, we have dashboards in place.
We do usability testing all the time, and we do all these things. That's okay, but you're trying to make a decision, right? Like you're trying to make a specific decision. Do we want to do X? What new feature do we want to build next? What is my priority? All the kinds of decisions that we make every single day as product folks.
Great. Which one are you trying to make right now? How should we improve the activation on this particular feature? Or how can we improve acquisition? Or how can we keep people from churning out after X months? These are the types of questions that often we need to ask ourselves, right? Sometimes we have to ask ourselves, is this feature or this feature, right?
Which is more likely to do the thing that I most need to do? Which again comes back to: what are my metrics, and what the hell am I trying to do with them? So you've got to start with a good question that can be answered. And it can't be like, how much would somebody pay for that? Well, I mean, it can be, but that's actually a super hard question to answer in either sense.
So that's a tricky one, but sometimes you have to answer that one too. But you really have to narrow it down: what is the thing I'm trying to answer? And then you have to look at, okay, great. Like I said, the quantitative data are going to tell you what is currently going on with your product. And when I talk about quantitative data, I'm not necessarily talking about market research, although that can also be something that you want to bring in, maybe at the earlier stages.
But I'm talking specifically about analytics and metrics, things that you're measuring on your product. So, for example, the quantitative data often don't do a great job of telling you anything when you are pre-product. You don't have a thing out there; you don't have any quantitative data. Or if you don't have a lot of users yet, you may not have enough quantitative data.
So you've got to think about when you're going to use that. But fundamentally, all it's going to tell you is: what are people doing? What are people clicking on? What are people buying? If they click on this, do they go on to buy this other thing? It tells you all sorts of stuff about what is happening, and nothing about why.
Qualitative data, and we're talking here about user research and/or usability testing, which are two different things. Usability testing is part of user research, but user research is really understanding your users: what their mental models are, what their context is, what they're trying to do, what kinds of problems they have in general and with your product. All of those things are going to help you understand why the things that are happening are happening.
That's it. It will not tell you what is going on. In fact, qualitative research is often done with much smaller sets of people, and it's an iterative process. It's not like you do five and you're done forever, but you might be making some decisions based on talking to five people. You're talking to some people, and then you're doing another round, and you're talking to more people, so you're talking to people constantly.
But you can only talk to so many people, and you're talking to them over a long period of time. And it is entirely possible that you can talk to people and get a really good vision of who they are and what they're doing. But you are never going to get as good a picture of what is going on just by talking to people.
But you will understand why. And the examples here are things like, let's say that you have a checkout process. We'll just go with something real easy, something we've all done, something we may have all built, right? This is a thing that is very common. Everybody on here has been through a checkout process on a digital product of some sort.
They generally tend to be fairly linear. They have several steps. There is some information that the company absolutely needs to get from you. And if you have your data instrumented correctly, you can see where people are falling out of that process. And I guarantee you, somebody has fallen out of that process.
We call it a funnel. It's more like a sieve, right? Best case scenario, it's a funnel and almost everyone gets to the end. Actual case scenario, people are falling out constantly. Shitty funnel. So this is happening. Data will tell you exactly where they're falling out. That's great. Then what I see a lot of product managers do is they say, oh, everybody's falling out at this part where they have to put in their credit card information, and then they start solutionizing or ideating.
Oh, we should do this. We should make this change. We should make this easier to see. Or you could just watch some people use the damn checkout process and try to understand: is there something obvious happening here? Because it may be a lot of things. I've seen dozens of different things that make people fall out of checkout processes or onboarding processes or data collection, anything. I have seen people fall out for all kinds of reasons.
And the important thing, the reason this matters, the reason the why matters, is that the correct solution to that problem depends on what the damn problem is. And the problem isn't "people are falling out." I mean, that's your problem. Their problem is "I didn't understand this thing," or there was a bug that prevented them from doing this thing, and if you don't see it or you don't know that it exists, you just will never fix it.
You'll never fix the checkout problem, or you don't have the right payment methods or a dozen other things. And it's sort of shocking when you just watch somebody and you're like, Oh, it's that little thing. I can just fix that. I've had engineers watching me run user research sessions or usability testing, and they have fixed the bug that was causing the problem during the test and pushed it to production.
Like, we caught a problem. It was obviously a problem, obviously a bug. The engineer watching, laptop in their lap, just fixed it. It was funny, because we came back to it later in the session and the person was like, oh, I thought this was broken.
I was like, so did I. And I turned around, and the engineer just nodded at me and gave me a little thumbs up. And I'm like, well, I guess it's not a problem anymore. So I'm just saying this is a good example of not just understanding what is happening and then jumping to a whole bunch of ideas about how to solve it, but figuring out why it's actually happening.
And then maybe there's a very simple solution for it, and you can move on with your life and on to more important problems.
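To make the "where are they falling out" step concrete, here is a minimal sketch of that kind of funnel report. It assumes you have event logs as (user_id, step) pairs; the step names and data below are invented for illustration.

```python
from collections import defaultdict

# Checkout funnel steps, in order (step names are hypothetical)
FUNNEL = ["view_cart", "enter_shipping", "enter_payment", "confirm_order"]

# Event log as (user_id, step) pairs, invented for illustration
events = [
    ("u1", "view_cart"), ("u1", "enter_shipping"), ("u1", "enter_payment"),
    ("u2", "view_cart"), ("u2", "enter_shipping"),
    ("u3", "view_cart"), ("u3", "enter_shipping"), ("u3", "enter_payment"),
    ("u3", "confirm_order"),
    ("u4", "view_cart"),
]

# Count distinct users who reached each step
users_at_step = defaultdict(set)
for user, step in events:
    users_at_step[step].add(user)

prev = None
for step in FUNNEL:
    n = len(users_at_step[step])
    if prev is None:
        print(f"{step}: {n} users")
    else:
        print(f"{step}: {n} users ({n / prev:.0%} of previous step)")
    prev = n
```

The numbers tell you where people drop; as Laura says, only watching a few users go through that step tells you why.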
Hannah Clark: What do you think is preventing us from reaching for that approach first, rather than solutionizing? Why is that not the go-to in your view?
Laura Klein: Qualitative research is hard. It is often seen as time consuming, and in fact, it can be time consuming. In some organizations, it is just much harder to do than in others, right? If you're in an enterprise organization where you're very limited in the number of users you're even allowed to talk to, or it's just very hard to recruit people, or you don't have a solid research ops process set up that helps you recruit folks.
It's a lot of work. It can be a lot of work. It can be reasonably easy if you already have things sort of set up so that you can spin up the right kind of research at the right time. Some people just don't have that expertise. They don't feel comfortable going out and talking to people.
And also, ideating is easy and fun to do, and you can just do it in a room. It's just a very human nature kind of thing. I can't tell you how many times I have conversations with people about anything, not products, just anything. What do you want for dinner? Oh, I don't know.
We'll just start talking about different kinds of foods. You're not sitting there with a whole process for figuring it out. Human brains, I think, are perfect for just coming up with ideas, right? Ideas. Fun. Easy. Not always great.
And not always the right ones. That's sort of the thing. I also think sometimes a lot of product folks have very strong opinions about what is wrong with their product, right? We all know the things that we make are not perfect, and we may have very strong opinions, and it is very easy to get caught up in this.
Well, I knew that this was a problem because of blah, blah, blah. And so I'm going to go and solve this problem that I know exists, even if it maybe isn't technically related to the problem that you're running into. I see that a lot too, right? Folks are like, well, but I want to add AI. So all of the problems are that we don't have enough AI. And I'm sorry, I can tell you right now that your problem is not that you don't have enough AI.
Hannah Clark: Say it louder for the people in the back. I could do a whole other episode on that.
Laura Klein: Yeah, no, me too. I mean, look, there are problems that you could certainly solve with AI. Some of them you might even solve better with AI. Who knows? But I always joke that your problem is not that the product doesn't have enough features.
Hannah Clark: Cheers to that. Yeah, and definitely not that it doesn't have an AI feature.
Laura Klein: Yeah, it doesn't have a specific feature. Yeah, your problem is that it doesn't solve the user's problems in a way that is useful and valuable to them.
Hannah Clark: Yes, I need a tattoo of that.
Laura Klein: Maybe just get it embroidered on a pillow.
Hannah Clark: Yep.
Laura Klein: It's a little long.
Hannah Clark: I couldn't agree more. It doesn't really flow on the body.
Anyway, let's talk a little bit about some of the different situations in which it's appropriate to apply one versus the other, so qualitative versus quantitative. I think that's a really good succinct example that you've given about how one informs the other. Obviously, there are contexts in which one is more valuable than the other. So let's talk a little bit about some of the outliers.
Laura Klein: Okay. One of the things that you need to remember is that sometimes one or the other is absolutely not available to you.
In which case, you do the best with what you got. Like I said, sometimes it is very hard to get qualitative data from the right people in certain kinds of environments, often enterprise. And enterprise is fascinating because you have multiple types of users, right? Often you have the buyer and the customer who are wildly different and have entirely different needs and desires.
And so it might be much easier to get input from one or the other. And don't get me started on the administrators. Although I've done a lot of research with them, and it's extremely useful, but they're hard to get. So sometimes it's just very hard to get that, but still totally worth it.
And in that case, my recommendation is get a process in place that helps you find those folks. Quantitative data, once again, enterprise is a good example. Sometimes you just don't have enough, right? Especially if you're talking about quantitative data in terms of A/B testing, which I think is incredibly useful in certain circumstances, though not the way most people use it.
But A/B testing is great if you have enough people to be able to see the difference. But if you're early in a direct-to-consumer market, or you have too many different types of segments to really understand the difference, or you're in some kind of enterprise organization where you just don't have that many users.
You're just never going to get that data. You can't. That is not an option for you. I'm sorry. You are going to have to do qualitative and make your best guess.
Hannah Clark: I'm so curious, how do we determine the sample size that we need before we proceed, in terms of getting that kind of quantitative data?
Laura Klein: Okay, on the data side of things, I'm going to tell you that you need to talk to an actual data scientist, because, I wasn't kidding, math is hard, and this particular type of math is incredibly hard. I've seen people screw it up a lot, and there are all kinds of weird problems that you don't expect to happen, like the peeking problem.
That's P-E-E-K-I-N-G, where you're not supposed to look at the data before the test is done. Anyway, it's a whole thing. There's tons of information out there, so I'm not the one to explain that math to you. And it is too varied; it depends on things like the number of people that you need in each branch of an A/B test.
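The peeking problem is easy to see in a quick simulation. This sketch, in plain Python, runs A/A tests (both branches have the same true conversion rate, so there is nothing to find), but checks a two-proportion z-test after every batch of users and stops at the first p < 0.05. The early-stopping rule pushes the false positive rate well past the nominal 5%. This is only an illustration of the pitfall; a real analysis would use a proper sequential-testing method.

```python
import random
from math import erf, sqrt

def z_to_p(z):
    # Two-sided p-value for a z statistic
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

def aa_test_with_peeking(n_batches=20, batch=100, p=0.10):
    # Both branches share the same true rate p, so any "significant"
    # result is a false positive.
    a_succ = b_succ = n = 0
    for _ in range(n_batches):
        n += batch
        a_succ += sum(random.random() < p for _ in range(batch))
        b_succ += sum(random.random() < p for _ in range(batch))
        pa, pb = a_succ / n, b_succ / n
        pooled = (a_succ + b_succ) / (2 * n)
        se = sqrt(2 * pooled * (1 - pooled) / n)
        if se > 0 and z_to_p((pa - pb) / se) < 0.05:
            return True  # peeked, saw p < 0.05, stopped the test early
    return False

random.seed(0)
trials = 1000
fp_rate = sum(aa_test_with_peeking() for _ in range(trials)) / trials
print(f"False positive rate with peeking: {fp_rate:.1%} (nominal: 5%)")
```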
There's also the question of variance. We run into this problem a lot: if you are in a situation where you can have massive outliers in your data set, like if you have whales, people who spend tens of thousands of dollars while most people spend a dollar, that's very easily going to throw off your A/B data sets. You need to start looking at things like that.
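For a feel of the per-branch math that Laura defers to data scientists, here is a standard power calculation, a sketch assuming statsmodels is installed and that your metric is a simple conversion rate; the baseline and minimum detectable lift are made-up numbers.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.10  # current conversion rate (assumed)
target = 0.11    # smallest lift worth detecting (assumed)

# Cohen's effect size h for comparing two proportions
effect = proportion_effectsize(target, baseline)

# Users needed in EACH branch for 80% power at alpha = 0.05
n_per_branch = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
)
print(f"~{n_per_branch:,.0f} users per branch")  # roughly 14-15k here
```

And this assumes a well-behaved metric. A revenue metric with a few whales in it has enormous variance, which pushes the required sample even higher, or calls for medians, trimming, or capping instead of raw means.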
So there's all kinds of different variables that you need to look at to get your data right, which is why I say most people suck at it, because it's hard, and you need experts doing that. Qualitative research, I also recommend that you get experts to do it, because it's also hard, but in a very different way.
On qualitative sample size, I always like to say that you're not looking for statistical significance. That's just not a thing that you're going to get. And people are like, oh, we did a statistically significant qualitative study. I'm like, no, you didn't. Come on. What you're looking for is patterns that repeat and are predictable. Which is why there's that old study that's been around, I think since I started doing research back in the 90s, that said something like, you're going to see most of your problems in five people.
But I mean, okay, sure. That's very much about usability testing specifically, that is, finding barriers in a particular interface for a product. It makes a lot of assumptions, like that everybody uses the product the same way. So that might actually be true for something like a checkout flow that is only in one country and only has a couple of different types of users.
Yeah, you might be able to find a lot of your usability problems in four or five people. Hell, I found that bug we talked about earlier in one person. Oh, if that's happening for that person, that's probably happening for a lot of people. We should just fix that. And that was true. So you can find certain things like that very easily, very quickly.
But you often have to talk to folks until you start being able to predict the patterns that you're going to see. So one of the things I like to do is I like to do sort of continual synthesis of the data. So as we talk to somebody, we then go back and we sort of synthesize it, okay, what were the takeaways?
What were the observations? What were the behaviors of this person? What were their needs? What were their problems? What were their goals? That kind of thing. And then continually sort of building that up until I can say, oh, it's this kind of person who's coming in. I think they're probably in this group.
Like whether you use personas or orgsonas or whatever. You can say, okay, this kind of person is coming in, I predict that they're probably going to do things this way and have these problems. And if you're right, you're kind of like, okay, I'm starting to see the patterns here. And like I said, qualitative should just be an ongoing thing.
You should just always be talking to folks. So you don't need to worry too much about sample size. If you see a few people really struggle with something, they're probably not the only three people in the world who had that problem. You probably didn't get uniquely stupid users.
If you did, then that's an interesting thing, because maybe you should look at your screening process. Or, I don't know, maybe those are your users. But generally, when you bring people in to talk to them about stuff, you see the same stuff pretty quickly, over and over.
Or, if you don't, and you're seeing wildly different things, you may be dealing with a problem where you have not properly segmented your research subjects. Not everybody uses products for the same thing. Not everybody uses products the same way. And if you're talking to power users versus brand new users, you're probably going to get different issues.
If you're talking to people who only buy in a particular way, or if you're talking to people who buy wholesale versus people who buy just for their families, you're going to see different behaviors, and they're probably going to have different needs. So if you're seeing wildly divergent needs from all of these folks, maybe you need to figure out why they're different. Are they actually in different groups?
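A toy sketch of the continual synthesis Laura describes: log what you observed after each session, then tally issues within each segment to see which problems actually repeat. The sessions, segments, and issue labels below are all invented.

```python
from collections import Counter, defaultdict

# One entry per research session: the participant's segment and the
# issues we observed (invented data)
sessions = [
    {"segment": "new user",   "issues": ["missed save button", "confusing nav"]},
    {"segment": "new user",   "issues": ["confusing nav"]},
    {"segment": "power user", "issues": ["slow bulk export"]},
    {"segment": "new user",   "issues": ["confusing nav", "unclear pricing"]},
    {"segment": "power user", "issues": ["slow bulk export", "missing shortcut"]},
]

by_segment = defaultdict(Counter)
for s in sessions:
    by_segment[s["segment"]].update(s["issues"])

# Issues that recur within a segment are the signal; one-offs may be noise
for segment, counts in by_segment.items():
    print(segment)
    for issue, n in counts.most_common():
        print(f"  {issue}: seen {n}x")
```

If the same issue keeps showing up within a segment, you can start predicting it, which is the point; if every session looks wildly different, revisit how you segmented your participants.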
Hannah Clark: Yeah. And I'm so glad that you brought up segmenting. This is something that often gets overlooked when we're developing a data set that we can actually make decisions from.
So what advice might you offer if we're going into a research process, whether it's qualitative or quantitative, and we know that segmenting is going to be a major factor? How do we approach that proactively?
Laura Klein: Yeah. Segmentation is almost always a big deal. But once again, oh God, I'm gonna say it, I apologize, I'm gonna be a UX designer: it depends. But I won't stop there.
Here's what it depends on: a lot of different factors. For example, let's talk about an AI feature that I once saw implemented. It was a product that helped job recruiters do outreach to potential job seekers, right? You know, whenever you get that outreach on whatever platform you're on, it says, hey, we saw your resume and thought you might be interested in X, right?
And so it turns out, and this is not going to surprise any of you, turns out a lot of people are really bad at writing those emails. Okay. Even people who do it professionally, some of them suck at it. Those emails are hard to write. And so a large percentage of people are bad at it, especially people who don't do it all the time.
People like your hiring manager, who's maybe looking for one or two people, haven't really thought about, how do I reach out to this person and say, hey, are you interested in coming in and talking to me? So the company decided to do some AI magic, sprinkle a little AI on it, to help people write better outreach emails.
And they did. And it's, one of the uses for AI is to sort of summarize a topic and write a better email and they let you sort of, say, Oh, you want to make it more chatty or you want to make it more formal or, whatever. So they let you do that. It turned out overall really helpful.
It turned out that the emails the AI was writing helped bring in all of the important information from the job posting, made the email a little clearer, and made it feel a little less spammy. Overall, great. Improved things, got more responses. Here's the problem.
There was a small group, and I assume there still is, I don't know, of people who used this product who were truly experts at this. This is all they did. They just wrote those outreach emails. They A/B tested them themselves. They had corporate standards that they had to follow. Imagine a really professional recruiter where all they do all day is send these emails.
They had teams of data scientists on their side figuring out which of these are doing the best, right? Hey, guess what? The AI didn't help them. In fact, the AI could potentially interfere with their flow. They definitely didn't want something rewriting a template that met their corporate standards, that had all of the legal information that they had to require, that, they had A/B tested.
They did not want that replaced. They wanted to use their own. And of course, those are the people who use the product the most and who were the most likely to pay for it. So here you've got a situation where you've got a huge group of people who don't pay you very much money, and a small group of people who pay you a lot of money.
And now you need to decide how you're going to deal with this. Well, I mean, obviously, the answer is give people the option to use it, but don't require it. But if you're only looking at the data, you're saying, oh, but people, and I'm waving my hands vaguely here, people generally, all of our people did better with the AI.
No, the majority of your people did better with the AI. The small percentage of people who actually pay you the most money did worse with it. So, no, we are not forcing this on everybody. This is not a thing that everybody needs to have. So just keep in mind that you can have very different groups of people, and some groups of people, I'm sorry, are more important to your company than other groups in terms of their behavior, right?
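The trap in that story is essentially Simpson's paradox: the aggregate says the feature wins while a high-value segment loses. A toy sketch with invented numbers:

```python
# Per segment: (users per variant, responses in the AI variant,
# responses in the control variant, revenue per user). All invented.
results = {
    "occasional recruiters": (9000, 1350, 900, 10),
    "expert recruiters":     (1000,   80, 150, 500),
}

total = sum(n for n, _, _, _ in results.values())
ai = sum(a for _, a, _, _ in results.values())
control = sum(c for _, _, c, _ in results.values())
print(f"Aggregate: AI {ai/total:.1%} vs control {control/total:.1%}")

for seg, (n, a, c, rev) in results.items():
    print(f"{seg}: AI {a/n:.1%} vs control {c/n:.1%} (${rev}/user)")
```

The aggregate comparison says ship it to everyone; the per-segment view says make it optional, exactly as Laura concludes.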
Hannah Clark: Yeah, this is such a good example of how segmenting isn't just about who your segments are, but the role they play in the overall business value and strategy of the product. And I'm glad that you brought that up, because, first of all, I would love for all of us to be taking this into account when we're integrating AI features.
Laura Klein: It relates to all features, but yes, AI especially, yeah.
Hannah Clark: I have such a bone to pick with it. But anyway, I digress. We are coming to a close here with our time.
I did want to talk a little bit about how we can have fun in the research process, because I feel like research gets pegged as this dry, time-sucky kind of thing. But there's just so much insight, and I think that there's so much value in the ability to talk to people and get insights, but also a lot of fun involved in that. And I think that's something that you've spoken to as well. What is the most fun that you've had on a research project?
Laura Klein: So sometimes they turn into arts and crafts, which, if you're like me, is just an enjoyable thing. Here's the thing: not everybody is like me and adores things like Excel spreadsheets and pivot tables and data. So I get that. And I'm not going to try to convince you that those are fun, even though they're fun for me. So I like that side of the data. That part of it is fun. The qualitative side can legitimately be really creative.
So if you're a creative kind of person, finding creative, interesting ways to answer problems is very cool. I can't tell you how often I've done things like, oh, if you're working on in-person stuff, right? There's a really fun sort of service design exercise for people who design airports. Or hospitals, actually, which is a really good example: they will sometimes set up a fake hospital, right?
Because you don't necessarily want to go in and start messing with an actual hospital where there are people dying. That's frowned upon. Or getting better, also getting better, but whatever, you don't want to interfere with that. But I've seen people set up fake hospitals to study the flow of people going through. Do carts fit?
There's going to be technology integrated. Where should it go? Should it be a thing that you carry around? You're actually having people do things. It's fascinating. I've followed people around. It always sounds really creepy when I say how much I enjoy following people around at their jobs, but I do enjoy it.
It's fun. Following people around as they're selling things to people or as they're doing their jobs and just understanding. Like you're not just asking them questions. You're not testing them on anything. You're just understanding more about how they live their lives. I don't know, maybe I secretly should have been an anthropologist.
There's an enjoyable aspect to understanding more about how people do things. You can set up co-design activities that are really fun. Some of them involve Legos. Look up co-design activities. They're really enjoyable. You can have people sketching things out, or drawing maps, or working with you.
There's all this sort of mixed methods, qualitative research, and it's not just fun, right? It helps you understand people's mental models. It helps you understand, how does this person think about the thing that they're trying to do? And if you're at all like me, which, oh, I, for your sake, I hope you are not. But it's just enjoyable.
It's a little peek inside people's minds. There are less fun parts of it, like sometimes usability testing, if maybe the thing that you built and were really excited about doesn't go so well. I knew an engineer who really had a hard time watching usability tests. He called them hostage videos, because it always looked like the person was really struggling and terrified, and it was just very painful to watch. It can be very painful to watch somebody use something that you made when it doesn't go the way you expected it to.
Hannah Clark: Oh. But some necessary pain, though.
Laura Klein: And you know what the fun part of that is? Fixing it and watching the next group of people just breeze through it. That's the fun part. And remember, if you don't go through the hard bit, the people who are struggling with it are still there, even though you don't have to watch them, and they're not buying your stuff.
So anyway, the fun part is, I think, seeing the problem, figuring out the problem, really understanding it, having that aha moment and going, Oh, now I can come up with a really cool idea for how to fix it. Anyway, I think it's great. Like I said, but then again, I, like I said, I like pivot tables. So what are you going to do to each his own?
Hannah Clark: Personally, I think it's very interesting to talk to people and be a fly on the wall. It's part of why I love the podcast so much. But speaking of fun, this conversation has been a great deal of fun for me. I really enjoyed talking to you, and I'm sure everyone else has enjoyed listening to you. Where can they follow you to get more of that online?
Laura Klein: Right now, the only place to really see what I'm doing, when I occasionally do anything publicly online, is LinkedIn. I am on Bluesky, where I don't really talk much about work, but I might eventually. But I am on LinkedIn. And I will, hopefully, sometime this year, be launching a class on Maven about qualitative and quantitative data and how to combine them.
So if you want to come and learn things from me directly, it will be much longer than this and have lots more details and lots more exercises and templates and things. I wrote a couple of books, but they've been around for a while. You're welcome to read those. And I sometimes podcast on a podcast called What is Wrong with Hiring?
And it's all about hiring in the design and research and product space.
Hannah Clark: If you're anything on there like you are in this conversation, I cannot wait to check it out.
Laura Klein: Oh, I curse a lot more. I'm sorry.
Hannah Clark: Oh, sign me up. I'm subscribed.
Okay, well, thank you so much for making the time, Laura. This has been immensely fun and very informative. I really look forward to the feedback that folks will have about this episode.
Laura Klein: Thanks so much. It was great being here.
Hannah Clark: Thanks for listening in. For more great insights, how-to guides, and tool reviews, subscribe to our newsletter at theproductmanager.com/subscribe. You can hear more conversations like this by subscribing to The Product Manager, wherever you get your podcasts.