How to Use UX Research to Delight Your Users
Looking to elevate your product and create a better user experience? It all starts with mastering UX research.
Look, we all know what bad UX research looks like. You might even have some open in another tab right now. But how do we take our research from bad to good, and then from good to incredible? How do we ensure that our research is helping us elevate an experience to a unique selling proposition (USP) that truly resonates with users? Missteps in UX research can lead to missed opportunities, disengaged customers, and ultimately, a product that fails to stand out.
Watch our exclusive live webinar featuring three industry experts who will dive deep into the nuances of UX research and design, helping you turn common pain points into opportunities for innovation.
In this session, you’ll learn:
- How to distinguish good, decent, and great user research from bad practices.
- The key differences between good, decent, and great UX design.
- Strategies to connect UX insights to your product’s unique selling proposition, making it stand out in the market.
In this session, you’ll be hearing from:
- Laura Klein: Author of Build Better Products, Laura is a seasoned UX expert known for her ability to bridge the gap between research and actionable design.
- Steve Portigal: A master of user research, Steve is renowned for his work in uncovering insights that drive meaningful design decisions.
- Thomas Stokes: As a leader in product strategy, Thomas specializes in creating compelling user experiences that are tightly aligned with business objectives.
As an added bonus
Attendees on the live call will receive a discount code for Steve Portigal’s book, Interviewing Users (2nd Edition). We’ll be announcing the code during the call, so be sure to tune in for access!
We’ll also have a live Q&A session with our speakers. Don’t miss this chance to get your questions answered by industry leaders and walk away with actionable insights to enhance your product’s user experience.
[00:00:00] Hannah Clark: Thank you so much for joining us. Those who are here, we'll have some people joining us little by little as we go on. So from wherever you are joining us today, good morning, good afternoon, or good evening. Welcome everybody to the latest in our community event series. First of all, I just want to thank everybody for your support.
We've had an amazing response to these kinds of events, and we're hoping to see them grow even more as we go along. Today we are excited to bring you a session on how to use UX research to delight your users, and we have some really incredible folks joining us. But if you don't know me, my name is Hannah Clark.
I'm the editor for The Product Manager, and I'll be your host for the day. Today we have three really fantastic guests I'm very excited to introduce to you. We've got Laura Klein, who's the author of Build Better [00:01:00] Products. Laura is a seasoned UX expert, known for her ability to bridge the gap between research and actionable design.
And I'm going to start everybody off with a little bit of a Jeopardy question. So Laura, I'll throw the first one to you. You've got some pretty helpful insights for anyone deciding whether they should be living at the top of a mountain or at the bottom. What would you say would be the best place to live?
[00:01:20] Laura Klein: I mean, this is not a tough one for me, because I live at the top of a very short mountain, and I really love it. But I think the most important thing is, whether you live at the top or the bottom of a mountain, you should always have a funicular. A funicular is the correct way to get to or from your home.
So that's just, that's the most important thing. Top or the bottom. And, uh, yeah.
[00:01:45] Hannah Clark: I live in an apartment building, and I think I should also have a funicular.
[00:01:49] Laura Klein: Zip line or fireman's pole.
[00:01:52] Hannah Clark: I think I would much prefer that to the staircase. It's a walk-up.
[00:01:54] Laura Klein: You'd be the most popular person in the apartment.
[00:01:59] Hannah Clark: I think so too. [00:02:00] We also have Steve Portigal joining us today. He's the author of Interviewing Users: How to Uncover Compelling Insights, which is a must-read; if you're not familiar with the work, you should check it out. We've got a discount code that we'll be showing you a little bit later in the session.
So Steve is a master of user research. He's renowned for his work in uncovering insights that drive meaningful design decisions, as the book title would suggest. So Steve, your Jeopardy question for today: you just took a week off to camp and Airbnb around Oregon. Which place should we check out next time we're in the area?
[00:02:30] Steve Portigal: I've got two, and they're both connected because they're just, I don't know, whatever. The Barnacle Bistro in Gold Beach, Oregon has got just the best sign. I encourage everyone to go there. Google it. It's like a very strange character playing banjo, and you can get food there, but they also have just the craziest sign.
And along those lines, in Medford, Oregon is Blackbird, which is this hardware store with a parking lot with a [00:03:00] giant, kind of sculptural black bird. It's a good roadside-attraction kind of place. So yeah, we went to Blackbird after having been there maybe 20 years ago. We went to see how the bird was doing, and the bird is doing well.
[00:03:18] Hannah Clark: Good to hear. I love a good roadside attraction. If anyone wants to start off in the chat with the weirdest roadside attraction that they've passed in the last five years, I would love to hear it, because they're always so bizarre. We also have Thomas Stokes joining us. He's the principal of UX research and digital strategy at Drill Bit Labs.
Thomas is a leader in product strategy and specializes in creating compelling user experiences that are tightly aligned with business objectives, which of course is highly relevant to what we're talking about today. Thomas, you just took a week off to hike part of the Appalachian Trail. Did you grow out the grizzled Appalachian Trail beard, like you see in the before-and-after pics of others who have hiked it?
[00:03:57] Thomas Stokes: I think I gave it my best shot, you know, but [00:04:00] I really grow a better mustache than a beard. It's pretty full here, not so full here. So I shaved really promptly on my return. But it's good to be back, and thanks for having me here, Hannah.
[00:04:10] Hannah Clark: Yeah, really important to tend to it. Just a few other words about The Product Manager membership before we get started with the discussion. If you are not yet a member of the community, first of all, welcome, and thank you for attending. This is one of our monthly sessions that we invite members and non-members to attend.
But if you'd like to learn a little bit more about membership and some of the exclusive events that we host, please learn more at theproductmanager.com/membership. We would love to have you on board. We have a lot of fun. And with that, let's get into the discussion. So this discussion is going to be split into three parts.
We'll have a discussion on UX research: what separates the good, decent, and great UX research from the bad. We'll also be looking at what separates good, decent, and great UX, or user experience, from the [00:05:00] bad. And we'll be looking at how we can connect the dots between those two significant areas to create really meaningful experiences for our users.
So to kick us off on section one: what separates the good, decent, and great UX research from the bad? Steve, what would you say to that?
[00:05:17] Steve Portigal: Right. Research is a big set of activities. So if you say sort of good research or bad research, you know, I think we often start with kind of data collection or interviewing or testing, whatever your method is.
So, you know, good versus bad or quality interviews: interviewers that don't have a lot of experience or training in asking good questions, that don't ask follow-ups, is kind of the first part. But then even stepping back from that, like, what to research is an area to think about, like what is good and bad.
So I think sometimes there's a naive application of research to the challenges that businesses face, like where all you do is test: you make decisions about [00:06:00] what to make, and then you test. Or research just means, hey, show people the prototypes that you're thinking of shipping and see if they give you a thumbs up, thumbs down.
And you hear those phrases like, oh, we don't want our customers to tell us what to do, because of something that Steve Jobs maybe said or didn't say or whatever. So those sort of naive understandings of how you would use research to make what decisions, I think, limit the quality of research. I think sometimes we do research and it gets treated like stenography.
Again, that's that customer-telling-us-what-to-do idea, the sort of requirements gathering, like you just ask people and then you kind of tabulate what the requests are, versus thinking of research as something that feeds into this very active, very creative synthesis process. You know, you make things called insights.
Those are not quotes from people directly. And I think if you don't know that's [00:07:00] possible, if you don't know that research can do that because you've never been exposed to it, then your application of research is sort of limited. Another piece, I guess, from that is that research is a way of driving change in the culture.
And so you hear these things like, oh, well, is bad research better than no research? And this is a wonderfully, you know, messy kind of hot topic. Bad research sends you down the path of making bad business decisions, so that's not good. But getting companies out there, you know, people in companies, teams, stakeholders, whoever, kind of out there seeing real people and learning from them, even if it's not good by whatever measure of good you have, like, that's [00:08:00] good for the organizational culture: hey, we don't know everything, our customers are going to tell us things that we don't know. And then there's just this piece around what makes research good or what makes research not good that I actually disagree with. Probably the other folks can add way more nuance than I can, but there's this kind of self-own that I see researchers doing, which says that my research isn't good if someone else doesn't adopt it and integrate it.
And I think, yes, that's why we're doing research. But I think when you put that in front of you, like, I'm not successful in doing research if somebody else, who I can't control ultimately, doesn't make some decision, doesn't build something, doesn't implement something. You know, research has a lot of other, softer outcomes than just another person taking an action. So I like to kind of keep it open. I think the question of how we assess if research is good is not so blunt-force as just, did the feature we recommended ship?
I think we have to look for other kinds of signals before we can even say, was this piece of research good? Anyway, rant over for now, but that's [00:09:00] my take on that.
[00:09:03] Hannah Clark: I love a rant. Does anyone have anything they want to add to that?
[00:09:07] Laura Klein: I want to address that, because I have made the comment before that decent research gets used.
And I'm sorry, Steve, but I think I actually agree with you. I know, it's horrifying. I think there is a difference between bad research and... maybe it's the whole good, bad, decent thing. You can do great research and have it not get used, and I agree, that's not your fault. We should never blame the human when a system fails. But I do think it's a systemic failure if you're paying people to do all of this great research and learn all about your users and bring all this great insight, and you're not using it. So [00:10:00] maybe my framing is sort of like, that's not bad research, that's a bad company, right?
That's bad product decision-making, to have all of this insight and just be like, whatever, my gut says we should do this other thing. So I never, ever want to blame the researcher in that case. And I have found that researchers tend to blame themselves a lot for stuff like that, and I find it so sad. The question that I always get asked anytime I talk about research is, how do I get people to actually listen to me and do the things that I recommend? And obviously there are a lot of great answers to that. There's storytelling, there's stakeholder management, there's all this stuff.
And also, sometimes you are dealing with people who aren't going to listen to you, and it's very important that you not internalize that and make it your fault. You know, you can do everything [00:11:00] right and have it not turn out well because of systemic reasons. So I just wanted to say, I do think it's important that research get used.
It may not, and again, I also agree, it may not get used in "you built this feature." It may get used in "now we all kind of generally know more about the people and the context and the flow and what they're doing and why they're doing it, and maybe it helps us make a better decision next time." So sometimes it's a little slow.
[00:11:31] Hannah Clark: Yeah, I think that's a really good thing to point out as well: even if the research doesn't get used in the moment, there's still a lot of value in conducting it, and conducting it properly, but it doesn't always go well. Thomas, I'm curious to hear your thoughts on some of the things that can go wrong throughout the research process.
[00:11:50] Thomas Stokes: Yeah, and I want to add one thing to what Laura just said real quick before I talk about that. I think there's a useful frame of mind here, and [00:12:00] I'm not often into sports analogies, but this one is a sports analogy. There's this coach, I think he coached for the 49ers or some football team, but he had this thing that he said to his players: the score takes care of itself.
And the reason why he said that was so that they stopped paying attention to the end score. So in this case, I guess the score would be, okay, they ship something because of my research. He specifically wanted people to start focusing on the things that they could control, kind of like their process, their input.
So maybe the way that they execute a play or whatever. Again, I'm kind of talking outside of my expertise here on the football aspect, but with the research aspect, what that could look like is: okay, focus on my process. What can I do, as maybe the person who's helping to actually do and then deliver research findings?
And that's all the stuff that you were talking about, right, Laura? It's, okay, what's in my control? Clearly articulating my findings, [00:13:00] doing it in a way that's convincing, putting it in a format that we know people will access and will listen to, so that it has the maximum potential of actually converting and making that effect.
So I think maybe that's the nuance of the conversation: there are things within our control. We can't always pay attention to the end result, but in theory, organizational misgivings set aside, there are things we can do to make it more likely that people will act on a research finding.
Right? So I think maybe that's the element. And Hannah, you were asking about this: I've got a way of thinking about ways research can go wrong, where we talk about what we do before, during, and after conducting a study. All of what we just discussed has to do with "after": once we have findings, how do we carry them forward?
But we can also think about what we do before a study. That's talking about planning. Are we doing research on the right things? Are we [00:14:00] selecting methods that are appropriate? And do we go in with unbiased objectives, so that we don't have a finger on the scale, so to speak, and unduly influence results so that the findings never really mean anything, because we went in with an assumption we were trying to prove?
And we can also talk about, and Steve, I think you mentioned this in your initial response, once we start to do the study, are we doing that well? Are we experienced enough, and do we have enough knowledge about the method, so that, like you said, we're using good interviewing techniques? We ask follow-up questions, we ask questions in a way that's not leading, that sort of stuff. So we can look at before, during, and after conducting research to kind of frame up where things might go wrong.
[00:14:45] Laura Klein: And can I just make one mention on the methods thing? Because y'all both brought up methods, and I think that is so unbelievably important.
I'm a huge, huge, huge fan of mixed methods, and specifically of bringing quantitative [00:15:00] data and qualitative data together. But also just knowing all of the amazing opportunities that we have to do different kinds of experimentation and testing and research and ethnography. These are all really useful for very different things.
And if you are trying to A/B test your way through something when you should be doing ethnography, or vice versa, you're probably not getting the results that you want. That's the time when you really need to bring in an expert who does know, the sort of, oh, you should actually be doing, you know, maybe this should be a diary study, not yet another prototype test. So, yeah.
[00:15:51] Steve Portigal: And I want to recommend Christian Rohrer's article. I think it's called "When to Use Which User Experience Research [00:16:00] Methods," and I think it's a Nielsen Norman Group article that's been revised steadily over the last 10 years or something. There's a great matrix in there that shows different methods and sort of what they produce.
And then the whole article, in one revision at least, sort of talks about: what question do you have? Or where are you at in your product maturity? And what types of methods are used to answer what questions? Because to your point, Laura, that's expertise. And not everyone's going to read Christian's article and be like, yep, I know how to use that.
But it at least says, this is not a hit-or-miss process here. And Christian, in my experience, has done the best job at kind of documenting that in a way where, oh, we can actually choose. And I see someone's already pasted the link. Alison, thank you.
[00:16:53] Hannah Clark: Awesome. And while we're shouting out additional resources on some of Laura's thoughts: Laura has actually joined us on The Product Manager [00:17:00] podcast on basically that exact topic. I believe the episode is called "How to Have Fun with User Research."
So if you're interested in hearing some of Laura's more elaborated thoughts on that topic, please check it out. It's a great episode. And that kind of brings us to section two, which is what separates good, decent, and great user research from the not-so-good. Laura, I actually have you set to take this one head-on, if you'd like.
[00:17:25] Laura Klein: This one's for user experience design, right?
[00:17:27] Hannah Clark: Yes.
[00:17:28] Laura Klein: Yeah. So it's so funny, because when you first set the name of the panel, the "how to delight users" thing, I'm like, you know what I find delightful? I find it delightful when things work. It's so funny, because we always talk about, oh, we need to delight users.
And people kind of think this means it needs to be fun and enjoyable. And I'm like, yeah, it needs to do the thing that I want it to do and get the hell out of my way. So [00:18:00] that kind of brings me to the point that good user experience takes into account what the user is trying to get out of the product, right?
If it's a productivity app that I have to use every single day for my job, that's one thing. And if it's a game, and I play a lot of video games, right? If it's a game, it should have a very different kind of user experience than, you know, a CRM. I get a little bit annoyed when people try to force all of the same things into the user experience.
It should take into account, again, my context, the flow of what I'm doing. It should not interrupt me. It should not force me to work a different way, if possible, unless that way is a way that you can teach me in such a way that I'm like, oh no, this is really much better. You know, good [00:19:00] design is about behavior change, right?
You are changing my behavior, but you should be changing it in such a way that I get what I want out of the product. And I should get to be or do whatever I want to do, you know, within reason. But mostly, I don't know that you need to delight me. You just need to make it work.
You need to figure out what I'm trying to do, how I'm trying to do it, and let me, or sorry, not let me, help me to do it the correct way. So I think bad user experience often tries to be too clever. Sometimes it tries to be too pretty. Sometimes it tries to be too minimal, that misuse of the word "lean." Sometimes it does a tenth of [00:20:00] what I want it to do, and then I'm like, well, that doesn't really help me.
Yeah, and the funny thing is, a lot of us are willing to put up with suboptimal user experiences for products that actually help us do things that we want to do. It doesn't mean we should have to, but it very much depends on how important the thing is that we're trying to do with the product.
Anyway, big, vague "it depends" answer. There you go. The answer is: it depends. I'm at heart a user experience designer.
[00:20:42] Hannah Clark: I think this is a very common theme with many, many things, but in this case, yeah, the context is really everything. Thomas, I know that you had some frameworks in mind that are applicable despite the fact that there's obviously a lot of context that comes into play with each individual situation.
[00:20:59] Thomas Stokes: Yeah. I [00:21:00] think one that we can plug into this conversation really well is Aaron Walter's hierarchy of user needs. If anyone hasn't come across that before: if you've come across Maslow's hierarchy of human needs, it's similar in idea, but it's recontextualized to the context of user experience.
So the idea is that there are some very foundational needs towards the bottom of the hierarchy, and higher-level needs as you advance. And there are four levels going up. First, the design is functional: it does what it's intended to do. The next step up would be reliable: not only does it do that, it does it consistently, at all times.
And then from there, not only is it functional and reliable, the third level would be that it's usable: it is user-friendly. And then at the highest level, [00:22:00] Walter argues that it's pleasurable. So in theory, if we want to take that at face value, we could say good user experiences achieve those higher-level goals, and less good ones don't achieve the higher-level ones, or maybe don't even achieve the lower-level ones.
And Laura, one of the things that you said that really stands out to me is that if someone, or a product, tries to be too clever, too beautiful, without actually addressing the foundation of usability below that, or even reliability or functionality, then what's the point of that kind of pleasurable element, if it's not actually meeting all the other needs that support it? I think we can do all of them if we're conscious of it, but it's just a matter of recognizing that there are foundations, right? We've got to build, from the feet up, an experience that achieves all four of those elements.
[00:22:51] Laura Klein: I agree that we should strive to achieve all four. I actually struggle with it being so linear, because, and this is a weird thing [00:23:00] that most people don't talk about, I think for different types of products those may be in a different order.
Like, if you are doing something that appeals to, you know, shopping or clothes or makeup or beauty or whatever, making it beautiful may actually be more important to your users than having it be super reliable. Like I said, I play lots of video games, and I have a few that are what I would like to call not un-buggy, and, you know, they're not super reliable. Yet they're still fun enough that I play them and I enjoy them. Would I prefer that they work all the time? Yeah, absolutely, 100%. But they're fun enough that I'm willing to kind of forgive that.
So that stack, I think, applies great to something like, again, a productivity app or, you know, email, or something that you have to use. But for things that are more discretionary, [00:24:00] you have to figure out where you live in that product space, and how important your product is to the person, and what they are trying to get from it. But I agree that, ideally, everything should hit all four of those.
[00:24:16] Hannah Clark: Yeah, this is a really interesting insight, the idea that the hierarchy of needs itself can be contextual.
So that's an interesting takeaway. Just to ensure that we're hitting all the marks before we start to get into Q&A, I'd like to move into section three, which is: how do we connect the dots to elevate an experience and offer a unique selling proposition for our product, combining all the elements of our research to create these great experiences?
And since we haven't heard from you in a few moments here, Steve, I'll get you to start off with this one.
[00:24:46] Steve Portigal: Yeah, and I think it builds nicely from what we've been talking about, right? The unique aspect, I think, is key here, and Laura and Thomas were kind of talking about what should it hit, but also, what does that mean for you and your [00:25:00] product?
And, you know, because we're speaking broadly to a broad audience about broad topics, we're using words like pleasurable and delight and efficient and so on. And Laura's kind of hinting at, like, it changes as you move around from category to category. When we say beautiful, we think of something visual, but there's beautiful when two pieces snick together properly and you just kind of go, ahh, like it just feels good.
I think there's sort of beauty in delight, and we have to be kind of diverse in how we think about what that looks like. And yeah, I feel like a hundred people have written "it depends" in the comments, in the very enthusiastic cheerleading way. Like, I feel like we're at a rally here, where we could just say it and everyone would fill in the phrase: it depends. And Laura, you're getting at this as well, right? [00:26:00] Who are you and what do you kind of stand for?
I was thinking, as you were talking, Thomas, about Slack. Slack has a way, and this is more about content than experience, but if you update the Slack app, they have a very specific tone in their content. There's a consistency there. They don't write like anybody else. It's kind of irreverent, but it says a little bit about who they are, who they think you are, and what they think their relationship is with you. I don't personally think Slack translates that into their user experience, necessarily.
But there is something about being a brand, having a personality, knowing who you are in a consistent way. And knowing who your users are, that's tied back to research. Like, you have to do a lot of work on yourself, just to sort of therapize the language here, and do the research to understand everybody else, and then do the design work to connect all of those [00:27:00] so that what you build is consistent. And we're not even talking here about, I mean, you talk about unique selling proposition; we're not even really getting to doing the thing that you're there to do, that people care about accomplishing, but just, what outfit do you wear while doing that, I guess, is really what I'm talking about. So knowing yourself, knowing your customers, kind of integrating those consistently.
That's the connecting-the-dots piece, I guess that's kind of part of your question. And yeah, I'm kind of running out of steam here. Somebody else jump in.
[00:27:33] Hannah Clark: Yeah. Thomas, did you want to add any insights?
[00:27:35] Thomas Stokes: Yeah, I'll throw one in. Steve, you said a really important line in there: you said, do the research. And one element of that is that research should happen across the product development lifecycle, at all different stages, right?
If we really want to have a USP, it's not enough to just usability test flows before things go live. That misses the mark. If we're really going to [00:28:00] draw a circle that says, this is what users want, or this is what people need, and a circle that says, this is what we're good at, and then find out what's in the middle, we have to be able to draw that first circle: this is what people want and need. And to do that, we need very early discovery, foundational research. Also, we understand that what people want, or if we want to keep saying it, what delights people, is a moving target that changes over time. It's not always going to be the same.
It's not static. So we have to actually have strategies to measure things after they go live. I'm a big believer that a good UX measurement plan helps you keep your finger on the pulse of how people are receiving what you're putting out there. And so having actual post-live measurement will help you understand how things are actually shifting.
So at all stages, whether you're not even sure what to build, or you're building the thing, or it's already out there, you have to be doing research on all of it to actually have that USP that [00:29:00] people are after.
[00:29:03] Hannah Clark: And I think you'd also had some comments about how that's one part of it, but then there's the element of what's feasible. There are other matters to take into account.
[00:29:14] Thomas Stokes: Yeah. And I think this comes full circle to something that Steve said earlier. Steve, you often say, research isn't stenography: it doesn't tell you what to do directly, and you don't take it at face value.
Further to that, we're talking about using research to essentially prove out user desirability and usability, right? But I'm talking to a bunch of product managers; I'm sure everyone's heard this before, but it's worth giving the caveat that there are going to be two other elements, in addition to that user desirability, that we've got to balance. There's obviously feasibility: we've got to be able to build the thing. It'd be great if user research revealed that we could build something where you click one button [00:30:00] and your house is clean and your laundry is done and you've got groceries, but maybe that's not one button click, right? So it's got to be feasible, and it's also got to be viable for the business.
You work within an organizational system that is going to select for things that advance its mission. So the business viability, the technical feasibility, and the user desirability all kind of have to come together. And that involves a bit of decision-making, right?
[00:30:32] Hannah Clark: Laura, did you have anything you wanted to add to that?
[00:30:35] Laura Klein: Yeah, I have kind of a weird side take on this, which is that I think it's much easier to deliver great user experiences that fit with your business needs if everybody's incentives are aligned, and that has to do with your business model.
So if you have what I would [00:31:00] consider to be a fundamentally more ethical business model that says, you know, we are going to deliver a great product that is so good that people are excited to give us money for it, then the better you make that product, the more people are going to want to give you money for it.
Everybody's incentives are aligned, and that's fantastic. And obviously that's simplistic, and that's not the complete definition of ethical; you can still hurt a lot of people doing that. But that's sort of the baseline: if you have a business model where your users' incentives are aligned with your business incentives, you can make a much better argument for, we want to make this better for users, because that's going to make it better for the business. When you maybe don't have that, or you have a split between the customer and the user, which happens a lot in B2B, where the person who's buying isn't necessarily the person using it, then you have to be a little bit more creative about [00:32:00] connecting those dots: if we make it better for the user, it actually makes it better for, say, the organization, which is a good reason for you to buy it. So you have to make that sort of leap. And then you also have the products where those things are just very disconnected, and there isn't really a great argument that making this a better product makes us more money, because it might not.
And I don't think those are great places to work, personally. They're definitely not a great place to be a researcher or a designer. So keep that in mind when you're looking at your next job.
[00:32:43] Hannah Clark: We did get a question that seemed to be building on what Thomas mentioned, which was, where would necessity come into play?
I'm just trying to see if the user who posed the question can maybe elaborate a little bit more, but does [00:33:00] that provoke any kind of response from any of the panelists?
[00:33:09] Laura Klein: Necessity for whom? Or for what? Like, is it, if you're forced to use the product, then... again, I mean, it would still be great if it were easy to use, but, yeah.
[00:33:23] Thomas Stokes: Yeah, I suppose there could be that user-buyer disconnect in a lot of B2B spaces, right? Where they build stuff for the person who's actually in the sales cycle, the one who decides on purchasing the experience for the team, actual end users be damned.
So I suppose that would be a negative implication of necessity, if you're required to use it in a B2B space. But even then, I think we could all argue that it's to the benefit of the folks who are building those experiences that we would, in theory, win out if we support the end user, and we can [00:34:00] show, through some meaningful case studies or whatever, that by supporting the end user, we actually achieve better outcomes, which then influences the buyer's decision.
Right? So maybe it's a roundabout type thing. Or I'm not sure.
[00:34:18] Hannah Clark: Well, I think now is a good time to start transitioning to our Q&A, because we do have some good questions and I want to make sure we've got enough time to address them. In the meantime, if anyone has a meeting that they have to jet off to, now would be a good time. But before you split off, there are just a few things to go through.
First of all, Steve has hooked up everybody who has attended the session with a discount code for his book, Interviewing Users, which, as I said before, should be required reading. It's a fantastic book. So if you'd like to take advantage of the discount code, which is just for folks attending the session today, the code is "delight" at checkout, and we'll be posting the link [00:35:00] to access the book in the chat here.
Also, if you're loving this discussion, we'd love to see you at our next event. We host these every month, and we also have a product showdown that's coming up very shortly. So if you'd like to see me embarrass myself, please join us for that. We have another link that'll be posted to sign up.
We will be doing an, oh, everyone's going to have to take a drink, an AI automation edition of the product showdown. So if you'd like to attend that, and thank you everybody for taking a drink, please take a look at the link there. I'm not sure that we have a link ready to sign up for our next panel, but we'll be posting more about that on our social media channels as we get more details.
Also, if you are a guest today and you want to continue the discussion by becoming a member of our community, please check us out at theproductmanager.com/membership. Again, a link will be posted in the chat for you to check that out. Alright, so let's move on to the Q&A. Our first question from a member is: how do I get people to answer surveys?
I send Slack DMs, emails, surveys; without a budget [00:36:00] to properly pay and incentivize customers, it's tough to get their time. People are stretched really thin these days, so it's understandable, but it makes my work very tough. So, I think Thomas had some preemptive thoughts about how we can manage this.
[00:36:16] Thomas Stokes: Yeah, I think there was a key line in there about not having budget to incentivize people to participate. So I've got two main points, one specifically about budget. If we're not actually incentivizing people for their time, we don't have the appeal of a monetary incentive, so we have got to appeal to something else.
And one thing that I've found is that you might be lucky enough to be in a situation where your customers actually really care about impacting the direction of the product. And so you might appeal to that. You might really emphasize, in whatever recruitment channels you're using, that, hey, this is really something that we are going to use to drive [00:37:00] forward product improvements.
You've got to make good on that promise, but use that to appeal to potential participants. That's one piece. The other thing is, if we're not actually incentivizing people for their time through a monetary reward, be relentlessly, I guess, scrupulous, and really look over the survey that you're building.
Prioritize what you really need. It might not be the perfect survey that you'd have if you had unlimited budget and time, but narrow things down so you at least get a good response rate. Because the people who click in thinking, yeah, I'm going to help the product direction, if they see a million questions, they're probably still not going to answer it.
So go with both of those: try to appeal to something other than just a monetary incentive, and really relentlessly prioritize your survey down to what's essential for the decisions you might make off of it.
[00:37:55] Hannah Clark: Great answer. So our next question from our members is: my [00:38:00] challenge is in getting the truth from users about their reasoning and intent for using the software the way that they do.
We hear tons of stories about how they think things, quote unquote, should work or look, but it's hard to get them to open up about what they're actually doing with the product. Steve, did you want to take this one on?
[00:38:16] Steve Portigal: Yeah, I agree that interviews should cover what people are actually doing. And, you know, the fun of this format is trying to infer some more context from the question.
Like, I really wonder, are these interviews? So I'm thinking about, how are they being set up? Who's being asked to participate? It's kind of what you were saying about the dynamic that we have with our research participants. I think the same thing that's true of surveys is true of interviews.
What's the expectation? Who's being asked to participate, when, and how? Like, are these people calling in with tech support questions and then being kind of escalated to [00:39:00] an interview? That's a different context versus, hey, we want to talk to you and learn. How is the interviewer introducing the subjects of interest?
We want to talk to you about X and Y. And how is the interviewer asking those questions? Because I usually ask questions about what are people doing, what are you doing, how does that work, before I get to anything about what you would want to see different in the future. So it's a great question for me, because it makes me wonder, well, why isn't that happening to begin with?
So again, I think it's expectation setting. I don't know what questions you're asking, what follow-ups you're asking. And yes, in every interview, no matter how well you set it up, somebody's going to come to it with a different sense of what the purpose is. And sometimes you've just got to let them share what they want to share, rather than kind of squelch them into your model of the conversation.
Let them share what they want to share, and then [00:40:00] say, this is great, we've got some other things we'd like to know. Can we talk about what your workflow looks like today? I want to go right back to the beginning. How do you configure this? I think you can keep asking questions to build to that kind of outcome.
You know, getting at how they are working is not asking "How are you working?"; it's many, many, many smaller questions to get at that information. So I think it goes back to what's good and so on: it's technique, it's expectation management, it's all kind of that stuff. That's my riffing on where to think about making improvements, just building on what that question makes me think of.
[00:40:45] Hannah Clark: Yeah, good follow-ups. So the next member question is about the administrative pains of research: ongoing customer meetings to discuss their needs and desires and hopes and dreams are a painful administrative task, full of cancellations and rescheduling.
Is there a way [00:41:00] to ensure user turnout for these meetings? Does anyone want to take a crack at that one?
[00:41:10] Laura Klein: I'm not really a research ops expert, and I don't know if Steve or Thomas is either. This feels very much like a... I mean, can you make people show up to things? No, sorry. There are all sorts of things that you can do to make it better. You can send reminders and make sure that you're offering an incentive and all that kind of stuff, but people are people, and they're going to people. So, you know, you're going to get ghosted. It's going to happen.
But I think having a good research ops organization, if you can, or a person who is responsible and who can sort of help guide that, at least takes away some of the hassle, if it's somebody's job to be able to say, oh, we got you an extra person. [00:42:00] But no, you can't make people show up to things they don't want to show up to, especially these days.
[00:42:06] Thomas Stokes: I'll give one too. There's a useful stat I think we can start off with: you can expect 10 to 20 percent no-shows and cancellations. That's pretty generally true. So you can look at your own practices: if you're trending below that 10 percent, great work.
If you're over that 20 percent, maybe there's a lot you can do to bring it down. Like Laura said, if you establish contact, if you do things ops-wise, like actually automating reminders and that sort of stuff, and make sure the incentives are around the right level to encourage people to show up, that's good.
But I saw this one trick, and I'm wondering how well it works for most organizations. I'm not sure if anyone else has seen this one, but it's exploiting consistency bias. Essentially, in a screener or [00:43:00] recruitment form, you get people to agree or disagree with a scale question that's like, "I'm the type of person who typically keeps my appointments," or "I show up on time," something like that. I've seen people claim that if you put those in, and people agree, they're more likely to show up to your actual sessions. It sounds fun and interesting. I've yet to test it out myself, but who knows. It's worth a shot.
[00:43:25] Laura Klein: That feels like one of those things that ends up getting written up in a very popular airport book, and then later on it turns out that it was, you know, exactly eight people, all of whom were college students at Duke, and it is not at all applicable to anybody else in the world, or whatever it is. But it certainly sounds fun. It seems like an easy thing to try.
[00:43:55] Hannah Clark: Yeah, I would love to run that experiment.
[00:43:57] Laura Klein: Yeah, someone run that experiment. Get back to us. I don't [00:44:00] know.
[00:44:00] Hannah Clark: Yeah, well, you know, I think it's very interesting to kind of play upon people's self-awareness and their perception of who they are as a person as a means to incentivize them to show up for an appointment.
Maybe I should do that on myself. So, okay, we'll move right along here. This is an interesting question that's a little bit more about bias management: how do you avoid the biased voice of the angry customer who's reaching out to vent and/or turn the call into a support case? I guess this is about doing research based on stuff that folks have volunteered.
Does anyone want to take this one on?
[00:44:41] Steve Portigal: Yeah, I might start here. I think it's just like the other question about what we're not hearing from people, and I had a lot of questions about context here. Right? You're in control of your sample. So, to me, this is a sampling question and a method question.
[00:45:00] You know, Thomas made a reference to screening. So how are you screening people in or out to participate in research? By asking a question about your disposition towards the brand, or the quality of the experiences you've had, you could filter people in or out to get kind of a balanced sample.
So, you know, I think that's right. We do research on individuals, but we do research on a sample. And yeah, somebody has a perspective. I don't know, is an angry customer a biased voice, or is an angry customer an angry customer who has, you know, a lived experience, in today's parlance?
And maybe the second piece of the question is, should we include angry people that don't like our product? Yeah, but maybe not exclusively, unless that suits the objectives of what we want to research. People wanting to turn the call into a support case, I think that goes back to the thing I was saying before: [00:46:00] expectation management. Who's asking for the call?
What's the purpose of the call? Let's reiterate that at the beginning of the call. And, I think this is really important, let's live up to that behavior. I see researchers telling participants, okay, first of all, there are no wrong answers, and then they go on to be very excited or not about the responses, which basically tells people, yes, there are right and wrong answers.
So, you know, how do you set and live up to that expectation in your actions, in your interactions with the participants? And if someone has something they have to tell you, well, a wise woman once said, people are people, and they're going to people. You can't stop someone, nor should you, from saying the thing they came with. They're the rage character from Inside Out; if that's where they're at, meet them there. That doesn't mean that's the tone or the [00:47:00] content of your entire interaction with them; that's just a starting point.
Lots and lots of interview participants come with an expectation, and it's a bias about the interview. And then you're like, yes and, yes and: tell me about this, and tell me about this, and tell me about that. So I think it's very manageable, but it just takes a little bit of know-how.
[00:47:22] Hannah Clark: Yeah, shout-out to the improv tactic there, the "yes, and." But very good points. And, sorry, we're going to be moving on to questions that have been submitted through the chat. I am going to jump the line just a little bit, because I do think this is a really important question that I want to touch on before we get to some of the other ones. We've talked a lot about the methodologies around how we're thinking about things and conducting research,
but I also want to make sure that we're mentioning some of the more specific tools we use in order to be effective in our roles. So what are some of the go-to research tools that you folks are using for capturing insights and gathering user [00:48:00] feedback, especially at scale?
This one can go to anybody, if you want to jump in.
[00:48:10] Laura Klein: Again, I'm sorry. Maybe Thomas has an answer to this. I feel like this is a very different answer for people who are on large teams versus small teams, and teams that have research ops versus teams that don't, and consultants versus non-consultants, you know, in-house versus not in-house.
So anyway, it's not a question that I can answer specifically. There are a ton of tools right now that are available, and I would say that, as with anything else, you need to look at what your team needs the most. Do you need to, you know, sort of democratize the research? Do you need to make things easily searchable?
Do you need to do management of participants? Like, what is your specific problem? There are [00:49:00] so many more tools out there than there were back when I was doing user research every week. But they each kind of fill a specific niche, so just make sure that's the niche you need filled.
[00:49:19] Thomas Stokes: Yeah, and I'll also say, I've got a rule for myself that I don't answer this question in a public venue with people coming from many different backgrounds and organizations, but I do answer this question in private channels. So if anyone does want actual, kind of contextualized advice around tools, I will help you out with that. You can reach out.
But, yeah, like you said, Laura, there are just way too many things that influence whether someone should choose this tool or that tool. There are even elements of the way that a tool does business: it might not work with your organization and the way your procurement works.
So there are all these [00:50:00] thorny details about actually getting in and starting to use a software tool in the B2B space that just make it a slog.
[00:50:08] Laura Klein: Yeah. And no single person has used all of the ones available, and there might've been one released yesterday that solves everybody's problems, and that's wonderful.
And I wouldn't want to miss that one by saying, I use this thing that I used five years ago.
[00:50:27] Hannah Clark: Okay. Well, I think we maybe have time for one more. So we'll just go ahead and... oh boy, there's a bunch of good ones to choose from. I'm trying to prioritize on the fly which one everyone's going to get the most value out of. Okay, well, let's see, there's one on gen AI, but I don't want everybody to get drunk before they have to start their work day.
Okay, well, let's just jump into it here. It's coming from an anonymous user: gen AI is able to answer all the user research questions, or replicate survey [00:51:00] responses, or any other data required, based on the input. Where do you think the difference... I don't know, maybe I didn't read this correctly. What do you think the future of UX research is going to be? You know what, I'm so sorry to the user who posed this question. I feel like this is a little bit too much to get into with the kind of time we've got available. I'm going to move on to the next one. How can we make UX easy for a novice user, but at the same time get out of the way of an expert user?
Oh, this is an interesting one.
[00:51:30] Laura Klein: That is also its own whole thing. That is also a giant question. But again, the answer is going to be: understand. I mean, here's the thing. It's not like advanced users want things to be harder. It's just that sometimes they are doing weird things with your product.
I think a lot of times people think of "easy" as, we are going to strip away all of your choices or [00:52:00] all of your options, or we are going to, you know, shove everything into a settings thing. And I think the most important thing is just to understand: what is it that people need to actually get started and be successful with your product in that onboarding experience? By the way, it's probably not five screenshots with little arrows saying, hey, do this, hey, we added five new features, check it out. It's probably not a product tour like that. It's probably some sort of process that helps them get to the first time using your product while training them in the steps they would need to go through to be successful, and then having there be, the next time, the option to maybe move through that a little more quickly.
[00:53:00] But it's very much understanding: what is getting in the way of the novice user? You don't drop them into a giant complicated interface and say, good luck. And then understanding what makes a power user: what is a power user trying to do that is fundamentally different from what a novice user is trying to do? And how do we make that, again, easy the first time? Even for power users, sometimes they might be doing something with your product that is their first time doing that thing.
You still need to make that easy as well. So don't think about it so much in terms of, like, noobs versus, you know, old folks or whatever. It's, how do I get people started doing the thing that they want to do, in a way that helps them learn what they should do the next time they want to do it?
But also, they probably don't need that every single time, and you can have [00:54:00] settings, and you can have things for actual power users. You know, I'm an ex-engineer, and yeah, sometimes we want hotkeys. Deal with it.
[00:54:10] Laura Klein: Don't take them away from us.
[00:54:11] Hannah Clark: Don't take away the hotkeys. So, we got a great response as far as questions asked, and I'm so deeply sorry that we're not able to get to all of them, because I am personally interested in a lot of them.
If you didn't manage to get your question answered today, all the panelists are available on LinkedIn. I'm sorry to volunteer everybody's LinkedIns here, but I'm hoping that everybody's open to folks reaching out after the session. But thank you all so much for the engaged participation, everybody who's been here with us today. This has been a really fun session. So we're going to start wrapping up here. If you wouldn't mind, we are going to post a link to a feedback survey in the chat. We would really love to hear your insights about how the session went, and anything you think we could do better next time.
We're constantly iterating on these panel events, and we think they're getting better every time, and a lot of that is due to [00:55:00] your feedback. So thank you so much, first of all, for your attendance, and second of all, in advance, for your feedback. And of course, a huge thank you to our panelists for volunteering their time today.
Panelists, this was exactly as fun as I knew it would be. Thank you so much for being here, for your insights and your great personalities. We really appreciate your expertise and your time. And for everybody else, thank you for attending. I hope everybody has a great long weekend coming up, if you're in an area that has a Labor Day weekend.
We really appreciate you being here, and we hope to see you next time.