Research suffered from shortcuts, assumptions, and poorly conducted user interviews long before AI entered the picture. While there are concerns about AI exacerbating these issues, today we’re exploring how AI can actually improve research practices by standardizing and democratizing good research at scale.
Cori Widen, User Research Lead at Photoroom, joins us to share how AI is being leveraged to transform research practices at her company. She discusses the cultural mindset that encourages innovation and provides practical insights on how teams can use AI to elevate the quality and impact of their research.
Interview Highlights
- Cori’s Background and Transition to User Research [01:31]
- Cori has 13 years of experience in the tech industry.
- She worked mainly in product marketing, which involved user research methods.
- She realized she enjoyed user research the most and shifted her focus.
- Cori became a full-time user researcher, starting at Lightricks.
- She now leads user research at Photoroom.
- Using AI for Qualitative Analysis [02:02]
- Initially, Cori was skeptical about using AI for qualitative research due to trust issues and her enjoyment of manual analysis.
- She gradually adopted AI, recognizing the need to adapt or fall behind.
- Her approach started with comparing AI results to her traditional manual process.
- Currently, she uses Dovetail to store interview transcripts and Dust to query AI.
- AI helps extract quotes related to research questions, replacing manual tagging.
- She then manually creates affinity diagrams from those quotes to derive insights.
- Finally, she uses AI to challenge her insights, checking for biases or missed perspectives.
- Her process is a blend of AI tools and manual methods, continuously evolving.
- Evolving Opinions on AI in Research [05:24]
- Cori initially felt pressured and unsure how to effectively use AI in her work.
- Early AI tools didn’t perform well in the research process, leading to skepticism.
- As AI improved, especially with tools like ChatGPT, she began using it for simple tasks like summarizing interviews.
- Joining Photoroom marked a turning point due to the company’s enthusiastic and innovative AI culture.
- The supportive and curious environment at Photoroom made AI adoption feel exciting rather than obligatory.
- AI has now become a fun and interesting part of her job.
- AI Assistance Tools at Photoroom [07:18]
- Cori built two main AI assistants to support research at Photoroom.
- “Mining User Interviews” helps analyze interview transcripts stored in Dovetail via API queries.
- Researchers use it to find relevant quotes instead of manual tagging.
- Stakeholders use it to directly access user insights without needing reports.
- Interview Guide Generator creates interview guides based on user input (goals, audience, etc.).
- Ensures guides follow best practices and avoid biased or ineffective questions.
- Helps team members of all experience levels prepare quality interviews efficiently.
- Prompt Engineering Techniques [09:45]
- Cori uses multiple variations of the same question to get diverse, relevant AI outputs.
- She always asks follow-up prompts to surface more examples, as initial responses are rarely exhaustive.
- Prompts are kept to one clear question at a time—multi-part questions reduce accuracy.
- She’s cautious about adding too much context, as it can confuse the AI or cause it to prioritize certain inputs over others.
- Overall, she experiments constantly to improve prompt clarity and effectiveness.
- Validating AI Insights & Sharing Knowledge [11:54]
- Cori uses AI to surface relevant data, but relies on her own judgment to generate insights.
- For broader questions, she ensures AI-cited insights include references to specific transcripts.
- If few sources are cited, she checks them manually to assess validity.
- She always asks AI for outliers to avoid overgeneralization and capture nuance.
- A guide for using the AI assistant was created for stakeholders, but adoption varies.
- She’s still exploring how to ensure consistent understanding and use of AI-generated data across the company.
When analyzing a project, I rarely ask the AI for insights. I ask it to find relevant information, and then I use my own judgment to draw the actual insights.
Cori Widen
- Photoroom’s Culture and Values [14:54]
- Photoroom has a strong user-centric culture, with leadership and teams regularly engaging with users and qualitative data.
- This culture existed before Cori joined and set a foundation for valuing UX research deeply.
- Initially skeptical about democratizing research, Cori became open to it at Photoroom.
- She found that widespread user interaction leads to greater appreciation and use of research insights.
- A common fear—that non-researchers might misuse research methods—was mitigated by stakeholders’ eagerness to learn and apply best practices.
- Cori now supports skill-building and guidance for teams conducting their own research.
- Future of UX Research with AI [19:24]
- AI is making UX researchers more efficient, freeing up time for deeper collaboration with stakeholders.
- Cori predicts the researcher role will evolve to focus less on execution and more on strategic collaboration.
- Researchers may spend more time brainstorming and contributing in the solution space.
- Faster access to credible insights through AI supports this shift in responsibilities.
- Practical Tips for Incorporating AI [20:32]
- Shift your mindset and accept AI as a permanent part of research workflows.
- Instead of starting small, dive into using AI for qualitative analysis—the most time-consuming task.
- Use AI to replace manual tagging and help categorize user data (e.g., from interviews or usability sessions).
- Significant time and energy savings can help make AI adoption feel impactful and worthwhile.
To make the transition to someone excited about using AI in research, you need to do something that significantly impacts your workflow. The time and energy saved in the process can be monumental.
Cori Widen
- Surprising Discoveries with AI [21:58]
- Cori’s breakthrough moment was realizing the value of combining her human expertise with AI to improve qualitative analysis.
- She learned that AI doesn’t replace her role, but enhances it by handling tasks AI excels at, freeing her to focus on what she does best.
- Feedback from a colleague revealed that her speed in delivering insights was largely due to embracing AI in the research process.
- This realization made her understand the added value AI brings to her role and the team.
Meet Our Guest
Cori Widen is the User Research Lead at Photoroom, the world’s leading AI photo-editing app, where she spearheads the development and implementation of user research frameworks to inform product design and strategy. With over a decade of experience in the tech industry, Cori has held various product marketing roles before focusing on user research. She has authored numerous articles on user experience and research methodologies, sharing her insights with the broader UX community. At Photoroom, Cori conducts qualitative and quantitative research, including user interviews and feedback analysis, to guide product development decisions. Her work ensures that Photoroom’s tools meet the evolving needs of its global user base, contributing to the app’s success in processing over 5 billion images annually and being downloaded more than 150 million times worldwide.

The breakthrough moment for me was realizing how I, as a human, could still play an impactful role in the process. Using AI didn’t mean outsourcing everything to AI.
Cori Widen
Resources from this Episode:
- Subscribe to The Product Manager newsletter
- Connect with Cori on LinkedIn
- Check out Photoroom
Read The Transcript:
We're trying out transcribing our podcasts using a software program. Please forgive any typos as the bot isn't correct 100% of the time.
Hannah Clark: Since time immemorial, people have been performing research badly. We didn't need AI to take poorly calculated shortcuts, or make broad assumptions based on scant data, or develop awkward personas shaped by assumptions and painfully unhelpful user interviews. So now that AI is here, it comes as no surprise that the research community is concerned that the technology might wreak even more havoc on research quality. But on today's show, we're exploring ways that AI can actually help to standardize and democratize good research at scale.
My guest today is Cori Widen, User Research Lead at Photoroom. If you're not familiar with Photoroom already, the company is getting a lot of attention for the innovative ways they're using AI, and Cori gave me a fascinating run-down of how all of that starts at the cultural level. We talk about how the mindset of the company fosters an excitement for exploring the possibilities, as well as some practical ways that teams can use AI to appreciate, perform, and use great research. Let's jump in.
Oh, by the way, we hold conversations like this every week. So if this sounds interesting to you, why not subscribe? Okay, now let's jump in.
Welcome back to the Product Manager podcast. I'm here today with Cori Widen. She's the User Research Lead at Photoroom.
Cori, thank you so much for making some time in your schedule to meet with me.
Cori Widen: Thank you so much. I'm so happy to be here.
Hannah Clark: Me too. And Cori has actually been working with me for a long time, and this is one of the first times that we've actually gotten to speak and not just communicate via email. So this is very exciting. I feel like I'm talking to an old friend or an old pen pal.
Cori Widen: Yes. Very exciting. I agree.
Hannah Clark: We'll start off the way that we always do. Can you tell me a little bit about your background and how we got to where you are now?
Cori Widen: Yeah, sure. So I've been in the tech industry for about 13 years.
For most of that time I was actually working in product marketing, but as with most people in product marketing, research methods were a part of my job, right? Like interviewing users and things like that. And at some point I just made the call and said, actually, that's the part I like the most, the user research.
So I transitioned to being a full-time researcher, first at Lightricks, and now at Photoroom. I'm leading user research at Photoroom.
Hannah Clark: Awesome. So today we're gonna be focusing on researcher-approved ways to use AI for research purposes, which is a hot-button issue right now. So we'll kick it off on a hot-button topic: using AI for qualitative analysis.
What's your current methodology for combining AI and manual work in a way that you feel good about, that you can put your own stamp of approval on? And how has that evolved over time?
Cori Widen: Definitely controversial. I would say just a bit about how it evolved before I go into where I landed now: I was also reluctant.
I think the whole research community was. One, because I didn't trust the AI to do a good job, and at the beginning it really didn't. That was very legitimate. And the other thing that I don't hear people talk about enough is just that I loved my job. I actually wasn't waiting for AI to come and say, let me make it better.
Let me solve this pain point. I liked things like qualitative analysis, the process of doing it myself. So I wasn't super excited about it at the beginning, but it did become clear that it was figure out how to utilize it or get left behind. So when I started using AI for qualitative analysis, my first process was to do my exact manual process that I had always been doing alongside trying to utilize AI and comparing the experiences and seeing what the AI did well, what it didn't do well, and like how it could hit this sweet spot of reliability.
It's been a lot of trial and error and I think this process will always be evolving because AI is always evolving. But, as of right now, I'll describe to you where I am at this moment. Okay, so I think it helps to have a concrete example. So let's say I am analyzing a strategic research project, and most of the data is user interview data.
So what I will do is I'll have all of the transcripts within a project. They're stored, we use Dovetail, and then we use a tool called Dust to query the AI based on all the transcripts that are in Dovetail. Okay, so basically I'm using AI to ask questions about a set of transcripts that I have for a project.
So what I do is, I have my research questions in front of me, which I made manually. I start prompting the AI and asking for quotes related to our research questions, right? So let's say, for example, I want to know about pain points in a specific flow or something like that. So I'll ask the AI to find me quotes from the user transcripts that relate to that research question. And what that is doing is basically replacing the process of manually tagging interviews. I think many researchers would use various tools to manually tag interviews based on topics according to your research questions, and now I use AI for that. And what I then do is I take the quotes on each topic and I put them into a Miro board and do affinity diagramming, which has always been my process for analyzing interviews.
And I still do that manually. And based on the affinity diagram, I come up with my insights for the project. And the other place where I use AI here is I actually run my insights through and I say, these are the things that I came up with, and I ask the AI to disagree with me, or find things in the transcripts that are contrary to the insights that I brought forward, just to see if I'm missing anything, to check for biases and things like that.
And that is my Frankenstein process of manual and AI for analysis.
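The quote-extraction step of that Frankenstein process can be sketched in a few lines. In Cori's actual stack, Dust queries transcripts stored in Dovetail via API; in this sketch, a naive keyword match stands in for the LLM call, and the transcripts, names, and function are all illustrative, not her real pipeline:

```python
# Sketch of the quote-finding step that replaces manual tagging: for each
# research question, pull every matching quote; affinity diagramming and
# insight-writing stay manual. Keyword match stands in for an LLM query.

TRANSCRIPTS = {
    "P1": ["I love the background remover.",
           "Exporting my image was slow and confusing."],
    "P2": ["The export flow took me three tries to figure out."],
    "P3": ["Batch editing saves me hours every week."],
}

def find_quotes(topic_keywords):
    """Stand-in for prompting the AI: 'find quotes about <topic>'."""
    hits = []
    for participant, lines in TRANSCRIPTS.items():
        for line in lines:
            if any(k in line.lower() for k in topic_keywords):
                hits.append((participant, line))
    return hits

# Research question: pain points in the export flow
for who, quote in find_quotes(["export"]):
    print(f"{who}: {quote}")
```

The human step happens after this: the returned quotes go onto a board for affinity diagramming, and the insights come from the researcher, not the model.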
Hannah Clark: So, you kind of mentioned that your overall opinion of AI has shifted. I think a lot of us have had that sort of journey of a little apprehension and then that's evolved over time. So how's that evolved for you?
Cori Widen: Yeah, for sure. So at the beginning I felt like there was a lot of pressure within companies to figure out how to utilize AI.
And it was a bit directionless, right? This is gonna make you more efficient, it's gonna help you do a better job, et cetera, et cetera. And I wasn't sure exactly how to apply it, and most of my attempts initially weren't that great. It wasn't that great at replacing me in any part of the research process.
However, that pressure didn't go away. So gradually, as ChatGPT got better at handling larger chunks of data, I relented and I started making like custom scripts to do very basic things like, maybe summarizing an interview or something like that. That opened my mind a little bit, but I think actually the big turning point for me was relatively recent, like when I came to Photoroom, because Photoroom is super unique in its approach to AI, both user facing AI and also internally as a company.
Is there pressure to incorporate it? Sure. But I would describe that pressure as like excited pressure. Everyone is very excited about all the different kind of use cases that they are finding with their specific field and profession using AI.
So there's an atmosphere of people always sharing what they've accomplished, people being really interested in how you're utilizing AI, et cetera. And that environment turned it from this, ugh, oh no, I'm a researcher and I have to figure out how to use AI, into something that's actually, I would say, a fun and interesting part of my job.
Hannah Clark: Cool. Okay. I wanna dig a little bit more into the culture at Photoroom in a little while, 'cause this is really interesting, and I think it's a big factor that can influence the adoption of AI within companies overall, which is a huge issue on its own.
But let's talk a little bit about AI assistants, because you mentioned that the ways you've used AI have changed, and being able to use it more effectively has been a real game changer. And I think everyone knows now AI assistants are trending, and I know that you've built a few that you found to be really helpful in your process. So what specific types of assistants have you built, and what problems have they been solving for you lately?
Cori Widen: Okay, so two main ones come to mind. So one is called Mining User Interviews, not a very creative title, however, that is exactly what it does. And essentially every user interview that is done at Photoroom is stored in Dovetail, and then we have the transcripts via the API and we can ask questions. So that is utilized.
It's solved different problems for myself and Becky, who's my co-researcher, and other stakeholders. For us as researchers, as I mentioned, it's a big part of our analysis process. It's how we find quotes instead of tagging things, et cetera, when we're analyzing a project. And for stakeholders, it's a great way, without going through us and us digging up reports and all of that stuff, to just mine all the interviews done to date, to ask basically anything about users or users of competitors, right? Please give me a list of all the challenges people have had with, I don't know, AI backgrounds or some feature in Photoroom. So it saves us time, and it also brings the stakeholders closer to the user, by having them look for that information and interact with it themselves.
So that's one. And another one is more recent, and it's an interview guide generator. So I know we're gonna talk about it later, but a lot of people at Photoroom do user interviews, and there's just a wide range in available time to prepare a great interview guide before interviews, and also in knowledge about best practices, like how do you ask users things to generate the insights you want.
So the interview guide generator: essentially, the input is what do you wanna learn, who are the users you are interviewing, and what do you want to learn from them? And then it generates an interview guide, mostly according to best practices. The effect of that has been that, regardless of how much knowledge you have, you can generate an interview guide that doesn't ask leading questions, doesn't ask users to predict their future behavior, and things like that.
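The guide generator itself is an LLM assistant, but the kind of best-practice checks it encodes can be illustrated with a tiny linter that flags leading questions and questions asking users to predict future behavior. The patterns and names below are illustrative assumptions, not Photoroom's actual rules:

```python
# Toy interview-guide linter: flags two anti-patterns Cori mentions,
# leading questions and future-prediction questions. The phrase lists
# are hypothetical examples, not an exhaustive or official rule set.

LEADING_OPENERS = ("don't you", "wouldn't you", "isn't it")
PREDICTION_PHRASES = ("would you use", "would you pay", "will you")

def lint_question(q):
    issues = []
    lowered = q.lower().strip()
    if lowered.startswith(LEADING_OPENERS):
        issues.append("leading question")
    if any(p in lowered for p in PREDICTION_PHRASES):
        issues.append("asks user to predict future behavior")
    return issues

guide = [
    "Walk me through the last time you exported an image.",
    "Don't you think the export button is hard to find?",
    "Would you use a batch-export feature?",
]
for q in guide:
    print(q, "->", lint_question(q) or "ok")
```

A generator built on prompts can bake checks like these into its instructions, so the output avoids the anti-patterns regardless of the requester's research experience.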
Hannah Clark: That's really powerful, 'cause I'm seeing kind of a trend: not just AI expanding the productivity of individuals, but also expanding their skillset, by being able to transfer your own knowledge and that kind of thing, which is such a cool way of using the technology. So it's really interesting to hear a more nuanced approach directly.
Okay, let's talk about prompt engineering. This is another huge thing. Everyone's trying to do better at it, and I know that, everyone talks about, just give it lots of context, but it's a little bit more nuanced than that.
So what are some specific techniques that you've discovered that have really improved the quality of the outputs that you're getting when you're doing analysis on user research?
Cori Widen: First of all, just like everyone else, I'm always trying to figure this out, and sometimes I'm pulling out my hair, like, why doesn't it understand me?
But there are a few things that have come up that I find really helpful specifically when doing analysis. So first of all, any question that I ask the AI, I ask it in at least two or three different ways. Because it's not a human and it's impossible to know how it's interpreting my question, and I often find that I get different user quotes or different transcripts coming up when I ask the question slightly differently.
So particularly when I'm analyzing a project, and I wanna be careful that I'm finding every relevant point on a particular topic in order to do the analysis, I use a few different prompts, asked differently each time. Also, I call it nagging the AI, but I always nag it and ask if it can find more examples, because even though I always ask for an exhaustive list of quotes or an exhaustive list of examples, it's never actually exhaustive.
There's always more. So I always ask for more. The other thing that I have learned is that when I'm speaking to a human, I can ask four or five questions at once and get really excited about a topic, and that does not work well with AI. So I am trying really hard to ask prompts that only contain one question.
And it's interesting, because you mentioned that a lot of people talk about giving it as much context as possible, but just anecdotally, based on my own prompting experience, I think there's such a thing as too much context. It starts to develop a hierarchy and prioritize some of the things that you're saying and ignore others.
So I'm trying to find that sweet spot, which to me is tending toward even less context.
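Those prompting habits reduce to a repeatable pattern: phrase each research question several ways, keep each prompt to a single question, then "nag" for more examples and union the results. A rough sketch, where `ask_model` is a hypothetical stand-in for a real LLM call, not an actual API:

```python
# Sketch of the variant-and-nag prompting loop. prompt_variants phrases
# one research question three different ways (one question per prompt);
# collect_quotes unions results across variants and follow-up "nags".

def prompt_variants(topic):
    # Each variant is a single question, phrased differently.
    return [
        f"Which quotes describe problems with {topic}?",
        f"What frustrations do users mention about {topic}?",
        f"Can you list every quote that touches on {topic}, even briefly?",
    ]

NAG = "Are there more examples? The previous list was not exhaustive."

def collect_quotes(ask_model, topic, nag_rounds=2):
    seen = set()
    for prompt in prompt_variants(topic):
        seen.update(ask_model(prompt))
        for _ in range(nag_rounds):  # first answers are rarely exhaustive
            seen.update(ask_model(NAG))
    return seen
```

The union matters because, as Cori notes, slightly different phrasings surface different quotes and transcripts, and the first answer is rarely complete.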
Hannah Clark: Oh, interesting. Yeah, it depends where you're using context. Okay, this is good to know, because this is the first time I've heard someone say, oh, I don't use all the context.
Speaking of missing nuances, one of the things that you mentioned before is this pitfall of AI sometimes missing nuances in data, or presenting limited data as a trend, things that I think most researchers might catch on to. Like you said before, as you're transferring skillset and knowledge to other people, it doesn't always come through.
So how do you validate AI's analysis and ensure that you're not missing important outliers? And more importantly, if you're sharing some of these skills with other stakeholders, how do you ensure that transfer of knowledge also gets carried forward?
Cori Widen: Yeah, for sure. These are really important questions, I think.
So when I'm analyzing a project, I'm rarely asking the AI for insights. I'm asking it to find me relevant things, and then I'm using my human self to make the actual insights. But the mining user interviews assistant that we talked about before is often used for things that are outside of strategic projects.
So we do ask it questions about users, right? What are the things users struggle with most about X or whatever it is. And I think that first of all, this is really challenging, and I have kind of two rules that I always follow when I'm doing this. First of all is that when I ask the AI for an insight, I ask it to cite every single transcript that it is using in order to create that insight.
So sometimes it will cite two transcripts out of hundreds, and I can go into the transcripts and look and see that, okay, this is actually not a thing. I can use my human sensibilities to decide, because it's told me where it is drawing the insight from. And the second thing is that I always ask for outliers.
So if it gives me an insight, I say, okay, and now please give me as many examples as possible of users who actually are contrary to this particular insight. And that helps me look at all the data and then draw my own conclusion in a more nuanced way.
I will say that in terms of stakeholders using the AI agent, we did create a guide for using it, just in Notion, that everyone can access. Those are resources that I think some people use and some people don't. And so this is something I'm still figuring out. When we have something that is supposed to unlock the availability of user data to everyone at the company, how do we make sure that everyone is aligned?
And I don't have an exact answer yet.
Hannah Clark: Yeah. I think this is an ongoing thing, because this points to the value of still having a human overseer, no matter what we're using AI for. You still need to be an expert in that domain in order to check the work, and to treat it more as an employee that you're leading rather than someone with an equal skillset to you.
Yeah, I think it is an issue that a lot of folks are bumping up against. Like, how do we ensure that our internally owned knowledge is aligned with whoever is trying to use the AI for this purpose?
Now that we've talked so much about the values and the way that the company functions, I think we can dive right into some of the values of Photoroom that have enabled and empowered you to act like this with AI. So let's chat about Photoroom's values as they pertain to the use of AI.
Because you've pointed a lot around this idea of the culture at Photoroom being a big driver of your adoption and engagement using the technology. So what does it mean to democratize research? And how has it changed the way that stakeholders interact with and value your UX research findings?
Cori Widen: Yeah, for sure. The thing that appealed to me most about Photoroom at the beginning was how user-centric they really were. The leadership was spending time interviewing users. And I realized in the interview process that it was an expectation that people at Photoroom are talking to users and looking at qualitative data in addition to quantitative data.
Which is not every company, let's put it that way. And utilizing their interactions with users, or their exposure to qualitative data, in their decision making. That was something that was established way before I got to Photoroom. And I was a little bit nervous about it, 'cause I have to admit that my approach to democratization had always been on the other end of the spectrum, which was that research should be done by researchers.
And it's great if PMs and designers, et cetera, speak with users. But I didn't see full-on democratization as the best path forward. And I think I really got an education at Photoroom. I decided I was open to trying something new. And what I have seen and what I've learned is that the bottom line is that in a company where people interact with users and are expected to take qualitative data into account, they just value user research more, and they're more likely to utilize the insights. And this is one of the pain points that I'm sure you hear all the time from user researchers: they're constantly trying to advocate for the user and for their findings based on users.
Whereas at Photoroom, what this culture facilitates is that everyone's very hungry for those insights. That's one thing. And I think that the big fear of researchers in an environment where democratization is so huge is that it won't be done right. And I think that was one of my fears as well, because there are actual research skills. But what I have also found is that in an environment where people are consistently interacting with users, they're actually hungry for best practices, right?
Like everyone wants to do a good interview when they interview users on a regular basis. And everyone wants their usability testing to be accurate. So one of the cool things about our job in user research is that we get to help people facilitate those skills when there are gaps.
Hannah Clark: And I think that it's one of those kinds of things, especially when it comes to attitude around best practices for research. Maybe it's just me, but I feel like once you get a sense of the amount of skill and nuance that's involved in performing effective user research and effective user interviews, it's like, oh, you didn't know how much you didn't know.
Cori Widen: Yes, I totally agree.
Hannah Clark: Yeah, we did an awesome episode with Steve Portigal a couple of years ago on conducting user interviews, and I'd never done them prior to interviewing him, other than just interviewing on the podcast. And in just 20 minutes, it was so illuminating how even just your own behavior can impact the outcome of an interview.
So I think it's really cool that there's this hunger to transfer knowledge and get people better at talking to users. It's very interesting to me.
Cori Widen: It's funny you say that because Steve wrote a book called Interviewing Users.
He literally wrote a book about it, and it's a book that I have given the stakeholders and it's happened more than once. And people are like, there's a whole book about it, and it's yes, absolutely. There's that much to know about it.
Hannah Clark: Yeah. And it's such a cool thing, because like you said, I think every company is always looking to incorporate their users' real feedback and what they really want into products. You need to have enough feedback, and you also have to have high-quality feedback, in order to use it effectively. So yeah, I'm really excited to see some of the ways that AI is helping researchers gain buy-in and helping people wave that flag. That's really cool. I'm very pro-researcher. You can't tell.
So looking ahead, though, how else do you think UX research is gonna continue to benefit from AI? Do you see any other kinds of capabilities just over the horizon that we're not quite at yet?
Cori Widen: Yeah, so I don't have enough technological knowledge to tell you how good the technology will be and when, but when I think about how a researcher's role will evolve as AI evolves and as we adopt it more, one of the things that I've realized even today is that I have a lot more time to collaborate with stakeholders, because of how much more efficient I am when I use AI in the process.
I guess if I had to make one loose prediction about this, it would be that the research role morphs from being just about executing research and sharing insights to, yes, executing research where needed, but spending a lot more time with stakeholders doing things like brainstorming and spending time in the solution space.
Because we'll have the time to do it and because, we will have the credibility and information to bring to the table faster.
Hannah Clark: And as far as right now, for researchers that are just incorporating AI into their workflow, obviously you've shared a lot of really practical tips, and I love how accessible a lot of the recommendations you've made are; people can pick them up and use them right now. But do you have any other recommendations as far as a good starting point for those of us who are working without a best practices manual?
Cori Widen: I honestly think that the best thing to do is to, first of all, change your mindset. Just accept that this is happening.
And I think that's really hard to do. But once you do that, a lot of people say that it's effective to start using AI for small, mini tasks, and I think the opposite. I think a good place to start is to jump into using AI for your qualitative analysis.
That's the meatiest part of the job and, for the most part, the most time-consuming part, so figure out where it can help you. My recommendation is definitely based on my own process, which is to try and use AI to replace manual tagging: to pull the relevant places in your user data, whether it's usability sessions, interviews, whatever, to categorize information, and to do your analysis from there.
The reason is because I think that in order for you to make the crossover to someone who is excited about using AI in research, you have to do something that's really going to impact your workflow and the amount of kind of like time and energy you save doing that is pretty monumental.
Hannah Clark: To end us off, what has been your most surprising discovery as you've used AI for your own purposes? Like, what has been the breakthrough moment for you?
Cori Widen: Two things here. So the breakthrough moment for me, which might have been more obvious to other people, was just how I, as a human, could still have an impactful part in this process. Using AI didn't mean outsourcing everything to AI.
It means understanding what I am best at and what AI is best at, and putting it together for the best type of qualitative analysis. That was one thing. The other thing that was a real moment for me was that I got feedback from someone at Photoroom who told me that when they were hiring their first researcher, which is me, the biggest fear they had was that research is known to be slow.
So they were like, oh, this person's gonna come, and they're gonna give us the insights too slowly, and we won't be able to utilize them. And this person told me, it turns out you're pretty fast. I realized that the reason I'm fast is because I've embraced AI at these crucial parts of the research process.
I wouldn't be fast without it. And understanding that makes me more valuable on the team was definitely an aha moment for me.
Hannah Clark: That's really cool. And it must feel good to feel like, yeah, you've got a bit of an ace in the hole now.
Thank you so much for joining us, Cori. I always love conversations about research; I always learn so much, and today has been so much knowledge digested into just 25 minutes. So thank you so much for all of this. Where can listeners follow your work online?
Cori Widen: Yeah, so I'm not super online. However, I am always happy to connect with other researchers on LinkedIn, so feel free.
Hannah Clark: Cool. Thank you so much for being here.
Cori Widen: Thank you.
Hannah Clark: Thanks for listening in. For more great insights, how-to guides and tool reviews, subscribe to our newsletter at theproductmanager.com/subscribe. You can hear more conversations like this by subscribing to the Product Manager wherever you get your podcasts.