Isn’t it annoying to scrub through dozens of interview recordings for hours just to find a single quote that will help you make a sound product decision? I hate it! It wastes precious time you could have spent building amazing products.
Often, it's tedious situations like these that lead us to the conclusion that we need to keep our interview data organized. To do that, we need a process for continuously capturing these learnings and turning them into features and solutions.
If this hits home, but you're not sure how to start organizing your heaps of user research, no worries. I’m here to help!
The Power Of Qualitative Interviews
In my opinion, user interviews (and qualitative analysis in general) are the most valuable types of work that product managers do.
You can’t make amazing products without knowing who your users are and what they want. Interviews let you dive into the reality these people live in and experience the pains and inconveniences they go through when performing certain tasks in their daily lives.
This insight gives you the right knowledge base for formulating solutions that are actually useful to them and can make their lives better. So, it is of utmost importance for product managers to optimize their user interview processes and make sure that they are getting the most out of the limited time they talk to their users.
How To Organize Interview Data
The moment your team exceeds 20 user interviews, things get messy. To find a specific quote or insight, you might have to find the person who conducted the interview and ask them about it. Inevitably, they won’t remember the exact details of that interview.
In the end, you end up with a bunch of insights that are not tied to any specific assumptions. A mess like this makes it nearly impossible to get any useful takeaways from the research.
Hence, the value of meticulously organizing your user interviews.
Before we dive into the more intricate processes, let us first go over the easy ways you can keep your interviews organized.
Interview Naming Conventions
If you do interviews constantly, at some point, you'll end up with hundreds of recordings in your cloud storage account (or ideation tool). Just imagine how painful it would be to search through large amounts of these files if their names look something like this (“Kudos” to Zoom for their terrible naming logic):
There is no way you can find a specific interview here.
So, consider adopting a standard naming convention that will make your interview files searchable. Maybe something like this:
Now, you have structured interviews with searchable names, topics, and dates. You can also add the name of the PM that ran the interview. But, if you are using a specialized interview tool, both the name and the interview date will already be searchable, and you can remove them from your file names.
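If you want to take the guesswork out of the convention, you can even generate these names automatically. Here is a minimal Python sketch assuming a hypothetical `date_topic_participant_pm` pattern; the slug rules and the `.mp4` extension are my assumptions, not a standard:

```python
from datetime import date

def interview_filename(topic: str, participant: str, pm: str,
                       interview_date: date) -> str:
    """Build a searchable file name like
    '2024-03-15_monetization-discovery_jane-doe_pm-alex.mp4'."""
    def slug(s: str) -> str:
        # Lowercase and hyphenate so names stay consistent and searchable.
        return s.strip().lower().replace(" ", "-")
    return (f"{interview_date.isoformat()}_{slug(topic)}"
            f"_{slug(participant)}_pm-{slug(pm)}.mp4")

print(interview_filename("Monetization Discovery", "Jane Doe", "Alex",
                         date(2024, 3, 15)))
# → 2024-03-15_monetization-discovery_jane-doe_pm-alex.mp4
```

A tiny script like this, run over your recordings folder, keeps the convention enforced even when several PMs upload files.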
Scheduling Tools
Scheduling the date and time for an interview can be a time-consuming process, too. There are lengthy email chains with respondents trying to find the right day and time for the call, plus the extra hassle of rescheduling when you have a no-show.
With a scheduling tool, on the other hand, you can easily set available time slots for your calls and let the interviewees choose the one that fits them best.
You can use any scheduling tool out there for this purpose, including Calendly, Google Workspace, and Hubspot.
Shared Interview Calendars
A shared calendar is especially useful when multiple people on your team conduct interviews. With it, you can “pass” an interview scheduled for you to a colleague in case you are unable to attend.
Moreover, you can share this calendar with the entire company and encourage people from different departments to join and hear what the users are saying about your product. This tactic is a great way to keep everyone “user-aware” and break silos between departments.
Transcribing and Summarizing
Product teams traditionally record audio and video of their interviews. We then use these recordings to analyze the learnings and find specific quotes to highlight in our findings.
There’s also the record-keeping use case for it. Sometimes, you want to come back to your old interviews and look at the user’s answers from a different perspective.
The problem with audio/video recordings, however, is that they are slow to consume and hard to navigate.
For this reason, I recommend you use a transcription and summarization tool.
Transcripts are scrollable and searchable text. If there is a specific part of an interview you want to find, just hit Command-F and search for it. Apart from searching inside the specific interview, many tools also let you run queries across all of the interviews you have conducted.
Summaries are the next step in the evolution. You can ask the LLM (usually integrated into the transcription tool) to build a summary of the most important findings for you.
This way, you will also save a ton of time on the analysis of your transcripts. Given the right context, modern LLMs (especially GPT-4) are pretty good at analyzing the interview for you.
In terms of the tools, I am using Krisp now because I lead its Transcription and Summarization product. But I also recommend trying Dovetail, Otter.ai, Fireflies.ai, and others. Alternatively, you can consider more scientific and in-depth qualitative data analysis software, such as ATLAS.ti, NVivo, MAXQDA, and others.
Grouping Interview Transcript Content By Hypotheses
Many interview management tools let you highlight parts of the transcript and tag them. This functionality lets you organize your customer interviews (as well as your case studies, memos, and other research projects) based on:
- The general theme (e.g. monetization discovery, activation features, etc.)
- Persona characteristics (e.g. geography, profession, etc.)
- User type in terms of engagement (e.g. dead users, power users, etc.)
But probably the single most valuable application of this functionality is to highlight user answers and tag them based on the hypotheses they can validate.
What you usually do is transfer all of your active hypotheses into your user interview tool in the form of tags and start tagging the relevant parts of your interviews. When you click on one of these tags, you will see a compilation of all relevant quotes from across all of your interviews.
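Under the hood, this workflow is just an index from hypothesis tags to quotes. Here is a minimal Python sketch of that idea; the `highlights` records, tag names, and quotes are all made up for illustration, not pulled from any real tool's data model:

```python
from collections import defaultdict

# Hypothetical stand-in for a tool's highlight data: each highlight
# links one quote to one or more hypothesis tags.
highlights = [
    {"interview": "2024-03-15_jane", "quote": "I never know who can see my reply.",
     "tags": ["H1: permissions are painful"]},
    {"interview": "2024-03-18_omar", "quote": "Scrolling old threads is a chore.",
     "tags": ["H2: thread history is hard to read"]},
    {"interview": "2024-03-20_mei", "quote": "CC lists get out of hand fast.",
     "tags": ["H1: permissions are painful"]},
]

def quotes_by_tag(highlights: list[dict]) -> dict:
    """Group (interview, quote) pairs under every hypothesis tag they carry."""
    index = defaultdict(list)
    for h in highlights:
        for tag in h["tags"]:
            index[tag].append((h["interview"], h["quote"]))
    return index

index = quotes_by_tag(highlights)
for interview, quote in index["H1: permissions are painful"]:
    print(f"{interview}: {quote}")
```

Clicking a tag in a tool like Dovetail does essentially this lookup for you, compiling every relevant quote onto one screen.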
Here’s what it looks like in Dovetail:
As you can see, it's pretty handy—you can take a glance at everything related to that hypothesis on a single screen and make a quick decision with it.
Now that we know how to organize your user interview data, let’s move on to how you can analyze it.
How To Interpret User Interviews
I usually abstain from talking about theory, but this is one of the rare cases when learning the scientific-theoretical approach is valuable (and applicable to real-life scenarios). So, let’s go over the three classical qualitative data analysis methods.
Narrative Analysis
Narrative analysis is the most unstructured and qualitative method of the three. It focuses purely on the stories of your users and tries to uncover interesting insights based on how they feel about your product and how they perceive it.
Unlike the other methods, what matters here are your users' emotions and the way they see your product in their day-to-day lives.
Let's use Slack as an example.
Imagine you were one of the original product managers at Slack and wanted to revolutionize the way people communicate at work. You know that people mostly use email as their primary channel, and you want to bring them something that looks more like social media platforms (i.e. posts with comments).
When conducting a narrative analysis, you might ask people to describe their day-to-day communications over email, hear their complaints, and get an overall understanding of how people feel about this aspect of their day.
You would then ask the same research questions about their experience with social media post formats, observe their emotional response, and understand the typical user journey of creating posts and commenting on them.
Using all of these insights (and applying deductive reasoning), you can then create your user persona with their goals and challenges. Moreover, you'd also be able to visualize the different user journeys and habits that you need to consider when building your product.
Thematic Analysis
Unlike narrative analysis, this method is slightly more “quantitative” and lets you bring some structure to your insights. Thematic analysis is the process of reading your interview transcripts and identifying themes that repeat across multiple interviews.
Back to the Slack example...
When analyzing the responses of your interviewees, you might soon find that many of them complain about the inconvenience of managing “view permissions” (who can see your message and who can’t) in a typical long email communication chain.
Great, now you've identified a common theme (and a potential unique value proposition for your 'channel permissions' feature)!
You might also identify another common complaint—the inconvenience of reading previous messages in an email chain. Voilà! Yet another common theme and another opportunity to shine with your next feature.
The tagging feature I mentioned a moment ago is a great tool for conducting thematic analysis. Whenever you see something interesting in the interview transcript, you can highlight it and add a 'theme' tag to it. Then, you can open the 'theme' tag page and see all of the quotes related to it.
Content Analysis
Content analysis is the most structured and quantitative of the three methods. Here, your goal is to make your insights measurable.
If you have already identified a theme (i.e. inconvenience of managing permissions in an email chain) in your interviews, you could try to calculate the percentage of people who think that it is super inconvenient and compare it to the percentage of people who are not really bothered by it.
By quantifying your data, you will be able to understand the importance of your findings. If only 20% of interviewees think that view permissions in email chains are a pain, then the value proposition of your channel permissions feature would be weak.
This is, of course, quite handy. But, from my experience, the single most valuable application of content analysis is validating (or invalidating) your hypotheses.
With content analysis, you are using the same technique you'd be using for thematic analysis. But, in this case, you are measuring the percentage of responses that validate your hypothesis. Imagine you are building a transcription app (just like Dovetail) with the following hypothesis:
I believe that decreasing transcript processing time from 10 minutes to 10 seconds will drastically increase the transcript view rate.
Now, imagine you analyze your interview responses and discover that 80% of people say that they have back-to-back interviews and usually read the transcripts at the end of the working day.
This means that your hypothesis is invalid and you should not waste time and resources on optimizing your processing engine.
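The tally behind that 80% figure is simple to reproduce. Here is a toy Python sketch; the interview IDs, the code labels, and the coding itself are invented for the transcription-app example, not real research data:

```python
def support_rate(coded_responses: dict[str, set[str]], code: str) -> float:
    """Share of interviews whose coded answers include the given code."""
    codes_per_interview = list(coded_responses.values())
    supporting = sum(1 for codes in codes_per_interview if code in codes)
    return supporting / len(codes_per_interview)

# Hypothetical coding of five interviews: four of five respondents
# say they read transcripts at the end of the day anyway.
coded = {
    "p1": {"reads-transcripts-end-of-day"},
    "p2": {"reads-transcripts-end-of-day"},
    "p3": {"wants-instant-transcripts"},
    "p4": {"reads-transcripts-end-of-day"},
    "p5": {"reads-transcripts-end-of-day"},
}
print(support_rate(coded, "reads-transcripts-end-of-day"))  # → 0.8
```

With the numbers in front of you, the call to deprioritize the processing-speed work becomes easy to defend.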
Bonus Analysis Method: The Grounded Theory
The traditional scientific method (which is widely used in product management) starts with formulating hypotheses and then validating them with empirical evidence.
But the problem is that you can’t always formulate these hypotheses. Sometimes, especially when you are in the early ideation phase of your startup, you know nothing about your market and users. So, there is no way you can write down any meaningful hypotheses.
This is where grounded theory comes in to help.
Instead of creating hypotheses and then validating them with data, the research process here directs you to analyze data first, and then create hypotheses based on the insights you have found in your data.
For an early startup, it would mean starting to interview users without any hypotheses and understanding their needs and journeys. Only when you have a basic understanding of what they want can you start formulating your hypotheses.
How To Act Upon Your Qualitative Research
Your user interviews are meaningless (and your decisions are full of research bias) if you're not able to get something actionable out of them. In this section, let’s go over the basic steps of acting upon your learnings and see how you can apply them in real life.
Step 1: Validate or Invalidate Your Hypotheses
Unless you use grounded theory, every single user interview should have the goal of validating one or more hypotheses. Here's how:
- Use the tagging and annotation features of your interview tool to gather all the necessary insights related to that hypothesis on a single screen.
- Invite your product team (including UX Designers and stakeholders) and evaluate these findings.
- Decide whether you consider that hypothesis validated or not.
Sometimes, the data set you have may not be sufficient to give a conclusive answer to your question. In that case, simply go and conduct more interviews until you gather enough data.
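The decision at the end of these steps can be sketched as a simple rule. The thresholds and minimum sample size below are illustrative assumptions, not a research standard; your team should pick its own cutoffs in the evaluation meeting:

```python
def decide(support_rate: float, n_interviews: int,
           validate_at: float = 0.7, invalidate_at: float = 0.3,
           min_sample: int = 10) -> str:
    """Toy decision rule for a hypothesis review (thresholds are assumed)."""
    if n_interviews < min_sample:
        return "collect more data"          # data set too small to conclude
    if support_rate >= validate_at:
        return "validated"                  # strong majority supports it
    if support_rate <= invalidate_at:
        return "invalidated"                # strong majority contradicts it
    return "inconclusive"                   # revise the hypothesis or interview more users

print(decide(0.8, 15))  # → validated
print(decide(0.5, 3))   # → collect more data
```

Writing the rule down before the meeting keeps the "validated or not" call consistent across hypotheses instead of depending on who argues loudest.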
Step 2: Act Upon Invalidated Hypotheses
If your interview findings invalidate a hypothesis (which is more common than validating one), then you can act upon it in two possible ways:
- Revise your hypothesis: Maybe part of the hypothesis was valid, and you just need to create a new modified hypothesis based on the old one.
For example, let's say you've assumed that people would prefer to manage the permissions for every single post in Slack. The interview data shows that people do want permissions, but they should not be at a post level. In this case, you change your hypothesis to this:
We think that people will prefer managing permissions on a channel level
- Consider it invalidated and discard it: If you don’t see any viable revisions for your hypothesis, you can simply discard it and start formulating new ones.
Step 3: Turn Validated Hypotheses Into Product Solutions
If you do end up validating your hypothesis, then you can start working on a product solution for it.
I know it’s obvious, but most of us end up procrastinating and eventually forgetting about these opportunities. So, to mitigate this lost value, I suggest setting up regular product ideation workshops where you can turn them into iterative solutions.
The main stakeholders in this meeting are your product designers (or engineers if the solution involves a heavy coding process). You can usually start these workshops by presenting slides or a collaborative board that contains the insights you have gathered related to that hypothesis.
(Interview tools have this feature, by the way, so don't bother with PowerPoint and use these features instead.)
Then, you start coming up with different features and design ideas that will cover the case in the hypothesis. Finally, you can use any prioritization tool to find the most valuable features from the list and mark them “backlog-ready.”
Step 4: Add The Product Solutions Into Your Roadmap
Well, this step is obvious. As soon as your UX team has the designs and you are done with your requirements for the solution, add it to the backlog and prioritize it.
Qualitative Data Analysis Is Your Product Superpower!
Product managers are the voice of their users in every company. Thanks to their knowledge of users’ needs and pains, product managers’ opinions carry significant weight, and people tend to agree with them. (Or, at least, they should!)
PMs can earn this weight with rigorous quantitative data collection and analysis. But user interviews have one big advantage: in meetings with your leadership team, your opinions become very hard to refute the moment you quote your users.
So, make sure that you have all of your interview data well-organized and very well-analyzed!
Don't forget to subscribe to our newsletter for more product management resources and guides, plus the latest podcasts, interviews, and other insights from industry leaders and experts.