In today’s data-driven landscape, organizations often find themselves drowning in a sea of data, yet struggling to glean actionable insights from it. Many companies are eager to label themselves as data-centric, but the reality is that not everyone is equally adept at interpreting and utilizing data effectively. Often, insights are fragmented, and the analytics presented do not provide the full picture needed for informed decision-making.
In this episode, Hannah Clark is joined by Mo Hallaba—CEO of Datawisp—to discuss the gaps between the vast amount of data collected by organizations and the efficacy of its use, while also offering practical solutions for better data management and visualization.
Interview Highlights
- Meet Mo Hallaba [01:00]
  - Mo started his career in finance, specifically in equity research.
  - His job involved recommending stocks and performing extensive manual data work.
  - He later transitioned to corporate M&A, doing acquisitions and more data research.
  - Tired of this, he pursued his passion for video games and started a gaming startup.
  - The startup failed, but he met people working on an esports team using data and analytics for training players.
  - This concept intrigued him, and it evolved into a broader data analytics company.
  - Eventually, they expanded beyond gaming to focus on general analytics.
- Understanding Data-Driven Decision Making [02:40]
  - Many organizations expect data to make decisions for them, which is a misconception.
  - Highly technical fields may use data-driven decision-making, but that’s not the goal for everyone.
  - Datawisp focuses on “data-informed” decision-making.
  - This approach involves using data to support decisions you would already make, offering more insight.
  - An example: deciding where to place a button in a user interface based on user behavior data.
  - Data helps inform adjustments rather than relying on guesses.
- The Power of Data Visualization [04:02]
  - Not everyone is comfortable with numbers, especially non-technical people.
  - Data visualization helps those in creative fields or product management better understand data.
  - Seeing data in a visual form, like graphs, makes it easier to grasp trends and patterns.
  - For example, showing how KPIs trend over time visually is more effective than explaining it with numbers.
  - Visualizations help people quickly identify key changes, such as when data diverges, and prompt better questions.
- Safeguarding Against Data Misinterpretation [05:12]
  - AI is not meant to replace people but to reduce busy work for data scientists.
  - Organizations should hire at least one or two data scientists, even with good software.
  - The goal is to eliminate the need for technical skills, like writing SQL, to analyze data.
  - Removing technical barriers allows more people to spend time understanding data.
  - Decision-makers should collaborate with data scientists to avoid misinterpreting data.
  - Keeping a person involved in the data process helps ensure correct understanding, especially around complex relationships like causality.
Many people view AI as a replacement for human roles, but we don’t see it that way. We’re focused on using AI to eliminate busywork and bottlenecks that hinder data scientists. A great way to safeguard against these issues is to have a skilled data scientist on your team or to hire someone who truly knows what they’re doing.
Mo Hallaba
- Playbook for Slicing Data into Contextual Chunks [06:45]
  - Start by identifying business goals, not just analyzing data in isolation.
  - Data analysis should help reveal new insights about the business.
  - For example, to evaluate a tutorial’s effectiveness, compare data between users who completed it and those who didn’t.
  - Analyze churn rates and retention differences between these groups.
  - If no difference is found, either the tutorial isn’t needed, or the product is easy to use.
  - If users who complete the tutorial have better outcomes, it suggests the tutorial is valuable.
  - Data slicing depends on the specific business question being addressed.
- Actionable vs. Non-Actionable Data [08:05]
  - Actionable data allows you to take clear, specific actions based on insights.
  - Non-actionable data provides insights but doesn’t offer clear steps for action.
  - The difference is in what you can do with the information, not the data itself.
  - Example: In a marketing funnel, if people aren’t booking demos, you can adjust the button size, landing page, or graphics (actionable).
  - If users try the product but then churn, it’s harder to pinpoint why (non-actionable).
  - Higher up the funnel, there are clearer actions to take, while further down, the data becomes less actionable.
  - For non-actionable insights, customer interviews and usage observations are helpful.
- Tools for Collecting and Parsing User Data [09:54]
  - The tools product teams use depend on the type of product being built (e.g., games, websites, ecommerce platforms).
  - Teams need software that collects events (tracking user actions) and stores them for easy analysis.
  - For games, Unity plugins can track in-game events and store them in platforms like Snowflake for analysis.
  - Ecommerce and other sectors have specific solutions tailored to their needs.
  - The key is using tools that align with the product’s requirements for data collection and analysis.
- Leveraging AI for Data Analysis [10:52]
  - AI helps non-technical people parse and transform data, a task previously requiring data scientists.
  - AI handles the technical aspects, allowing users to ask questions in plain English.
  - Non-technical users can explore data with basic business knowledge and curiosity.
  - This approach encourages more people to engage with data.
  - Increased interest in data leads companies to invest more in clean, accessible data.
  - This creates a beneficial cycle where better data leads to better insights and decisions.
The more people value data, the more the company will invest in maintaining clean data and making it accessible to everyone.
Mo Hallaba
- Iterating with Data: Real-World Examples [12:12]
  - Datawisp focused on user wait times for their AI feature, Wispy.
  - They tested two versions: a fast AI with decent answers and a slow AI with potentially better answers.
  - Data revealed users preferred faster responses, leading to a focus on speed.
  - They adjusted Wispy’s performance based on user patience, optimizing for quick responses.
  - Implemented data-driven design changes, such as timing of status messages, based on user wait times.
  - Used real-time data to refine and iterate on the feature for better user experience.
  - Training users is less about changing behavior and more about setting proper expectations.
  - Example: Steve Jobs’ “holding it wrong” approach was ineffective.
  - The key is managing user expectations through education and clear communication.
  - Users expect quick answers, and setting realistic timeframes is crucial.
  - Comparing Datawisp’s response time to traditional methods (e.g., waiting a week) helps highlight benefits.
  - Effective communication involves honest and accurate representation of response times.
  - Good marketing and clear messaging are essential in setting these expectations.
  - Users are accustomed to instant responses from search engines and AI, creating unrealistic expectations.
  - ChatGPT has contributed to the belief that AI can provide instant answers for all tasks.
  - In reality, processing time is influenced by database queries, not just AI performance.
  - Wispy now informs users of its status during processing, like “I’m thinking” or “waiting for your database,” to manage expectations.
  - Users often assume a system is broken if it takes too long, leading them to refresh or resend requests.
  - Data tracking and user testing reveal that clear communication about processing times is crucial for user satisfaction.
- Triumphant Data-Driven Success Stories [19:10]
  - In esports, data analysis led to winning a tournament.
  - The software provided detailed heat maps by breaking down data by different in-game situations and conditions.
  - This allowed precise identification of opponent positions, contributing to victory despite having the lowest budget.
  - Datawisp helped a game triple its user base quickly.
  - The team analyzed user acquisition sources and in-game behaviors to focus on effective acquisition channels.
  - The success story involved a case study with the company Honeyland, featured on Datawisp’s website.
Meet Our Guest
Mo Hallaba is the CEO of Datawisp, a platform that helps game developers and software companies better leverage the data that they collect so they can improve their product, lower user acquisition costs, and increase retention. Datawisp was built from the ground up specifically for non-technical people, and designed so they can just ask the questions that are important to them and get answers immediately.
Mo’s journey began when he was working in the banking industry as an equity research analyst. His work was incredibly repetitive; he spent a lot of time on tasks that felt like a computer’s job (deep-dive data analysis for business people).
The turning point came when he left finance and started a gaming company, working with triple-A game publishers as a consultant. These companies had the same exact data access issues he had experienced in finance: business people at the mercy of data requests, and data science teams busy churning out answers for them. The realization that pure consultancy would only “band-aid” the problem, and that the real solution was a product, led to the creation of Datawisp.
Track everything your users do in your product, as it will be useful at some point.
Mo Hallaba
Resources From This Episode:
- Check out this episode’s sponsor Sprig
- Subscribe to The Product Manager newsletter
- Connect with Mo on X and LinkedIn
- Check out Datawisp
Read The Transcript:
We’re trying out transcribing our podcasts using a software program. Please forgive any typos as the bot isn’t correct 100% of the time.
Hannah Clark: Can we be honest about something? This industry is a little obsessed with being data-driven. No, that's not the honest part, you already knew that. The honest part is that not all of us are actually all that great with data. And if you are the kind of person who's pretty handy with an analytics dashboard, you might not always be working with the right kinds of insights to show the whole picture.
My guest today is Mo Hallaba, CEO of Datawisp, which if you're not familiar, is an AI-powered data visualization tool. Given this position, Mo has a unique perspective on the disconnect between the amount of data most organizations are collecting and how effectively they're actually using it. Plus, I've got to say, I rarely speak to people who are so smart they're able to take highly complex matters and make them accessible to anyone. But lucky for you and for me, that's what this talk is all about. Let's jump in.
Welcome back to The Product Manager Podcast. Mo, thank you so much for making the time to join us today.
Mo Hallaba: Thanks for having me. I'm excited to be here and talk about AI and data and all these other buzzwords as well, so.
Hannah Clark: We're looking forward to it as well.
Can you tell us a little bit about your background and tell us how did you get to where you are today at Datawisp?
Mo Hallaba: Absolutely. So it's long, but I guess bear with me. I started my career in finance, in equity research, and just telling people what stocks to buy and stuff. And that involves a lot of research and like manual data work that is an absolute pain in the butt.
I ended up moving on from that and doing corporate M&A for a while. And that was more of the same, but just from a different perspective. So instead of telling people what to buy, we were doing acquisitions. And so for these acquisitions, doing a lot of our own research as well. And again, like digging through mountains of data.
And eventually I got tired of that and I decided, hey, I want to go out and do something on my own. I want to start a startup. I've always been really passionate about video games, so I thought I would do something in that field. And so I started this gaming startup that absolutely failed miserably, but in the process I met some people that were working on something really cool, which is an esports team. They were buying players from unknown regions and stuff, but they were using software to train them using data and analytics.
I found that really interesting. So it was like Moneyball for video games. And then from there it kind of grew into its own thing, because some of the other teams wanted it, and some of the tournament organizers wanted to put it on the screen and show stats and stuff. And so that's how I went from working in finance to working at a data analytics company.
Eventually we decided to grow outside of gaming. So now we're just full-blown analytics.
Hannah Clark: Awesome. We'll dive deeply into that in a moment. Actually, it's like the perfect segue.
So today we're going to be talking about how to really use data to inform your decision-making. And to start us off, I'm curious, what do you think a lot of orgs get wrong about so-called "data-driven" decision-making? I know that term is a little bit of a hot button one.
Mo Hallaba: Yeah, I think there's a lot of folks out there who expect data to make decisions for them, like data-driven decision-making. And sure, some fields are highly technical and you have data scientists and decisions that are highly driven by data, but that's not what we're here for.
That's not what Datawisp is for. We're much more about like data-informed decision-making, and it's taking decisions that you were going to make without data and simply just giving you access to some data to look at so that when you make that decision, you're more informed. And so an example of that might be, Hey, where should we put this button in our user interface?
Should we put it up here? Should we put it down there? Should we move it at all? And so if your conversion rates show that people are coming into your website and then getting lost on your main page, you might want to move the button, right? But if you're finding that 90% of people make it onto your page and click the button, you don't need to worry about that. It's just little things like that, where instead of guessing, you are now educated by some data. So data-informed.
Hannah Clark: Yeah, I also prefer data, sorry, data-informed. My Canadian is showing.
Can you tell us a little bit about the ways that data visualization can help us to democratize the understanding and usage of data within organizations?
Mo Hallaba: Absolutely. I think not everyone is like a numbers person, especially when you go to like non-technical people.
When you look at more creative fields or product managers or whatever, some people just understand data better when they see it visualized in picture form in front of them. It's one thing to say, this pattern is changing over time. It's another thing to just show someone a graph where they can see how the KPI is trending over time.
And then, if you have a main branch and a new branch and they're moving together, and then at some point the data forks in a direction, it's so easy to just look at the graph and be like, oh, what happened in July? As opposed to looking at a table or quarterly statements or something like that. Visualizing it, especially for non-technical people, makes it so much easier to know what questions to ask next, in my opinion.
Hannah Clark: Yeah, as a self-professed non-numbers person, I very much endorse visualization of data.
Speaking of which, what are some of the ways that orgs can safeguard against incorrect interpretation of data, so that decision-makers have a clear understanding of the data points?
Mo Hallaba: So a lot of people look at AI and they're like, AI is going to replace people. And we're not about that. We're all about removing some of the busywork and stuff that bottlenecks data scientists. A great way to safeguard against that is to have a data scientist on your team, or to hire someone who actually knows what they're doing. The difference is that you can get away with one or two data scientists instead of 10 if you have good software. Basically, what you want to eliminate is the need for technical skill to be able to analyze data.
So like I shouldn't have to write SQL to be able to get a chart, but that doesn't mean that I should just make decisions off of that chart without understanding what the data means. When you remove that technical barrier, you allow people to spend more time with data, and they can become more educated about the right ways to use data.
And so they have more time to spend talking to a data scientist to understand what that data shows and doesn't show. Just as an example, there might be variables that are correlated with each other without any real causality there. But basically, keeping a person in the loop is the best way to avoid that.
So whether that's a data scientist or like your best engineer, I still think that if your choices are do nothing or do something, you should still go and do something, but.
Hannah Clark: So what's your playbook for slicing data into contextual chunks? Can you share an anecdote for what that looks like in practice?
Mo Hallaba: I can. The best way to think about it, in my opinion, is you want to look at what your business goals are. So it's not just about working with data in a vacuum. The reason that we do all this stuff with data is because we're trying to understand something about our business that we didn't know before we looked at the data, right?
So it usually depends on the business question. But if it's, say, is our tutorial useful? If we're trying to answer how good our tutorial is, one way to slice the data might be people who have completed our tutorial and people who haven't. And then looking at churn rates, dividing the whole data set by people who've done the tutorial and people who haven't, and seeing how they differ. And maybe there's no difference.
And right, and then we say, okay either the tutorial is not effective or the app is really easy to use and they don't need the tutorial, right? But if you see that people who complete the tutorial, on average, like use the product way more and are way more effective with it and are retained for longer, then you can go, Oh maybe there's something to this tutorial thing, so it depends on what you're trying to solve for in the first place.
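If you want to try this kind of slice yourself, here's a minimal sketch in Python, assuming a hypothetical per-user table with a tutorial-completion flag and a retention flag. The column names are illustrative, not Datawisp's actual schema:

```python
import pandas as pd

# Hypothetical per-user data; column names are illustrative only.
users = pd.DataFrame({
    "user_id": [1, 2, 3, 4, 5, 6],
    "completed_tutorial": [True, True, False, False, True, False],
    "retained_30d": [True, True, False, True, True, False],
    "sessions": [14, 9, 2, 5, 11, 1],
})

# Slice the whole data set by tutorial completion and compare outcomes.
cohorts = users.groupby("completed_tutorial").agg(
    n_users=("user_id", "count"),
    retention_30d=("retained_30d", "mean"),
    avg_sessions=("sessions", "mean"),
)
print(cohorts)
# If the two rows look about the same, the tutorial may not matter (or the
# app is easy enough without it); if completers retain and engage much
# better, there's probably something to the tutorial.
```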
Hannah Clark: That's a really clear example. I really appreciate that.
You've spoken before about the difference between actionable and non-actionable data. Can you just talk a little bit about what you mean by each of those things? What's actionable data? What is non-actionable data? And how do we identify and leverage both of those types effectively?
Mo Hallaba: Yeah, so it's more about how you are able to act based on the insights that you get. So it's not that the data itself is like different in any way. It's just more what are your options? So I'll try to give you an example, right?
So if you are looking at, let's say, Datawisp, right? We have a funnel. We do marketing, we try to attract people, they come in, they onboard, they use the app, they like it or they don't like it, and then they churn or they don't. The higher you go up the funnel, the more there is that you can do.

So if no one is booking a demo, you go, maybe the button's not big enough, or maybe this landing page is not compelling, or maybe we need a new graphic. But if somebody is logging in, booking a demo, talking to us, trying the product, asking a bunch of questions and then churning, what happened? Maybe they didn't like it.

Or maybe it didn't solve their need. And that's way less actionable than some of the stuff towards the top. So it's just, what can you do based on this information? Right? Certain things are more clear than others. If it's a funnel where you're just going from step one to step two, and you see a 50% drop-off at step three, for example, you can make a list of things that affect the drop-off from here to here and then go after them. Whereas when you get all the way to the end and they're just using your product and all that stuff, it becomes a lot harder to go, okay, they churned because X? Because at that point, who knows?

The way to handle that is to do more customer interviews and watch people using your product. And there's other tools for that.
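To make the funnel math concrete, here's a rough sketch of the drop-off check Mo describes. The step names and counts are invented for illustration:

```python
import pandas as pd

# Invented funnel counts; in practice these come from your event data.
funnel = pd.DataFrame({
    "step": ["visited_site", "booked_demo", "onboarded", "active_week_1", "retained_month_1"],
    "users": [10_000, 800, 500, 300, 120],
})

# Conversion from each step to the next one.
funnel["conversion_to_next"] = funnel["users"].shift(-1) / funnel["users"]
print(funnel)

# The worst step-to-step conversion is a candidate place to focus; higher-funnel
# steps tend to have the more obvious levers (button, landing page, copy).
worst = funnel["conversion_to_next"].idxmin()
print("biggest drop-off happens after:", funnel.loc[worst, "step"])
```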
Hannah Clark: What are some of the tools that product teams can use to do a better job of collecting and parsing some of their user data to inform the decisions they make?
Mo Hallaba: Yeah, so it depends. There's a lot of different software that you can use, and it really depends on what kind of product you're building. If you're building a game, it's very different than if you're building a website or an ecommerce platform or whatever. But you need something that collects events, like tracks when things happen, and stores them in a place that makes it easy for analytics.
For example, if you're building a game, there are Unity plugins that can take events that happen in game, record them, and store them in Snowflake, right? If you can get something like that, it makes it way easier for you to start actually analyzing that data. That's like the basic requirement. For ecommerce and stuff, there's a ton of vertical-specific solutions that do stuff like that. So I would say it just depends on what you're building.
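Whichever vendor you pick, the underlying shape is usually the same: an event with a name, a user, a timestamp, and some properties, written somewhere queryable. A vendor-neutral sketch of that idea; real SDKs add batching, retries, and a proper destination like a warehouse:

```python
import json
import time
import uuid

def track_event(name: str, user_id: str, properties: dict | None = None) -> dict:
    """Build and record a generic analytics event (illustrative, not a real SDK)."""
    event = {
        "event_id": str(uuid.uuid4()),
        "name": name,                 # e.g. "level_completed", "demo_booked"
        "user_id": user_id,
        "timestamp": time.time(),
        "properties": properties or {},
    }
    # A real pipeline would ship this to your warehouse or analytics service;
    # here we just append to a local JSON-lines log.
    with open("events.jsonl", "a") as f:
        f.write(json.dumps(event) + "\n")
    return event

track_event("tutorial_completed", user_id="u_123", properties={"duration_s": 42})
```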
Hannah Clark: As far as AI and the role of AI in assisting with some of just the parsing of data, you'd mentioned having a data scientist on staff is your best option or several.
Is there a place for AI in terms of helping to make some of the data make more sense to folks, especially non-technical folks?
Mo Hallaba: Absolutely. I mean, that's our whole business at Datawisp, right? What the AI can help with is the kind of technical skill that is required to be able to parse data, to be able to transform and shape it how you want to get the answers that you want.
The AI is really good at that. And so this is something that only data scientists could do before. But now like anyone like me, who's completely non-technical can use one of these AI tools and the AI just takes care of the code part of it. And you can just ask questions in plain English, right? So anybody with a bit of curiosity, and like a basic knowledge of the business or like the product can start digging into data.
And I think that's cool because it creates this cool flywheel effect where the more people care about data, the more the company is going to invest resources into having nice, clean data and making it available to people and all that stuff. And then they benefit more from having that.
Hannah Clark: So let's talk a little bit about leveraging data for iteration. So once you've applied your data and applied learnings to a specific feature, for example, how do you create a feedback loop to inform future iterations of that feature?
Mo Hallaba: Absolutely. I'll give you an example of something we did at Datawisp, actually.
So with Wispy, our AI, we care very much about the number of people who are using it, and more importantly, how long they're willing to wait for an answer. So we had the option early on of having a really fast Wispy that would return okay answers, and a much slower Wispy that would return way better answers; not always way better, but sometimes way better.
And then we quickly found out, by looking at data, how long people are willing to wait, because you see after a certain point that people just drop off. They're not willing to wait more than 30 seconds to a minute, right? And our smart version of Wispy would sometimes take two minutes, and it would return stuff that was really cool, but nobody would sit around and wait for that.
So we decided we wanted to focus on the fast one, and so we tried to make Datawisp as fast as possible. And you can see the metrics go up. At first it was the slow one, then it was a 50-50 split where we gave half the users one and half the other, and then we just went to the fast one.
And you see the numbers go up as we go through this iteration. And this is the sort of thing where Moritz, our CTO, and I were on a call with each other, and we were designing these little messages that come up as Wispy is working on the answer. So if it takes 30 seconds, you can't just spin for 30 seconds, right?
So it had these like messages, Hey, I'm still working on it or data science is hard or whatever. And we were trying to figure out like how long the timing should be between these messages. And then we were like, wait, we could just look at data. And we just looked at how long people were willing to wait and we just spaced it out according to that and we rolled the feature out.
That's like a concrete example of how you iterate on that stuff using data.
Hannah Clark: So that's really cool, and I kind of wonder, because we often talk about training AI models and what that looks like, is there an element of training users? Because when you say that there's an element of impatience when it comes to users waiting for a higher quality answer, that seems like it's a trainable behavior. How do you approach that decision?
Mo Hallaba: You're holding it wrong. That's not how you do it. You don't remember that? When, was it Steve Jobs? It was like, you're holding the phone wrong. It was the famous thing, people like the new iPhone came out and like it wasn't getting good reception because like they put the antenna somewhere where your hand was and they just came out and said, Hey, you're holding it wrong.
You should hold your phone like this. That's an example of not the right way to do it. That's the absolute worst way to do it. In general, it's about expectation setting. So it's a mix of like education and also expectation setting. They go hand in hand, but at the end of the day, like the user does have an idea of what they want.
And so it ends up being a compromise. We understand that what Datawisp is doing is for non-technical people. And the existing behavior is, I'm going to email Joe and Joe is going to get me the answers, right? So these are already people who don't have to put in an awful lot of work to get something, right?
Now, it sucks for Joe, but that's the reality for this person, right? So we can't ask them to do a whole lot of work. We can ask them to do a little bit of work, but not more than that. At the same time, it is some marketing and some expectation setting. What do you want? A good answer? Then you have to allow time for the answer.
How long does Joe take to get back to you? A week? Is it worth waiting a minute to get an answer from Datawisp? 30 seconds would be nice, but a minute is still shorter than a week. And so you drop little hints, and you need a good copywriter for this, like on your website, setting expectations right from the beginning.
Hey, get your answer in minutes, not weeks. And you don't put seconds, because if you put seconds, they expect it like this, right? So if you say get an answer in seconds, that causes an expectation mismatch. It's funny, I say all that and then our website probably still says seconds. But I think in sales calls, you just have to be really upfront with the customer and say, look, how long does this take you now? And they tell you.
And you go, okay we can do it in a fraction of that time. And so you set expectations a bit.
Hannah Clark: Yeah, it's interesting. I feel like there's an existing behavior from folks who are so used to using search engines that answers should come instantaneously when the technology is very different.
Mo Hallaba: Also ChatGPT has like spoiled people because people just think AI is magic. They don't understand how it works at all. And also, they've seen how fast it can produce a lot of text, for example. So if I say, hey, ChatGPT, write me a travel itinerary for a five day trip to Milan. It'll just go, boom, done.
So people, regular people just assume this is magic and assume that it can do anything quickly. But there's a difference. I mean, like when you're connecting to a database, you've got a billion rows of data in that database. The AI is not even the part that takes a while. Like you, you have to run that query and that query may take some time to process.
And there's absolutely nothing we can do with that about that. But because someone saw ChatGPT be really fast, they go this should take two seconds. And so you have to, one of the things that Wispy does now that it didn't used to do is when it's spinning, it actually tells you what state it's in now.
So now it goes, "I'm thinking," and then when it's done thinking and it's just waiting for your database, it goes, "Waiting for your database." So it communicates what's happening at all times to set expectations throughout the process.
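The pattern itself is simple to implement: the long-running job reports which phase it's in, so the UI never shows a bare spinner. A generic sketch, not Wispy's actual code:

```python
import time
from typing import Callable

def answer_question(question: str, on_status: Callable[[str], None]) -> str:
    """Fake long-running pipeline that reports its phase as it goes."""
    on_status("I'm thinking...")
    time.sleep(2)                          # stand-in for the AI building a query
    on_status("Waiting for your database...")
    time.sleep(3)                          # stand-in for the query actually running
    on_status("Done!")
    return f"Answer to: {question!r}"

# The UI just renders whatever status arrives, so the user always knows the state.
answer_question("Which channel retains users best?", on_status=print)
```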
Hannah Clark: I can see that being really helpful. I think users are also really trained so that when something is spinning for what they perceive to be too long, the assumption is that it's broken or you need to resend the request. Very interesting.
Mo Hallaba: So we looked at a lot of data on this. First of all, track everything. As soon as you have a live product, start tracking things. So we tracked everything. Every button that you can press in Datawisp, we track every time anyone presses it.
And so we have a really good idea of how all this stuff works. We also did user tests, and yeah, we noticed that if something was spinning for a while, people would just refresh, hit F5. So the product really has to communicate what it's doing if it's making people wait for more than five seconds.
Hannah Clark: That makes a lot of sense. And yeah, I think that track everything could be the tagline of this episode.
Mo Hallaba: Yeah, track everything that your users do in your product because it will be useful at some point.
Hannah Clark: So I know you already shared an example of data informing a decision. I think that was a really good concrete example. I'm curious what the most triumphant example of data informing a decision is, whether it's a customer or your own team.
Mo Hallaba: This isn't at Datawisp, but the most triumphant, I told you I worked in esports before, the most triumphant is we literally won a tournament based on this stuff, based on data.
So the way that this software worked is actually really cool, and I can give you a Datawisp example as well if you want. It allowed you, like you said, to filter data based on different conditions or whatever. And most tools would just give you heat maps.
So this is the whole map and this is where the other players on the other team usually hang out, right? And if you look at a heat map over an hour-long game, it just looks green everywhere. But that's not helpful. If I'm in a game and the whole map is green, that doesn't tell me where they are.
But what was really cool is when you started breaking that down by situation. So on the first round of the game, where are they? When they have this much money, where are they? When they don't have money, where are they? When they have these guns, like where do they stand versus when they have different guns?
And so you can start to create these scenarios where like all of a sudden the heat map is like five red dots and now you know exactly where your opponent is. And so you can use that stuff in competition and like they were using it and winning with the lowest salary budget of any team in the competition.
So it was incredible.
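To make the heat-map trick concrete: the key move is filtering position events down to a single situation before binning them. A sketch with fabricated coordinates and columns, since the original esports tool isn't public:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Fabricated opponent-position events, tagged with the situation they occurred in.
events = pd.DataFrame({
    "x": rng.uniform(0, 100, 5_000),
    "y": rng.uniform(0, 100, 5_000),
    "round_number": rng.integers(1, 31, 5_000),
    "team_money": rng.integers(0, 16_000, 5_000),
})

# Unfiltered, the whole map lights up green. Filter to one situation first:
situation = events[(events["round_number"] == 1) & (events["team_money"] < 4_000)]

# A 2D histogram is the heat map; with a tight filter it collapses to a few hot cells.
heatmap, _, _ = np.histogram2d(
    situation["x"], situation["y"], bins=10, range=[[0, 100], [0, 100]]
)
print(f"{len(situation)} events in this situation; hottest cell has {int(heatmap.max())}")
```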
Hannah Clark: That's awesome. I'm sure that there's some esports folks salivating over that right now.
Mo Hallaba: The best Datawisp example I'll give you is we helped a game basically triple their user base in a very short period of time. And the way they did that was they segmented the users based on what acquisition source they came through.
And then they looked for specific in-game behaviors that they valued more than others. And so they were able to find, when we acquire users through this channel, they behave in game the way that we like, and they were able to focus their efforts on the acquisition channels they preferred.
The company's called Honeyland. We did a case study with them. It's on our website. So that was really cool.
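The Honeyland-style analysis boils down to a groupby on acquisition source against the in-game behaviors you value. A sketch with made-up columns and data, not the actual case-study numbers:

```python
import pandas as pd

# Made-up per-player data: where they came from and what they did in game.
players = pd.DataFrame({
    "acquisition_source": ["twitter", "twitter", "referral", "ads", "referral", "ads"],
    "valued_behavior": [True, True, True, False, True, False],  # e.g. crafted an item
    "retained_7d": [True, True, True, False, True, False],
})

# Which channels produce players who behave the way you want?
by_channel = players.groupby("acquisition_source").agg(
    players=("valued_behavior", "count"),
    valued_rate=("valued_behavior", "mean"),
    retention_7d=("retained_7d", "mean"),
).sort_values("valued_rate", ascending=False)

print(by_channel)  # double down on the channels at the top of this table
```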
Hannah Clark: Yeah, that's really interesting. I'd like to check that out.
Mo, thank you so much for joining us today. Where can people follow you online?
Mo Hallaba: So, if you're on X, it's ElectronicMo. Otherwise, you can follow me on LinkedIn. It's just my name. Datawisp is the same, so you can follow Datawisp on X and on LinkedIn as well.
Hannah Clark: Awesome, thank you so much for joining us and yeah, for the crash course, it was very accessible for non-data folks like myself.
Mo Hallaba: I try to make it simple. Thanks so much for having me, Hannah.
Hannah Clark: Thanks for listening in. For more great insights, how-to guides and tool reviews, subscribe to our newsletter at theproductmanager.com/subscribe. You can hear more conversations like this by subscribing to The Product Manager wherever you get your podcasts.