Michael Luchen is joined by Dan Erickson, CEO and Founder of Viable. Listen as they dive deep into the topic of Natural Language Processing, a powerful tool that will aid the future of product management.
Dan has been in startups on the engineering side of the house for about 15 years now. In the beginning of his career, Dan was very focused on helping early-stage companies build their very first product. [1:42]
Dan moved into some more growth companies like Yammer and Eaze. He was a VP of Engineering at Eaze and was the CTO at a company called Getable. And then just a couple of years ago, he founded Viable to help companies understand and connect with their customers. [2:06]
When Dan first started the company, they called themselves Viable Fit, and they were actually focused on helping early-stage companies find product-market fit using the Superhuman product-market fit framework. [2:34]
Natural Language Processing (NLP) is basically a way of helping a computer understand human language. [4:02]
“At a high level, NLP is all about understanding human language in a way that computers can use to extract the right stuff.” — Dan Erickson
At the very core of Viable sits their qualitative analysis engine, which consists of a combination of in-house and third-party models, including GPT-3. [5:38]
One of Viable’s features is theme analysis. This is basically taking in a large set of data from a bunch of different sources and then clustering those things together into themes. [6:19]
One of the original problems that Dan tackled at Viable was helping companies find product-market fit using the Superhuman product-market fit process, which is basically: send out a survey, collect the responses, read through every response, group the responses into features or pain points, and then use that to help guide the roadmap. [9:02]
“Every language model has some bias baked in, and that is because it is all trained on text that was once written by humans.” — Dan Erickson
Humans are inherently biased. We’re not fully rational beings. And so because we’ve got this corpus of data that these things are trained on that is very human, we’re going to end up introducing those human biases into these NLP models because of that. [11:14]
At Viable, one of the biggest things they do is a lot of training of models on their own, with datasets that they’ve curated themselves. [12:38]
At Viable, they don’t train on their customers’ data. They usually have their own datasets that they’ve built up. [13:47]
Refinement exists within Viable’s roadmap. Model refinement is definitely an ongoing thing that they are doing. [15:02]
“When talking about NLP bias, I think it’s important to also think about bias that’s present in any current process that the NLP is going to be replacing.” — Dan Erickson
It’s easy for humans to get sidetracked into a specific thing that becomes their pet project. NLP doesn’t suffer from that particular form of bias. Basically, what this allows you to do is, instead of having that limited scope of the bias that’s present in those anecdotes, you can then use NLP to broaden your perspective and see more of the data. [17:09]
In the future, Viable will end up with much larger context windows. They will start to be able to do tens of thousands of words instead of just a few thousand, which will greatly increase the models’ ability to tie increasingly distant concepts together. [22:12]
For product managers who see an opportunity on their product to integrate an NLP model, the first place to start is natural language interfaces. [23:01]
We’re trying out transcribing our podcasts using a software program. Please forgive any typos as the bot isn’t correct 100% of the time.
Natural language processing can be an incredibly powerful tool. It can be used to analyze thousands of data points and provide fast, reliable insights. Yet is there a risk of bias cropping up in these results? How do we know for sure that the results are as objective as possible? Keep listening as today we dive deep into the topic of Natural Language Processing, a powerful tool that will aid the future of product management.
This is the Product Manager podcast, the voices of the community that’s writing the playbook for product management, development, and strategy. We’re sponsored by Crema, a digital product agency that helps individuals and companies thrive through creativity, technology, and culture. Learn more at crema.us.
Keep listening for practical, authentic insights to help you succeed in the world of product management.
All right. So joining us today to talk about our topic of bias in Natural Language Processing or NLP, we have Dan Erickson — the CEO and Founder of Viable. Viable helps companies learn from their customers by aggregating customer feedback across channels, identifying themes, and analyzing the feedback to produce actual insights and recommendations.
Viable supports many channels: work tickets, live chat, social media, surveys, product and app reviews, and call transcripts. Dan and his team have also raised a total of $9 million from Craft Ventures, Javelin Venture Partners, Streamlined Ventures, Meyers Capital, many amazing angel investors, and solo GPs.
Hey, Dan! Welcome to the show.
Thanks for having me, Michael. It’s great to be here.
It’s great to have you here.
And for our audience, I’m curious, just to kind of start and open things up, can you please share a little bit about your background and how you got to where you’re at today?
Yeah, for sure. So my background is actually in engineering.
I’ve been in startups on the engineering side of the house for about 15 years now, and really have been focused on the early stage and then into growth as well. In the beginning of my career, I was very focused on helping early-stage companies build their very first product and really understand that first product-market fit stage.

And then I moved into some more growth companies like Yammer and Eaze. I was a VP of Engineering at Eaze, and I was CTO at a company called Getable, which was in the construction rental space. And then just a couple of years ago, I founded Viable to help companies really understand and connect with their customers.
And how did you come across the idea and the need for Viable?
Yeah. So we actually arrived at the current state of Viable through a bit of a winding path. We actually followed the market here. When we first started the company, we called ourselves Viable Fit, and we were actually focused on helping early-stage companies find product-market fit using the Superhuman product-market fit framework.

We had a prototype version of that, which used NLP to help them understand the responses they were getting back in the survey that Superhuman put together. We quickly realized, however, that the big value we were providing was in that analysis layer, not necessarily in the collection of feedback or in the measurement of product-market fit.

So we ended up moving upmarket, away from the early-stage startups and toward the high-growth and enterprise companies that have a huge amount of data coming in that they just don’t have any way to analyze right now.
Yeah. Fascinating. Especially like where you started and then where you pivoted towards in terms of your market segments.
Yeah, that was all about listening to the market for that one.
So, before we go deeper into Viable, I’m curious, just for our audience, everyone listening: can you talk a little bit about what Natural Language Processing, or NLP for short, is at a high level?
Yeah, definitely. So Natural Language Processing is basically a way of helping a computer understand human language.
In its infancy, it was used for pretty simple things, like extracting key phrases from text or doing sentiment analysis on specific topics within a text, and so on. NLP has been around for decades, but over the last few years we’ve seen a huge renaissance, specifically around transformer models and large language models. And these have enabled a much deeper understanding of human language in context.

So instead of just being able to pull out key phrases or tell you that something is positive or negative sentiment, you can start to actually understand what people mean, and really start to understand the similarities in different topics, for instance.

So instead of pulling out the key phrases keyboard shortcuts, keyboard commands, and hotkeys as three separate ideas, it’s starting to become possible to actually understand that those three things are the same thing. And that’s just due to the depth of understanding that these new NLP models have.
So at a high level, NLP is all about understanding human language in a way that computers can sort of use to extract the right stuff.
And how does Viable use NLP?
Yeah, so at the very core of our product sits our qualitative analysis engine, which consists of a combination of in-house and third-party models, including GPT-3.

So GPT-3 kind of provides the foundation for what we’re building here. But we’ve actually built up a ton of custom models on top of that, as well as some fine-tuned GPT-3 models themselves, working really closely with the folks over at OpenAI to do that. So we use NLP, and NLG, which is Natural Language Generation, for a wide range of our features.
One is theme analysis. So this is basically taking in a large set of data from a bunch of different sources and then clustering those things together into themes, so we can identify things like: what are your top complaints? What are your top compliments? What are the most frequently asked questions?

And each of those things ends up being a cluster, which we call a theme. And in theme analysis, not only are we clustering those things together, but we’re actually then digging in and summarizing that cluster for you, so you understand exactly what people are talking about within that cluster of text.

A great example here would be, say you’re building a video game controller and people are posting on Reddit that the left joystick doesn’t work super well. And you’ve got thousands of comments on this subreddit, but only a hundred or so of them have talked about the left joystick.
Now it would take you hours to go through and sort of manually curate all of that stuff into different groups. We can do that in minutes and then actually summarize all 100 of those comments about the left joystick too, so that you can then forward that over to your hardware team and they’ll know exactly where to start.
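The clustering-and-summarizing workflow Dan describes can be sketched in a few lines. This is only a toy stand-in: Viable’s real engine uses large language models, while here TF-IDF vectors and k-means do the grouping, and the feedback comments are invented for the example.

```python
# Toy version of theme analysis: vectorize each piece of feedback,
# cluster the vectors, and treat each cluster as a "theme".
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

feedback = [
    "The left joystick drifts after a week of use",
    "Left joystick stopped responding during games",
    "My left joystick feels loose and unresponsive",
    "Battery life is amazing, lasts all weekend",
    "Really impressed with how long the battery lasts",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(feedback)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Group comments by cluster label; a summarization model would then
# condense each group into a readable theme description.
themes = {}
for comment, label in zip(feedback, labels):
    themes.setdefault(label, []).append(comment)
```

On this tiny example the joystick complaints end up grouped apart from the battery praise, which is exactly the structure a summarization step would then condense into a forwardable theme.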
We also use it for topic extraction, so specifically pulling out what features they’re talking about, what products they’re talking about, that kind of thing. And then we do the theme analysis summarization, which is where the natural language generation comes in. That’s where we’re generating actual human-readable text.

In fact, our average report has over 30 paragraphs of actual written analysis that is written by our computer system, no humans. And then we also use it for our open-ended QA, where you can type in a question in plain English. We will then go query your data, find all of the stuff that could possibly answer that question, and then run it through a series of analysis models to actually answer that question for you.
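The retrieval step behind that open-ended QA, querying your data to find everything that could answer a question, can be illustrated with a simple sketch. Viable’s real pipeline then feeds the matches into generative analysis models; the cosine-similarity ranking and the example corpus here are assumptions for illustration only.

```python
# Toy sketch of the retrieval step behind open-ended QA: rank stored
# feedback against a plain-English question and keep the best matches.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Checkout fails when I use a gift card",
    "The app crashes on launch since the last update",
    "Love the new dark mode theme",
    "Gift card balance never shows up at checkout",
]
question = "What problems do customers report with their gift card?"

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(corpus)
question_vector = vectorizer.transform([question])

# Keep every document with any similarity to the question, best first;
# a real pipeline would run these through analysis models to answer.
scores = cosine_similarity(question_vector, doc_vectors)[0]
ranked = sorted(zip(scores, corpus), reverse=True)
top_matches = [doc for score, doc in ranked if score > 0]
```

Here only the two gift-card comments survive the filter, which is the small, relevant subset you would then hand to the answering models.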
And, you know, I’m curious, just listening to you talk about this and also knowing a little bit about the history of Viable as a company. This is a pretty kind of high-tech product solution.
And when you were building this and kind of figuring out your product-market fit, was this something where you saw the technology and some of the trends you’re talking about and thought, there’s an opportunity to apply this to a product to solve a problem?
Or was it coming from the fact of like, we see a problem and let’s go figure out a technology and NLP happened to be in a good state where like this can solve our problem?
Yeah, so it actually started out as the second thing. It was to go solve a real problem. And this was originally the problem of helping companies find product-market fit using the Superhuman product-market fit process, which is basically: send out a survey, collect the responses, read through every response, group the responses into features or pain points, and then use that to help guide your roadmap.

Now, the part that people were struggling with in that process was having the time to actually read through every response and properly categorize it. But we quickly realized that NLP would be able to help us get there, and could rival humans at that particular task. And so we started off with a pretty simple NLP system that was mostly just topic extraction with a little bit of sentiment analysis built around it, to help you understand what you need to build and where.
But we quickly found that people were really loving that analysis side. And so we decided to start doubling down on that and moved away from the original problem we were tackling.
But we moved towards another problem that we discovered during this process. So we started to work with companies that actually already had product-market fit when we were early on in this. And they were talking about piping in other kinds of data, other than just the survey that we were sending. So we started to build out a more generalized system, and then we got access to GPT-3 back in June of 2020.
And that helped us realize that there was actually a way to automate this. It’s not just for a single survey, but for literally every bit of text that you’re collecting as a company.
So with all of that text you’re collecting, there could potentially be an opportunity for bias to come up.

And our topic is about bias in NLP and managing that. So, before we dive deep into that, how could bias potentially be introduced into NLP if you’re using it in your product?
Yeah, absolutely. So by default, actually every language model has some bias baked in, and that is because it is all trained on text that was once written by humans.
And humans are inherently biased. You know, we’re not fully rational beings. And because we’ve got this corpus of data that these things are trained on, which is very human, we’re going to end up introducing those human biases into these NLP models.

Now, the companies out here providing those are doing a ton of work to try to cut down on that bias as much as possible. And I actually think that OpenAI is leading the pack with that. They have whole teams inside of OpenAI that are focused specifically on the bias problem.

Now, if you’re just using something like GPT-3 out of the box, yes, you can get it to produce some questionable stuff. But the ways you can go through and solve for some of that are things like fine-tuning and training on a specific use case. The more specific a use case you can get to, the better it’s going to go.

But like I said, all NLP models have inherent biases in there. And you’ve got to really focus to make sure that you’re training them in a way that’s going to reduce that as much as possible.
So, at Viable, on the front end, how are you working to reduce this bias? Or even can you?
Yeah, so as I mentioned earlier, the biggest thing that we do here is a lot of training of models on our own, with datasets that we’ve curated ourselves. That helps us, like I said, get to that very specific use case. So instead of using the big generic form of the model, you’re using something that’s trained to do something very specific.

And when you’re doing something more specific, it’s easier to avoid that bias. So we do that both with GPT-3 itself, using their fine-tuning endpoint, along with all of the in-house models that we’ve developed.

We then benchmark all of those against our own intuition around bias as well. So we actually have humans in the loop when we’re doing the training to understand, okay, it looks like we’re biasing towards X or biasing against Y. And from there we can add more training data to offset biases.
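As a rough illustration of the fine-tuning workflow Dan mentions: OpenAI’s fine-tuning endpoint has historically accepted training data as JSONL records of prompt/completion pairs. The theme labels and file name below are made up for the example; this shows only the data-preparation shape, not Viable’s actual dataset, labels, or API calls.

```python
# Sketch of preparing curated training examples in the JSONL
# prompt/completion format used for fine-tuning; the labels and
# the file name "train.jsonl" are hypothetical.
import json

curated_examples = [
    {"prompt": "Feedback: The left joystick drifts.\nTheme:",
     "completion": " hardware - joystick reliability"},
    {"prompt": "Feedback: Checkout fails with gift cards.\nTheme:",
     "completion": " billing - gift card checkout"},
]

# One JSON object per line, which is the JSONL convention.
with open("train.jsonl", "w") as f:
    for example in curated_examples:
        f.write(json.dumps(example) + "\n")
```

The human-in-the-loop review Dan describes would then happen on top of files like this: when the team spots a bias towards X or against Y, they append counterbalancing examples and retrain.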
Are there any considerations when it comes to your clients’ customer datasets that you’re analyzing?
Yeah, so we don’t actually train on our customers’ data. We usually have our own datasets that we’ve built up.

Now, sometimes we use anonymized versions with permission, things like that. But for the most part, we’re not using customer data to train. We’re analyzing that customer data with pre-trained models that we’ve gone through to ensure the bias has been decreased.

So for the most part, we’re not training on the customer data itself. We’re really just analyzing it using models that we’ve already vetted for bias problems.
So one of the themes coming out for me is this continued investment in refining your models, especially off of the base models that might be provided by OpenAI with GPT-3.

And so when it comes to your product’s roadmap, are there refinement periods? How are you approaching that refinement work when it comes to prioritizing it within your product roadmap?
Yeah. So refinement exists within our roadmap as almost its own track. We’ve got enterprise features we’re adding, we’ve got integrations we’re building, and then we’ve got model refinement that goes through. We’ve got a bunch of other stuff that we’re doing as well.

But model refinement is definitely an ongoing thing that we’re doing constantly. In fact, I can’t remember a week that has gone by where we weren’t doing some form of retraining.

And we’ve got over a dozen models internally at this point that we’re working with. So I’m not saying we’re retraining all of those models all the time, but at least one of those models is getting retrained on any given week.
How much of your team’s time would you say is dedicated towards that?
I would say about 20% of the NLP team’s time is focused on this continuous refinement and retraining.
Very interesting. So, looking at the potential for bias in NLP, how does using NLP reduce the potential bias in traditionally manual processes?
Yeah, definitely. So when talking about NLP bias I think it’s important to also think about bias that’s present in any current process that the NLP is going to be replacing.
So for example, product managers who are trying to understand what customers are complaining about often rely on anecdotes from whichever support rep or salesperson they happened to be in contact with at the time. You know, I’m sure as a PM you’ve gone to a support person or a salesperson and said, hey, what are the big problems that we should be tackling right now?

And they’re going to spout something off the top of their head, which is usually accurate, but usually not complete, right? They usually have recency bias. They usually have bias towards the loudest customers versus the majority of them.
So it’s easy for humans to get sidetracked into a specific thing that becomes their pet project. NLP doesn’t suffer from that particular form of bias. Basically, what this allows you to do is, instead of having that limited scope of the bias that’s present in those anecdotes, you can use NLP to broaden your perspective and see more of the data.
So for example, like I was saying, we have over 30 themes that we’re producing on average. And actually, that’s increasing as we’re doing that retraining I mentioned. I think the last report we just did had over 300 themes that we identified. And humans are not going to be able to do that level of granularity or that completeness of analysis.

When I’m manually going through these pieces of feedback, I’m probably just going to sort each piece of feedback into maybe one or two themes, personally. But usually what we see is that there are actually three to five themes present in most pieces of feedback.

It’s just that people only really record the one that’s the central problem. And so you end up with a much less complete view when you’re doing the manual processing than you do with something like NLP.
Interesting. It’s really interesting. So is human review, or human input, really always going to be needed at some level to mitigate potential bias?
Oh, yeah. So while we should always be striving to reduce bias in these systems, especially in large language models that are trained on, you know, terabytes of human language, we shouldn’t lose sight of the real-world improvements these systems can provide today.

What that really means, though, is that these models can do a lot of the groundwork for you, but you’re always going to need somebody to go through and read things and make sure they’re right.

So we try to do as much of that on the model training side and in our internal review as possible. But we also provide ways for our customers to give us feedback, to help in that retraining effort. There are actually thumbs up and thumbs down buttons by nearly all of the analysis that we do, which help our customers tell us whether these things are good or bad, working or not.
Hmm. Interesting. And based on that, how do you get that detailed feedback from customers, beyond a thumbs up or thumbs down, so you can figure out how to adjust the models?
Yeah, so this isn’t implemented yet, but the thumbs down is actually soon going to have a text box that pops up right after you hit thumbs down. And we will actually be piping all of that text right back into Viable, our own instance of it.
So we can then identify all the themes in that feedback to try to tackle.
That’s awesome. So, I’m really curious. I’m always thinking about the future of product. What does the future of NLP look like to you today? And how might this shape the future of product management? You originally talked about this originating from your expertise in finding product-market fit.
How do these kind of worlds intersect?
Yeah, definitely. So, like I was saying earlier, NLP is in a bit of a renaissance right now, and things are really pushing forward. You can see that with multimodal models like DALL-E 2, for instance, where you can type in a description of an image you want to generate and it pops an image out the other side. So you can type in something like a duck and a teacup, and it will actually make an image of that, which is amazing to see.

But where we’re actually seeing the bigger NLP side going is larger models that will have more depth of understanding. So GPT-4 will probably come out some time in the next few months or year, however long it’s going to be, and that will likely have some significant upgrades on GPT-3. We’re also seeing things like PaLM come out over at Google, and models from the likes of Microsoft and Facebook as well.

These are all getting larger, but also allowing you to pack in more context. Right now we’re limited by the amount of text that GPT-3 can analyze all at once. It’s about 2,000 to 4,000 tokens, which comes out to a few thousand words or so.

In the future, though, I think we will end up with much larger context windows. We’ll start to be able to do tens of thousands of words instead of just a few thousand, which will greatly increase the models’ ability to tie more and more distant concepts together.
So if I’m a product manager and I see an opportunity on my product to integrate an NLP model, like GPT-3, or maybe GPT-4 later this year, where do I start? Is it something I should ask my developers about? Is it a massive implementation and undertaking, and should we even really consider it until maybe a few years from now?
What’s that approach recommendation like for product managers who are listening?
Yeah. So I think that the first place to start is something that I’m starting to call natural language interfaces. You’ve got command-line interfaces, you’ve got graphical user interfaces.

These are all different ways of interacting with a system. Natural language interfaces, like our Q&A system, or GPT-3, or GitHub Copilot, or any of these kinds of things, are where you type in some command and then it does the thing.

I think that is probably where we’re going to see the most innovation going forward, and it’s the easiest thing for almost any product to add. What it will enable is more natural ways of interacting with your product. So instead of forcing your users through some wizard in a modal somewhere, or something like that, you can just have them ask questions.

Then your system can be interacted with in a plain-English way, or a plain-language way, instead of forcing your users to guess and click. And that side of things is actually pretty easy to do out of the box with something like GPT-3.

So I actually recommend playing around with it. Go hop into the GPT-3 Playground over at OpenAI and test out what it can do and how it could possibly help your product.
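The natural language interface idea can be prototyped even before wiring in a model. This toy router maps free-text requests onto product actions by keyword overlap per intent; the intent names and keywords are invented for the example, and a production version would call a language model to classify the request instead.

```python
# Toy "natural language interface": route a free-text request to a
# product action by keyword overlap. Intents and keywords are made up.
INTENTS = {
    "export_report": {"export", "download", "report", "csv"},
    "invite_teammate": {"invite", "add", "teammate", "member"},
    "change_plan": {"upgrade", "downgrade", "plan", "billing"},
}

def route(utterance: str) -> str:
    """Return the best-matching intent, or 'fallback' if nothing matches."""
    words = set(utterance.lower().split())
    scores = {intent: len(words & keywords)
              for intent, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "fallback"
```

A request like "please export my report as csv" routes to the export action, while anything with no keyword overlap falls through to a fallback, which is where a model-backed system would take over.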
Awesome. Awesome. So before we wrap up, I’d love to ask some personal lightning round questions, if that’s okay.
Yeah, sure. Sounds great.
So first, which of your personal habits has contributed most to your success?
Yeah, so for me, it’s getting eight hours of sleep. It’s really hard to do as a founder, but I find that sleep is the bedrock habit for all of the other things. When you’re low on sleep, you’re not going to perform as well.

You’re going to start skipping meals. You might not do your workout for the day. And so sleep, for me at least, tends to be that one habit that can’t be sacrificed in service of other things.

Now, that’s not to say I don’t have crunch mode every once in a while and definitely burn the midnight oil, but for the most part, I try to stay real consistent on that.
Awesome. What’s your favorite tool that you use regularly?
Right now it is definitely GitHub Copilot. I use Visual Studio Code for all of my development. We’re a small team, roughly 10 people at Viable right now, so I’m still doing some coding.

And it’s actually been amazing to work with. GitHub Copilot is basically NLP for code instead of English. What it allows you to do is just start typing some code in your editor, and it will try to figure out what you’re doing and write more code for you.

It’s like an AI pair programmer, and it’s gotten to the point now where roughly 50% of the code that I write is actually written by GitHub Copilot.
Well, that’s incredible.
Lastly, for someone at the start of their product journey, what is one piece of advice that you’d give them?
Yeah. Don’t get stuck in a rut. What I mean is, you’re going to have an idea, and that idea is going to be really cool, but it might not be the thing that’s actually going to propel your company to success. Usually the initial idea is different from where you end up later on.
And it’s key to continue to listen to your users and let your users pull you in the right direction instead of just focusing on your own intuition.
That is so, so well said and such a great note to end on.
Thank you Dan so much for joining today. I’ve certainly learned a lot about NLP and have a few things that I’ll be taking back to the products that I’ve worked on.
Awesome. Well, thanks for having me. It was great being here.
You can find out more about Dan on LinkedIn, and you can also learn more about his company, Viable, at askviable.com.
And thank you, Dan so much for joining.
Thanks, everyone, for listening. And be sure to leave a review of the podcast to let us know what you think. Also, if you haven’t already, please be sure to follow and join our community over at theproductmanager.com.