In an era where AI is transforming industries at a rapid pace, product roles infused with AI capabilities are among the most sought-after in tech. If you’re looking to break into this sphere or enhance your existing skills, understanding the journey and strategies of those leading the charge can be incredibly beneficial.
In this episode, Hannah Clark is joined by Lorilyn McCue—AI Product Manager at Superhuman—to share her insights on navigating the evolving landscape of AI product management.
Interview Highlights
- Lorilyn’s Career Journey and Key Skills [01:06]
- Lorilyn began her career as an Apache helicopter pilot in the army for six years.
- She then taught physical education at West Point.
- After leaving the army, she attended business school.
- She joined Slack as a product manager, first on the growth team, then on the platform team.
- Later, she worked at a machine learning startup, Impira.
- Currently, Lorilyn is a lead AI product manager at Superhuman.
- AI at Superhuman: Enhancing User Experience [02:08]
- Lorilyn values AI’s ability to save users time, allowing them to focus on important tasks.
- She shares a story from Iraq, where flying an Apache helicopter on a dull mission highlighted the inefficiency of repetitive tasks.
- A more engaging mission showed her the value of using technology for complex tasks rather than mundane ones.
- She relates this to AI, which handles routine tasks so users can focus on meaningful work.
- Superhuman’s “instant reply” feature, for example, saves users two minutes per email, automating routine replies.
- This “aha” moment affirmed AI’s role in removing drudgery and enhancing productivity.
AI can take the drudgery off your plate. Assign all the rote tasks to AI, let it circle the base for three hours, and go do the cool stuff yourself.
Lorilyn McCue
- Career Progression and AI Integration [04:19]
- Lorilyn prioritizes optimizing for learning, a lesson reinforced at Slack and crucial in AI.
- Superhuman’s “Ask AI” feature lets users ask questions directly, saving time on searches.
- Originally planned as a three-month build, the feature launched as a preliminary version after one month so the team could gather feedback.
- This phased release—first to the AI team, then internally, and finally to beta users—offered insights that shaped the final product.
- Iterative development was key, allowing the team to adapt and create a more effective product based on user behavior.
- AI’s unpredictable nature requires repeated learning cycles to align with user needs.
AI is such an unknown territory that you often have no idea how people are going to use it. Therefore, you need to optimize for learning. You have to launch a little, then iterate, and repeat this process.
Lorilyn McCue
- The Importance of Seamless AI Integration [06:46]
- Lorilyn emphasizes integrating AI deeply into product design, avoiding an “add-on” feel.
- Ideal AI integration makes features feel so natural that users don’t even register them as “AI.”
- Superhuman’s “Instant Event” feature illustrates this approach, allowing users to create calendar events from emails with a single click.
- Initially, the feature was in a less accessible location for testing purposes, using Superhuman’s command menu, “Command K.”
- After testing and learning from feedback, the team relocated it to more intuitive spots, like a button in emails and a calendar shortcut.
- This seamless integration ensures AI features feel useful and accessible when needed.
- Users can now create events by clicking a date directly in an email; the cursor changes to a hand icon to signal the action.
- Users discover and use the feature naturally, without explicit instructions, which is exactly the seamless AI integration Lorilyn aims for.
- Optimizing AI for Speed and Accuracy [10:54]
- Superhuman prioritizes proactive features like auto-summary and instant replies, displaying them instantly for user convenience.
- The auto-summary feature provides a one-line summary of every email, enabling faster information access without requiring user action.
- For instant replies, the team faced a trade-off between a “turtle bot” (smart but slow) and a “fast, awkward uncle” model (quick but less accurate).
- Despite the turtle bot’s higher quality, the team chose the faster model, prioritizing speed and user experience.
- They fine-tuned the faster model to improve accuracy, ensuring responses are quick and reasonably accurate.
- Balancing quality with speed is essential in deciding whether features should be proactive or user-initiated.
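The episode doesn’t go into implementation detail, but as a rough illustration of the fine-tuning step described above, a minimal sketch using OpenAI’s fine-tuning API might look like the following. The training examples, file name, and model choice are placeholder assumptions for illustration, not Superhuman’s actual data or setup.

```python
import json
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical training examples that over-represent the hard cases the team
# kept getting wrong, such as replying to a farewell thread.
examples = [
    {
        "messages": [
            {"role": "system", "content": "Draft a short, natural reply to the latest email in the thread."},
            {"role": "user", "content": "Thread: Sarah announced she is leaving the company; several colleagues replied wishing her well."},
            {"role": "assistant", "content": "Sarah, congratulations on the next chapter! It's been a pleasure working with you."},
        ]
    },
    # ...more examples covering the other edge cases...
]

# The fine-tuning endpoint expects a JSONL file with one example per line.
with open("instant_reply_train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Upload the file and start a fine-tuning job on the faster base model.
training_file = client.files.create(
    file=open("instant_reply_train.jsonl", "rb"),
    purpose="fine-tune",
)
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",  # the faster, cheaper model being tuned
)
print(job.id, job.status)
```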
- Technical Decision-Making in AI [14:24]
- Lorilyn uses a tool to compare multiple prompts and models side by side, assessing performance with real data.
- She creates a dataset from internal user feedback (thumbs up/down) to reflect real-world scenarios.
- Evaluations focus on latency, aiming for responses within 1–2 seconds to meet user needs.
- Through repeated testing, she identifies issues (e.g., misinterpreting context in replies) and refines prompts to improve accuracy.
- Iterative testing helps establish success criteria, balancing speed with accuracy and learning the specific types of accuracy needed.
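Lorilyn doesn’t name the tool she uses, but the workflow she describes, running two prompt-and-model combinations side by side over a dataset built from internal thumbs-up/thumbs-down feedback and checking latency against a one-to-two-second budget, could be sketched roughly as follows. The dataset rows, prompts, models, and budget are illustrative assumptions.

```python
import time
from openai import OpenAI  # pip install openai

client = OpenAI()

# Placeholder dataset; in practice this would come from internal users'
# thumbs-up/thumbs-down feedback on real emails.
dataset = [
    "Sarah: It's my last week here. Thanks for everything!",
    "Reminder: please bring snacks for the class party on Friday.",
]

# Two candidate prompt/model combinations to compare side by side.
candidates = {
    "smart_but_slow": {
        "model": "gpt-4",
        "prompt": "Draft a brief reply to the latest email in the thread.",
    },
    "fast_but_rough": {
        "model": "gpt-3.5-turbo",
        "prompt": "First determine who the reply should go to, then draft a brief reply to the latest email in the thread.",
    },
}

LATENCY_BUDGET_S = 2.0  # users often open an email within a second or two

for name, cfg in candidates.items():
    print(f"=== {name} ({cfg['model']}) ===")
    for email in dataset:
        start = time.perf_counter()
        resp = client.chat.completions.create(
            model=cfg["model"],
            messages=[
                {"role": "system", "content": cfg["prompt"]},
                {"role": "user", "content": email},
            ],
        )
        elapsed = time.perf_counter() - start
        verdict = "OK  " if elapsed <= LATENCY_BUDGET_S else "SLOW"
        print(f"[{verdict} {elapsed:.2f}s] {resp.choices[0].message.content!r}")
# The outputs are then read row by row for awkward replies, the prompt is
# tweaked, a new column is added, and the comparison is run again.
```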
- Mastering Prompt Engineering [16:38]
- Lorilyn developed prompt engineering skills through on-the-job learning and experimentation.
- Occasionally, she researched online for specific solutions.
- She also received tips from OpenAI, such as using all-caps commands, repeating key instructions, and including one-shot examples.
- Fine-tuning involved adding examples to address challenging cases and improve accuracy.
- Lorilyn used a technique where she had the LLM explain its reasoning to refine prompts.
- For example, she would ask the LLM who it was replying to and why.
- This process revealed misunderstandings in the LLM’s responses, allowing her to make specific adjustments to the prompts.
- The “explain yourself” method helped clarify the LLM’s thought process, aiding in more effective prompt tuning.
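None of Superhuman’s actual prompts are shared in the episode, but a toy prompt that combines the techniques listed above, an all-caps instruction, the key rule repeated at the end, a one-shot example, and a debugging variant that asks the model to explain who it is replying to and why, might look like this. Every line of it is a made-up illustration, not the production prompt.

```python
# Illustrative system prompt for an "instant reply"-style feature; the wording
# is invented for this example.
INSTANT_REPLY_PROMPT = """\
You draft short email replies on behalf of the user.

RULES:
- REPLY TO THE MOST RECENT SENDER IN THE THREAD, NOT TO EVERYONE MENTIONED.
- Keep the reply under three sentences.
- Match the tone of the thread.

Example (one-shot):
Thread: "Sarah: Today is my last day. Thanks for everything, team!"
Reply: "Sarah, best of luck with whatever comes next. You'll be missed!"

Remember: reply to the most recent sender, not to everyone mentioned.
"""

# Debugging suffix used only while tuning the prompt: asking the model to
# explain itself reveals where its reasoning goes wrong.
EXPLAIN_YOURSELF_SUFFIX = """\
Before writing the reply, state who you are replying to and why you chose
that person. Then write the reply.
"""
```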
- Business Challenges in AI Integration at Superhuman [18:54]
- Integrating AI into Superhuman involved balancing complexities and challenges.
- Lorilyn compared AI models: GPT-4 (slow but high quality) vs. GPT-3.5 (faster but less smart).
- Initially, GPT-4’s slowness posed significant challenges in scaling email summarization.
- Superhuman prioritized quality and user needs, considering speed and correctness.
- They chose GPT-3.5 for better latency while employing prompt engineering to meet quality standards.
- The business focus was on ensuring user needs were met, regardless of costs.
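The episode doesn’t quote numbers, but the scale concern behind this decision is easy to make concrete with back-of-the-envelope arithmetic. The volumes and per-token prices below are placeholder assumptions, not Superhuman’s figures or current OpenAI pricing.

```python
# Rough cost model for summarizing every incoming email, using made-up
# placeholder numbers purely for illustration.
EMAILS_PER_USER_PER_DAY = 100      # assumed inbox volume
TOKENS_PER_SUMMARY_CALL = 1_500    # assumed prompt + email + summary tokens
PRICE_PER_MILLION_TOKENS = {       # assumed blended price in USD per 1M tokens
    "larger_model": 30.00,
    "smaller_model": 1.50,
}

def monthly_cost_per_user(model: str) -> float:
    tokens_per_month = EMAILS_PER_USER_PER_DAY * TOKENS_PER_SUMMARY_CALL * 30
    return tokens_per_month / 1_000_000 * PRICE_PER_MILLION_TOKENS[model]

for model in PRICE_PER_MILLION_TOKENS:
    print(f"{model}: ${monthly_cost_per_user(model):.2f} per user per month")
# With these placeholder numbers: larger_model is about $135.00 per user per
# month, smaller_model about $6.75, which is why latency and cost both pushed
# toward the smaller model at that scale.
```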
- Hands-On Learning and Practical AI Skills [20:54]
- Hands-on learning is essential for AI product management.
- Staying updated on weekly trends and releases is crucial.
- Experiment with AI tools in daily life for practical understanding.
- Example: Using ChatGPT to create customized coloring pages for kids.
- Apply AI in professional settings, like generating concise user tooltips.
- Engaging with new technologies enhances knowledge of AI possibilities.
- No specific course is necessary; practical application is key.
Meet Our Guest
Lorilyn McCue is an AI Product Manager at Superhuman, the most productive email app ever made. Together with Superhuman CEO Rahul Vohra, Lorilyn has helped create and bring the company’s AI product roadmap to life, shipping sleek, practical features that let users draft emails in their own voice and tone, summarize lengthy threads, and find things in their inbox with natural language search. Before joining Superhuman, Lorilyn worked on product teams at Slack and Impira, an AI startup that was acquired by Figma.
Lorilyn has taken a fascinating and unconventional route to working in product management.
She served in the military for a decade, flew Apache helicopters in Iraq, and taught cadets at West Point before making a career pivot and becoming a product intern at Google in her early 30s. She holds a BA from West Point and an MBA from Stanford.
There’s something about using AI in your daily life that gives you this knowledge and understanding of what’s possible.
Lorilyn McCue
Resources From This Episode:
- Subscribe to The Product Manager newsletter
- Check out this episode’s sponsor: Wix Studio
- Connect with Lorilyn on LinkedIn
- Check out Superhuman
Read The Transcript:
We’re trying out transcribing our podcasts using a software program. Please forgive any typos as the bot isn’t correct 100% of the time.
Hannah Clark: At this moment in time, the most coveted product jobs are all about AI. And historically speaking, it's a particularly unusual moment because the core competencies required to compete in this arena have only existed for a couple of years. Stranger still, the field has evolved so rapidly that any experience you might have gained when ChatGPT first hit the mainstream, barely two years ago by the way, may already be outdated. So how can PMs looking for jobs in this space get up to speed right now? And maybe more importantly, what habits can you implement today to stay current as the technology evolves?
My guest today is Lorilyn McCue, Lead AI Product Manager at Superhuman. Lorilyn's career journey is pretty fricking incredible. So I'll let her fill you in on that shortly, but it's also a testament to the real key skill you need to be successful as an AI PM—adventurousness. So as you listen to Lorilyn's tips, I suggest getting in the mindset of an adventurer because this is a field of uncharted territory. And the first thing you'll need to succeed is a willingness to explore. Let's jump in.
Welcome back listeners to The Product Manager podcast. And I am here today with Lorilyn McCue.
Lorilyn, thank you so much for joining us today.
Lorilyn McCue: Oh, thanks for having me.
Hannah Clark: So you have had a really interesting journey to becoming a lead AI product manager at Superhuman. And I think that we usually try and linger just a little bit on this section.
But I would really love if you could tell us a little bit more about your background because it is so cool.
Lorilyn McCue: My background is very strange. So I started as an Apache helicopter pilot, which is obviously a very normal entry route into tech. I was in the army. I did that for about six years.
And then I went to teach physical education at West Point for the last part of my time in the army. Then I went to business school and ended up at Slack as a product manager, first on their growth team and then on their platform team. And then I went to a small machine learning startup called Impira.
And now I'm here at Superhuman working in AI.
Hannah Clark: Such a cool career trajectory. I'm just obsessed with it.
We'll get right into things because today we're going to be focusing a little bit more on just AI, how AI at Superhuman came to be, and that whole journey that you've had to become a lead AI product manager.
So if we talk about AI in the context of Superhuman, can you tell us a little bit about a moment when you saw AI significantly evolve and elevate the user experience for Superhuman users?
Lorilyn McCue: Yeah. So let me tell you a little story. For me, the most important thing is to save users time and to let their brain do more important things.
So for me, Superhuman AI lets your brain do more important things. Okay, I'm going to tell you a little story. When I was in Iraq, I was given this one mission to fly in circles around a base for three hours. I just flew in circles around a base for three hours, doing nothing. But then eventually we got a new mission, which was to go fly really low, find these bomb caches, and then blow them up, which was way more interesting and a much better use of the most technologically advanced helicopter the army has ever produced.
And in a really weird way, I think that I fell in love with the concept of AI before it was even a thing. Because I could see how much better a use of our helicopter the second mission was versus the first mission. So AI is like that. AI takes care of the circling around the base for three hours at a time and lets you take your own human brain to do the more interesting tasks.
So AI is a huge time saver for users. When we looked at one of our features called instant reply, which is a little quick reply that you use to send a response to someone, it saves users two minutes an email to reply. You don't need your brain to spend two minutes to compose a reply when probably something that's boilerplate would work for most of your emails.
To me, that was like this big aha moment that like, okay, AI can take the drudgery off your plate. Give AI all the rote tasks, let it circle the base for three hours, and you go do the cool stuff. You go find the bombs and blow them up.
Hannah Clark: No one has ever used that analogy on this show before, but I'm here for it.
Lorilyn McCue: Glad to be the first.
Hannah Clark: Okay, well, we'll come back to AI in a moment because there's just so much to unpack there. There's so many different use cases that we can really get into.
But I want to talk a little bit about your career. So when we talk a little bit about your career progression going from Slack and along the line into AI product management, how have you come to wrap your head around AI as part of your career, like going from like your initial role to where you are now?
And how is the integration of AI into kind of the fold affected your approach to product strategy?
Lorilyn McCue: Yeah, that's a really good question. So I think the most important part for me has been to optimize for learning. That is something I learned at Slack, but it's maybe 10 times more important for AI.
You have to optimize for learning. So, a good example of this is we launched a product called Superhuman Ask AI recently. And this feature is so cool, instead of searching a specific search query, you can just type in a question. You can say, hey, can you please summarize the most recent feedback on instant event?
Can you give me the top three most positive quotes and a couple of negative quotes? Something that would take you 20 minutes to do now takes just seconds, which is amazing. Or when am I meeting with Hannah? Or when did I last email so and so? So we were initially going to take three or so months to build this product.
But what we decided to do is put out a version after one month, which did not feel ready to us. And we put this version out to a small group of people. First of all, we put it out to our AI team, then we launched it internally, and then we launched it to an increasingly large number of beta users. And what we found is that in each phase, we learned something new that completely changed the way we thought about the product and the way we were going to build it.
So by the time three months had passed, a lot of people had used this, and we finally built the right thing, which we might not have done from the beginning. AI is such an unknown territory that you really have no idea how people are going to use it sometimes. And so you have to optimize for learning.
You have to launch a little bit and then iterate and then launch and then iterate and then launch and then iterate.
Hannah Clark: Yeah, I can see how that accelerated cycle gives you so much more, almost, failure data to work from, which is oftentimes going to be so much more useful than if you have nothing at all, right? And you've just taken such a long time to exhaust your runway. That's really interesting.
One of the things that you mentioned when we had a conversation before is that you really emphasize the importance of baking AI into the product strategy, rather than slapping it on top, which we see often with some of the AI features that are integrated into current products.
So what were some of the decisions that you made to make sure that the AI in the product felt really seamless and fully integrated, and not just bolted on?
Lorilyn McCue: Yeah, I think today a lot of people are putting AI in a sidebar as a little chat area. You have to integrate AI seamlessly into your product.
My dream user interview goes like this, Hey, have you used this AI feature? Oh, I didn't know that was an AI feature. I think that's so exciting because when you integrate AI seamlessly into the product, it just feels natural. It feels like what you need it's there when you need it. So a good example of this is we just yesterday launched a feature called Instant Event, which I'm super excited about personally, especially.
You get an email, okay, we're both parents. So like you get an email that's Hey, don't forget to bring this to school on this day. And you're like, oh, now I have to go open my calendar, see what I'm doing that day, create an event, it's really frustrating. But now in Superhuman, with one click, you can just have an event created with the right date, and the right title, and all the details that are there, the right people in there, it's so useful.
And when we first launched this feature, we had it in not a seamless location. So we had it in command K, which is like your command menu for Superhuman. So you had to know to hit command K and then type what was at the time create event with AI. Obviously, like very few people are going to find this.
But because we had it there, we were able to test it and do that first principle of optimizing for learning. So we were able to do this on a bunch of emails ourselves, figure out what was working, what are the weird edge cases, but that wasn't the final step. We had to put it somewhere that was seamlessly integrated into the product.
So now it's in multiple places. The first place is you can use the normal calendar shortcut, which is B in Superhuman. And it'll automatically create an event with AI instead of just a blank event, which is super cool. It also has a little button on the bottom of an email if a date is detected that says, Hey, do you want to create an event on this date?
You click that button and it pops up. It does not make that noise, by the way. I did not win that battle.
Hannah Clark: I like the noise.
Lorilyn McCue: Or when you hover over a date, the little cursor will change into a hand. And when you click on it, you'll notice that an event is created, which is so cool. And so we wanted to give the functionality a try, put it in command K, but the final version was so much more seamlessly integrated into the product.
And we're just finding that users are using some of the ways we didn't even announce in our announcement email. They're just finding it naturally, which is exactly what I want to happen. Like you see a date, you click on it, an event happens. It's perfect.
Hannah Clark: Web designers, this one's for you. I've got 30 seconds to tell you about Wix Studio, the web platform for agencies and enterprises. So, here are four things you can do in 30 seconds or less on Studio. Adapt your designs for every device with responsive AI. Reuse assets like templates, widgets, sections, and design libraries across sites and share them with your team. Add no code animations and gradient backgrounds right in the editor, and export your designs from Figma to Wix Studio in just a click. Time's up, but the list keeps going. Step into Wix Studio and see for yourself.
We recently had an episode about empathy as kind of part of the design process, and it's interesting, those kinds of intuitive features, I think that's really the key to creating something that's easily adopted. It's making those kinds of features so natural and so seamless that people just can't help but discover these really amazing features.
So I want to talk a little bit about data and how we use metrics and kind of shape our decisions around them. So let's use the example of Superhuman's summary processing, which happens automatically so that users don't need to request things. So what metrics or insights would have shaped this choice in this product?
You might want to give a little bit more background about that feature and how do you decide which feature should be proactive versus user initiated?
Lorilyn McCue: Yeah, so there's actually two features I'll mention with this because there's a really interesting story about the second feature. So, when you receive an email before you even open it, we calculate the summary and we calculate some suggested instant replies for that email.
So the automatic summary is a one line summary, which is present in every single email. So you open up the email, it's there. A lot of products have chosen to give a summary upon click, which makes sense. It costs a lot of money to summarize every single email that you get. Some of them you don't even open.
But for us, it was so important that those two things were available the moment you opened the email. Because maybe you're in a rush, like I said. Like you were saying earlier, you have to have empathy for the user. They may not have time to click on that and wait the couple of seconds that it would take to come in.
The second feature is instant reply. And you asked about what metrics shaped our choice? We really wanted to use a more advanced model for this feature, this instant reply feature. And the more advanced model was like, it's just, it was just too slow. It was like a turtle. It was like the turtle bot.
It had these brilliant answers. It was like a turtle genius. There's this turtle genius bot and okay, maybe it makes sense to have the user request them and then they get this turtle bot genius answer and it really makes sense and it's great and the summary is amazing and the instant reply is amazing. But they're gonna have to wait a few seconds and it may not be ready by the time that they actually click open that email.
Or we have another option, which is, I don't know, kinda like your fast, awkward uncle. It gets the job done. It makes mistakes every once in a while. A great example: before we were able to figure out how to solve this problem for instant replies, there was this one use case that drove me absolutely crazy.
I would stay up until midnight every single night trying to figure out edge cases like this. So this particular one was somebody would send a goodbye email: I'm so happy being at this company, but it's time for me to go. And then a bunch of people would say, Oh, I'm so sorry, we miss you so much.
This is great. You've been wonderful. And then your, like, fast but not-so-smart uncle would be like, "Dear Johnny, thank you for saying goodbye to Sarah." And you're like, No, that's not what I'm going to do. That's not the email. I'm going to say goodbye to Sarah.
That's what makes sense. The, like, super smart turtle is over there: yeah, I know, I got this, of course. But it takes a while to figure it out. So we were like, okay, how can we do this? At some point, we realized we have to use the little bit stupider uncle. We chose that, but we figured out how to make it smarter.
Okay, we can fine tune the model. We can send in more examples of that particular case when we're doing our fine tuning. We can also decide what kind of like parts of the thread we send in. We can change our prompt a little bit. But the bottom line is it had to a) be good enough, but b) it also had to be fast enough.
That was like the most important thing.
Hannah Clark: Okay, I want to dig into that a little bit further because this is like a skill that's more unique to AI product management that I think is relevant for so many people who are interested in deepening their skill set here.
So when we talk about like these technical aspects of balancing, fine tuning models, trying to balance like latency and cost effectiveness, like all of these decisions, what's like the nuts and bolts process that you have to evaluate when you're making these kinds of decisions? And what kind of resources do you need in order to understand better, like how to make really informed decisions when you're looking at all these factors?
Lorilyn McCue: I like to use a tool that allows you to put multiple prompts side by side, along with multiple models. So, you'll have prompt A with your super smart turtle on the left, and you'll have the same prompt with your kind of stupid fast uncle on the right, and then you'll run them with a data set. And just to throw back for a second, how did you get that data set?
Well, you launched earlier and you had your internal users do thumbs down or thumbs up on the examples that worked well for them. And that becomes your own data set that you have to test on, which, yeah, your brain could probably come up with a data set of examples, but having real live examples from your own internal users is so helpful.
You take that data set and you run those two prompts. You look at the time that it takes. Maybe you figure out how long it takes for users to open an email. Okay, that's my benchmark. I gotta get this ready within one to two seconds because sometimes people click on emails right away. Okay, which side is winning the two second rule?
Okay, this side is winning the two second rule. Okay, let's now look down every single one of the responses. Which one's janky? Ooh, that one's really janky. Why are you thanking so-and-so for saying goodbye to Sarah? Okay, one more column. Let's open up a new column. Now let's change your prompt a little bit.
Okay, determine the focus of the conversation. Figure out who is the best person to respond to. You run it again. Okay, you look at the examples. Okay, iterate again. It's just like you have to iterate, iterate, iterate. And you may not even know what you're looking for in the beginning. But as you do it, slowly you start to figure out what is your criteria for success.
You may already have some criteria for success: it has to be this fast, it has to have this amount of accuracy, but, like, what kind of accuracy on what kinds of problems? You get an intuition for that as you actually do it and actually play with it.
Hannah Clark: Okay, so this sounds like a huge emphasis on prompt engineering is like a key skill set to really master in order to be effective in this role.
How did you hone that skill set? It's just such a new practice. Did you have to take a course, or was this just something you learned on the job?
Lorilyn McCue: So two things, definitely learning on the job. I mean, literally it was me late at night trying a new thing. Every once in a while I would go like search the internet.
Is there something I can do here? What is going to solve this problem? We actually had a close relationship with OpenAI and they were able to give us some tips. Some of them are funny: say this in all caps? Really? I need to, like, yell at the LLM? Okay, I can do that. Or repeat this a second time, or make sure you put it at the end as well.
A great tip was, make sure there's an example there, a one shot example there. Make sure your fine tuning has examples of all the different options that you're having trouble with.
Hannah Clark: Okay, these are really interesting tips. I think that prompt engineering is an interesting field because we interact with these interfaces so much with plain language, it seems like it should be a very natural process in order to create a really effective prompt.
Do you have any other unexpected things that you've learned as you've refined and honed your ability to zero in on the most effective prompts in order to get the result that you're looking for?
Lorilyn McCue: Yeah. One of the tricks that I used that didn't usually end up in the final prompt, but I used while I was making the prompt was to have the LLM explain itself.
So let's take that instant reply as an example. I would say, okay, who are you replying to? And then I would say, why are you replying to that person? And then it would say, I am replying to that person because of X, Y, Z. And I'd be like, Oh, that's what is throwing you off. You very silly, slow uncle. Okay, now I can change the prompt a little bit to get a little bit more specific about that.
Okay, explain yourself again. And having the LLM explain itself helped me understand the thought process a little bit more so that I was able to make tweaks to the prompt. Yes, the explain yourself option was pretty helpful.
Hannah Clark: Yeah, I think that can be really helpful because sometimes the temptation is just to go back to the drawing board like I messed up the prompt somehow. We got to just rewrite the prompt, but okay, well, this is really practical stuff.
I want to come back to sort of like the business elements of AI. Obviously right now, AI features are becoming table stakes with regards to most digital products, but there's also a lot of complexities you have to balance as you're integrating AI. What are some of the challenges that Superhuman faced in a business context when deciding which AI models or tools to implement?
Lorilyn McCue: Yeah, so for those who know AI, you probably understand that I was referencing GPT-4 versus GPT-3.5. So GPT-4 is the, like, slow turtle and GPT-3.5 was the fast, not quite smart uncle.
I will say now this is not so much of a problem because GPT-4o and GPT-4o mini are so fast and they're really good. Which is interesting, because a problem that vexed me a year ago is, ah, not as big of a deal now, which is so interesting. But back then it was incredibly important, especially when you're talking about a scale on the magnitude of summarizing every single email that someone is getting.
That is a huge scale. That costs a lot of money. And we're really fortunate in that our CEO Rahul is all about quality. Pick the best option, get the best option. But what's interesting is the best option isn't always what gives the best response. It's also the latency, like what meets the criteria for our users' speed needs.
Speed, correctness. So for us, would we have been willing to pay for GPT-4? Yeah, we would have been willing to pay for GPT-4, but it just wasn't going to meet the needs of the users at that time. So, okay, well, let's use GPT-3.5, because it was available at the time, it was good enough, and it met our latency requirements.
And with a lot of prompt engineering, and PM time and QA time, we were able to get it to the right quality bar that we needed to meet the needs of the user. The business aspect is really important, but we were willing to pay the cost; whatever it cost to meet the needs of the users was the most important thing to us.
Hannah Clark: We'll switch gears a little bit. I want to talk a little bit more about your career development into AI product management, because, as I think I've told you before, everybody is really clamoring to develop this skill set and understand how to enrich themselves so their resumes stand out in what's kind of an AI free-for-all right now.
So, now that you've worked in the field and led a team in this field, what are some practical skills or areas of knowledge, or even designations or courses, that you would recommend for folks who are looking to develop AI-centric products?
Lorilyn McCue: I would say, I think hands-on learning is the most effective thing right now: getting into it, keeping on top of the different trends, the different things that are being released. There's something new being released every single week, it feels like.
Once that comes out, play with it. Use it. I would especially say find ways to use AI in your daily life, figure out what works. So a good example of this is I have a very cute couple of boys and they love coming into my office while I'm working. Guys, like, just give me a couple more minutes.
I'm almost done. I'm going to help you. Well, that never works. So what they really want is they want me to print them some coloring pages and then they'll go off and like color it. Okay, but at this point I have exhausted every single monster truck and World War 2 tank and battle scene on coloringpages.com.
So I like no longer have anything left. They're like no. Okay. Well, let's go to ChatGPT. Let's see what we can do. Make me a coloring page of a monster truck. Okay. Are you interested in this? So does this meet your requirements? No, I want 10. Okay. Make it 10 monster trucks. Does this meet your requirements?
No, I want them to be racing. Okay. And then you keep going around like that, and at some point, you know what your kids want, and you're able to say, make me a kid-appropriate coloring page of 10 monster trucks on a track, and make sure not to include any race cars. You figure out what works, and then you're able to print it. But I would say, finding these use cases in your everyday life, that's a personal example.
Let me find a professional example. So I do this with copy sometimes: okay, we need to explain to users in one tooltip how this feature works. Okay, it can only be so many words. It has to use this language. Let me put in a rough example and then ask for, like, 10 different variants. Okay, cool. Let me try another one.
There's something about using AI in your daily life that is, that just, I don't know, it gives you this knowledge, this understanding of what's possible. A new feature comes out, okay, give it a try. A new technology is available, a new model is available. Another company comes out okay, give it a try.
And a good example of this is I went into Perplexity and said, okay, list all of the different models that are out there and how much they cost per million tokens, make it a table. There it goes. Thank you. That's great. I don't know. You have to just do it. I think that's honestly the most useful thing.
Is that like a helpful answer? It's not a course you can take, but it's definitely something that you can do in your everyday life.
Hannah Clark: Very useful. And I also, I wasn't expecting the parenting tip, but I will definitely take that one.
Lorilyn McCue: It's a really good one.
Hannah Clark: Yeah. And my son is interested in the same stuff. So that'll be a fun little treat for the weekend.
Well, thank you so much for joining us today, Lorilyn. I really appreciate you coming and sharing some knowledge about AI product management. Hottest tip of the decade right now. Before we wrap up, where can listeners find your work and learn a little bit more about what you guys are building at Superhuman?
Lorilyn McCue: You can look at our Superhuman work at our Superhuman blog, superhuman.com. Click on blog. And then for me, I post on LinkedIn. So you can go to my LinkedIn, Lorilyn McCue.
Hannah Clark: Cool. Thank you so much for being here.
Lorilyn McCue: Thanks.
Hannah Clark: Thanks for listening in. For more great insights, how-to guides, and tool reviews, subscribe to our newsletter at theproductmanager.com/subscribe. You can hear more conversations like this by subscribing to The Product Manager, wherever you get your podcasts.