In the fast-paced world of Agile organizations, experimentation is the lifeblood of innovation and progress. However, the reality is that not all experiments yield positive outcomes.
In this episode, Hannah Clark is joined by Manuel Da Costa—Founder of Effective Experiments—to shed light on the phenomenon known as the product-process gap and discuss ways organizations can foster better experimentation practices.
Interview Highlights
- Meet Manuel Da Costa [01:07]
- Manuel is the founder of Effective Experiments, a company that helps businesses collaborate and make better product decisions through software.
- Manuel’s background is in the lean startup environment, where he learned about validating ideas through testing.
- He then transitioned into the conversion optimization field.
- This experience led him to create his own product focused on improving collaboration between product and customer experience (CX) teams.
- Understanding the Product-Process Gap [01:53]
- The product-process gap refers to the disconnect between what leadership expects from product management and the reality of what product managers and owners can deliver.
- This gap was identified in McKinsey research on product management practices.
- Challenges in Experimentation [02:50]
- The root of the product-process gap stems from the pressure on product teams to validate decisions through experimentation.
- Many product teams lack the experience or deep knowledge to run rigorous experiments.
- This leads to poorly designed experiments that don’t provide reliable results.
- Adding experimentation to the workload creates challenges with cross-team collaboration and prioritization.
- Product teams are tasked with integrating experimentation but lack the proper support to do so effectively.
- The unpreparedness of product managers for experimentation stems from the handoff of this skillset from marketing/CRO teams.
- There’s a lack of ongoing support beyond basic training, leaving product managers without proper resources to assess their experimentation effectiveness.
- Product managers may run flawed experiments due to a lack of deep knowledge (e.g., simplistic experiments, incorrect instrumentation).
- There’s an absence of oversight from leadership, who trust product teams’ results without verification.
- Senior leadership (VPs of Product, CPOs) often fails to hold teams accountable while also giving them the space and resources to improve their experimentation skills and decision-making.
- The pressure to deliver results and lack of time to properly implement experimentation create a gap between expectations and reality.
- Leadership’s Role in Bridging the Gap [07:49]
- Leaders should set KPIs that incentivize quality experimentation over simply launching a certain number of features.
- Allocate enough resources to coach and improve product teams on experimentation, as it’s an ongoing process.
- Equip product teams with the time, resources, and permission to experiment effectively.
- Foster a culture where experimentation is encouraged and failure is seen as a learning opportunity. This will lead to more innovative products.
- Lack of trust in experiment results: Leaders may disregard data and rely on intuition due to poorly designed experiments.
- Gut-based decision making: If data isn’t trusted, companies may revert to making decisions based on instinct.
- Stagnation and lack of innovation: Companies that avoid experimentation may struggle to keep up with the market.
- Focus on buzzwords over practices: Companies may say they’re data-driven or customer-focused, but their actions don’t reflect this.
- Feature factory mentality: Companies churn out features that aren’t aligned with customer needs or don’t deliver real value.
No one starts off knowing how to experiment correctly. It takes time, and it can’t be achieved through just a single training workshop. Teams need to be coached and mentored over time.
Manuel Da Costa
- Standardizing Experimentation Processes [12:30]
- Standardize experiment processes, data classification, terminology, and analysis practices.
- Implement a RACI model to clarify roles and communication.
- Ensure experiments are aligned with business objectives and KPIs.
- Encourage a more mindful approach to experiment prioritization, execution, and evaluation.
- Establish clear oversight for coaching and improvement, not punishment.
It’s about ensuring there is clear oversight. When I say clear oversight, I mean that anyone can look back at a team or a person and analyze how well they’re doing. This isn’t about finding fault; it’s about understanding where the gaps are and how to coach them better.
Manuel Da Costa
- Fostering Trust for Better Experimentation [16:12]
- Foster psychological safety by emphasizing learnings over wins and losses in experiments.
- Shift experiment language to focus on validating customer needs or saving development costs.
- Move away from “experimenting for the sake of it” and focus on learning and improvement.
- Address misaligned KPIs that hinder experimentation by getting leadership involved.
- Overseeing and Scaling Product Teams [18:21]
- Implement an “orchestrator” role to oversee and coach smaller teams of product managers.
- Orchestrators are responsible for onboarding, coaching, and mentoring on experimentation best practices.
- The number of teams an orchestrator oversees depends on individual team size.
- Governance Frameworks and Oversight [20:20]
- Create a governance framework that outlines required and optional data points for experiment creation.
- Implement a mandatory QA process for experiments before launch.
- Develop a health scorecard to track experiment progress through a defined process.
- Use the health scorecard to assess experiment integrity and identify areas for coaching.
- Success Stories of Closing the Product-Process Gap [24:56]
- Used SWOT analysis to identify teams receptive to experimentation.
- Onboarded receptive teams in 3-month sprints, creating a “social proof” effect.
- Fostered a community of practice among these teams for ongoing support.
- Achieved significant improvement in product team experimentation skills within 2 years.
- New team members were easily integrated due to established frameworks and processes.
Meet Our Guest
Manuel is the founder of Effective Experiments, a company that helps global enterprise organizations drive innovation by growing well-orchestrated experimentation practices across the business.
If product owners and product managers are not given a sense of safety, they will choose what they feel is the safest option to meet their KPIs. This leads to safe product decisions but never innovative ones.
Manuel Da Costa
Resources From This Episode:
- Subscribe to The Product Manager newsletter
- Connect with Manuel on LinkedIn
- Check out Effective Experiments
Read The Transcript:
We’re trying out transcribing our podcasts using a software program. Please forgive any typos as the bot isn’t correct 100% of the time.
Hannah Clark: One of the wonderful things about Agile organizations is that we're constantly experimenting and learning to improve our orgs and the products we offer. Every breakthrough in tech is the result of some form of experimentation, but here's the flip side: experiments don't always lead to positive outcomes. In fact, many experiments are highly flawed—and when hundreds of thousands of dollars are riding on an experiment's outcome, the last thing we want is to make decisions with faulty data. And here's the scary part: most of the time, we don't even realize the data is faulty until it's too late.
My guest today is Manuel Da Costa, the Founder of Effective Experiments. As you can probably deduce from the name of his organization, Manuel is principally concerned with helping orgs conduct better experiments, leading to better product decision making. While the right tools can certainly help us improve experimentation and data analysis, he's also identified a phenomenon that originates in the margins of the org chart called the product-process gap—and it's quietly costing tech companies millions. Let's jump in.
Welcome back to The Product Manager podcast. I'm here today with Manuel Da Costa.
Manuel, thank you so much for joining us today.
Manuel Da Costa: Hey everyone! Hannah, thanks for having me on.
Hannah Clark: So can you tell us a little bit about your background and how you got to where you are today?
Manuel Da Costa: Sure. So I'm the founder of Effective Experiments. We're a company that provides software to help companies collaborate better and make better product decisions. My background started way back in the lean startup environment, where I learned about this test-and-learn approach of validating ideas. I made my way into the conversion optimization world, and gradually into my own product, where we're really helping product and CX teams work better together.
And that brings me to this point in my career now where I need to tell this story, which is what we're here for on this podcast.
Hannah Clark: I'm excited to dig into it.
What we're going to be focusing on today is a phenomenon that you've called the product-process gap. So can you start us off by giving us a rundown of what this refers to and the circumstances that surround it?
Manuel Da Costa: Yeah, sure. So McKinsey has done a bit of research recently where they looked at the gap between good and bad product management practices, and it really comes down to the expectation from leadership on how product should be executed, versus what is really happening on the ground level with product managers executing. There's this disconnect that we've spotted, and I gave it the term product-process gap, but it's really the gap between expectations and reality.
And there are a lot of reasons why this happens, and we'll go into them, but ultimately management wants better product, and product managers and product owners are not delivering on that.
Hannah Clark: Let's go a little bit further into that. You said there are a lot of factors for why that could happen. So what's the journey, or the origin, of this gap occurring, and how do we get there?
Manuel Da Costa: Sure. So let's look at what product teams are being asked to do these days, which is validate every product decision that they make. And experimentation has become this vehicle for product teams to essentially validate their hypotheses, validate whether they're putting out the right product.
For a lot of these product teams that have been tasked with running experiments, they haven't had the experience or the know-how; or even if they do have the knowledge, it's more surface-level knowledge on how to experiment. So as a result, you're getting decisions made through experimentation, but the actual experimentation itself may be faulty. It may not be rigorous enough. And there are also additional elements around that.
So we're talking about cross-team collaboration, the awareness of what to do, and also how to prioritize efficiently. Because there are so many conflicting priorities in product around what teams should be doing, and with experimentation thrown into the mix now, it's about how to effectively prioritize that backlog for maximum output.
These are some of the symptoms we've seen on the ground level, where, as I said, product teams are being asked to execute, but also embed experimentation into their practices, but they're not given the support to help them with it.
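To put a number on what "rigorous enough" can mean, here's a minimal sketch (ours, not Manuel's, with invented figures) of the kind of significance check a team might run before treating a lift as real:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Invented numbers: 10,000 users per arm, 2.0% vs 2.3% conversion.
p_value = two_proportion_z_test(conv_a=200, n_a=10_000, conv_b=230, n_b=10_000)
print(f"p = {p_value:.2f}")  # ~0.14: a 15% relative lift, yet indistinguishable from noise
```

A team reading only the raw rates would declare variant B the winner; the test says the evidence isn't there at this sample size, which is exactly the kind of faulty readout being described here.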
Hannah Clark: So what's missing in this equation? It sounds like there's almost a UXR skill set that's underdeveloped in product managers who are placed in this position, and that creates a domino effect of faulty output. What do you see as the main contributors to the unpreparedness that product managers find themselves in?
Manuel Da Costa: So let's look at where experimentation is being pulled in from. For a lot of organizations, experimentation was done primarily by a marketing team or a CRO team, the conversion optimization team.
Maybe it was a few people, maybe it was a small center of excellence team. And gradually, they've been asked to now pass on the skills to product teams so that they can be self-reliant in running their own experiments. The first challenge starts with that lack of knowledge. And you may see organizations where they will run some training workshops, they may run some guided sessions where they can show them how to do it.
But that's where it stops. They've been given access to some tools, they've been given surface-level training, but there are no resources to help them really understand whether they are improving or not. So as a result, they go by the numbers. They just say, we've run X amount of experiments as the output, but there's no one monitoring and reviewing whether they planned the experiment correctly, executed it correctly, and analyzed it correctly.
That skill set itself takes quite a lot of time to develop. Think of it like a muscle; it's not something that you can just do once and then you're good at it. There are product managers out there that will run good experiments, but they may be too simplistic. They may run complex experiments, but instrumented in the wrong way. So as a result, when they pass information on to leadership to say, here's what we believe should be done as the next steps, it may be based on faulty information, but they don't know that, because there's no oversight on it.
Which brings me to the next point, that oversight element is missing. And quite often there's a level of trust given to these teams to say, right, go and run the experiments and we'll believe whatever you pass on back to us. Because of the lack of resources, no one actually spends that time in verifying those results to see if they're actually accurate.
Or whether those results came about through good practices or bad practices. So that's one factor. The other factor in all of this is the absence of the leadership level. And when I say the leadership level, I'm not talking about the product owner; I'm talking about the VPs of Product, the CPOs. They're the ones that need to really hold those teams accountable.
And not just hold them accountable, but also give them some kind of leeway where they can spend the time exercising that muscle and learning how to experiment better, learning how to make better decisions and also prioritize better. And because of these two factors, you get this gulf forming between those expectations and the reality of it. Because product managers and the product specialists are struggling because they don't have the bandwidth to slow down and get these things in place.
So, one part of the responsibility lies with leadership because they need to give these teams the remit, but also the bandwidth to be able to get better at this.
Hannah Clark: So in practical terms, what does that look like from a leadership perspective? Like what are some of the steps that leaders can be taking to circumvent opening that product-process gap?
Manuel Da Costa: Sure. The first thing is setting the right KPIs. The main thing we see is when there are perverse incentives put in place. And when I say perverse incentives, what I want to highlight is that they may say, you need to launch X amount of features this quarter or this year, and we need to hit that number.
So it becomes a quantity game, not about the quality aspect. Now you could launch a complex feature or multiple simple features, as long as you're hitting that number. So it's about setting the right KPIs for these teams so they're incentivized in the right way, rather than being able to do anything just to hit that milestone and get away with it.
The second thing is providing the teams and the people that oversee them with enough bandwidth again to coach and monitor and improve these product teams to make those better decisions over time. As I said, no one starts off knowing how to experiment correctly. That takes time and it can't be that you just do a training workshop and that's the end of it.
These teams need to be coached and mentored over time. What we find is that the lack of bandwidth comes from needing to hit some numbers; and because they don't have bandwidth, they don't improve. So it's a vicious cycle that just keeps happening. It starts with leadership saying: we want to improve as an organization by making better product decisions.
How do we make better product decisions? That comes from experimenting and validating our assumptions. That's the first step. And in order to do that, we need to equip our product teams to validate those assumptions. So we give them the time, we give them the bandwidth, we give them the remit, but we also give them a level of safety to be wrong, because the main thing about experimentation is that you can be wrong. If you set the KPIs around that quantity aspect, or even say we want you to launch experiments with a certain success rate, you will get people gaming the system to hit that number.
Experimentation is really all about, we have an idea that something might work, but we don't know until the rubber hits the road. We don't know whether this will work or not. And in doing so, when we put it out there, we get feedback from our customers. We get feedback from the market to say, this is exactly how it works.
And then we can make the decision to continue down that path or not. And that safety element is quite important because if product owners and product managers are not given that safety, they will go with what they feel is the safest thing to help them hit their KPIs. And so you get safe product decisions, but never innovative product decisions.
Hannah Clark: I can see a world in which this product-process gap flies under the radar: KPIs are being met, people are generally happy, the business is staying afloat, but there's something missing. What are some of the symptoms that an organization is quietly suffering from this gap, that there could be some significant improvements, and that there might be some faulty decisions being made somewhere along the line due to faulty experimentation?
Manuel Da Costa: Yeah, I mean, if there's faulty experimentation happening, we've already seen this where you get CPOs and VPs saying, we don't actually trust these experiment results. So gut-based decision making happens, right? People are making decisions based on gut, even though there's some data out there, because they don't trust that data, so it just gets disregarded straight away.
And if a company is safe right now, let's emphasize those words: for now. They don't experiment, they don't innovate, and they play it safe. Sooner or later, market forces will either make them work harder to improve, or decide the fate of that company. But that lack of trust in experiment results, and in the decision making, is what we need to improve first of all.
That's one of the things. The other thing is companies may talk about the fact that they're data-driven, product-focused, customer-focused, but these are just buzzwords, and you will see that in their practices rather than in what they talk about out in public. And you will see, as I say, features and product updates that are not really aligned with what customers want.
And it's just a feature factory at the end of the day. They're pushing things out for the sake of it, rather than actually delivering value for the customer or for the business as a whole.
Hannah Clark: So do you have any frameworks that you would prescribe to run better experiments, or what kind of action should we be taking in a very practical sense?
Because it sounds like this is an issue with many contributors to the breakdown that causes this kind of chasm. So what's step one, as far as you see it, if we recognize that this phenomenon is happening within our organization?
Manuel Da Costa: Absolutely. And I think the first step is about standardizing things. I've talked about how, when product teams are given the tools to run experiments, you can look at two different teams in the same organization and they start deviating in the ways they run experiments, how they classify the data, how they capture the data. So the first step is understanding what the process should be.
And once you know what that process should be, you standardize it. And when I say standardize, people get quite worked up because they feel like, oh, we have to be really rigid in our ways, there's no room for flexibility. But the challenge with a too-flexible approach, on the other end of the spectrum, is that chaos ensues over time, and the data becomes almost useless to anyone looking at it in the future.
Standardizing things means you standardize the process, you standardize the data classification and the nomenclature that you use for it, and you standardize the practices in how things are analyzed and who signs off on it, right? Implementing things like a RACI model: who's responsible, who's accountable, who needs to be consulted and informed.
Those things seem like they're rigid, but essentially what they're doing is they're creating guardrails for people. And it doesn't matter if the team then changes over time, new people join that team, someone leaves and goes to another team or leaves the company. It means that that practice is embedded in every experiment that's carried out, every decision that's made on the back of that.
The other thing is really drilling down into the connection between those decisions and business objectives. So rather than just doing spray and pray with our decisions, they are aligned not just with your KPIs, but also with business objectives and business goals. Once you get those two things in place, you have a much better foundation to go off. And it will be slow going to start with, because it will take people time to get used to being more mindful about how they prioritize their backlog, how they decide what to experiment on, how they launch those experiments, and then how they evaluate whether that becomes a full feature they move forward with, or whether they scrap it. Those aspects form the foundation.
After that, it's about ensuring that there is clear oversight. And when I say clear oversight, it means that anyone can then look back at a team or a person and analyze how well they're doing. And that's not from a point of view of finding fault. It's about understanding where the gaps are and how you can coach them better.
It's about saying, for example: you've launched five experiments this quarter, but some of them don't have a strong hypothesis, or you haven't picked the right metrics when evaluating them, and here's how you can improve. And that's the only way people improve. You give them feedback and you monitor them, not from a point of view of giving them negative feedback, but giving them constructive feedback they can then improve on.
Those two things in tandem will then raise that organization's baseline and start closing this gap.
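As an illustration of what that kind of standardization can look like in practice, here's a minimal sketch in Python; the field names and roles are our invention for this example, not Effective Experiments' actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Raci:
    """Who is Responsible, Accountable, Consulted, and Informed for one experiment."""
    responsible: str                  # runs the experiment day to day
    accountable: str                  # signs off on the plan and the analysis
    consulted: list = field(default_factory=list)
    informed: list = field(default_factory=list)

@dataclass
class ExperimentRecord:
    """One standardized record, so every team captures the same fields the same way,
    and the data stays legible even as people join or leave the team."""
    name: str
    hypothesis: str                   # written up front, never backfilled
    business_objective: str           # the goal this decision ladders up to
    primary_metric: str
    audience: str
    raci: Raci

exp = ExperimentRecord(
    name="checkout-copy-v2",
    hypothesis="Showing shipping costs earlier reduces checkout abandonment.",
    business_objective="Q3: lift completed checkouts",
    primary_metric="checkout_completion_rate",
    audience="new web visitors",
    raci=Raci(responsible="Growth squad PM", accountable="Group PM",
              consulted=["Data science"], informed=["VP of Product"]),
)
```

The guardrail isn't the code itself; it's that every experiment, from every team, answers the same questions in the same vocabulary before anyone acts on it.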
Hannah Clark: What's interesting about this is that it really sounds like there's an element of emotional intelligence that's really required at the leadership level in order to be effective at managing in this way. Because a lot relies on the ability of the leadership team and the product team to have that kind of a secure relationship in which there is that sense of psychological safety in order to make mistakes and in order to have experiments that don't pan out as expected.
So what kinds of tips would you recommend in order to foster stronger relationships in between teams and especially across the org chart? How do we get a little bit better at trusting one another when we're working closely, especially in a new process like this?
Manuel Da Costa: Absolutely. It's easier said than done, because there are humans involved, there are egos involved, and people have to make themselves vulnerable, right? That's the hardest thing. But what I find is, again, it starts with leadership providing some level of psychological safety, and shifting the language slightly as well.
So we don't say this test succeeded or this test failed; you focus mainly on the learnings. You shift the conversation from wins and losses to: by running an experiment, we validated the need for this feature from the customer, and this is how we have helped the company.
This is the potential of this experiment. Or: by doing this, we actually invalidated it. We did not see the need to build this feature, and in doing so, we saved our development teams whatever cost, and this is what we've learned, which we're now taking further to learn more. It's really about shifting the mindset away from experimenting for the sake of it or experimenting to win, which is why KPIs, and how you shape those KPIs, are so important.
And while product managers, product specialists, and owners might be listening to this and saying, yeah, we would love to do this, but we can't because our KPIs are different, this is why it's really important that we get leadership to pay attention to this. Because if we don't, there are all sorts of side effects: you're going to get organizations that become feature factories at the end of the day. It might seem like we're getting stuff out, but really, to what extent?
Hannah Clark: Yeah, that's a good point.
I'm also a little bit curious. While we're speaking about the practical elements at the leadership level, naturally KPIs will vary from team to team, but team sizes will vary too. How do we effectively oversee a product team as we start to scale, as new hires come on who have a different level of knowledge, or less internal knowledge, versus someone who's more senior?
How do we standardize and ensure that we're providing the correct level of oversight, especially when, you know, we're quite outnumbered as leaders by the team members under our guidance?
Manuel Da Costa: In our world, we've come up with this role of an orchestrator. Depending on the number of teams and the number of people in those teams, you create an orchestrator role that oversees a smaller number of teams or a certain number of people. They can be responsible for onboarding, coaching, and mentoring these teams, ensuring that everything within their world proceeds as it should, and they can then report to the next level up.
So rather than having one leader oversee everyone, you want to have one person that's responsible for a team or a number of teams, and their role specifically is about the onboarding element, the coaching element, and the mentoring element to ensure that the people that are under them run better experiments and make the right product decisions.
And perform within the guardrails that have been set in the process and the structures themselves. I think that's really important when you look at how the organization is set up. So an orchestrator can oversee multiple teams, but again, it depends on the team size itself.
Hannah Clark: Yeah, I'm really with you also with the concept of guardrails. I think that it's one of those things that makes it sound disempowering. But in practice, I've found that having firm guardrails in place tends to reduce decision fatigue, and it makes people's jobs much clearer when people don't feel like they're continuously having to iterate on their processes in addition to the products that they're building. So I do think that there's a real strong case for that.
I am curious, though, because what we're kind of talking about now is also the need to standardize oversight. And that I think is a little bit more abstract in terms of how do we standardize the process that we're using to oversee a team, even if we have a smaller team under our wing.
What kind of recommendations do you have for standardizing the oversight approach, especially if you as a leader don't necessarily have a very strong background in experimentation yourself?
Manuel Da Costa: So it all starts with creating a governance framework. And if you look at what that involves, it's creating the rules for how an experiment is created.
When I say how an experiment is created, I mean: what are the key things you need to capture when you create an experiment? Some of it might be fairly basic. Some of it might be required, like a hypothesis, a good hypothesis, right? There's a difference between a hypothesis and a really strong hypothesis.
That's one aspect, but then also, what other details do you capture? And when I say that, I'm looking at information that is useful for the creation of that experiment, like the technical details, who the audience is, and things like that, but also other aspects that are relevant and important for the business. So when we create a governance framework, we say that when anyone in the organization creates an experiment, there are certain fields or certain data points that need to be captured.
There are no ifs or buts on this; it has to be captured. That's the bare minimum. And then there are other optional aspects, right? So that's one aspect which can be, again, I hate to use the word, enforced in the nicest possible way. But these are guardrails that protect the organization from ending up sitting on a pile of useless data, because that can also happen.
If you give too much flexibility, you'll have variance, with different teams having their own interpretation of what an experiment looks like. So you have a governance framework that dictates that the creation of an experiment requires you to capture the business-relevant and the technically relevant information.
Then there's the process that follows. You can't have an experiment being designed, executed, and analyzed without it going through certain steps. For example, you wouldn't launch a feature without doing QA on it; the same goes for an experiment. You wouldn't launch an experiment without it being QA'd. So you have a governance framework that says an experiment can't pass through certain stages without passing through QA, for example.
That's a non-negotiable. It has to go through that. And that means every experiment follows a path and it can't deviate from that path. And if it does deviate, then that's where it can be monitored and we can find out why that is happening. Now, this is where this concept that we've put together of a health scorecard comes into play.
And the health scorecard, it's a checklist of certain points on which an experiment can be evaluated. For example, was there a hypothesis created? Yes or no? The second aspect, when was it created? Was it created when you came up with the experiment? Or was it further down the line once you're trying to justify the experiment?
Because that also happens. We've seen a trend, and it's not a term we developed, we've come across it. It's called HARKing: hypothesizing after the results are known. There's a saying that if you strangle data long enough, it'll tell you whatever you want to know. And that's pretty much what happens sometimes, where you have experiments that don't really succeed.
They don't win. But then you can shift the metrics, you can shift the hypothesis, and lo and behold, now the experiment has won. Governance frameworks and guardrails are important because they prevent these things from happening. And I'm not saying people set out to do this willingly. But it goes back to that point: their KPIs are to hit a certain number of product features, or run a certain number of experiments, or hit a certain success rate.
Now they're shifting everything to hit that number. So this is where those governance frameworks come in, and, going back to the health score, it can keep track of these things. Did it follow the process that we set out? Were there any changes made to the critical pieces of information on that experiment once it was completed?
And we can then start monitoring. The health score can give you a picture of whether the experiment was executed correctly or not. Then ultimately you can say, we trust this experiment and the outcomes and the decisions we made from it, or we can't trust this experiment, and here are the reasons why. But again, it's an opportunity to coach.
It's an opportunity to get the organization ultimately making better product decisions, but also running better experiments to get there.
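Here's a minimal sketch of how a governance gate and health scorecard like the ones Manuel describes could be wired up; the stage names, required fields, and checks are illustrative assumptions, not his actual framework:

```python
REQUIRED_FIELDS = ("hypothesis", "primary_metric", "audience", "business_objective")
STAGES = ("draft", "review", "qa", "live", "analyzed")

def can_advance(exp: dict, to_stage: str) -> bool:
    """Experiments move through STAGES in order; going live requires passed QA
    and every required field filled in. No skipping, no ifs or buts."""
    if STAGES.index(to_stage) != STAGES.index(exp["stage"]) + 1:
        return False  # stages can't be skipped or reordered
    if to_stage == "live":
        return exp.get("qa_passed", False) and all(exp.get(f) for f in REQUIRED_FIELDS)
    return True

def health_score(exp: dict) -> dict:
    """Yes/no checks an overseer can review later: a coaching tool, not a verdict."""
    return {
        "has_hypothesis": bool(exp.get("hypothesis")),
        # Anti-HARKing check: was the hypothesis written at creation time,
        # or backfilled once the results were known?
        "hypothesis_written_at_creation": exp.get("hypothesis_set_at_creation", False),
        "passed_qa": exp.get("qa_passed", False),
        "metrics_unchanged_after_launch": not exp.get("metrics_edited_post_launch", False),
    }

experiment = {
    "stage": "qa", "qa_passed": True,
    "hypothesis": "Fewer form fields lift signup completion.",
    "primary_metric": "signup_completion_rate", "audience": "new web visitors",
    "business_objective": "Q3: grow activated accounts",
    "hypothesis_set_at_creation": True, "metrics_edited_post_launch": False,
}
print(can_advance(experiment, "live"))  # True: QA passed, all required fields captured
print(health_score(experiment))         # every check passes, so nothing to coach here
```

The point of the scorecard returning named yes/no checks, rather than a single pass/fail, is that it shows an overseer exactly where to coach.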
Hannah Clark: I really appreciate the frameworks that you provided here. This is really helpful.
Do you have any success stories that are top of mind of an organization that was able to transform their outcomes by eliminating the product-process gap?
Manuel Da Costa: I can't give the name of the company, but we did exactly this with a company that was moving from a center of excellence team to product teams, but also market teams. They were a global ecommerce company with a wide range of teams: market teams, but also product teams that, again, had just been introduced to testing at an early stage.
With that, what was important was that we didn't introduce every single product team and every single market team to experimentation straight away. What we did was create a SWOT analysis of every team, and we scored them on how strong their knowledge of experimentation was. The other factor was how keen and willing they were to be involved in it.
And then we started onboarding the teams that scored highly on those, and gradually used them as social proof. We called them three-month sprints: each sprint had a plan in place to get these teams coached and onboarded to a point where they were self-reliant by the end of that three-month program.
Once we got a few teams onboarded and they were going well, we used them as social proof to get the next wave of people on, while also providing that community support between those teams. So we solidified them over a two-year period. It was quite a long process; it wasn't, again, something that happens overnight, especially at a larger company.
Culture change doesn't happen overnight. This is the process that we went through. At the end of that two-year period, we had product teams that were much more well-versed in how to run these experiments. And even when they had turnover, people leaving the company and new people joining, the ramp time for a new employee to get embedded in this process was less than a month, because they had all the frameworks in place.
They had the systems in place and they knew exactly how to do it. They had everything set up so they could do no wrong, let's put it that way, because it was much easier to follow the process than to try and bring their own spin on things.
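The team scoring Manuel describes can be as simple as two numbers per team. A back-of-the-envelope sketch, with hypothetical teams and scores:

```python
# Hypothetical 1-5 scores on the two axes described: existing experimentation
# knowledge, and willingness to take part.
teams = {
    "search":   {"knowledge": 4, "willingness": 5},
    "checkout": {"knowledge": 3, "willingness": 4},
    "mobile":   {"knowledge": 2, "willingness": 2},
    "loyalty":  {"knowledge": 1, "willingness": 3},
}

# Onboard the most receptive teams first; each later wave inherits the
# social proof (and the community support) built by the wave before it.
ranked = sorted(teams, key=lambda t: teams[t]["knowledge"] + teams[t]["willingness"],
                reverse=True)
waves = [ranked[i:i + 2] for i in range(0, len(ranked), 2)]  # one 3-month sprint per wave
print(waves)  # [['search', 'checkout'], ['mobile', 'loyalty']]
```

Highly ranked teams go first; each later wave starts with the proof points built by the one before it.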
Hannah Clark: I really appreciate the examples that you've shared and this has been really informative.
So thank you so much Manuel for joining us. Where can people follow you online if they're curious to learn more about your work and effective experiments?
Manuel Da Costa: You can find me on LinkedIn. I'm quite active on LinkedIn. You can also visit EffectiveExperiments.com and learn more about what we do on there. And I also post quite a lot of content on our blog as well. Please feel free to follow me on those two resources.
Hannah Clark: Fantastic. Thank you so much.
Thanks for listening in. For more great insights, how-to guides, and tool reviews, subscribe to our newsletter at theproductmanager.com/subscribe. You can hear more conversations like this by subscribing to The Product Manager, wherever you get your podcasts.