“I bet the people who are in the auto industry right now have more than 10,000 good ideas about what might work and what we need to do is not come up with more good ideas. We need to go and test as many of those good ideas as possible.” – Eric Ries
In the previous chapter, we discussed why product teams need actionable metrics to understand how their bets are performing, and why connecting outcomes to metrics helps us see whether we are moving in the right direction. In this chapter, we move to the next step of the Three-Stream Loop – Validating Learning. Eric Ries’ quote above perfectly illustrates the importance of testing our assumptions.
The importance of data democratisation
Before we dive into the antipatterns and patterns of validating learning, we need to discuss one key foundational topic – one that is sometimes glossed over in organisations, or simply not considered: the importance of data democratisation. According to Arpit Choudhury, “data democratisation is the ongoing process of enabling everybody in an organisation, irrespective of their technical know-how, to work with data comfortably, to feel confident talking about it, and, as a result, make data-informed decisions and build customer experiences powered by data.” (Amplitude, 2022)
The bottom line is this: for the patterns we suggest in this chapter, product usage data needs to be accessible to everyone on a product team, regardless of job title, and everyone needs sufficient foundational data fluency to help Product Managers make decisions. From our research and experience, we have learned that diversity of inputs plays the biggest role in the success of decisions. Not the tooling, not the designs, not the user testing – diversity of inputs. This means developers, testers and designers are equipped to instrument and read product analytics and to interpret how users behave with their product, rather than having this analysis siloed in a Data Analytics organisation. We are huge fans of Data Analysts and Data Scientists, but we have found they can have even more impact when they mentor and train cross-functional team members to interpret data, freeing themselves up to focus on more strategic work.
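To make this concrete, here is a minimal sketch of what instrumenting a product analytics event might look like when any team member can do it. Everything here is illustrative – the `analytics` client, the event name and the properties are hypothetical stand-ins for whatever tooling your organisation uses; most product analytics SDKs expose a similar track-style call.

```typescript
// Hypothetical analytics client; most product analytics SDKs
// expose a comparable track-style call.
interface AnalyticsClient {
  track(eventName: string, properties?: Record<string, string | number | boolean>): void;
}

declare const analytics: AnalyticsClient;

// A developer or tester on the team instruments a behavioural event
// directly, rather than raising a ticket with a separate analytics team.
// Event name and properties below are illustrative.
function onExportReportClicked(format: string, rowCount: number): void {
  analytics.track("report_exported", {
    format,               // e.g. "csv" or "pdf"
    row_count: rowCount,  // size of the export, useful for segmentation
    surface: "dashboard", // where in the product the action happened
  });
}
```

The value is less in the code itself and more in who writes and reads it: when the whole team can add and query events like this, the data conversation happens inside the team.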
Therefore, when we discuss patterns such as Hypothesis Driven Development, we discuss them through a data democratisation lens – empowered product teams not just designing experiments, but discussing how they will instrument success metrics, reading those metrics, and providing their diverse inputs to Product Managers to help inform the next decision. Patterns such as OKR Reviews and outcome visualisation also feed into this.
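As a taste of what is to come, a hypothesis can be captured in a lightweight, structured form that the whole team helps instrument and read. The sketch below is purely illustrative – the `Hypothesis` shape, field names and figures are hypothetical examples, not a prescribed schema:

```typescript
// Illustrative shape for a team-owned hypothesis card.
// Field names and values are hypothetical, not a prescribed schema.
interface Hypothesis {
  belief: string;          // "We believe that <this change> ..."
  expectedOutcome: string; // "... will result in <this outcome>"
  successMetric: string;   // the instrumented metric the team will read
  target: string;          // the measurable signal that validates (or refutes) the belief
  review: string;          // when the team will review the data together
}

const checkoutHypothesis: Hypothesis = {
  belief: "Reducing the checkout flow from five steps to three",
  expectedOutcome: "will increase completed purchases",
  successMetric: "checkout_completed events per checkout_started event",
  target: "conversion improves from 62% to 70% within 4 weeks",
  review: "at the next fortnightly OKR Review",
};
```

The point is not the shape itself, but that the success metric and target are explicit and readable by everyone on the team, so the review becomes a shared conversation rather than a handover.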
Okay, so we’ve got the basics in place. Time to explore how we might go about validating learning!
Antipattern: The fortune teller
Antipattern: Betting all the chips on the prototype
Antipattern: Throw it over to the Support team
Antipattern: How much did it cost?
Summary
In this chapter, we built upon our earlier learning about metrics by discussing why testing assumptions is key, and how regular feedback cycles to review our hypotheses help us validate learning more quickly.
We explored:
- Why data democratisation is a key foundational element to validating learning.
- What Hypothesis Driven Development is and how it helps us break free from the fortune teller trap.
- Different approaches to A/B testing, and why techniques such as multivariate testing might be more appropriate at certain times.
- How to kill Zombie OKRs using regular OKR Reviews.
- Why long-term outcome ownership is key to reducing learning loss and handovers.
- A technique to visualise and review actual outcome realisation.
In the next chapter, now that we have validated our learning using actionable metrics, we will discuss how to move towards Taking Action.
Next Reading: Chapter 4: Taking Action