In this spotlight session at #mtpcon London 2023, David Carlile, Senior Director of Product Strategy at Optimizely, shared insights on how we can reduce risk and fail faster by building experimentation into the product development lifecycle. Watch the video in full, or read on for his key points.
Product development and experimentation
David walks us through the product development process, showing how to fail effectively and, as a result, build better features. Here are some examples of questions to ask at each stage:
- Discover: Which features should be added to our roadmap and why?
- Design: What should these features look like, and how should they be built?
- Build: What is the most efficient and cost-effective way to build this feature?
- Validate: How can we validate that this feature works?
- Roll-out: How can we roll out this feature in the most efficient way, whilst reducing risk?
- Iterate: How can we optimise this existing feature?
This is a very similar process to the experimentation side of product management, he explains:
- Discover: Test to learn: Painted door tests to validate demand
- Design: Test to decide: User studies and rapid prototyping to validate feature design
- Build: Use feature flags to limit blast radius
- Validate: Test to measure: roll out test to prove behavioural change and business impact
- Roll-out: Staged rollouts to monitor the quality and performance of the experience
- Iterate: Test to learn: Experiments on iterations of features to optimise user experience and business impact
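The "limit blast radius" and "staged rollouts" steps above are commonly implemented by bucketing each user deterministically and enabling a flag for a growing percentage of the population. The sketch below is a minimal, generic illustration of that technique, not Optimizely's actual implementation; the function names and salt are hypothetical.

```python
import hashlib

def bucket(user_id: str, salt: str = "rollout-v1") -> float:
    """Deterministically map a user to [0, 1) so rollout decisions are stable."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return int(digest[:8], 16) / 2**32

def is_enabled(user_id: str, rollout_pct: float) -> bool:
    """Enable the feature for rollout_pct percent of users (0-100).

    Because bucketing is deterministic, a user who saw the feature at 10%
    keeps seeing it at 50% and 100% - the rollout only ever widens.
    """
    return bucket(user_id) < rollout_pct / 100.0
```

Keeping the bucketing deterministic is what makes a staged rollout safe to monitor: each stage adds new users without flipping the experience for existing ones, and rolling back is just lowering the percentage.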
Common challenges
When applying this process to both product development and experimentation, David shares a few common challenges we may face.
Ship-first mentality: This would mean not asking enough questions about whether we’re shipping the right thing. David explains that we can use testing to validate our assumptions before we invest in building products.
Productivity threat: Often, building a rigorous case for a product or feature can be seen as a blocker to productivity. He encourages us to push back on this perception to ensure new products and features are properly validated.
Org-wide buy-in: It can be difficult to influence the whole business to invest in this new process, especially if the business has an established product, feature set, or way of working.
Validation only: David adds that this development cycle can be perceived as a final stage that simply rubber-stamps a product or feature we had already decided to proceed with. It’s important to integrate validation earlier in the product development lifecycle, he explains.
Build the right ‘thing’
To ensure that we’re building the right products, David breaks down three product test types. He also includes examples of companies that experiment and validate well, such as the publishing platform Medium.
- Test to learn: Learn about user demand and interest in new features. Learn how users respond to your feature and why they respond the way they do.
- Test to decide: Decide on feature design, build and roll-out strategy.
- Test to measure: Validate the impact of the feature on user experience and/or key business metrics.
In terms of collaboration, David breaks down the key responsibilities of each team.
- Product teams – Testing to learn: Exploratory testing to validate demand.
- Engineering – Testing to decide: Feature flagging and easy rollouts, decoupling code release from deployment.
- Marketing – Testing to measure: Message validation and personalisation, quantifying the impact.
This showcases how businesses can optimise experiences and test consistently across teams and channels, end to end.
Measuring continuous optimisation
David says that constant iteration and repetition are key. He lists some metrics we can consider to validate program performance:
Program metrics
- Test velocity
- Conclusive rate
- Win rate / Lose rate
- Roll-out and roll-back rate
- Test set-up time
- Tests per initiative
- % product discovery tests
- Test duration
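Several of the program metrics above are simple ratios over a log of past experiments. The sketch below shows one hypothetical way to compute a few of them; the `Experiment` fields and the choice to compute win rate only over conclusive tests are illustrative assumptions, not definitions from the talk.

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    conclusive: bool   # reached a statistically conclusive result either way
    won: bool          # the primary metric improved
    rolled_out: bool   # shipped to 100% of users after the test

def program_metrics(experiments: list[Experiment], weeks: int) -> dict:
    """Summarise experimentation program health over a period of `weeks`."""
    n = len(experiments)
    conclusive = [e for e in experiments if e.conclusive]
    return {
        "test_velocity": n / weeks,  # tests launched per week
        "conclusive_rate": len(conclusive) / n,
        # Win rate among conclusive tests; guard against division by zero.
        "win_rate": sum(e.won for e in conclusive) / max(len(conclusive), 1),
        "rollout_rate": sum(e.rolled_out for e in experiments) / n,
    }
```

As David notes below, the hard part is usually not the arithmetic but capturing and cataloguing the experiment records in the first place.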
Although these metrics are important, David mentions that value metrics are equally critical for estimating the effect a new product or feature is having.
Value metrics
- Positive impact
- Negative impact
- Annual estimated impact
- ROI
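A back-of-the-envelope version of the value metrics above can be expressed in a few lines. This is an illustrative sketch under simplifying assumptions (a constant weekly lift extrapolated over a year, and "avoided losses" counted alongside gains, as in David's final takeaway); it is not a formula from the talk.

```python
def annual_estimated_impact(weekly_lift: float, weeks: int = 52) -> float:
    """Extrapolate a measured weekly lift to a yearly estimate."""
    return weekly_lift * weeks

def roi(positive_impact: float, avoided_losses: float, program_cost: float) -> float:
    """Return on investment: net value created per unit of program cost.

    Avoided losses (tests that caught a harmful change before rollout)
    count as value alongside achieved gains.
    """
    return (positive_impact + avoided_losses - program_cost) / program_cost
```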
Key takeaways
Wrapping up his talk, David shared some key takeaways on using metrics to drive value and validate business ideas:
- Program metrics should evolve over time: they should reflect cultural challenges and evolve with your experimentation maturity
- Program metrics can be harder to collect: Extra work may be required to capture and catalogue project data and operational metrics
- Program metrics serve as a directional compass for culture change: Establish baselines, implement cultural practices, and give time to remeasure
- Experimentation value includes achieved gains and avoided losses: Experimentation should lead to improvement and catch potential mistakes.
Try Optimizely Feature Experimentation for free and start shipping confidently with Rollouts – Optimizely’s free feature flagging and experimentation solution. Create a free account here.