Experiments are a quick and inexpensive way for product managers to validate and prioritise their ideas. In Experimentation for product teams: Part 1 we explore the types of experimentation product managers can use and look at some of the pitfalls and challenges.
But, to really understand experimentation, we want war stories! So, in this second part, we take a look at some real-life experimentation experiences as members of the product community share their tales – including what went right and what went wrong.
Sports streaming user sign-up
Terry Lee is an acquisition and onboarding specialist, currently Senior Product Manager at Flo Health, and his example comes from one of his previous roles at sports streaming platform, Dazn.
He says: “The hypothesis was good, the tests were good, but the vision was so short-term that the impact was negative.
“We were doing a three-step onboarding process: a landing page, then a sign-up page, and then the payment page. The landing page was designed to give the user information about the product. The sign-up page was pretty blank because we didn’t want to distract users from the information they needed to enter. And then the payment page was also very blank.
“Our short-term goal was to improve the numbers of users going from the landing page to the sign-up page, only one step. Our hypothesis was that ‘if we change the CTA button to yellow we will improve the numbers of users going from the landing page to the sign-up page’. And we did. We saw something like a 60% improvement in users going from the landing page to the next screen. Fantastic, we all patted ourselves on the back, we’d achieved our goal. We also saw a good improvement in sign-ups, which was great.
“But we got a massive reduction in the performance of our payment page. Why had changing one button on the front end changed the performance of a page two pages later? We talked to a few of our users and found that the landing page had been where users understood what the product was, what platforms it worked on and so on. All that information on the landing page was being skipped. The sign-up page didn’t present a problem because it was just an email, but when users got to the payment page they’d ask themselves, ‘what am I paying for, what device does this work on?’.”
All of that initial information was still on the landing page, but by putting a big yellow Next button on it, Terry and his team were essentially pushing users on to the next page so quickly that they didn’t take it in.
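A quick way to see how this can happen is to compare step-by-step conversion with end-to-end conversion. The sketch below uses purely illustrative numbers (not Dazn’s actual figures) to show how a large lift from landing page to sign-up can coincide with a net drop in completed payments.

```python
# Hypothetical funnel numbers -- purely illustrative, not Dazn's real data.
# Counts of users reaching each step of the three-page flow.
before = {"landing": 10_000, "signup": 2_000, "payment": 800}
after = {"landing": 10_000, "signup": 3_200, "payment": 700}  # yellow CTA variant

def funnel_report(label, f):
    step1 = f["signup"] / f["landing"]     # landing -> sign-up conversion
    step2 = f["payment"] / f["signup"]     # sign-up -> payment conversion
    overall = f["payment"] / f["landing"]  # end-to-end conversion
    print(f"{label}: landing->signup {step1:.1%}, "
          f"signup->payment {step2:.1%}, overall {overall:.1%}")

funnel_report("Before", before)  # landing->signup 20.0%, signup->payment 40.0%, overall 8.0%
funnel_report("After", after)    # landing->signup 32.0%, signup->payment 21.9%, overall 7.0%
```

In this made-up example the first step improves by 60% in relative terms, yet overall conversion falls, which is exactly the pattern Terry describes: the local metric looks like a win while the metric that matters has gone backwards.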
Complex food delivery AI
Graham Paterson is a long-time product leader. Currently Product Partner at seed stage venture capital firm Connect Ventures, he’s spent time managing product at Deliveroo and at Wise, among other companies. Says Graham: “What makes me nervous about experimentation is that there are lots of things that can go wrong.” He says that it’s easy to set an experiment up incorrectly so that it doesn’t address the question you’re trying to answer. He also cautions against what he calls experimentation theatre – where you want to show the CEO that you’re doing something and you run experiments that don’t really move your understanding forward. “You should always talk to your users and analyse what people actually do,” he says.
His experiences of building an AI system to work out which order to place with which rider at food delivery business Deliveroo highlight how complex experimentation can become. “It surprised me how important it was to test each part of the system we built,” he says. When testing with users – whether they were people ordering the food or Deliveroo riders – it was very hard to predict what they would do. “People are unique and emotional,” he adds.
It was hard to know whether an algorithm would definitely improve things because it might cause something strange to happen somewhere else in the system. “For example, if we were to allocate riders and do it too well, that would mean the restaurants wouldn’t prepare the food in time and the riders would be left waiting. So even if the algorithm theoretically chooses better riders, because of the restaurant lag that improvement doesn’t work out.”
“We made another improvement, one that was incredibly powerful, and we thought it would improve the system efficiency by 50%. But we found it was pretty much only picking scooter riders rather than cyclists.
“We weren’t just building for London, we were building for a whole bunch of cities around the world. Sydney harbour cuts the city in half, there are lots of hills in Hong Kong, all these types of factors played a part. We had to make sure our improvements made sense everywhere rather than just in this mathematically perfect place we modelled. There were also other factors that could skew the results of our experiments – maybe it’s a Saturday night, in which case there are more orders and more riders and so the whole system is more efficient, but that would skew the results.”
Keep talking to customers
An experiment that shows the importance of continuing to talk to customers comes from Graham’s time at online money transfer business Wise (then TransferWise). One of his colleagues decided to run an experiment to move the user sign-in point to much earlier in the flow.
Says Graham: “I couldn’t understand why they did this. All the UX design rules state you should ask people for commitment as late as possible. But he tested moving it right to the front of the flow, because he’d had some insight from some customers that this might be a good thing to do.
“To my surprise it drastically improved conversion. Because people were inputting quite sensitive information, they felt a lot more comfortable doing it behind a password. It showed me that you may think your intuition is right most of the time but it can definitely be wrong.”
Graham concludes with some advice about how important it is to have a holistic view of your product. “People will pick a metric they want to test and prove it’s the one. They might get 50% more customers clicking through a button and decide their experiment is a resounding success. But if they don’t look at the real measure of success, which is how many people actually bought the product in the end, then they’ll have no idea of the impact they’re having on the business.”
Share your insights
There’s nothing better than hearing real-world examples from other product managers, and for that reason we are always looking for case studies from our community.
To help us share best practice and advice on all aspects of product management, all you need to do is speak to us – we’ll do the rest!
If you’re interested in sharing your case study, simply let us know by completing our quick form.