When you launch a product, you rarely run just a single customer test of its capabilities and features – it tends to be a series of tests, one following another (much like Stage-Gate for the development of new products). This “gated” approach allows you to balance the risk of widening your audience against your confidence that customers love your product.
To make this more tangible, I’d like to talk about a recent project I led: the launch of a new broadband service for customers. During this initiative, we split the customer testing into three stages:
Gate 1: Prove that the product and systems work
We chose a small number of trial users who were heavily incentivised and motivated by being the first onto the product. Because the new service involved installations in customer homes, an entirely realistic lab test was practically impossible. These users were therefore the first to experience an order flowing end-to-end through a new operations support system (OSS), business support systems (BSS) and the interfaces to the physical installation carried out by our engineer field force.
We asked these alpha customers to test the priority scenarios that we felt were critical to the product and the customer experience; in effect, we were lab-testing the product in the live environment, with real customers in their own homes. Since this was a test to identify issues rather than to gauge impressions of the service, there was no problem with incentivising customers – effectively we were paying them to help find faults with our service.
Gate 2: Test the “real” customer experience via a soft launch
Up to Gate 1, the audience had been warmed up to expect faults, as we did not have full confidence that the service could be installed and run correctly. For Gate 2, we were looking to prove that the service met the needs of real customers and that all sales, installation and support elements worked well.
We therefore progressed to a larger (but restricted) volume of customers who had no incentives. This group was used to establish how real, paying customers perceived the product and how well the business teams were able to support the issues those customers experienced.
This meant we could gather views on the product, and on what improvements needed to be made, without a major product launch or the risk of uncontrolled volumes of users overwhelming our provisioning and support teams.
Gate 3: Testing take-up of the product
Our business case was focused on how many customers in the target geographic base took up the product. Once we’d proved that the service met customers’ needs, we encouraged take-up – without any limit on volume – to see whether customers really wanted it. We also experimented with channels and marketing approaches so that we could achieve the business case targets.
Getting real customers to pay for the product might be considered a launch in itself. However, without proof of the business case in this one geographic area, the business was not going to expand its investment into other regions. The decision to progress to a national scale was the real “launch” we were working towards.
Each gate in this process had clear metrics and approval groups who needed to agree that they were comfortable before the customer test could progress to the next stage. At some stages we even broke the approval into “sub-stage” gates so that the project team could gain confidence that all issues were being addressed. This is a particularly rigorous gating system, but it is a very effective way to ensure you get all the data and confidence you need.
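To make the mechanics of a gate concrete, here is a minimal sketch – in Python, purely for illustration – of how you might express a gate as explicit exit criteria plus required sign-offs. Every name, metric and threshold below is invented; none comes from the actual project.

```python
from dataclasses import dataclass, field

@dataclass
class Gate:
    name: str
    exit_criteria: dict[str, float]  # metric name -> minimum acceptable value
    approvers: list[str]             # groups that must sign off before progressing
    approvals: set[str] = field(default_factory=set)

    def can_progress(self, observed: dict[str, float]) -> bool:
        """Pass only when every metric meets its threshold and every
        approval group has signed off."""
        metrics_ok = all(
            observed.get(metric, float("-inf")) >= threshold
            for metric, threshold in self.exit_criteria.items()
        )
        return metrics_ok and self.approvals >= set(self.approvers)

# Hypothetical Gate 1-style check (names and numbers are invented):
gate1 = Gate(
    name="Prove that the product and systems work",
    exit_criteria={"install_success_rate_pct": 95.0, "orders_completed_end_to_end": 50.0},
    approvers=["operations", "product", "support"],
)
gate1.approvals.update({"operations", "product", "support"})
observed = {"install_success_rate_pct": 97.0, "orders_completed_end_to_end": 60.0}
print(gate1.can_progress(observed))  # True: metrics met and all groups signed off
```

The point of writing the gate down this explicitly – whether in code, a spreadsheet or a checklist – is that progression becomes a verifiable decision rather than a feeling.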
Using your trial feedback
The big question with any such customer test is how you then make decisions based on the information you’ve received. Extremely positive or negative results are relatively easy to interpret – if, that is, you can trust them.
A recent example of this concerns aerial installations: we found that one of the biggest reasons customers cancelled a sale of our TV platform was that, when the engineer arrived, they had no aerial. We trialled a discounted aerial installation service, but even when we gave the installation away for free, take-up was practically zero. Customers simply weren’t interested. The conclusion was so counter-intuitive that we re-ran the test with a different audience, and it became clear that this product just wouldn’t be successful. Instead, we focused our effort on preventing sales to customers without a working aerial.
In reality you usually find yourself somewhere between a wholehearted yes and an emphatic no. It’s at this point that the skill and experience of a product manager come to bear, ensuring that the organisation makes the right decision about how to progress.
A staged approach lets you manage the risks
Going big bang for any activity does have its advantages. However, when it comes to testing with customers, the speed benefits of a big-bang customer test or launch need to be balanced against the risks of failing to get the feedback you need, alienating your future advocates, and potentially damaging your brand. I’d therefore argue that, in most circumstances, you’re better off spreading the risk across a series of gated review points, in exactly the same way that Stage-Gate balances the development of a product with a series of commercial and technical validation checkpoints.