Are you tracking the right product metrics?

Metrics can be a minefield for the unwary – navigating your way to find the right metrics is one of the hardest parts of a product manager’s job. What do you need to know to make a go of it?


TL;DR

  • Don’t beat yourself up about it – everyone finds metrics hard
  • Do you fully understand the problem you’re addressing?
  • Think about what bespoke metrics you can develop – they may serve you better than standard ones
  • Always consider the wider context of your product – maybe your product has hit its natural limits
  • Use any lack of data or data structure to make the case for in-house data management roles
  • Don’t give up too early
  • You don’t need to measure everything

It’s a product manager’s job to define the right metrics and KPIs for their products and businesses. As Martin Eriksson says in Top product metrics talks from Mind the Product, this “is critical if we want to measure our progress and know if we’re successfully driving the right outcomes”.

There are a number of different pieces to the metrics puzzle, says Arfan Ismail, Senior Product Manager at plagiarism detection service Turnitin, but fundamentally you need to use them to answer the question: ‘are we being successful?’ He adds: “There are a lot of different measures you can use. Some people use KPIs, some use North Stars, some OKRs. Essentially, you’re saying you will deliver something to customers that will benefit them and the business.”

It’s not easy to put the pieces together, so we’ve gathered some expert opinion and cautionary tales to help you work out what to look out for and how to stop it all from going wrong.

Stop thinking it’s straightforward

Quantifying whether you deliver value to the business is the easier part, says Arfan, because the business has goals – typically a revenue metric such as ARR (annual recurring revenue) or MRR (monthly recurring revenue) and a growth target. A product manager can then work out what they can do within the product to affect those metrics – increase revenue or the number of active subscribers. You might put together some straightforward metrics – number of subscriptions, number of people completing a purchase.
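To make that concrete, here’s a minimal sketch of how those straightforward revenue metrics might be computed. The subscription records and field names are hypothetical, and real billing data (proration, discounts, annual plans) is messier than this:

```python
# A toy list of subscriptions; in practice this would come from billing data.
subscriptions = [
    {"customer": "acme", "monthly_price": 49.0, "active": True},
    {"customer": "globex", "monthly_price": 99.0, "active": True},
    {"customer": "initech", "monthly_price": 49.0, "active": False},  # churned
]

# MRR: total recurring revenue per month from currently active subscribers.
mrr = sum(s["monthly_price"] for s in subscriptions if s["active"])

# ARR: the annualised equivalent, conventionally MRR x 12.
arr = mrr * 12

active_subscribers = sum(1 for s in subscriptions if s["active"])

print(f"MRR: {mrr:.2f}, ARR: {arr:.2f}, active subscribers: {active_subscribers}")
```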

When you’re working on an established product and have to refine one part of it, it’s tempting, says Arfan, to fall back on metrics like NPS (Net Promoter Score). But it’s very difficult to link a metric like that to your own work when you’re one of a number of people working on the product. He adds: “One of the hardest things for a product manager to do is to come up with really good metrics.”

Building the right mix of metrics is not very different from building a product, says Eva Rio, Product Manager at storage management software provider Tuxera. You must understand the problem you’re addressing, prioritise, and make choices.

Perhaps you have a given OKR and a set of metrics to inform your decisions, but can you make do with less? Can the knowledge you want from a certain metric be obtained in a different way? Eva adds: “What would be your level of confidence if you were to make a choice without having all the numbers or qualitative input for all the metrics you’ve set in the first place? How much risk and ambiguity can you tolerate?”

Have you thought about bespoke metrics?

Sean Gabriel, Delivery Director at Red Badger Consulting, also comments that many product managers miss a trick by not considering bespoke metrics – metrics that fit their unique business problem or challenge. He says there’s nothing wrong with well-worn measurements around traffic, user activity, purchase values and the like, but teams rarely get together to shape their metrics with the same energy and effort they would apply to product problems, or use design thinking methods to brainstorm novel measures. He gives an example: “I once led a project team where our insights analyst challenged us to use the ‘crazy 8s’ technique to define our success measures, and we ultimately got to a series of interesting and relevant metrics that everyone could rally behind as we carried forward in the project.”

What if your metrics come from the top?

Eva points out that in some organisations you’ll be handed metrics that supposedly “move the needle”, and this inevitably ends up being revenue. She suggests challenging this top-down approach and bringing metrics that go beyond the obvious into the discussion, along with a more holistic approach to measurement. Echoing Sean’s point about bespoke metrics, she says: “For metrics that ‘go beyond the obvious’ the bad news is that there’s not a one-size-fits-all recommendation. These metrics will be specific to your company/product/project.”

She says you can get started by asking how you create value – which will be specific to your company. “From there, identify and track the drivers that affect how value is created. Keep in mind that these drivers will change over time, so indicators also need to be evaluated from time to time – the frequency of this review will depend on your business and the market changes you’re experiencing.”

This can make it hard to be consistent and build a cadence, she says: “How can you reliably measure something over a period of time to see trends if the indicator keeps changing? My advice is not to obsess over trends (nobody knows what the future will hold) but rather over context. Accept uncertainty and provide context about the indicator: why did you select it, what information does it provide, how is it affected by other factors, how does it affect other factors, which OKR and company goal is it related to?”
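One lightweight way to keep that context attached to an indicator is to record it alongside the number itself. The sketch below is purely illustrative – the fields mirror Eva’s questions, but the structure, names, and example values are assumptions rather than any standard:

```python
# An illustrative "context card" for a single indicator. Every field name
# and value here is hypothetical, mapped from Eva's list of questions.
indicator_context = {
    "name": "weekly_active_teams",
    "why_selected": "Proxy for whether the new onboarding flow drives adoption",
    "what_it_tells_us": "How many teams return and use the product each week",
    "affected_by": ["seasonality", "marketing campaigns", "pricing changes"],
    "affects": ["expansion revenue", "support ticket volume"],
    "related_okr": "Q3: increase activation rate of new accounts by 20%",
    "review_cadence": "quarterly",  # re-evaluate as the value drivers change
}
```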

She suggests that you also ask yourself how well you can correlate cause and effect, and how much control you have over the indicator. “In the case of revenue, this is very challenging to do – revenue can be affected by quite a lot of factors. Select your indicators so that you can take corrective action if needed.”

Are you considering the wider context?

Teams often overlook the wider context in which their measurements are taken, and set unrealistic goals, says Sean. He comments that user acquisition always hits a natural limit at the total addressable market, and revenue doesn’t increase forever. This is also where teams can go astray with productivity metrics like throughput/velocity, he feels, because it’s too easy to make judgments on them out of context.

He says: “I once worked on a data-driven client project where we created a workback schedule for five weeks, complete with an average requisite throughput. When challenges inevitably crept in and we missed the number for the second week, the immediate client reaction wasn’t to think about what we could trim from scope to hold the date, it was how we could increase our velocity to “make up” for being behind schedule. Over subsequent milestones, this led to a recurring pressure to always increase velocity without recognising a system-level constraint (velocity can’t increase forever!) and all sorts of unintended consequences around splitting and sizing tickets – ultimately a lot of numerical busywork that wasn’t contributing to delivering working software for the customer.”

Becoming more data-driven or data-informed cannot be a goal in itself, adds Eva; you need to know what knowledge gaps you’re trying to address with metrics. She says: “What are we aiming to improve? This high-level alignment usually starts in the form of OKRs or leading indicators, then drills down into specific metrics – the level of granularity will be different depending on the objectives and what we want to learn.” Metrics are meant to enable discussion, she adds.

Can you find the data?

If the data in your organisation is scattered and unstructured, you won’t be alone, but that’s scant consolation when you’re struggling to pull together and make sense of disconnected data points. As Eva says: “I’ve come across companies with different levels of data-readiness, but never a place with a “perfect” dashboard or data collection and analysis mechanism; there are always caveats. Identifying, capturing, and classifying data is a challenging endeavour.”

Inevitably there’s a business cost to this lack of structure – and it may enable you to make the case for in-house data management and ops roles. “Regardless of the level of data-readiness of your company, my advice as a product manager who needs to get things done is to timebox the time you spend working with data and set a “good enough” level of confidence, then show a bias towards action and make decisions: what’s the most reasonable course of action given the information you managed to collect? Plan out a few scenarios and paths based on the reliability of the data you have – best case, worst case, in-between,” she says.
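One rough way to frame Eva’s “best case, worst case, in-between” advice in numbers is to bracket your point estimate more widely the less you trust the data behind it. This is a hypothetical heuristic, not a method she prescribes – the function and the reliability score are both assumptions:

```python
def scenarios(estimate: float, data_reliability: float) -> dict:
    """Bracket a point estimate into worst/expected/best cases.

    data_reliability is a hypothetical 0..1 score: the less reliable
    the underlying data, the wider the bracket around the estimate.
    """
    spread = estimate * (1.0 - data_reliability)
    return {
        "worst_case": estimate - spread,
        "expected": estimate,
        "best_case": estimate + spread,
    }

# e.g. a conversion uplift estimated at 8%, from data we only half trust:
print(scenarios(8.0, data_reliability=0.5))
# -> {'worst_case': 4.0, 'expected': 8.0, 'best_case': 12.0}
```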

Are you giving up too soon?

It’s not always obvious what you should be measuring, and it’s not always easy to find the data you need to produce your metrics. And because it’s hard, teams often just give up.

Sten Pittet is CEO of collaboration platform startup Tability, which uses AI to help teams to strategise and deliver metrics. He says giving up too early is the most common mistake he sees. “We don’t talk enough about how hard it is to deliver good OKRs,” he says. “It’s hard to start thinking about goals when you’re always focused on work, so very often teams get their OKRs wrong for a couple of cycles and then they give up.”

Sten expands on what can go wrong when you measure performance with OKRs. It’s important to remember, he says, that OKRs don’t have the same value for team members as they do for leaders. For individual contributors, they can just feel like more work – an extra thing to do – and teams often don’t see what value OKRs hold for them.

Leaders also misunderstand the value of OKRs, Sten says. Goals can be expressed differently in different parts of the business, and this makes it hard to tell whether teams are working towards the same things. Introducing OKRs unifies the language, but it doesn’t automatically mean that everyone is working on the right things. Because the terminology is unified, the dysfunction becomes apparent – and people often then blame the OKRs for the dysfunction. You can hear more of what Sten has to say on the subject in the Product Experience podcast episode Getting OKRs right.

Are you measuring everything?

If you’re measuring everything you can think of, then stop. Sean says that in the past he’s fallen into this trap. His thinking was that full visibility would mean he’d always know what action to take. But, he says, “it led to a culture of constant instrumentation that was wasteful, and ultimately ineffective, as we hardly did anything with all the data we ended up collecting.”

Now he finds it much better practice to work backwards from the action you expect to take – knowing what you will do when a metric moves a certain way. He says a great example of this comes from implementing DORA metrics (a standard set of measures for assessing software delivery performance) with a past client. Early on, they had agreed thresholds for the mean time to restore service and change failure rate metrics. Individual failures would be investigated, he says, but if either metric breached its threshold they would automatically halt new feature work on the backlog and spend additional cycles on quality.

“No negotiation with a product manager or business stakeholders was required,” he says, “it was something we’d pre-agreed as a team and it essentially triggered a programmed response in our development practices. This in fact had the unexpected consequence of creating positive pressure to invest in quality during ongoing development, both to avoid the customer downtime and to avoid having exciting new feature work taken off the table for the team!”
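As a sketch of how mechanical that pre-agreed trigger can be, the snippet below checks two DORA metrics against thresholds and flags when feature work should halt. The threshold values and function names are hypothetical – Sean doesn’t specify the actual levels his team used:

```python
# Hypothetical pre-agreed thresholds; lower is better for both metrics.
MAX_TIME_TO_RESTORE_HOURS = 4.0   # mean time to restore service
MAX_CHANGE_FAILURE_RATE = 0.15    # share of deployments causing a failure

def should_halt_feature_work(mean_time_to_restore_hours: float,
                             change_failure_rate: float) -> bool:
    """Return True when the team should pause the feature backlog
    and spend the coming cycles on quality instead."""
    return (mean_time_to_restore_hours > MAX_TIME_TO_RESTORE_HOURS
            or change_failure_rate > MAX_CHANGE_FAILURE_RATE)

# The rule fires on the numbers alone - no negotiation required.
if should_halt_feature_work(mean_time_to_restore_hours=6.5,
                            change_failure_rate=0.10):
    print("Threshold breached: halting new feature work this cycle.")
```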

And finally, consider ALL the benefits

Aside from the obvious and immediate benefits they bring to managing a product, metrics can provide a strong foundation for cross-team communication and collaboration. Sean remembers a financial services project where his team was responsible for integrating with an automated decision engine that helped decide whether customers were accepted or declined for a product, or else referred to a human for further processing.

The rates for accept/decline/refer were measured early on. Then they were communicated between teams to see if the risk modelling matched everyone’s expectations in pilot testing as well as wider customer releases. Rather than have the teams lobby each other directly for feature requests, they could check whether the metrics were moving in the direction they all wanted and agree on changes to work towards this joint goal. Says Sean: “This meant there was an objective basis for cross-team prioritisation against this initiative, rather than who made the more compelling pitch to get work funded.”
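A minimal sketch of that shared measurement: compute the observed accept/decline/refer rates and compare them to the rates the risk modelling predicted. The decision records, expected rates, and tolerance below are all hypothetical:

```python
from collections import Counter

# Hypothetical decision-engine outcomes from a pilot batch.
decisions = ["accept", "accept", "refer", "decline", "accept", "refer"]

counts = Counter(decisions)
total = len(decisions)
observed = {o: counts[o] / total for o in ("accept", "decline", "refer")}

# The rates the risk modelling predicted (assumed figures for illustration).
expected = {"accept": 0.60, "decline": 0.20, "refer": 0.20}

# A simple tolerance check both teams can discuss, instead of lobbying
# each other directly for feature changes.
for outcome in expected:
    drift = observed[outcome] - expected[outcome]
    flag = "OK" if abs(drift) <= 0.05 else "REVIEW"
    print(f"{outcome}: observed {observed[outcome]:.0%}, "
          f"expected {expected[outcome]:.0%} [{flag}]")
```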

Further reading