Common Product Development Mistakes and How To Avoid Them All: A Case Study


In this case study, CPO Ashley Fidler lays bare a catalogue of common errors made by her and her team when launching a cybersecurity software product.

Read on for her honest analysis of the experience and to discover what happens when you combine ideas that are too early for the market, poor customer understanding, a rushed launch, and slow recovery.

Overview

The case study relates to my time at a Machine Learning (ML) platform company that pivoted to cybersecurity in 2016. I was hired as a Technical Product Manager focused on platform in 2014 and eventually became VP of Product after the company’s shift to cybersecurity.

My intention with this analysis, and in sharing these events, is to provide insight into how problematic product decisions get made, despite the best efforts of amazing people, many of whom I consider lifelong friends. These types of situations are all too common, and deserve more open discussion.

That said, we clearly made a series of product errors, beginning when the company ran into problems in 2015.

The company was founded in 2012 with an ML platform focused on financial services and differentiated by ease of use for non-technical users. The core technology was similar to that of Databricks, with a stronger focus on solving end-to-end business problems and putting them easily into production.

The product got a lot of early traction. However, we hit a common set of issues facing ML companies, which are well explained in the recent Andreessen Horowitz article The New Business of AI (and How It’s Different From Traditional Software):

  1. Service-heavy: Our platform was much easier to use than traditional approaches to ML, but it wasn’t really an out-of-the-box solution for non-technical users. Our automation of the ML process was a lot better than what had previously been available, but not enough to avoid the need for professional services.
  2. Targeted at enterprise: We had big customers with great logos, the kind which typically invest in core technologies like ML platforms. Of course, those customers expected a lot of additional customization, which made it even more difficult to achieve repeatability.
  3. Burning cash without SaaS growth: We were spending money fast and didn’t have the repeatability and scale of a SaaS product that would justify this spend. This led our board and leadership team to the conclusion that we needed to make a change towards increased repeatability.

Error 1: A Lack of Data

We pivoted to a new market quickly, based on many untested assumptions, without adequate data

Once the company decided to pivot, the belief was that the new product needed to be something that would scale quickly and have a comparatively shorter sales cycle. There were a few options on the table:

  1. Continue down the platform route focusing more on the developer ML platform
  2. Commercialise a custom fraud solution we’d already built for a big bank, or try something else in financial services
  3. Expand a nascent cybersecurity offering

The leadership team and board discussed all three, and eventually ruled out options 1 and 2.

Option 1, the platform, had a long sales cycle and was not suited to truly non-technical users (and making it suitable would have taken a lot of additional investment). The users we had been focused on, financial quants, were generally happy using open-source tools and didn’t feel the need to buy a new product. There were also other platform offerings on the market focused on technical users, and it didn’t seem like we could adjust our product quickly enough, in a way that would be differentiated.

Option 2, the fraud product, was too niche. It specifically predicted fraud based on whether users were travelling – the market was simply too small, and it would have taken a long time to productise, given that it had been custom-built. Other financial use cases were also considered, but it was felt that either the market was too small (in the case of, for example, mortgage prepayment modeling) or that it would require too many professional services (as in, for example, anti-money-laundering). In retrospect, a good option might have been combining multiple financial services use cases with smaller markets, but sales pipeline and timelines were a concern.

Option 3, a cybersecurity offering, seemed to be the right fit. We already had a customer who was open to the idea of us developing a product in partnership, and we had a group of good and well-respected advisors, including members of our executive team who were cybersecurity experts. The direction for the product was novel and the techniques had already been proved to successfully solve customer problems when applied manually.

At this stage, we made a couple of critical errors related to poor assumptions, which will serve as the foundation for the rest of this case study:

  1. We assumed that, because customers had previously purchased the manual approach, they would want to buy a productised version.
  2. We assumed our technology (and our people) would translate easily to this new problem/field and that the product would be easy to build.

Before going into the results of these two specific assumptions (in errors 2 and 3, respectively), a few words on a key personal learning from this decision-making process.

Learning: Understand how to wield limited influence

I wasn’t on the leadership team at this point, and so had little direct influence on the decision making process. However, the mistake I personally made was not understanding how to wield the kind of influence I did have (as a Product Manager who held much of the relevant data).

As is typical, this critical strategic decision was handled at the executive level. This group was very knowledgeable, but naturally more disconnected from the details of building and delivering our product. In our case, we also had an advisory team composed of rather famous practitioners who were true experts in their field, but even less knowledgeable about the day-to-day operations of the company.

I was in the middle management layer, where the practitioners tend to reside. In this case, as is also typical, our group had a lot of data and ideas about the technology and its limitations, but less of an ability to convey our views in a way that got them seriously considered in the decision-making process. The executive team mistakenly thought they had all the data, and we didn’t understand well how to get them to weigh the data we believed they were missing. This was my first personal mistake.

I knew two important things at this point, based on my experience with our cybersecurity prototype:

  1. The product was going to take a lot longer to build than was being estimated. We were missing key data to build and test models, our product didn’t work well with unstructured data, and it couldn’t yet scale to the needed data volume.
  2. The execution team (myself included) did not understand the problem we were solving well, and we weren’t knowledgeable enough about cybersecurity to figure it out quickly (the prevailing assumption was that anyone with an ML background, which we did have, would be able to handle this cybersecurity problem. I was seeing that assumption break down already).

Do your best to find a way to bring the data to the table (Image: Shutterstock)

In hindsight, I wish I had used my window into the ongoing executive discussions to feed data into the team in a format that would have made these critical points digestible. I didn’t do this because, honestly, I was so frustrated by not being included in the conversation that I didn’t take the time to slow down and think about how they might most easily consume this information. I also didn’t yet understand how to reframe my data into the conversation they were having. I had a good relationship with our CEO and could have gotten enough context from him to do this critical (but admittedly thankless) work, but I didn’t. Instead, I made foreboding comments, but without the analytical and data-driven rigor the conversation required. This situation required more finesse and humility than I had at the time.

For those who are interested, Teresa Torres speaks eloquently on this type of stakeholder management in her work on handling your HiPPOs (the Highest Paid Person’s Opinion). See, for example: The Art of Managing Stakeholders Through Product Discovery.

Error 2: Relying on Expertise

We relied on expertise, rather than fully validating the customer need and budget before building the product

Going back to the two core assumptions that ultimately drove the outcome of this effort: most importantly, we assumed that because customers had previously purchased the manual approach, they would want to buy a productised version.

The product we had in mind was undeniably cool. It was an automated, ML-driven process to catch sophisticated hackers inside corporate networks. It worked by bringing together different types of log data into a unified data set and then looking for key adversary behaviors related to a successful data breach. The benefit of this approach is that the focus on discovering adversary behaviors within triangulated data sources enables more accurate detection and a lot of automatic filtering of the false positives that plague cybersecurity practitioners.
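
To make the approach concrete, here is a deliberately simplified sketch of that triangulation idea (the log sources, field names, behaviors, and threshold below are hypothetical illustrations, not our actual implementation). A host is flagged only when independent data sources corroborate multiple adversary behaviors, which is what suppresses many of the single-source false positives:

    from collections import defaultdict

    # Hypothetical, simplified event records; real sources (proxy, auth,
    # endpoint logs) and field names would differ.
    proxy_events = [
        {"host": "wks-042", "behavior": "beaconing"},
        {"host": "wks-107", "behavior": "beaconing"},
    ]
    auth_events = [{"host": "wks-042", "behavior": "lateral_movement"}]
    endpoint_events = [{"host": "wks-042", "behavior": "staging"}]

    def correlate(*sources):
        """Unify per-host behaviors observed across independent log sources."""
        behaviors_by_host = defaultdict(set)
        for source in sources:
            for event in source:
                behaviors_by_host[event["host"]].add(event["behavior"])
        return behaviors_by_host

    # Flag only hosts showing multiple corroborating adversary behaviors.
    # A single behavior in isolation (e.g. beaconing on wks-107) stays below
    # the threshold, which filters out many single-source false positives.
    MIN_BEHAVIORS = 2
    for host, behaviors in correlate(proxy_events, auth_events, endpoint_events).items():
        if len(behaviors) >= MIN_BEHAVIORS:
            print(f"investigate {host}: {sorted(behaviors)}")

In the actual product the behaviors came from ML models run over the unified data set rather than hand-labelled fields; the sketch only shows the triangulation logic.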

This process (without ML) had been pioneered manually by one of our expert cybersecurity advisors and used successfully in a number of corporate networks in a post-breach scenario to find forensic evidence after the discovery of a data breach.

The two big errors we made at this point were (1) assuming that customers would want this process automated and in product form and (2) assuming that demand for this product would be just as high in a PRE-breach scenario (to prevent hacks, instead of cleaning up after they had already happened).

The core mistake here was that we failed to do adequate research, both about customer budgets and also, simply, about whether our customers wanted this type of pre-emptive tool. We assumed that people would care equally about using these techniques pre-breach instead of waiting until they had a real and pressing problem. This is a very different sale – more focused on peace of mind than solving an immediate problem. In our case, the need for peace of mind just wasn’t there; cyber-breach insurance took care of most of it.

We also didn’t investigate the overall level of market readiness. If we had, we’d have realized that to use our product effectively, customers needed to have already put some foundational cybersecurity tools and processes in place. We assumed that all our customers would have firewalls, proxies, internal network traffic logging, etc. This was naive, which we would have realized if we had had more market expertise. It turns out that many customers had fundamental gaps that needed to be filled before our product could be useful to them.

Learning: Don’t rely on experts – do discovery anyway

Looking back, there was so much thinking around the product that we just didn’t do. Had we done customer interviews outside “friends and family,” we’d have quickly realised that people weren’t ready for this product. It was, and still is, a great product (which is now being sold successfully), but in 2016 the market wasn’t ready. It needed to be part of a package or sold as an add-on, to customers with an existing cybersecurity suite (which it now is).

This glaring set of errors was driven by the political situation in the company. We (rightfully) trusted our experts, who were indeed very knowledgeable. They had identified an interesting problem and matched it to an innovative solution. But it was up to us to validate that this solution would work as a PRODUCT – that we, not being renowned practitioners, could sell it, and that we would be able to find product/market fit in the pre-breach market instead of the, very different, post-breach one.

The politics came in because we on the execution team were not cybersecurity experts. Especially in this early stage, before I was leading the product team, I personally didn’t feel I had the right or the background to ask these questions. In true feature factory style, we were told what to build and we built it. In retrospect, I probably could have influenced the situation more if I had been consistent about doing the research and bringing the data, but we didn’t have the resourcing to support that, and I didn’t have the knowledge to make the case for this type of work. This is, sadly, an incredibly common predicament, and it normally seems to be born more of confusion than intent.

Error 3: Wishful Thinking

We deluded ourselves about technological readiness, build time, and the importance of cybersecurity knowledge

The third error still bothers me because it’s the one I believe I could have impacted the most, if I had been more knowledgeable. We lied to ourselves about timelines and subject matter expertise needs, executed slowly, and committed a lot of unforced errors.

We made the decision to pivot in the Fall of 2015 and it took us six months to even figure out how to build the product (this was a dark time), then another 15 months to get V1 to market (this part was actually fun, but also frustrating as we were starting to run short on cash).

During those first six months we focused all our time on trying to get the approach working by automating it step by step, with no ML. It was challenging because of our limited understanding of cybersecurity and because we didn’t have direct access to our advisors (more on this below). Simultaneously, from a technical perspective, we had to work out how to deal with extremely large data sets that were only semi-structured (unlike our financial data, which had been tabular), before adding the ML back in. What’s more, all of this had to be done before we could even start figuring out how to display the information to customers at the right level of detail – letting them investigate the data without our product becoming a forensic data source of record (which carried operational requirements we couldn’t meet).
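
To give a sense of the kind of normalization work this involved, here is a minimal sketch (the record shapes and field names are invented for illustration, not our actual schemas): each log source arrives with its own structure and nesting, and has to be mapped into one flat, common schema before any feature extraction or ML can happen.

    import json

    # Hypothetical raw records: each source uses its own field names and
    # nesting, which is what made the data "semi-structured" for us.
    raw_records = [
        '{"src": "proxy", "ts": "2016-03-01T10:00:00Z", "client": {"ip": "10.0.0.5"}, "url": "http://example.com"}',
        '{"source": "auth", "time": "2016-03-01T10:02:11Z", "user": "alice", "workstation": "wks-042"}',
    ]

    def normalize(record: dict) -> dict:
        """Map one source-specific record into a flat, common schema."""
        if record.get("src") == "proxy":
            return {"source": "proxy", "timestamp": record["ts"],
                    "entity": record["client"]["ip"], "detail": record["url"]}
        if record.get("source") == "auth":
            return {"source": "auth", "timestamp": record["time"],
                    "entity": record["workstation"], "detail": record["user"]}
        raise ValueError("unknown log source")

    # The resulting table is what downstream feature extraction and ML consume.
    table = [normalize(json.loads(line)) for line in raw_records]
    for row in table:
        print(row)

Multiply this by dozens of source formats and very large data volumes, and it becomes clear why this stage took as long as it did.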

There was an epic moment of realization towards the end of this early stage where we realized we really didn’t understand how the product we were building was even supposed to work. Our key advisor came to our offices for a two-day workshop in February 2016. This was the first time I had met him. Up to this point, the advisor had only engaged with our cybersecurity experts, who were intended to relay his instructions back to the execution team. Much as in a game of telephone, this communication was garbled and misunderstandings abounded. Going into this workshop, because of the indirect communication, there were points of confusion in our attempts to replicate his process. For example, we knew the cybersecurity kill chain was important, but we didn’t really understand how it fit into the product. During the two days, our advisor walked us through his process in detail and explained exactly how it worked for the customer. It was absolutely mind-boggling. All of a sudden we (the product and technical teams) realized what the product was actually supposed to be and why we had been struggling. Afterwards, we shifted our approach to be more in line with his vision and the real build began. This should have happened much sooner.

 

Our Product Timeline

  • Fall 2015: Decision to pivot
  • Fall 2015 – February 2016: Experimental period
  • February 2016: Moment of realisation during visit from the lead cybersecurity expert
  • March 2016 – May 2017: The real build takes place, including adding ML back in and rebuilding the whole platform
  • May 2017: V1 is released
  • July 2017: V1.1, the real working version, is released
  • February 2018: V2 (with new UI) is released
  • October 2018: Company acquisition

 

Once the build started in earnest, we encountered a number of additional challenges. We had a large amount of staff turnover in our technical team because of the pivot to cybersecurity and subsequent company-wide issues. We also had a lot of tech debt, which at times made even simple things like spinning up dev environments take days, and which led to a time-consuming platform rebuild. Then there were the predictable engineering and science problems related to our complex product: getting the ML working, normalizing data, handling large-scale data from multiple sources in an integrated way, designing a user experience that would integrate with typical security workflows, etc.

Learning: There’s a fine line between self-delusion and naysaying

The thing I learned from this process was to be alert for signs of self-delusion, without falling into the trap of doom and gloom.

Throughout this process there were a number of discussions about the timeline. There was even one pretty significant fight about it in the Spring of 2016. It didn’t seem like there were any alternative options, so, after some contentious discussion, we pressed on. The thing I personally learned here is that the tone of these conversations is really important. It’s possible to drive deeper conversations about difficult topics by adding some levity. I’m still not good at this, but when I can manage it, it does help.

If you see something that’s not right, find a way to push critical conversations (Image: Shutterstock)

We also assumed/hoped that we were building for the right user and that those users would want to use the product as we designed it (because that is what they had done in the manual process our advisor used). It was pretty clear at a couple of points during this process that we didn’t understand exactly who would use the product or how it would fit into their workflow. We had talked with enough customers by this point to have concerns about where we fit in, but we assumed those concerns would just work themselves out somehow. We didn’t have the bandwidth to do anything other than solve our technical challenges, and assumed the market-facing people would take care of the rest (which they weren’t equipped to do). We probably could have changed this if we had pushed harder.

In retrospect, I wish I had been braver and forced these conversations, especially the ones about customer fit. I did push on the timeline conversation, but I still didn’t understand the real customer need well enough to feel comfortable forcing this issue. I wasn’t a cybersecurity person and I knew it in my gut. It might have been possible to do more analysis, talk with more customers, and ask more questions of our advisors. Because this didn’t happen, our attempts to address challenges often slid into negativity without producing change. Again, a positive tone, when combined with clear thinking and analysis, can make a difference.

You might like: Having Difficult Conversations with Confidence, an #mtpcon Digital session with Denise Jacobs

Error 4: Rushing to Market

Once these errors became clear, we rushed to market then recovered slowly

In Spring 2017, we finally started marketing the product. Because the product had been quite nebulous, we didn’t spend enough time in 2016 really getting ready for a launch, so, when it came time, we ended up rushing to market without adequate time to strategize about how to reach the right customers and pitch our solution. We had a strong executive sales team and relied on them to reach customers, starting with their personal networks.

Because our product was quite cutting edge, we landed some big and interesting customers. Everyone we talked to thought it was cool, but the sales cycle was slow and, for many customers, there didn’t seem to be an urgent need. This is the point at which we discovered that many customers would need more foundational cybersecurity tools in place before they’d be ready to buy, and that we weren’t able to provide the additional things they needed.

Without those prerequisites, the customer’s budget wasn’t earmarked for us. We were perhaps the fifth priority and we couldn’t serve the same customers the manual process had, because we weren’t a forensic tool. We’d landed ourselves right between a rock and a hard place.

We started to build partnerships to fill in cybersecurity gaps, doing user research to better understand how our tool would fit into existing Security Operations Centers (SOCs) and realizing we needed a different visual approach. We were able to build that quickly and launch a new UI which met the customer need a lot better. We also started investigating how we could work on top of existing forensic data stores, to meet that objection from customers. But the process was slow.

Learning: Don’t agonize. Execute.

As surprising as it may be, even at this point, a few saved months might have made a significant difference in the overall company outcome. Our cash came down to the wire and that really affected the latter part of our acquisition process.

Through this process, the most important thing I learned is the classic start-up understanding that cash runway and speed of execution are critical. We spent a lot of time agonizing over decisions that we should have just looked at and made. We were reasonably focused on our objectives, but not enough to cut through politics and execute where we needed to. When you’re a start-up, especially one low on cash, every minute counts. I know that now, and perhaps it’s a lesson that always needs to be learned from first-hand experience.

Learnings Recap

In brief, what I’d do differently now, and what I recommend to any Product Manager hoping to avoid the errors we made, can be summed up as follows:

  1. Recognise the influence you do have as a Product Manager and don’t be afraid to use it.
  2. You might have a groundbreaking idea, but without proper discovery, you won’t know if it’s the right one for your users – validate that your solution will work as a product.
  3. Be realistic about timelines and ready to spot the signs of delusion – you will know in your gut when your timeline is off so be prepared to force difficult conversations and go to them armed with the insights and data to support your argument.
  4. Time is money, especially in a startup. Save as much of it as you can, be efficient in your decision making and execute.

Conclusion

In the end, the company was acquired and nearly everyone got a position with the parent company. Our product found a home, and, two years later, it’s back in a much more receptive market, supported appropriately by foundational technologies and services.

We made tons of mistakes, endemic to young teams without adequate experience and support to take on the hard challenges we were facing. It wasn’t a successful outcome, but we escaped without much lasting damage. For me personally, it was a huge learning experience and I’m glad I stuck with it to the end. Though I still regret a number of decisions I made, and didn’t make, I’ve been able to take those learnings into other roles.

My final takeaway from this experience is, honestly, humility. Product is hard. It requires patience, curiosity, good judgement, and a level of openness and emotional maturity that takes time to develop. I wanted to share this story because I think it’s important to see how easy it is to slip into untenable situations, even with the best intentions and product practices. Avoiding them takes a lot of presence of mind, and it’s worth the continued effort and improvement required.

You might also like: In My Humble Opinion by Thor Mitchell