5 ways to invent and simplify as an AI/ML product manager

In this article, Product Leader Prerna Kaul shares how the ‘Invent and Simplify’ principle accelerates impactful AI development. Prerna describes five key ways to leverage ‘Invent and Simplify’ to drive innovation, efficiency, and customer value.


What does ‘Invent and Simplify’ mean?

I have worked on Generative AI and machine learning problems at Amazon’s Artificial General Intelligence and Retail organizations for 5 years. During that time, I found Amazon’s leadership principle of ‘Invent and Simplify’ incredibly useful in driving impact. In this article, I describe 5 ways in which I worked from first principles and leveraged ‘Invent and Simplify’ to build great products.

The definition of Invent and Simplify is as follows: Leaders expect and require innovation and invention from their teams and always find ways to simplify. They are externally aware, look for new ideas from everywhere, and are not limited by “not invented here.” As we do new things, we accept that we may be misunderstood for long periods of time.

It is hard to build a real AI product that drives customer value, is widely adopted, and demonstrates consistent quality. In practice, LLMs are non-deterministic: you cannot control every output, so a successful product is never guaranteed by design alone. Because of this, the best way to find product-market fit with an AI product is not through the traditional practice of a demo but through a real technical prototype that customers can use in their daily lives. The faster a product manager iterates on their AI product, the faster they learn the boundaries of LLMs and the needs of the market. ‘Invent and Simplify’ improves speed of execution in the following ways.

Method 1: Focus on Ways of Working

As leaders, we seek ways to improve our processes so teams can meet their goals more efficiently. On AI teams in particular, a few mechanisms I have seen work successfully include design sprints (Figure 1).

Figure 1. Design Sprints 

Method 2: Define LLM Quality Metrics

Although the popular LLMs have improved dramatically in logic, reasoning, math, and writing, we still observe gaps in accuracy, hallucination rate, and user empathy. To build custom agents and models, it’s important to account for these deficiencies through technical improvements, user research, and success metrics.

In my experience, teams often rely on basic LLM evaluation and overlook the need to improve on the baseline quality of their agents and models. Useful metrics include goal success rate, turn completion rate, hallucination rate, and intent recognition accuracy, among others. Product managers can easily differentiate by tracking these baselines and making time for model improvements.
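As a minimal sketch of tracking such baselines, suppose human reviewers label each evaluated interaction; the field names (`goal_met`, `hallucinated`, `intent_correct`) below are illustrative, not a standard schema:

```python
# Hypothetical human-labeled evaluation records for an agent's interactions.
records = [
    {"goal_met": True,  "hallucinated": False, "intent_correct": True},
    {"goal_met": True,  "hallucinated": True,  "intent_correct": True},
    {"goal_met": False, "hallucinated": False, "intent_correct": False},
    {"goal_met": True,  "hallucinated": False, "intent_correct": True},
]

def rate(records, field):
    """Fraction of records where `field` is True."""
    return sum(r[field] for r in records) / len(records)

goal_success_rate  = rate(records, "goal_met")        # 0.75
hallucination_rate = rate(records, "hallucinated")    # 0.25
intent_accuracy    = rate(records, "intent_correct")  # 0.75
```

Re-computing these rates on every model or prompt change turns “quality” from a vague impression into a trend line you can manage against.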

Figure 2. Comparison of Models: Quality, Performance & Price Analysis

Method 3: Fail Fast Approach

When investing in a full build, set a short time horizon for collecting data for training models, launching the product, and aggregating metrics. After testing, discard the strategies that aren’t working and double down on what is succeeding based on data.

Some good ways to implement this include A/B testing with multiple variations of a feature, smoke testing with a feature’s UI and no backend, feature flagging to expose the feature to a subset of users, and shadow mode to collect production metrics with zero customer exposure.

Figure 3. Smoke Testing

Method 4: Master Prompt Chaining and Reasoning

Imagine if you could prototype 10 new ideas a day without having to trouble busy engineers with your requests! I recommend investing in learning prompt engineering techniques, specifically chain-of-thought reasoning. This allows you to execute more complex instructions and prototype with LLMs without needing engineering resources.
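The core pattern is simple enough to sketch: one prompt elicits step-by-step reasoning, and its output is fed into a second prompt. In the sketch below, `call_llm` is a stand-in for whatever model API you use (it is stubbed so the chaining logic runs offline), and the templates are illustrative:

```python
def call_llm(prompt: str) -> str:
    """Stub standing in for a real model API call."""
    return f"[model output for: {prompt.splitlines()[0]}]"

COT_TEMPLATE = (
    "Question: {question}\n"
    "Let's think step by step, then state the final answer."
)
SUMMARY_TEMPLATE = (
    "Reasoning trace:\n{reasoning}\n"
    "Summarize the final answer in one customer-friendly sentence."
)

def chain(question: str) -> str:
    # Step 1: elicit step-by-step reasoning (chain of thought).
    reasoning = call_llm(COT_TEMPLATE.format(question=question))
    # Step 2: feed that output into a follow-up prompt (prompt chaining).
    return call_llm(SUMMARY_TEMPLATE.format(reasoning=reasoning))
```

Swapping the stub for a real API call turns this into a working two-step prototype you can iterate on without pulling in engineering time.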

Figure 4. Prompting Techniques

Method 5: Incorporate Privacy and Security Controls

Most tech organizations have a legal department that also helps make product decisions related to privacy and security. While legal review is necessary, it is not sufficient on its own; product teams also need to seek out and align on guiding principles.

At the best product organizations I’ve worked in, legal co-creates privacy and security controls for model training, data anonymization, data leakage, responsible AI, and more. It’s a collaborative and iterative effort to ensure that customer data is treated with the highest sensitivity without sacrificing the customer experience.

Some best practices include:
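One concrete example is anonymizing customer text before it is logged or used for training. The sketch below masks email addresses and phone-like numbers with simple regexes; real pipelines should use vetted PII-detection tooling and legal-approved rules, so treat these patterns as purely illustrative:

```python
import re

# Illustrative PII patterns -- not production-grade detection.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def anonymize(text: str) -> str:
    """Replace emails and phone-like numbers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

print(anonymize("Contact jane.doe@example.com or 555-123-4567."))
# → Contact [EMAIL] or [PHONE].
```

Running redaction at the point of ingestion, before data ever reaches logs or training sets, is what keeps anonymization from becoming an after-the-fact cleanup problem.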

By implementing these practices in your AI/ML products, you’re well on your way to inventing and simplifying all the way to the bank!

Disclaimer: The views and opinions expressed in this article are those of the authors solely and do not reflect the official policy or position of any institution, employer, or organization with which the authors may be affiliated.