JUL 22, 2024

A new way to look at applied experiments

In this article, Connor Joyce, author of Bridging Intention to Impact, shares how to design effective experiments that drive real results, even with limited resources.

11 min read

In a recent piece, I shared the power of identifying a feature’s intended purpose as a means of driving alignment around what should be designed and how to measure success. Once a team has a clear understanding of what their new feature is intended to achieve, it becomes crucial to validate its effectiveness through experimentation. When a feature works, the team can replicate it in future efforts and market the impact it creates. If a feature does not perform as expected, experimentation provides the insights needed to refine and redirect efforts, ensuring that product development remains aligned with user needs and business goals.

While experimentation is valuable, many teams still struggle to conduct experiments, often because of a few misconceptions. One common belief is that every experiment must match the academic gold standard, the Randomized Controlled Trial (RCT). This leads to the false notion that simpler or less rigorous methods are inadequate. Another misconception is that experiments require a complete digital infrastructure to be valuable. While a full experimentation stack does make testing at scale easier, it is something to strive for rather than a prerequisite. Beyond these, a general lack of knowledge about how to properly set up experiments further complicates the process.

There are many great pieces describing how to set up experiments, but fewer that help teams get started today. This article, along with my new book “Bridging Intention to Impact”, aims to demystify experimentation by introducing a new gold standard for digital product experimentation. It then breaks down the seven essential components of experimentation, illustrating that not all of them need to be fully in place to conduct a valuable experiment. By providing this comprehensive guide, I hope to empower teams to design and implement experiments regardless of their current state. Doing so equips the team with shareable insights rather than just theoretical pitches, putting them on a path to make the case that the company should invest further in experimental infrastructure.

Randomized Controlled Trials (RCTs) are often considered the gold standard in experimental research due to their ability to minimize noise and extraneous variables, thus providing highly reliable results. However, in an applied setting, RCTs are rarely realistic. The extensive resources, time, and controlled environments required for RCTs make them impractical for most product teams, who operate under tight deadlines and budget constraints. Consequently, teams must break free from the notion that RCTs are the ultimate goal and instead focus on creating practical, feasible experiments that still yield valuable insights.

The ideal experiment for many teams is fully digital and easily executable. This approach allows for rapid testing and iteration, which is essential in today’s fast-paced development cycles. Companies like Airbnb exemplify this strategy, running hundreds of experiments annually by leveraging their ability to conduct fully digital tests. This agility enables them to continually optimize their offerings and stay competitive in the market. A fully digital experiment comprises several key components that collectively ensure its effectiveness and feasibility. First, digital execution allows for seamless data capture and broad participant reach. By utilizing feature management systems, teams can quickly deploy and toggle new features for different user groups, facilitating efficient and targeted testing.
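To make digital execution concrete, here is a minimal sketch of randomized assignment behind a feature flag, assuming a deterministic hash-based bucketing scheme; the experiment name, split, and assign_variant helper are illustrative, not the API of any particular feature management tool.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, treatment_share: float = 0.5) -> str:
    """Deterministically bucket a user into 'control' or 'treatment'.

    Hashing the user ID together with the experiment name gives a stable,
    roughly uniform assignment without storing any extra state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10_000  # 0..9999
    return "treatment" if bucket < treatment_share * 10_000 else "control"

# Example: toggle a hypothetical onboarding feature for the treatment group only.
show_new_onboarding = assign_variant("user-42", "new-onboarding-flow") == "treatment"
```

Because the assignment is a pure function of the user and experiment IDs, the same user always sees the same variant, keeping the measured experience consistent across sessions.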

Data infrastructure is another critical component. Robust mechanisms for collecting, storing, and analyzing data are essential for real-time analysis and informed decision-making. Advanced data science capabilities further enhance the experiment's value by applying sophisticated statistical techniques to uncover deeper insights. In addition to behavioral metrics, collecting attitudinal data through surveys and feedback tools provides a comprehensive view of the user experience. This qualitative data complements the quantitative findings, offering a more nuanced understanding of how features impact users. A well-chosen participant pool for both the behavioral and attitudinal data ensures the generalizability of the findings.
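As an illustration of the kind of analysis such an infrastructure enables, the sketch below compares conversion rates between control and treatment groups with a standard two-proportion z-test; the counts are made-up placeholders and the choice of test is an assumption, not a prescription from the article.

```python
import math

def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided z-test for a difference in conversion rates between two groups."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                     # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error of the difference
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))                   # two-sided p-value
    return z, p_value

# Placeholder numbers: 1,000 users per arm, 120 vs. 150 conversions.
z, p = two_proportion_ztest(conv_a=120, n_a=1000, conv_b=150, n_b=1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # a small p suggests the lift is unlikely to be noise
```

Survey responses (the attitudinal data) can then be summarized alongside these behavioral results to explain not just whether the metric moved, but why.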

In summary, while RCTs may be ideal in theory, the reality of applied settings necessitates a more pragmatic approach. By focusing on fully digital experiments and incorporating the essential components of data infrastructure, advanced analytics, attitudinal data collection, and diverse participant recruitment, teams can conduct effective experiments that drive evidence-based decision-making and continuous improvement.

The previous section outlined an ideal experiment, which is structured around seven primary components. These components form the foundation of a robust experimental framework. In this framework, each component is given a name and an ideal state, clarifying its role and importance and how to gauge whether a company has it established. It is crucial to remember that not all seven components are required to conduct a successful experiment. Instead, teams should evaluate their current capabilities in each category and design an experiment that is realistic and feasible given their resources and constraints. This approach ensures that even with limited resources, meaningful and actionable insights can still be obtained.

Experiments can be viewed on a spectrum ranging from high-fidelity, resource-intensive setups to simpler, more accessible methods. Understanding this spectrum allows teams to choose the experimental approach that best fits their resources and objectives. Further details on these experiment types include:

- Ideal experiments: at one end of the spectrum, characterized by fully digital execution, comprehensive data collection, and advanced analytics capabilities. These provide the most reliable and actionable insights but require significant resources and infrastructure.
- Fully retro experiments: in the middle of the spectrum, these utilize existing behavioral data to draw insights without the need for real-time data collection. This method strikes a balance between fidelity and feasibility, leveraging robust data infrastructure while minimizing the need for new data collection efforts.
- Basic moderated experiments: at the other end of the spectrum, the most accessible and least resource-intensive option. These involve direct interaction with participants in a live setting, providing valuable insights with minimal technological requirements, albeit with a limited sample size.
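To make the fully retro option concrete, here is a minimal pandas sketch comparing a retention metric between users who did and did not encounter a feature in historical logs; the file name and column names are hypothetical, and because exposure was not randomized, the result is directional evidence rather than proof.

```python
import pandas as pd

# Hypothetical export of historical event data; columns are illustrative:
# user_id, saw_new_feature (0/1), retained_30d (0/1)
events = pd.read_csv("feature_usage_log.csv")

summary = (
    events.groupby("saw_new_feature")["retained_30d"]
          .agg(users="count", retention_rate="mean")
)
print(summary)

# Caveat: exposure was not randomly assigned, so any gap may reflect selection
# bias (e.g., power users discovering the feature first) rather than the feature itself.
```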

By understanding and utilizing these alternative experiment types, teams can adapt their research strategies to fit their resource availability and still achieve meaningful, actionable insights. Evaluating where your team stands on each of the seven components also sets the stage for a strategic roadmap toward building out all of them. A solid approach is to begin experimenting with what you have today and use the insights to build support for further investment in improving the components.

Ultimately, it is up to the product and research teams to carefully assess their data needs in conjunction with the available resources to choose the most appropriate experimental approach. Whether striving for the high fidelity of fully digital, ideal experiments, leveraging existing data through fully retro experiments, or employing basic moderated experiments to gather direct user feedback, the key is to align the chosen method with the team’s objectives and constraints. By thoughtfully considering the spectrum of experimental options and understanding the trade-offs involved, teams can gather the evidence needed to drive informed decision-making, ensuring that product development remains user-focused and impactful.

Connor is a keynote speaker at #mtpcon North America. During his keynote 'AI Features Demand Evidence-Based Decisions', Connor will share the essential skills needed to lead successful teams and the traits that will help next-gen product people excel in their careers. Gain insights into tools that empower practitioners and leaders to excel in their daily work.

Don't miss out on this opportunity to learn from product leaders - buy your ticket!
