Artificial Intelligence (AI) & Machine Learning (ML)
NOV 17, 2017

Product Design in the Era of the Algorithm by Josh Clark

Machine learning has taken over huge parts of our world, from diagnosis of medical conditions to legal queries to beating human players in Go. How does this affect how we design, build, and manage products? In this insightful talk from Mind the Product London 2017, Josh Clark shares how we need to think about product design and product management in the era of the algorithm.

Rather than trying to design a single route through an experience or product, when working with machines it’s about working out all the possible outcomes and how to handle them. Product managers will be key in this process if we hope to create products that have a positive impact on people’s lives, rather than simply embed the injustices of the past.

At this point algorithms still make too many mistakes, and this holds up their widespread adoption. Voice commands, for example, are one area with many bugs – the machines struggle to pick up the nuances communicated in most languages. Microsoft’s picdescbot can describe what is in an image, but it still struggles to tell many types of imagery apart. This is not going to last long, though.

So how do we design products in a world of algorithms?

Embrace Uncertainty

Google’s snippet tool gives a direct answer to the question posed – for about 15% of searches. When it works it’s great, but there are many instances where it gets it wrong. This can be especially risky when associated with news events and politics, given the amount of controversial information on the web.

As with many of these tools, only a single answer is typically presented – which by default strips out nuance. We have to design interfaces to show some humility when they are not sure of the answer. Most of the image-analysis tools now available produce a confidence score, but it is rarely surfaced in their outputs. This needs to change if they are to become truly useful – when they are confused, let them say so.
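This idea of algorithmic humility can be sketched in a few lines. A hypothetical example – the `(label, confidence)` output shape, the labels, and the threshold are all invented for illustration, not any real API: when the top prediction’s confidence falls below a threshold, the interface says so instead of bluffing.

```python
def describe(predictions, threshold=0.75):
    """Turn (label, confidence) pairs from an image classifier
    into a description that admits uncertainty.

    predictions: list of (label, confidence) tuples, confidence in [0, 1].
    """
    if not predictions:
        return "I can't tell what's in this image."
    # Take the model's best guess.
    label, confidence = max(predictions, key=lambda p: p[1])
    if confidence >= threshold:
        return f"This looks like {label}."
    # Below the threshold, hedge rather than answer with false authority.
    return f"I'm not sure - it might be {label} ({confidence:.0%} confident)."
```

The threshold itself is a product decision: set it too high and the product rarely answers at all, set it too low and it bluffs.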

Following on from this, transparency and collaboration between humans and machines are where really interesting work can happen. When an algorithm fails, that is the point at which the machine should ask for human input to improve its results. Wikipedia, for example, flags a number of its pages as ‘Disputed’ so that the reader knows to proceed with an extra-cautious eye.
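One way to frame that human–machine collaboration is a review queue: confident predictions ship directly, uncertain ones are flagged for a person, and the human’s answer is fed back as training data. A minimal sketch, with invented names and an illustrative threshold:

```python
REVIEW_THRESHOLD = 0.6  # illustrative cut-off, tuned per product


def triage(item, label, confidence, review_queue):
    """Return the label if the model is confident enough; otherwise
    flag the item for human review (like Wikipedia's 'Disputed'
    banner) and return None."""
    if confidence >= REVIEW_THRESHOLD:
        return label
    review_queue.append((item, label, confidence))
    return None


def record_correction(item, human_label, training_data):
    """Feed the human's answer back in as a new training example,
    so the next model version improves where this one failed."""
    training_data.append((item, human_label))
```
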

Machines Only Know What we Tell Them

Garbage in equals garbage out. The goal of machine learning is often to decide what’s normal, and to point out when things deviate from it. The problem is that machines learn from the existing situation, which is often far from perfect. For example, Google’s speech recognition is worse at recognising a woman’s voice than a man’s because of the data it was originally fed. It can get much worse: there are examples from across the industry of automated systems that don’t recognise people with dark skin or non-Caucasian eyes.
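One concrete way to catch this kind of skew is to audit accuracy per demographic group rather than only in aggregate. A hypothetical sketch – the grouping labels and records are invented for illustration:

```python
from collections import defaultdict


def accuracy_by_group(results):
    """Compute per-group accuracy from (group, predicted, actual) records.

    A single overall accuracy number can hide a model that fails badly
    for one group; breaking it down makes the skew visible.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in results:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}
```
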

We must be cognisant of the fact that we could easily code our historical biases into the machines of the future. People who have been persecuted in the past are not outliers, they must be integrated into the fabric of our societies and we can help make that happen with technology.

Responsible Data Collection

Data input is actually UX research at an unprecedented scale. Design teams need more diversity so that we can better interpret these inputs and allow our products to meet the needs of the range of perspectives we now design for.

We need to make it easy for our users to contribute accurate data to our products. Tinder and Facebook have done this, because of the inherent motivations of their users. We need to see how else we can get our users to help us improve their experiences – but it has to be done transparently so that everyone understands and agrees to how that data is being used.

Be Loyal to Users

Finally, be loyal to your users. We are going to be designing products that impact people’s experiences, lives and perhaps even human rights. We need to take this seriously and apply our human decency to product decisions – so that we can be kind to each other in the tools and experiences that we make.
