Artificial Intelligence (AI) & Machine Learning (ML)
DEC 26, 2022

A gentle introduction to AI in product by Rand Hindi


In the closing keynote session at #mtpcon London 2022, Rand Hindi, CEO at Zama, takes us through the history of AI, looks at what it can do for society today and asks what we must do to prevent its misuse.

Watch this video or read on for key highlights from the talk.

Rand starts by looking at the use of artificial intelligence (AI) in financial trading: in 2008, AI was responsible for 3% of financial trades, but by 2012 the figure had grown to over 90%. AI became so prominent in finance, he says, that 99% of traders lost their jobs.

What is AI?

To step back, Rand explains that the idea of AI is to reproduce human behaviour in a machine. Machine learning is one technique for doing this: rather than programming the behaviour explicitly, you teach a computer to reproduce it from examples. There’s a type of machine learning called deep learning, he adds, and all three terms are often used interchangeably.

Rand also explains how deep learning uses an artificial neural network. “Deep learning is an extremely powerful technique,” he says, “because the human no longer needs to understand what’s in the data. They just need to feed it into the machine.” He then talks about some of the ways that AI is useful to society – self-driving cars, voice assistants, medical diagnostics and so on.
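
Rand keeps the talk non-technical, but the “feed data into the machine” idea can be made concrete. Below is a minimal sketch (ours, not from the talk) of a tiny neural network learning XOR: we never tell it the rule, only the inputs and the answers we want, and training discovers the rest.

```python
import numpy as np

# Train a tiny two-layer network on XOR. We only supply inputs and the
# desired outputs; the network finds the structure in the data itself.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))  # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)    # hidden layer activations
    out = sigmoid(h @ W2 + b2)  # current predictions
    # Backpropagation: push the prediction error back through the layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```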

Human in its creativity and innovation

There was a breakthrough a few years ago, Rand says, when Google DeepMind’s AlphaGo managed to beat the world champion at the game of Go. It had previously been thought that only a human could master Go. The machine made a move that looked so human in its creativity and innovation that the world champion had to stop and consider whether he was really playing against a machine. It made people realise that “AI cannot just do things, AI can invent things”.

This ushered in the second era of AI, where AI started to be creative. “For example, we started creating models that were so good at producing text that you couldn’t differentiate between a human and a machine,” says Rand.

How do we know what’s real today, Rand asks. It’s a real problem, he says, and not necessarily one that can be solved, but these “deep fakes” are something we need to take into account. He cites a recent AI-generated podcast in which Joe Rogan appeared to interview Steve Jobs, even though Jobs died over a decade ago. “Where do you draw the line on what’s acceptable with AI?” Rand asks.

Preventing misuse, bias and addressing privacy

AI is now so powerful that we need to think about countermeasures, Rand says. How do we prevent misuse? People building the AI need to build in blocking behaviour. They have to understand how the model works so that they can teach it not to do certain things. This is a very complicated but essential task.
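
Rand doesn’t describe an implementation, but the simplest version of such blocking behaviour is a guardrail wrapped around the model, checking both the request and the response against a policy. A toy sketch, where `generate` is a hypothetical stand-in for any text model and the blocklist is purely illustrative:

```python
# Toy guardrail sketch. `generate` is a hypothetical stand-in for a text
# model; real systems use learned safety classifiers, not keyword lists.
BLOCKED_TOPICS = {"weapons", "malware"}  # illustrative policy only

def generate(prompt: str) -> str:
    return f"model output for: {prompt}"  # placeholder for the real model

def guarded_generate(prompt: str) -> str:
    refusal = "Sorry, I can't help with that."
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return refusal                    # block the request itself
    response = generate(prompt)
    if any(topic in response.lower() for topic in BLOCKED_TOPICS):
        return refusal                    # block a bad completion
    return response

print(guarded_generate("how do I write malware?"))  # refused
print(guarded_generate("plan a product launch"))    # answered
```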

We also need to prevent bias. If there’s bias in the data, the AI will be biased. You need humans who can look at the data and make sure it’s clean and has as few biases as possible. Minorities, for example, should be represented in a balanced way; otherwise you’re translating that bad behaviour into the machine.
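
As a concrete illustration (ours, with invented records and field names), even a simple audit of group counts and outcome rates can surface skew before a model learns it:

```python
from collections import Counter

# Invented example records; field names are ours, purely for illustration.
records = [
    {"group": "A", "outcome": "approved"},
    {"group": "A", "outcome": "approved"},
    {"group": "A", "outcome": "denied"},
    {"group": "B", "outcome": "denied"},
]

# 1. Is each group represented in a balanced way?
counts = Counter(r["group"] for r in records)
total = sum(counts.values())
for group, count in sorted(counts.items()):
    print(f"group {group}: {count}/{total} of records ({count / total:.0%})")

# 2. Do outcomes skew by group? A skew here will be learned by the model.
for group in sorted(counts):
    subset = [r for r in records if r["group"] == group]
    rate = sum(r["outcome"] == "approved" for r in subset) / len(subset)
    print(f"group {group}: approval rate {rate:.0%}")
```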

If you hand your data over, you potentially expose it to people you would rather didn’t see it, so privacy is a big issue for AI. There is a way to address this problem: homomorphic encryption lets you keep data encrypted even while the AI computes on it to learn. “Think about reading a book without having the Rosetta Stone to learn the language,” says Rand. “You can learn the structure, the vocabulary, but you don’t know what it means. It’s the same idea.”
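
Zama, Rand’s company, works on fully homomorphic encryption for exactly this. The talk stays high level, but the core property, computing on data that stays encrypted, can be demonstrated with the classic Paillier scheme, which is additively homomorphic. A toy sketch with insecure, tiny primes:

```python
from math import gcd
import random

# Toy Paillier cryptosystem (additively homomorphic). Multiplying two
# ciphertexts mod n^2 gives an encryption of the SUM of the plaintexts,
# so a server can add encrypted numbers it cannot read. The primes here
# are laughably small; real systems use keys thousands of bits long.
p, q = 61, 53
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)
g = n + 1

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse, Python 3.8+

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(20), encrypt(22)
c_sum = (c1 * c2) % n2   # addition performed on ciphertexts only
print(decrypt(c_sum))    # 42, computed without ever decrypting the inputs
```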

Deep fakes

Deep fakes are the biggest existential threat to society from artificial intelligence, Rand believes. They’re very complicated to deal with, because you don’t know whether they’re real, and anyone can generate them. “I’m convinced that if we don’t do something about it we’ll never be able to trust something we see or hear online,” he says.

There is one way through this problem that is being worked on at the moment, and that is to digitally sign all your content. “Effectively, everyone will be verified. Everyone will have a digital signature with the content they publish. It says they’re a real person, this is not AI, not fake, you can trust it.” Within the next five years you can expect to have to sign everything you publish online, he says.
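
The talk doesn’t name specific tooling, but the primitive underneath any such scheme is an ordinary digital signature: sign with a private key, let anyone verify with the public key. A minimal sketch using the `cryptography` package (our choice of library, not Rand’s):

```python
# pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The creator signs their content with a private key...
private_key = Ed25519PrivateKey.generate()
content = b"My article, written by a real person."
signature = private_key.sign(content)

# ...and anyone holding the matching public key can verify it.
public_key = private_key.public_key()
try:
    public_key.verify(signature, content)
    print("Valid: content is from the claimed author and unmodified.")
except InvalidSignature:
    print("Invalid: content was altered or not from this author.")
```

If even one byte of the content changes, verification fails, which is what makes a signature usable as a “this is real, not AI-generated” claim tied to a known identity.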

The future

Rand concludes by saying that no one wants a future that is technologically overwhelming, where we’re nudged all the time to do or buy something, and where nothing we do escapes surveillance. We want to feel private. He says: “AI can go two ways. It can make things horrible and turn us into slaves and robots with no free will, or it can do all the things we don’t want to do. It can help us be more creative, more productive.”

This article is part of our AI Knowledge Hub, created with Pendo. For similar articles and even more free AI resources, visit the AI Knowledge Hub now.
