AI governance: an update for product managers

From executive orders to summits on safety, politicians around the world are trying to get to grips with the realities of AI governance. In light of some of the latest initiatives, here’s a look at what’s currently on the table.

While moves to legislate for artificial intelligence (AI) vary around the world, governments agree on the opportunities and risks the technology presents, as well as the need to police it.

Governments are converging on common approaches in areas like ethics, regulatory frameworks, data protection, accountability and intellectual property. No one wants a repeat of what has happened with social media. As Mind the Product’s managing director Emily Tate comments: “Governments took the ‘wait and see’ approach with social media, and while a lot of good has come from social media, there has also been a ton of harm. There’s a feeling that once the harm is out there, it’s hard to pull it back. To my knowledge, in the US at least, there hasn’t been a single meaningful piece of legislation that deals with social media.”

She adds: “Even the fact that people are looking at the harm of AI at this point means it’s a very different story from what has happened with social media. Nobody thought about the downsides of social media until it had happened. So that awareness sets us on a better path to be able to use AI responsibly, and find ways to minimise the potential harm.”

However, there’s been a flurry of political activity around AI in the last few weeks, so are we any further forward? What developments do product managers need to be aware of?

In the US

At the end of October, US President Joe Biden issued an executive order (EO) on safe, secure and trustworthy AI, which essentially sets rules for companies to adhere to and creates guardrails for consumer privacy, civil rights and safety. It also calls for the US to lead the way in making sure that artificial intelligence is developed safely, securely and responsibly.

It’s billed as wide-ranging and ambitious, with the White House calling it “the strongest set of actions any government in the world has ever taken on AI safety, security, and trust”, and Biden saying “I’m determined to do everything in my power to promote and demand responsible innovation”.

In the broadest terms, the order requires developers building AI models that might pose a risk to national security to share the results of their safety tests with the US government before release and directs agencies to set standards for this testing. It also contains initiatives aimed at enabling the US to attract talent with AI expertise, such as reducing barriers to immigration for non-US citizens working in the sector.

Vox has produced a helpful overview and commentary on the executive order, President Biden’s new plan to regulate AI, saying that “the general response seems to be cautious optimism, with the recognition that the order has limits and is only a start”. The article adds: “Microsoft president Brad Smith called it ‘another critical step forward’, while the digital rights advocacy group Fight for the Future said in a statement that it was a ‘positive step’, but that it was waiting to see if and how agencies carried the mandates out.”

An article in Forbes also notes that tech companies are largely happy with the executive order, with leaders of AI startups applauding the approach, although some cautioned that it must not develop into a regulatory framework that entrenches the power of the larger players.

In the UK

The UK government hosted an AI safety summit at Bletchley Park a couple of days after the US EO announcement, with all 28 countries at the summit agreeing that artificial intelligence poses a potentially catastrophic risk to humanity. They signed the Bletchley Declaration, a document that sets out the risks and opportunities of frontier AI and the need for global action and cooperation, and agreed to collaborate on AI safety research. There was also agreement to hold similar events in future, the first a virtual meeting in six months’ time co-hosted by South Korea and the UK, followed by another in-person summit in France in a year’s time.

The conference was attended by politicians and the tech elite, including Elon Musk, former UK Deputy Prime Minister Nick Clegg, now Meta’s President of Global Affairs, DeepMind CEO Demis Hassabis and OpenAI CEO Sam Altman. Reaction has been mixed, as this article in the Guardian relates, with experts split between cautious optimism and pessimism.

Domestically, the UK government has shied away from legislating on AI, and has so far opted not to give responsibility for AI governance to a new regulator. Earlier this year it published a white paper setting out five principles for companies to follow, with guidelines for responsible use, and called on existing UK regulators to work out their own approaches to how AI should be used within their sectors.

In the European Union

Legislation from the EU on artificial intelligence has been in the works for about four years. In the summer of 2023 there was a flurry of press coverage when the European Parliament approved the EU’s Artificial Intelligence Act, and the final version of the act should be published by the end of the year. It’s the world’s first concrete initiative for regulating AI. As this article from Ernst & Young, The EU AI Act: What it means for your business, comments: “It aims to turn Europe into a global hub for trustworthy AI by laying down harmonised rules governing the development, marketing, and use of AI in the EU. The AI Act aims to ensure that AI systems in the EU are safe and respect fundamental rights and values. Moreover, its objectives are to foster investment and innovation in AI, enhance governance and enforcement, and encourage a single EU market for AI.”

Or will it? This article from Sifted, What is going on with the EU AI Act?, points out that policymakers are locked in negotiations that need to be resolved in the next few weeks, or else the final adoption of the law may have to wait until after the EU elections in mid-2024. According to Sifted, the sticking points include how the EU should regulate startups that produce AI foundation models.

The problem for product managers

Emily says: “As a product manager you have to determine what you do with all this. I know of products where the AI features have been turned off in the EU in light of the legislation that is potentially coming. It forces you to make decisions like, ‘Do we use AI for some of these features at all? Or if we have to have a different version for the EU, what does that do to the EU product? How do we promote and sell it to our EU customers?’”

Emily says that product managers need to get together with relevant colleagues to decide what the political attention on AI will mean for their business. She likens it to the way businesses have had to deal with GDPR, where some have decided to treat all their customers the same, some have segmented their customer base and given them different versions of a product, and some have simply denied access to customers in certain parts of the world. She concludes: “There is the possibility for a lot of positive stuff with AI. And I am hopeful that we’ll be able to find a balance where we can at least see the harm coming, and minimise the worst of it.”
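
To make the segmentation option concrete, here is a minimal sketch of what region-based gating of AI features might look like. It is purely illustrative: the `Region` type, the `isAiFeatureEnabled` function and the policy values are hypothetical, and the right policy for any given product is a legal and compliance decision, not a coding one.

```typescript
// Hypothetical sketch: gating AI features by the user's region.
// Regions, feature names and policy values are illustrative only.

type Region = "EU" | "UK" | "US" | "OTHER";

interface AiFeaturePolicy {
  aiSummaries: boolean;        // e.g. an LLM-generated summary feature
  aiRecommendations: boolean;  // e.g. AI-driven recommendations
}

// One possible policy table, mirroring the "different version for the EU"
// approach described above: AI features held back in the EU pending the AI Act.
const POLICY_BY_REGION: Record<Region, AiFeaturePolicy> = {
  EU:    { aiSummaries: false, aiRecommendations: false },
  UK:    { aiSummaries: true,  aiRecommendations: true },
  US:    { aiSummaries: true,  aiRecommendations: true },
  OTHER: { aiSummaries: true,  aiRecommendations: false },
};

function isAiFeatureEnabled(region: Region, feature: keyof AiFeaturePolicy): boolean {
  return POLICY_BY_REGION[region][feature];
}

// Usage: decide at request time which experience to serve.
const region: Region = "EU"; // in practice, derived from the account or request
if (isAiFeatureEnabled(region, "aiSummaries")) {
  console.log("Render the AI-powered summary");
} else {
  console.log("Fall back to the non-AI experience");
}
```

In practice a table like this would more likely live in a feature-flag service than in source code, so the policy can change without a release as the regulatory picture shifts.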

Further resources