In these two talks from MTP Engage Hamburg, Cennydd Bowles and Roisi Proven share two different perspectives on ethics in product management. Cennydd discusses the role of a product manager in big tech companies, while Roisi talks about the current state of tech algorithms and the ethical implications that follow. Watch these two engaging sessions in full, or read on for Björn-Torge Schulz’s write-up of both talks.
Almost 10 years ago, when I built a product for my former employer that allowed advertisers to book and display targeted ads based on the socio-demographics of our users, I was fascinated by the endless technological possibilities we had. My goal was to achieve the highest possible click price and to maximise the click-through rate of each ad.
I never questioned whether this micro-targeting was ethically okay. Nor did I question whether our users agreed with it. The main thing was to increase my metrics.
My attitude has since changed. Thanks in part to us technologists, the world is in a state that should make us pause and reflect. It is time to build products differently and to question long-held beliefs. And so I’m searching for fresh, hands-on approaches to putting the “ethical” into product management.
Since I’m obviously not alone in this, MTP Engage Hamburg 2022 dedicated a whole slot on “Product manager responsibility” to two experts on this pressing topic. Here you can watch and read what Roisi Proven and Cennydd Bowles advise. Jump straight to Roisi’s talk below!
Cennydd Bowles – The ethical product manager
Cennydd reminded us of the idea of Techno-Utopia, back in the day when we were told – or even believed ourselves – that the endless possibilities of the internet, connecting “all” people on the planet, would bring us democracy and hierarchy-less societies. Then came the Arab Spring and we felt vindicated: “See, that’s what I meant! Look at how technology is bringing liberal democracy to every corner of the earth.”
From today’s point of view, these statements seem naive. By now, we have seen too many times how Big Tech uses its power to turn our utopian ideas into their opposite.
- Genocidal Propaganda
- An Uber self-driving car killed a pedestrian
- Facebook’s Secret Mood Manipulation Experiment
But fortunately, something is moving. I am not alone in my desire for change. The techlash is real. An ethical movement is taking shape, and according to Cennydd it has three different drivers.
The customers
Public trust in technology is at an all-time low. Only 19% of the UK population believe that companies make their products with users’ best interests in mind. The vast majority want companies to look beyond profit and have a positive impact on society.
The talent
Tech companies also feel the pressure from the inside. Tech employees hold their employers accountable for what they are doing, as in the Google Walkout. We are seeing old-school employee activism, but instead of pushing for better salaries, employees are pushing for moral change.
Also, the best talent in the tech sector today can choose whom to work for, and their choice increasingly falls on companies that behave well. CNBC reported that Facebook “has struggled to hire talent since the Cambridge Analytica scandal”.
The regulators
Furthermore, we will increasingly see new guidelines from regulators that limit what tech companies may do. In the EU and other institutions, following the privacy-oriented GDPR, further topics are on the commission’s table, such as facial recognition, recommender algorithms, and AI explainability.
So things are moving forward, Cennydd reassured us. Good news! And just as the product managers in the auditorium were about to get really cozy in their seats with this comforting thought, Cennydd shocked us by stating: “Product managers are the primary source of unethical decisions.” BAM. That hurt. Nervous chattering among the product managers present. Is he right? Well, we folks sit where ideas are turned into globally available software. We have great power, even if it doesn’t always feel like it. And if we believe what the great philosopher Spiderman once had placed in his comic-book speech bubble, then “with great power comes great responsibility”. We product managers have the responsibility to act responsibly and build responsible products. But how can we do that?
Good thing that Cennydd had not only inconvenient truths for us, but also a few suggested solutions. (Spoiler: for some of them we must forget the mantras that have been hammered into our brains for the last 10 years.) Take a seat. Here we go:
1. Rethink stakeholders
Pure user-centricity is outdated as a concept. What about people who are not our users? Don’t they matter? Airbnb may be a great service for people with property and for tourists, but is it a good service for society when it pushes up rents in your city and destroys the idea of a neighbourhood? Have you ever thought about the impact on the climate when working on a new feature or product? When my product does harm to actors other than myself, we call this an externality. Throughout the history of capitalism, certain groups of people and the environment have been ruthlessly exploited under the concept of externalities.
And if we think about our whole user base: what about the kind of user who willingly abuses our product to do harm to others? Have we thought about them and how to prevent them from using our product in a harmful way? (Check out the “Inclusive panda”.)
2. Anticipate harm
Throw away our lean software development mantras like “build, measure, learn” or “move fast and break things”. They do not work for ethical product management. These concepts tell us to consciously ignore the possible consequences of our product launches. “That works fine if the thing you are breaking is a photo upload app. It’s a very bad thing if the thing you are breaking is democracy.” Ethical anticipation needs time and space.
We always try to do research on how the world is affecting our product, but why do we do so little to forecast how our product will affect the world? The Ethical Explorer, for example, is a simple eight-card deck that can help you discover certain risks of your product before launching it. A great tool to use, even if you have not studied ethics.
3. Build an ethical muscle
Amid all the data-drivenness and KPI focus, have we forgotten to ask whether what we’re building is actually good? I have been in that place, and since then I have been trying to follow Cennydd’s advice to build an “ethical muscle”. We should make a habit of responsible thinking. Why not watch “The Social Dilemma” together with your team in your next retrospective? Use a sprint 0 to play with the Ethical Explorer card deck. Write your Code of Ethics and implement some fragments into your Definition of Ready. Start a conversation about ethics within your company!
Ethics seems like a constraint, but it can be a creative, positive force and a seed of innovation. The compassion, thoughtfulness, and honesty that you have put into thinking about and designing your product will reveal themselves to your users. What a stand-out advantage that is!
Cennydd closed with an urgent call to us product managers: we HAVE the power. If any of us has the bravery and the standing in the company, we should speak up and support the change that is happening, to steer the tech sector on a more ethical course. Because not taking ethics seriously is itself an ethical decision. Wow!
Roisi Proven – Debunking the magic of algorithms
Then Roisi Proven welcomed us to her talk about “late-stage capitalism and bananas”. Like Cennydd, Roisi did not give the tech elite good marks for its moral state.
As science fiction author Arthur C. Clarke postulated in his famous Three Laws many decades ago, “Any sufficiently advanced technology is indistinguishable from magic.” But what many consider a wise warning serves more as a North Star for overly ambitious machine learning (ML) and artificial intelligence (AI) startups on their way to selling us products with the utopian promise of magically solving everyday problems. Or would you have guessed that behind the utopian value proposition of “finding the right treatment for every patient” lies a simple technocratic tool to help decide when a patient should be discharged from hospital?
To recognize when we are dealing with “real” artificial intelligence (“Artificial General Intelligence”) and when someone just wants to sell us their system of human-made rules for very specific use cases as artificial intelligence (“Artificial Narrow Intelligence”), Roisi gave us the following example.
- ANI: A set of rules, set and trained by humans, e.g. to decide whether a picture of a banana is really a picture of a banana. Even a photo of the Bananas in Pyjamas or a brass banana can push this system to its limits. (A toy sketch of such a rule set follows below.)
- AGI: Show it a picture of a banana, ask “What’s this?”, and get various statements about the potential banana, from nutritional values to recommended ways of consuming it. For all of these, the AI needs to combine different concepts like “fruit”, “eating”, and “humans” to make valuable statements about the banana. Here we can more plausibly assume some kind of intelligence.
If you want to see Roisi’s much more charming explanation instead of my boring retelling, treat yourself to the video.
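To make the ANI side concrete, here is a toy sketch of my own (not from Roisi’s talk) of such a narrow, human-made rule set. Every name in it – the `Image` features and the `looks_like_banana` rules – is a hypothetical illustration:

```python
# A toy “banana detector” in the ANI spirit: a narrow set of
# human-made rules over hand-picked features. It has no concept of
# fruit, eating, or humans – only the features its authors chose.
from dataclasses import dataclass

@dataclass
class Image:
    dominant_color: str   # crude stand-in for real pixel analysis
    is_curved: bool
    is_metallic: bool     # present in the data, ignored by the rules

def looks_like_banana(img: Image) -> bool:
    # Encodes what the rule authors thought a banana looks like.
    return img.dominant_color == "yellow" and img.is_curved

real_banana = Image(dominant_color="yellow", is_curved=True, is_metallic=False)
brass_banana = Image(dominant_color="yellow", is_curved=True, is_metallic=True)

print(looks_like_banana(real_banana))   # True – correct
print(looks_like_banana(brass_banana))  # True – wrong: it’s an ornament
```

The system is confidently wrong outside the narrow cases its authors anticipated – which is exactly what makes it “narrow” rather than general intelligence.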
But what is the problem with using such tools to help us decide when we can discharge a patient from hospital, whether they are general artificial intelligence or not? It is the data that has been used to train these machines. “All machine learning models inherit the bias of their creators.” If in the past a hospital discharged people of color earlier than white people, and this data is used to train the machine learning model, then it will also recommend discharging people of color earlier in the future. Where is your utopian “right treatment for every patient” now?
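To illustrate the mechanism, here is a minimal sketch of my own (not from the talk), assuming scikit-learn and synthetic data that mimics the biased discharge records described above:

```python
# Minimal sketch: a model trained on biased historical records
# reproduces that bias. Synthetic data, illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Features: illness severity (0–10) and a sensitive attribute
# (1 = person of color) that should be irrelevant to discharge.
severity = rng.uniform(0, 10, n)
group = rng.integers(0, 2, n)

# Biased historical labels: at the same severity, patients of color
# were discharged early more often than white patients.
early_discharge = (severity < 4 + 2 * group).astype(int)

X = np.column_stack([severity, group])
model = LogisticRegression(max_iter=1000).fit(X, early_discharge)

# Two patients with identical severity, differing only in group:
same_severity = np.array([[5.0, 0.0], [5.0, 1.0]])
print(model.predict(same_severity))  # expected: [0 1] – the bias survives
```

Nothing in this pipeline is malicious; the model simply optimises for faithfulness to a biased history. That is why de-biasing has to happen around the data and the training, not after the fact.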
So the training data determines how an AI behaves. And Microsoft impressively demonstrated with its chatbot “Tay” how a chatbot can become racist within 24 hours if it is allowed to read enough of the internet.
What can we do if we want our AI to work in an unbiased and responsible way? Roisi gives us the following tips for the not-so-fun, expensive, and often controversial journey towards de-biased data:
- Start the conversation about how technology is not neutral per se and how biased data exists
- Be honest about the limitations and risks your data models have
- Keep humans in the loop and challenge your beliefs with people who aren’t like you
- Accept that implementing these tips will take longer than a two-week sprint.
- Follow initiatives in that space to stay up to date, e.g. Twitter’s Responsible Machine Learning Initiative (META)
Didn’t Roisi’s tips sound familiar from Cennydd’s talk? For me they did. On the day, I was very grateful that MTP Engage put this important topic on the agenda and that we got to hear two such wonderful experts give practical tips on how to make more responsible decisions as a product manager. And to put Roisi and Cennydd’s most valuable tip into action, I hereby start the conversation. Who’s in?