Current efforts to introduce ethical or moral guidance to intelligent machines are useful, and increasingly urgent. The UK-based Institute for Ethical AI and Machine Learning attempts to grapple with these issues, and has put forth several principles for those who make these products. But the discipline called Value Sensitive Design (VSD) has been investigating this matter in AI and non-AI contexts for years, and offers us some useful preparation for this moment.
As product managers, we should all be familiar with the concept of VSD. But as even casual use of most apps and websites shows, not to mention buildings and vehicles, those who manage and design products are still under-informed about how, why, and when to put human values at the center of design decisions. As a disabled person from a family of disabled people, when I ask a search engine how to enter a building, I am acutely aware of the extent to which human values have been extended to include me. The range and meaning of human values is not settled. Yet they must be addressed and defined to some extent before technology designs can incorporate them consciously. Every implementation of a technology or product reflects human values, even by default; products that were not designed with human values at the forefront reflect values such as apathy or insensitivity.
Current efforts toward ethical AI
The Institute for Ethical AI and Machine Learning has published eight principles to guide the design and implementation of intelligent machines from a human-values standpoint, and those principles are a good starting place. Many of the Institute’s injunctions, such as those urging Reproducibility and Accuracy, are important for the management of applied AI. Still, they do not cut to the heart of the human element, the element that ethical AI and product frameworks must make central.
The Institute’s principles are:
- Human Augmentation
- Bias Evaluation
- Explainability by Justification
- Reproducible Operations
- Displacement Strategy
- Practical Accuracy
- Trust by Privacy
- Data Risk Awareness
You can visit the Institute’s site for an explanation of each of these principles, but to me they do not go far enough. As a former engineer, I see them as heavily oriented toward engineering best practices; they are useful as such, but they are not sufficiently descriptive of human values.
Instructive examples
I appreciate the injunction toward bias evaluation, which is intended to interrogate whether our machine intelligences have learned human biases that we might consider detrimental, discriminatory, or even illegal. I have worked at companies that have uncovered decision-making by their machines that adopted racial, gender, and other biases because the machines were trained on data sets produced by human beings who hold those biases. For a bank, an algorithm that denies credit or loan requests based on race is breaking the law. But one that denies loan requests based on zip code, which often functions as a proxy for race, is less likely to be questioned. The complexities are many. Especially for a field this new, the paths to detection, enforcement, and accountability are not clear.
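To make the bias-evaluation idea concrete, here is a minimal sketch of a disparate-impact audit: it compares a loan model’s approval rates across groups the model never sees as inputs, using group labels held only for auditing. The function names, the toy data, and the use of the “four-fifths” rule of thumb as a red-flag threshold are my illustrative assumptions, not a procedure prescribed by the Institute.

```python
# Minimal sketch of a disparate-impact check on a loan-approval model's outputs.
# Assumes you already have the model's decisions plus audit-only group labels;
# the threshold and data below are illustrative, not prescriptive.
from collections import defaultdict

def approval_rates(decisions, groups):
    """Approval rate per group, where each decision is 1 (approve) or 0 (deny)."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        approvals[group] += decision
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratios(decisions, groups):
    """Each group's approval rate divided by the highest group's rate.
    Ratios below roughly 0.8 (the 'four-fifths' rule of thumb) are a common red flag."""
    rates = approval_rates(decisions, groups)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Toy data: the model never saw race as an input, but zip code acted as a proxy,
# so decisions still split sharply along group lines visible only to the auditor.
decisions = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0]                       # model outputs
groups    = ["A", "A", "B", "A", "B", "B", "A", "B", "A", "B"]   # audit-only labels

print(disparate_impact_ratios(decisions, groups))
# {'A': 1.0, 'B': 0.2} -> group B approved at one-fifth of group A's rate
```

Even a check this simple raises the governance questions that matter more than the code: which group labels an institution may ethically hold for auditing, and who reviews the ratios.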
Additional debates arise when we ask whether a content recommendation engine that adopts a sexist or racist person’s preferences in video or audio content is reasonably seeking greater efficacy, or whether it is actually reinforcing damaging human biases by failing to diversify its recommendations. I have led task forces at some of the world’s biggest companies that examined exactly these issues, finding no easy answers but agreeing that useful and healthy debate is progress toward better solutions.
For another perspective, we can look at the widely reported account of a U.S. Air Force drone that was instructed to remove all obstacles to completing its mission of destroying an enemy target. When, during a simulation, the drone decided that the operator’s instructions to stand down were preventing it from destroying the target, it killed the drone operator. When that behavior was forbidden, in a subsequent run the drone destroyed the communication tower relaying the operator’s stand-down orders. Officials have since contradicted the initial account from Colonel Tucker Hamilton, describing it as a hypothetical thought experiment rather than an actual simulation, but the scenario is plausible, and it illustrates that such a machine intelligence could abide by most of the principles laid out by the Institute for Ethical AI and Machine Learning and still kill its controller.
VSD thinking from academia
The 2019 book by Friedman and Hendry from MIT Press, Value Sensitive Design, is one of the central texts in the field; its scope extends beyond AI and machine learning but includes them. The book’s subtitle, Shaping Technology with Moral Imagination, states its thesis concisely. VSD is not solely about adding value for the stakeholders and users involved, or even about ensuring access for disabled people or protecting human life, for instance. According to a 2006 paper by Friedman and her colleagues, Value Sensitive Design attempts to center “what is important to people in their lives, with a focus on ethics and morality.”
Quite clearly, what one person considers ethical and moral can differ drastically from what others believe. For Elon Musk, his own definition of free speech overrides some of the values important to critics of his management of Twitter. I propose that while universal agreement on ethics and morality is impossible, it is possible to consciously define the ethical and moral foundation of any technology or product to the satisfaction of its creators.
Any product’s creators can then state publicly the ethics and morality behind its value-sensitive design, and that product’s stakeholders and users can decide whether to use it. If Musk’s design is led by values that advertisers and users agree with, then Twitter will flourish with those people. The new concept I seek to introduce is that publicly stating the values behind a design will lead to a better-informed group of stakeholders and users, and potentially to a new level of transparency akin to the mandate that food items display their ingredients on the package.
Whether a product abides by its stated ethical values can and should be debated, but a movement toward published statements of that kind would push tech companies in particular toward greater public accountability. Good design is not just value-sensitive design, but value-led design. And such designs should make their ethical and moral frameworks public, or face scrutiny by stakeholders and users.
Friedman and Hendry’s principles for Value Sensitive Design are listed in their MIT Press book, where an explanation of each can be found.
- Human Welfare
- Ownership and Property
- Privacy
- Freedom from Bias
- Universal Usability
- Trust
- Autonomy
- Informed Consent
- Accountability
- Courtesy
- Identity
- Calmness
- Environmental Sustainability
VSD applied to system design
I admire the boldness of these principles in their effort to go beyond engineering best practices and to tackle questions of human ethics and morality. But a person seriously concerned with climate change might consider the principle calling for calmness anathema to the urgency, even anxiety, that many activists regard as mandatory given the state of the environment. My aim is not to catalog contradictions, assumptions, and ambiguities in these principles, but to further a vigorous debate with them as a starting place.
For a dating app, a profile recommendation engine that abides by the book’s Freedom from Bias injunction might lead to a stated VSD principle such as “We believe in the rights of individuals to choose to date anyone they want on whatever criteria they choose, but our machine learning models are instructed to diversify recommendations where possible.” This raises the question of whether race, gender, sexual preference, and other criteria should be made available to machine intelligences that recommend content, dating matches, or anything else. We have seen that machines can learn human biases without explicit awareness of those criteria, using proxies for decision-making based on race, for example. This occurs when models segment users by tendencies and preferences for which no explicit data category exists, which can nonetheless lead the machine intelligence to recommend only racially homogeneous profiles or TV shows to racially homogeneous groups, without any awareness of the racial (or gender, or other) identifications involved. A sketch of what diversification could mean mechanically follows.
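As one hedged illustration of “instructed to diversify recommendations where possible,” the following greedy re-ranking step trades each candidate profile’s relevance score against its similarity to profiles already selected, loosely in the spirit of maximal-marginal-relevance re-ranking. The feature vectors, the trade_off weight, and the cosine similarity measure are illustrative assumptions, not any product’s actual implementation.

```python
# Minimal sketch: greedily re-rank candidate profiles so that each pick
# balances predicted relevance against similarity to profiles already chosen.
# Vectors, weights, and the similarity measure are illustrative assumptions.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def diversify(candidates, k, trade_off=0.7):
    """candidates: list of (profile_id, relevance_score, feature_vector).
    trade_off near 1.0 favors raw relevance; near 0.0 favors diversity."""
    selected, remaining = [], list(candidates)
    while remaining and len(selected) < k:
        def marginal(item):
            _, relevance, vector = item
            max_sim = max((cosine(vector, v) for _, _, v in selected), default=0.0)
            return trade_off * relevance - (1 - trade_off) * max_sim
        best = max(remaining, key=marginal)
        selected.append(best)
        remaining.remove(best)
    return [pid for pid, _, _ in selected]

# Toy usage: three near-identical high-scoring profiles and one different one.
candidates = [
    ("p1", 0.95, [1.0, 0.0]),
    ("p2", 0.94, [1.0, 0.1]),
    ("p3", 0.93, [0.9, 0.1]),
    ("p4", 0.80, [0.0, 1.0]),
]
print(diversify(candidates, k=2))  # likely ['p1', 'p4'] rather than ['p1', 'p2']
```

The point of the trade_off knob is that it makes the value judgment explicit and therefore publishable: a team can state how far it is willing to trade raw engagement for diversity, which is exactly the kind of statement I am arguing should be public.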
All debates and public statements of this kind require careful consideration and agreement among the leaders and stakeholders of these product teams. But my call is for all product and engineering leaders to openly debate the principles that should apply in a value-sensitive design and, when groups of those leaders agree on them for a particular machine intelligence or consumer product, to publish those principles for the information and welfare of their stakeholders and users. That alone would lead to a revolution in product design and AI safety.