• “Ethics” and “AI”: Can We Use These Terms to Take Effective Action?

    What do people actually mean when they talk about “ethics” and “AI”? In most public discussions these terms have become a Rorschach inkblot test: people apply their own experiences and worldviews to a vast array of technologies, invoking specific examples of AI that range from self-driving cars and factory automation to facial recognition, court sentencing algorithms, and internet content moderation systems. People bring their pet issues to the table, often adding some sci-fi fantasies to the mix. The result is conversations that are interesting but so broad that they leave little time, and offer no solid framework, for breaking down problems and working out solutions.

    I run a research program called Ranking Digital Rights that works to develop and promote global standards for how internet platforms and telecommunications companies should protect and respect users’ human rights. While the terms “ethics” and “AI” may be useful to scholars or policymakers in other contexts, we have stopped using them in our own work because we find them unhelpful. We need to use much more specific words when talking to companies about how they should take responsibility for identifying and mitigating violations of users’ human rights caused by the deployment of automation, machine learning, and algorithmic systems.

    “Ethics” as a concept lacks consensus around priorities and values. Can a patriarchal authoritarian state be “ethical”? Can a Chinese state-run company engage in “ethical AI”? Some would argue yes, others vehemently no. To highlight the problem of ethical relativism in the most extreme fashion, three academics presented a paper at an academic conference in Glasgow last year titled “A Mulching Proposal: Analysing and Improving an Algorithmic System for Turning the Elderly into High-Nutrient Slurry.” The paper was satire, but its point was serious: the academic literature on “ethical” AI systems could be used to support a plan for the mulching of elderly people in poor health.

    More seriously, academics have cautioned against “ethics-shopping,” “ethics-washing,” and the development of vague ethical principles as an effort by companies to escape regulation and accountability. Others have accused industry of manipulating academia by funding “ethical AI” research programs that focus on self-regulatory standards in order to avoid “legally enforceable restrictions of controversial technologies.”

    The Council of Europe and Amnesty International are examples of intergovernmental and civil society organizations that use international human rights standards, not “ethics,” as their frame for identifying problems and proposing solutions. The human rights frame enables clear identification of the harms that occur. It also enables clear articulation of how governments and companies are responsible for protecting people against such harms, responsibilities that are well defined by international human rights law.

    In applying the human rights frame to break down problems, harms, and solutions, my colleagues have also concluded that the term “artificial intelligence” is too general and vague to be useful in our work. We need more exact words when making specific recommendations to companies about what they should or shouldn’t do, or to governments about what types of behavior they ought to regulate and how. We need to use words that describe actual technical processes and functions related to specific use cases. Talking about facial recognition systems? Then use “facial recognition.” Talking about the application of algorithms and machine learning to moderate and shape content on platforms, or to develop detailed profiles of users so they can be targeted by advertisers? Then name what you are talking about.

    For the past year my colleagues have been working to develop and test new research indicators that can be used to hold internet platforms (like Google and Facebook) and telecommunications companies (like AT&T or Verizon) accountable for their development and use of algorithms, machine learning, and automated decision-making. Evaluating Alphabet’s policies for all types of AI across all categories of products and services involves a jumble of different problems, harms, and technologies, from self-driving cars to content moderation. Specifying the type of AI process and the use case we are focusing on enables us to break down the problems and solutions, and to have detailed conversations with companies about what types of policies and disclosures they need to put in place in order to understand and mitigate harms caused by algorithmic decision-making systems.

    Next month we will publish what we learned from evaluating companies against our draft indicators and then using the results as the basis for very concrete conversations with companies about what they might be able (and willing) to do differently, and where regulation might be needed to impose standards. Even though our work addresses the human rights implications of AI, we rarely if ever use the term. We never use the term “ethics,” relying instead on clearly defined human rights standards as our normative framework.

     

    Rebecca MacKinnon directs the Ranking Digital Rights project at New America. A 2019-2020 University of California Free Speech and Civic Engagement Fellow, she is the author of Consent of the Networked (Basic Books, 2012) and co-founder of the citizen media network Global Voices.