There’s No Such Thing As ‘Ethical A.I.’
Technologists believe the ethical challenges of A.I. can be solved with code, but the challenges are far more complex
Artificial intelligence should treat all people fairly, empower everyone, perform reliably and safely, be understandable, be secure and respect privacy, and have algorithmic accountability. It should be aligned with existing human values, be explainable, be fair, and respect user data rights. It should be used for socially beneficial purposes, and always remain under meaningful human control. Got that? Good.
These are some of the high-level headings under which Microsoft, IBM, and Google-owned DeepMind respectively set out their ethical principles for the development and deployment of A.I. They’re also, pretty much by definition, A Good Thing. Anything that insists upon technology’s weighty real-world repercussions — and its creators’ responsibilities towards these — is surely welcome in an age when automated systems are implicated in every facet of human existence.
And yet, when it comes to the ways in which A.I. codes of ethics are discussed, a troubling tendency is at work even as the world wakes up to the field’s significance. This is the belief that A.I. codes are recipes for automating ethics itself; and that once a broad consensus around such codes has been achieved, the problem of steering computer code in an ethically positive direction will be well on its way to being solved.
There’s no such thing as a single set of ethical principles that can be rationally justified in a way that every rational being will agree to.
What’s wrong with this view? To quote an article in Nature Machine Intelligence from September 2019, while there is “a global convergence emerging around five ethical principles (transparency, justice and fairness, nonmaleficence, responsibility, and privacy),” what precisely these principles mean is quite another matter. There remains “substantive divergence in relation to how these principles are interpreted, why they are deemed important, what issue, domain, or actors they pertain to, and how they should be implemented.” Ethical codes, in other words, are much less like computer code than their creators might wish. They are not so much sets of instructions as aspirations, couched in terms that beg more questions than they answer.
This problem isn’t going to go away, largely because there’s no such thing as a single set of ethical principles that can be rationally justified in a way that every rational being will agree to. Depending upon your priorities, your ethical views will inevitably be incompatible with those of some other people in a manner no amount of reasoning will resolve. Believers in a strong central state will find little common ground with libertarians; advocates of radical redistribution will never agree with defenders of private property; relativists won’t suddenly persuade religious fundamentalists that they’re being silly. Who, then, gets to say what an optimal balance between privacy and security looks like — or what’s meant by a socially beneficial purpose? And if we can’t agree on this among ourselves, how can we teach a machine to embody “human” values?
In their different ways, most existing A.I. ethical codes acknowledge this. DeepMind puts the problem up front, stating that “collaboration, diversity of thought, and meaningful public engagement are key if we are to develop and apply A.I. for maximum benefit,” and that “different groups of people hold different values, meaning it is difficult to agree on universal principles.” This is laudably frank, as far as it goes. But I would argue that there’s something missing from this approach that needs to be made explicit before the debate can move where it must go — into a zone, not coincidentally, uncomfortable for many tech giants.
This is the fact that there is no such thing as ethical A.I., any more than there’s a single set of instructions spelling out how to be good — and that our current fascinated focus on the “inside” of automated processes only takes us further away from the contested human contexts within which values and consequences actually exist. As the author and technologist David Weinberger puts it in his recent book, Everyday Chaos: Technology, Complexity, and How We’re Thriving in a New World of Possibility, “insisting that A.I. systems be explicable sounds great, but it distracts us from the harder and far more important question: What exactly do we want from these systems?” When it comes to technology, responsibilities and intentions alike lie outside the system itself.
At best, then, an ethical code describes debates that must begin and end elsewhere, about what a society should value, defend, and believe in. And the moment any code starts to be treated as a recipe for inherently ethical machines — as a solution to a known problem, rather than an attempt at diagnosis — it risks becoming at best a category error, and at worst a culpable act of distraction and evasion.
Indeed, one of the most obvious and urgent ethical failings of the present moment is a persistent overclaiming for and mystification of technology’s capabilities — a form of magical thinking suggesting that the values and purposes of those creating new technologies shouldn’t be subject to scrutiny in familiar terms. The gig economy, the human cloud, the sharing economy — the world of big tech is awash with terms connoting a combination of novelty and inevitability that brooks no dissent. Substitute phrases like “insecure temporary employment,” “cheap outsourced labor,” and “largely unregulated online rentals” for the above, and different possibilities for ethical engagement start to become clear.
Lest we forget, we already know what many of the world’s most powerful automated systems want, in the sense of the ends they are directed towards: the enhancement of shareholder value for companies like Google, Amazon, and Facebook, and the empowerment of technocratic totalitarian states such as China. Any meaningful discussion of these systems demands a clear-eyed attentiveness to the objectives they are pursuing and the lived consequences of these. The challenge, in other words, is primarily political and social, not technological.
As the author Evgeny Morozov argued in a recent Guardian piece exploring fake news (another turn of phrase that conceals as much as it reveals), any discussion of technology that doesn’t explicitly engage with its political economy — with the economic, political, and social circumstances of its manufacture and maintenance — is one effectively denuded of the questions that matter most.
At best, an ethical code describes debates that must begin and end elsewhere, about what a society should value, defend, and believe in.
“What,” Morozov asks, “drives and shapes all that technology around us?” If we cannot open up such questions for democratic debate, then we risk turning “technology” into little more than a “euphemism for a class of uber-human technologists and scientists, who, in their spare time, are ostensibly saving the world, mostly by inventing new apps and products.”
Perhaps the most telling myth of our time is that of machine superintelligence, the promise of which simultaneously turns A.I. ethics into a grapple with existential threats and a design process aimed at banishing human unreason, at outsourcing society’s greatest questions to putative superhuman entities in the form of A.I. (and, presumably, the experts who tend and optimize them).
Even the most benign version of this scenario feels nothing like a world I would wish to live in. Rather, give me the capacity to contest passionately the applications and priorities of superhuman systems, and the masters they serve; and ethical codes that aim not just at a framework for interrogating an A.I.’s purposes, but at questioning the circumstances and necessity of its very existence.