The Future of Social Ordering: Talking to Tim Wu

    When might future courts operate more like self-driving cars, and when like auto-piloted planes? When might future legal proceedings still require human attorneys and firm handshakes? When I want to ask such questions, I pose them to Tim Wu. This conversation focuses on Wu’s recent Columbia Law Review article “Will Artificial Intelligence Eat the Law? The Rise of Hybrid Social-Ordering Systems.” Wu is the Julius Silver Professor of Law, Science and Technology at Columbia Law School. His publications include “Network Neutrality, Broadband Discrimination” (2003), Who Controls the Internet? (2006), The Master Switch (2010), The Attention Merchants (2016), and The Curse of Bigness (2018). Wu clerked for Justice Stephen Breyer and Judge Richard Posner, and has worked at the White House National Economic Council, at the Federal Trade Commission, for the New York Attorney General, and in the Silicon Valley telecommunications industry. Wu writes widely for the popular press and is currently a contributing opinion writer for the New York Times. He has been named to the Politico 50 list of those transforming American politics, and was named one of America’s 100 most influential lawyers by the National Law Journal.

    ¤

    ANDY FITCH: To first make concrete this paper’s claim that hybrid human-machine systems represent “the predictable future of legal adjudication,” could you sketch the contrasting models of self-driving-car and auto-piloted-plane technologies, and point to why the latter analogy seems more apropos for augmented judicial decision-making? And could you outline some basic ways in which hybridized adjudication at digital platform firms like Facebook and Google might provide today’s best glimpse at “the future of social ordering in advanced societies”?

    TIM WU: For some time the legal system has faced what computer scientists call a scaling problem. Human attention is a scarce commodity, and there are just too many disputes about too many things for human judges to plausibly decide them all in a reasoned fashion. That’s why, as it stands, the legal system employs all kinds of shortcuts to deal with the great ocean of disputes: private arbitration, plea agreements, class actions, and so on.

    To me it seems predictable and maybe inevitable that we’ll begin leaning on software to help with dispute resolution and other high-volume systems of social ordering, either public (the law) or private (user, consumer, or employee grievances). I view the full replacement of human judges (the self-driving court) as far-fetched, and also undesirable for reasons I’ll explain later. But the augmentation of judges with intelligent software (more akin to use of an autopilot to help fly planes) looks plausible and might, if deployed well, even be desirable — though an awful lot depends on how the software gets used.

    As I suggest in this paper, we’ve begun to see prototypes of human-machine hybrid decisional systems in various public and private settings, including criminal sentencing (where their use has proved problematic, but has become common) and content control. As for the latter, content control refers to the “takedown” practices of firms like Facebook and Google, which, every day, face thousands of decisions related to content banned by their terms of service. Those decision-making processes provide some of the most advanced examples of a hybrid human-machine system we’ve seen.

    How does content control work at the major platforms? If you try to post Al Qaeda videos or child pornography on Facebook, intelligent software now recognizes the materials and prevents them from ever being posted, ex ante. But other decisions are made ex post: if you post something like an offensive sexist joke (“Kill all the men”), and someone complains, this content-control decision can become subject to a fairly complex process of appeals, eventually reaching a human acting as judge.

    That combination of machines and humans, I think, offers a likely prototype for the future. It employs machines for the easy cases, and introduces human elements once questions become harder and begin to require discretion, judgment, and even legitimacy. Done well, this combination could help relieve the problems of scale that plague the legal system, by using human attention wisely. Done poorly, it could be a mere cost-saving tool that leads in dystopian directions.
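    As a purely illustrative aside, the two-tier pattern Wu describes (software disposes of the clear cases ex ante, while contested or borderline cases escalate to a human reviewer ex post) can be sketched in a few lines of code. Everything below (the names, the scores, the thresholds) is a hypothetical assumption for the sketch, not a description of any platform’s actual moderation pipeline.

```python
# A minimal sketch of hybrid triage: software handles the easy cases,
# humans get the hard or contested ones. All names, scores, and thresholds
# are illustrative assumptions, not any platform's actual system.

from dataclasses import dataclass


@dataclass
class Post:
    text: str
    appealed: bool = False  # has someone complained and appealed the initial outcome?


def classifier_score(post: Post) -> float:
    """Stand-in for a trained model's probability that the post violates policy."""
    clearly_banned = ("terror recruitment", "child abuse imagery")  # placeholder signals
    return 0.99 if any(term in post.text.lower() for term in clearly_banned) else 0.40


def triage(post: Post) -> str:
    score = classifier_score(post)
    if score >= 0.95:
        return "blocked ex ante by software"             # clear violation: never gets posted
    if score <= 0.05:
        return "published automatically"                 # clearly fine: no human attention spent
    if post.appealed:
        return "escalated to a human reviewer"           # hard or contested case: discretion needed
    return "published, subject to complaint and appeal"  # the ex post review path


if __name__ == "__main__":
    print(triage(Post("terror recruitment video")))          # blocked ex ante by software
    print(triage(Post("vacation photos")))                   # published, subject to complaint and appeal
    print(triage(Post("an offensive joke", appealed=True)))  # escalated to a human reviewer
```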

    Automated judicial decision-making raises predictable sci-fi-infused anxieties about which precious human qualities our judicial process stands to lose. Your paper remains somewhat agnostic on what this “special sauce” for a functional and credible court system might entail, but you do point to a cluster of concerns that arise in any number of AI-related conversations, particularly concerns of perverse instantiation (in which digitized mechanisms somehow fail to realize our true aims), and of unaccountability (in which we cannot assess certain decisions produced by a computer program operating far beyond our own cognitive capacities). Could you flesh out those problematic prospects with a couple of present-day and near-future examples? And could you offer any comparable possibilities here of AI helping to improve on the judicial special sauce — beyond just freeing up time by farming out the most predictable cases?

    Have you ever felt angry or frustrated when your computer or some site doesn’t function properly, and you then encounter the complete lack of human accountability that is the essence of software-based systems? I think that captures the worst of what “robotic justice” could become: impersonal, inhumane, and unflinching. As scholars Richard M. Re and Alicia Solow-Niederman put it, software decision-systems already have a bad tendency to be “incomprehensible, data-based, alienating, and disillusioning.”

    We’re already familiar with the automated speed traps that mail you a ticket if you drive too fast. It is not hard to imagine a future where the combination of pervasive surveillance and advanced AIs leads to the automated detection and punishment of many more crimes: say public littering, tax evasion of any kind, conspiracy (agreements to commit a crime), possession of obscene materials, or threatening the President.

    Automated enforcement of these crimes would, perhaps, make for a more orderly society, since many of the laws I just mentioned are under-enforced. Yet you don’t need to be a committed civil libertarian to wonder about a future of being constantly watched, accused, and potentially convicted, all without human involvement.

    But despite the examples I’ve just given, I still think that full automation of most aspects of our legal system will remain implausible for quite some time. And in this paper I want to stress why problems in legal decision-making may be particularly resistant to full automation.

    When it comes to making complex AI-augmented decisions, what counts as “success” differs from one field to another. Consider medical diagnosis: as a patient, if an AI gives you a more accurate reading than a human doctor, you probably have little reason to prefer the human doctor just because she’s human. If self-driving cars come to have fewer accidents than cars operated by humans, same thing. In each example, we tend to defer to a relatively objective metric of success or failure.

    In the law, however, success (or certainly “justice”) can’t be so easily measured. Legitimacy and procedural fairness can play a large role in what one considers a just result. A decision might put someone in prison for the rest of their life, or even put them to death, and how that decision is arrived at seems to matter a great deal. Even if somebody writes a program that, on evidence, tends to outperform the average trial jury in terms of compliance with the law as written, I doubt many of us would accept as legitimate that program’s determination of guilt or innocence for a serious crime.

    In a typical commercial dispute, meanwhile, both sides usually think they are right. The cases that get litigated, as opposed to settled, could usually go either way. Hence the quality of a decision often has less to do with the particular claims before the judge (or judges) than with how the logic of this decision, as precedent, will fare as a rule of decision for future cases.

    These are just a few of the considerations that make a metric for high-quality legal decisions difficult to arrive at. Those who have spent time in legal theory know there are other challenges as well, such as Karl N. Llewellyn’s distinction between the written rules and the real rules — with the law, in the hands of a fluent judge or lawyer, rarely operating precisely as it has been written. This makes the potential for absurd or even dangerous results through perverse instantiation very high. In fact, even without AI, the legal system remains highly prone to yielding absurd results. One basic job of a judge involves preventing lawyers from abusing the system to achieve such ends.

    I do concede though that some AI advocates might view all these concerns as epiphenomenal or empty. With the legitimacy of computerized adjudication, for example, it may just be a matter of time. Maybe right now we still can’t accept the idea of a computer finding a prisoner guilty. But if you told someone from an earlier generation that we could trust a computer to fly an airplane or dispense large sums of cash without supervision, they’d surely look at you funny.

    And it isn’t as if humans are perfect. “Judgment” can be another word for bias. In the US, African Americans regularly get arrested and convicted of minor crimes for which white people might get a pass. So if, over time, software proves itself fairer and more accurate in legal decision-making (less subject to such biases, more likely to weigh objectively all of the evidence), then perhaps we’ll come to regard computerized judges as more legitimate than human judges or juries. As the Bitcoin believers like to say: “In code we trust.”

    Still I remain highly skeptical that AI will soon replace most human judges, for reasons I develop more completely in the article.

    At the same time, your focus on hybrid systems implemented by Facebook, Google, and Twitter suggests that much of the most consequential decision-making (again, even on basic constitutional questions of speech control) now occurs in corporate offices — prompting further concerns about public transparency, potential bias, censorship, and more general lack of redress. Here, as one quick case study, could you offer some historical context for certain platforms’ quite recent (and no doubt profit-motivated) shift from providing blanket free-speech protections to promoting “healthy” and “safe” speech environments? On the more human side, how might such platform firms justify a relatively small group of mostly young Bay Area professionals privately delineating the parameters of acceptable speech for billions of people worldwide?

    Thanks for the opportunity to speak about the free-speech (as opposed to just the AI) aspects of this paper. So over the last five years, we’ve seen a remarkable shift in speech norms on the main Internet platforms: Twitter and Facebook most obviously, but also Silicon Valley more generally. In the early 2000s, Silicon Valley strongly wedded itself to a libertarian, more-is-more view of speech. But since the early 2010s, especially since 2016, and especially on the main platforms, that has changed. Facebook, Google, and Twitter now emphasize (not unlike college campuses) creating “safe” and “healthy” speech environments. Consequently, they have begun to block most forms of hate speech.

    This change in tone has brought several remarkable developments. First, it represents a significant shift in speech norms, toward frankly a more European approach. Second, despite the fact that these platforms contain an enormous quantity of speech, content control here happens without any public involvement in the usual sense. These private platforms, not part of government, don’t get held accountable in the same way for their decisions.

    That prospect of a small group of people in Northern California setting speech norms for the nation, and indeed the world, is not easy to defend. In many ways, it represents a byproduct of the extreme nature of American First Amendment jurisprudence, which forbids government from playing a meaningful role in setting speech norms. That leaves setting these norms to the private platforms — and follows a general trend whereby private rule-making by large corporations replaces government. More specifically, I’d say that we’re witnessing the creation of what are surely, by volume, history’s largest censorship machines (again in private hands). I’m no free-speech absolutist, but it does seem pretty important to recognize the broader public concerns here.

    Similarly, we have stuck to this paper’s pointed focus on judicial decision-making, though of course an expansive regulatory apparatus plays its own crucial part in the everyday operations of our legal system. So when you describe, for instance, Twitter constantly updating its speech-moderation policies, I do wonder what not just tech-augmented adjudication, but tech-augmented regulation or even lobbying might look like. AI theorists sometimes posit near-future scenarios of augmented military conflicts taking place far faster than humans can process: with us basically sitting by idle, unnerved, vulnerable, largely uninformed. By extension, what might augmented friction between ever-inventive firms and their public regulators look like? As tech innovations continue to accelerate, will the resulting ever-revised and further-specialized regulations leave various segments of American society increasingly bewildered? Or what guiding principles can give a wide range of tech-focused regulation its coherence, legibility, and broader social credibility?

    Thanks again for broadening the discussion. We definitely can consider not just the use of tech to help decide cases, but its use by firms to either evade or avoid laws with which they’d rather not comply — or to try to change laws that impose costs on them.

    Those aren’t just future issues. Remember Napster? For 20-plus years, tech has already played a major role in avoiding laws and regulation. The most blatant efforts include the avoidance of copyright law, and of laws banning drug sales and money laundering — through the darknet sites that are successors to Silk Road, and the use of anonymous cryptocurrencies. But we also see more subtle types of legal avoision, such as the so-called “regulatory entrepreneurship” employed by firms like Uber, Airbnb, and the online fantasy-sports leagues. Each of these embraced conduct already operating in a legal grey zone (gypsy cabs, vacation rentals, fantasy-sports gambling), scaled it up, and in that way actually evaded major regulatory regimes (the livery laws, hotel regulation, widespread bans on sports gambling). That’s what the use of technology for anti-regulatory purposes already looks like.

    And that just covers the use of non-AI technologies for anti-regulatory purposes. Might AI and machine learning one day play an important role in such anti-regulatory efforts? Maybe, though how this might happen is not yet obvious. AI typically gets deployed as a replacement for human decision-making at scale, or in areas of great mathematical complexity. Lobbying and regulatory affairs remain relatively low-volume, slow work that isn’t mathematical. Instead, they require cordial human relationships, creative legal thinking, and subtle inferences of what people might be likely to do. Of the tasks for which lawyers get hired, this type of work seems likely to be among the last replaced by an AI — that is, until that day we manage to invent that robot with a really good handshake.