• Less Work: Talking to Daniel Susskind

    When will we finally arrive at a world with less work? When we do get there, what might our most pressing concerns be in terms of economic distribution, civic contribution, and collective human flourishing? When I want to ask such questions, I pose them to Daniel Susskind. This present conversation focuses on Susskind’s book A World Without Work: Technology, Automation, and How We Should Respond. Susskind, a Fellow in Economics at Oxford University’s Balliol College and co-author of The Future of the Professions, researches the impact of technology (particularly artificial intelligence) on work and society. Previously Susskind worked in the British government — as a policy adviser in the Prime Minister’s Strategy Unit, as a policy analyst in the Policy Unit in 10 Downing Street, and as a senior policy adviser in the Cabinet Office.

    ¤

    ANDY FITCH: Could we start with two countervailing labor-market forces that automation has brought about ever since the Industrial Revolution: with substitutional pressures displacing human workers, and with complementary pressures creating whole new industries and occupations? Could you sketch a historical trajectory on which complementary dynamics have predominated for the past several centuries (whether or not we always recognize them), but may now subside going forward — with new technologies still prompting new needs for labor, but with machines themselves increasingly providing that labor?

    DANIEL SUSSKIND: For any discussion about the future of work, we have to start from the observation that many people in the past have worried about automation and have been wrong. By and large, there has always been enough work for human beings.

    Why have these anxieties repeatedly proved themselves misplaced? The short answer involves those two forces you just described. New technologies have two different effects on work. On the one hand, they substitute for human beings and displace us from certain tasks and activities. But at the same time, they also complement human beings by raising the demand for our work at other tasks that have not been automated. Our anxious ancestors focused too much on that harmful substituting force and tended to neglect that helpful complementing force. Indeed, ever since modern economic growth began, in this battle between the harmful substituting force and the helpful complementing force, the latter has won out and there has always been enough demand for the work that human beings do.

    But the simplest way to state this book’s core argument is that, given the technological changes taking place, there are good reasons to worry that the substituting force may finally overrun the complementing force in the 21st century — leading to a gradual decline in demand for the work of human beings.

    Here, in terms of anticipating threats posed by substitutional pressures, could you describe how both economists and policymakers have applied basic premises of the ALM hypothesis — drawing on its nuanced conceptions not just of which long-standing job categories might get displaced, but of which particular tasks, associated with present-day iterations of these ever-evolving jobs, might get displaced? How have we sought to outrun, for example, concerns of certain mid-skilled and/or managerial tasks (reliant on routine procedures and explicitly articulable knowledge) getting eclipsed by automation?

    This hypothesis offered a revealing way to explain a puzzling story that began to appear in labor markets at the turn of the 21st century. Starting in the 1980s, new technologies appeared to help both low-skilled and high-skilled workers at the same time — but those with middling skills did not appear to benefit at all. Around the world, if you lined up occupations from the lowest- to the highest-skilled, you would have seen the pay and employment share of jobs grow for those at either end of the line, but wither away for those in the middle. The economists David Autor, Frank Levy, and Richard Murnane sought to explain these developments through what became known as the Autor-Levy-Murnane hypothesis (or ALM hypothesis, for short).

    This hypothesis had two basic components. First, it corrected for our unhelpful way of discussing the world of work in terms of jobs, such as “lawyer” and “doctor” and “teacher” and “accountant.” The ALM hypothesis told us we should instead think bottom-up in terms of “tasks.” And this is intuitive, since jobs aren’t monolithic indivisible lumps of stuff — workers perform a wide and always changing variety of tasks and activities in their jobs.

    The ALM hypothesis’ second component came from a particular conception of how systems and machines operate. This view was that, if you wanted to design a system or machine to perform a particular task, you first needed human beings to explain how they performed this task, and then you had to capture those human rules in a set of instructions for a machine to follow. To design a system that can make a medical diagnosis, for instance, you had to sit down with a doctor, get her to explain precisely how she made a diagnosis, and then try to capture her explanation in some instructions for a machine.

    When the ALM hypothesis then brought these two components together, the result was a clear distinction between which activities machines could and could not do: while we might readily build machines to perform “routine” tasks, because it is easy to articulate how we perform them and straightforward to write a set of rules for a machine to follow, we would not readily build machines to perform “non-routine” tasks (requiring faculties like creativity or judgment), because we find it very difficult to explain how we perform these tasks and very hard to know where to begin in writing those rules.
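
    To make that distinction concrete, here is a minimal, purely illustrative sketch of the kind of “routine” task that is easy to hand to a machine, precisely because a human can state the rules explicitly. The brackets and rates below are invented placeholders, not any real tax schedule.

```python
# Illustrative sketch only: a "routine" task automated by hand-written rules.
# The brackets and rates are invented placeholders, not a real tax schedule.
def tax_owed(income: float) -> float:
    """Apply a fully articulable set of rules, of the kind a human could dictate."""
    if income <= 10_000:
        return 0.0
    if income <= 50_000:
        return (income - 10_000) * 0.20
    return 40_000 * 0.20 + (income - 50_000) * 0.40

print(tax_owed(60_000))  # 12000.0
```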

    So this powerful combination of ideas could explain why labor markets around the world were taking on an hourglass shape. When economists broke down a range of different jobs into the tasks that made them up, many of the activities that middling-skilled workers performed turned out to be routine, whereas those done by the low- and high-skilled were not. Technological change was eating away at the routine tasks in the middle of the labor market, leaving these indigestible non-routine tasks at either end for human beings.

    That high-skilled jobs turned out to involve lots of non-routine tasks was unsurprising. These roles tended to draw on faculties like creativity or judgment. More interesting was that many low-skilled jobs also involved lots of non-routine tasks. In part, this is because many of these roles were in the service economy, and the interpersonal skills required to provide services were hard to capture in a set of rules. But it was also because low-skilled work often required manual tasks that were hard to automate. Computer scientists were familiar with this finding that many of the basic things we do with our hands remain very difficult to explain and hard to automate. They call this Moravec’s Paradox, after the roboticist Hans Moravec, one of the first people to note it.

    The ALM hypothesis might sound sort of abstruse and academic, but what I try to do in the book is show readers quite how popular this view remains today — not just in the conversations of economists, but in the way that many of us still think about automation. We instinctively assume that machines can do things that are predictable or repetitive, rules-based or well-defined (in short, routine stuff), but that machines will struggle to do things that are hard to specify or complex (in short, non-routine stuff). This is how public commentators and policymakers so often think, too.

    And here your book might even agree with certain AI skeptics that we probably remain pretty far from designing an artificial general intelligence capable of outperforming humans at all tasks (basically by thinking like a new-and-improved version of ourselves). But here you also outline prospects for bottom-up development, more akin to Darwinian natural selection mindlessly stumbling towards complexity. So could you first describe this recalibrated perspective on technological progress in philosophical terms: with humans needing to recognize that our own impressive modes of consciously deliberative innovation may not represent the only or the best means of performing challenging tasks? And then, in more concrete terms, could you describe where a pragmatic revolution already has begun: with AI engineers or AI systems themselves not cracking the code on how to emulate human behavior, but finding new machine-centric ways to perform such tasks?

    When I started to think about the future of work, I began by looking back on what economists had written about machine capabilities. And I was intrigued. Many of the tasks that leading economists thought could not be automated (like driving a car, making a medical diagnosis, or identifying a bird at a fleeting glimpse) were, in fact, being increasingly performed by machines. Almost all major car manufacturers now have driverless-car programs. Countless systems can diagnose difficult medical problems. And the Cornell Lab of Ornithology has even developed an app where you can take a photo of a bird, and it will identify the species.

    So I sensed something was awry with the traditional assumptions that economists had made about which tasks could be automated — and in particular with the routine/non-routine distinction. This was not simply a case of bad luck on the part of these economists. They were wrong, and the reason why they were wrong was very important. Remember that many economists thought it impossible to automate these non-routine tasks, because a driver or a doctor or a birdwatcher would struggle to articulate exactly how they performed these tasks, and so we didn’t know where to begin in writing a set of instructions for a machine to follow. But in the end, not only were all these tasks automated: they were automated through a very different process than what economists had expected — not by getting machines to follow the rules human beings followed, but by having machines perform these tasks in fundamentally different ways.

    Take the system recently developed at Stanford to determine (as accurately as leading dermatologists) whether or not a freckle is cancerous. How does it work? It does not try to copy the “creativity” or the “judgment” of a human doctor. Instead it runs a pattern-recognition algorithm through 129,450 cases, hunting for similarities to the particular lesion in question. It performs the task in an un-human way, based on the analysis of more cases than any doctor could possibly hope to consider in a lifetime.
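
    As a rough illustration of that second approach, here is a sketch only, with synthetic data rather than real lesion images, and with scikit-learn’s nearest-neighbor classifier standing in for the far more sophisticated model the Stanford team actually used.

```python
# Illustrative sketch only, NOT the Stanford system: classify by comparing a new
# case against many past cases and letting the most similar ones vote.
# The data here are synthetic placeholders, not real lesion images.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Pretend each lesion has been summarized as a few numeric image features.
benign = rng.normal(loc=0.0, scale=1.0, size=(500, 4))
malignant = rng.normal(loc=1.5, scale=1.0, size=(500, 4))
X = np.vstack([benign, malignant])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = malignant

# No hand-written diagnostic rules: the model stores past cases and, for a new
# lesion, hunts for the most similar ones.
model = KNeighborsClassifier(n_neighbors=15).fit(X, y)

new_lesion = rng.normal(loc=1.2, scale=1.0, size=(1, 4))
print("Predicted label:", model.predict(new_lesion)[0])
```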

    But economists had not plucked their view of machine capabilities out of thin air. In the book, I distinguish between two waves that have taken place in the field of artificial intelligence. In the first, most researchers were “purists,” believing that the best way to build capable machines was somehow to mimic human beings. This mimicry took various forms: sometimes copying the human brain’s actual anatomy, sometimes imitating the ways that researchers believed human beings thought and reasoned, sometimes trying to capture the rules that humans follow. But in each case, humans provided the template in one way or another.

    However, in the last few years, we have entered a second wave of artificial intelligence. Practitioners are far less concerned with how these machines work (whether they resemble the operations of human beings in some way), and far more concerned with how well they work. In short, they are now “pragmatists” rather than purists. What has driven this pragmatist revolution is, in large part, the remarkable progress in processing power, data-storage capability, and algorithm design. This progress means it is now possible to perform a widening range of tasks, not by copying the rules humans follow or the thinking processes we go through, but in entirely un-human ways. As a result, many non-routine tasks, which not so long ago seemed impossible to automate, are now within reach.

    By extension, could we turn to fields that right now still seem inherently to require empathic human labor (such as child care, mental-health care, elder care)? Could you describe circumstances in which machines might teach themselves how to perform tasks that we presume necessitate a human (manual, cognitive, and/or affective) touch — and might achieve a result humans can find quite satisfying?

    Sure, let me give you a few examples. To begin with, before I wrote A World Without Work, I co-authored a book called The Future of the Professions with my father. One profession we looked at was accounting. And almost every time I talked to a group of accountants, somebody would stand up and say: “Look Daniel, you don’t understand. My clients come to me because they want the personal touch. They want to look me in the eye. They want me to really feel and understand the difficult problems they face.” And I found myself responding again and again: “Well, that actually isn’t why your clients go to you. They go to you because they want their tax affairs done efficiently and effectively. And if they can find another way of getting their affairs done, perhaps more cheaply and accessibly than going to you, then they’ll probably choose that other option — even if it doesn’t rely on human interaction.”

    We often conflate the traditional ways in which we might have solved a problem (namely, involving human interaction) with the problem itself. So this is a slightly playful example, but it illustrates a deeper point — that in the second wave of artificial intelligence, machines are increasingly crafted to function in a very different fashion from human beings, and to solve old problems in new and unfamiliar ways.

    For a more challenging example, my book describes an experience of Joseph Weizenbaum, one of AI’s founding fathers. Weizenbaum built a system called ELIZA, essentially designed to act like a psychoanalyst. So you’d sit down with this system, and it would ask you how you were feeling, and you might respond “I’m feeling well,” and ELIZA would ask: “Are you really feeling well?” Weizenbaum built this system as a bit of a joke, as a parody of the predictable ways in which you might interact with a psychoanalyst. But when Weizenbaum called in his secretary (who knew full well the slightly jovial spirit in which he had built this system), and asked her to sit down for a conversation with ELIZA, it only took a few questions before she turned around to Weizenbaum and asked him to leave the room. She felt more comfortable engaging in these issues with this inanimate machine than she did with a fellow human being — and Weizenbaum writes in his book Computer Power and Human Reason about how this experience disturbed him and challenged him.

    Like so many of us, he had thought that a real-life interaction with a human being was a prerequisite for solving the sorts of problems that traditionally we would go to a therapist to solve. Though this seemed not to be the case. Of course, if you look at this situation a bit more deeply, it probably makes some sense. Weizenbaum’s secretary felt more comfortable sharing certain personal secrets with this system that she knew wouldn’t judge — rather than with her boss. But the basic point remains: systems and machines might figure out how to perform tasks that have required empathy when performed by human beings, and might do so in some fundamentally different ways.
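
    For readers curious quite how shallow ELIZA’s machinery was, here is a toy sketch in the same spirit: a handful of invented reflection rules, not Weizenbaum’s original program.

```python
# Illustrative sketch only: a few hand-written reflection rules in the spirit of
# ELIZA, not Weizenbaum's original program. The patterns below are invented.
import re

RULES = [
    (re.compile(r"\bi(?:'| a)m feeling (.+)", re.IGNORECASE),
     "Are you really feeling {}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE),
     "Why do you feel {}?"),
]

def respond(utterance: str) -> str:
    """Return a canned reflection; no understanding of any kind is involved."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip("."))
    return "Please go on."

print(respond("I'm feeling well."))          # Are you really feeling well?
print(respond("I feel anxious about work.")) # Why do you feel anxious about work?
```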

    Now it might turn out that, for certain activities, we value more than just the outcome of the process. In the book I describe, for instance, cases where diners at fine restaurants have felt short-changed to discover their coffee was made by a capsule-based machine, rather than a highly trained barista — even though, in blind tests, people struggle to distinguish between capsule-based coffee and a more bespoke coffee prepared by hand. Customers seem to value not simply the outcome, but the process itself: the pop of the bag, the tap of the tamper, the trickle of the coffee into the cup, the human craft. And similarly, in parts of our lives far more important than coffee-making, in healthcare or education for instance, getting the physical care or the knowledge we need may not suffice. We might value our children learning from a fellow human being, or a fellow human being standing by our bedside. So long as that is true, these sorts of tasks and activities might be most protected from automation.

    I also want to pick up on you describing the book as “skeptical” of artificial intelligence, because I think this is very interesting. It had me nodding in agreement. I am wary of artificial “general” intelligence, or “AGI,” the ambition to build machines that, like human beings, have wide-ranging capabilities (rather than artificial “narrow” intelligence, where a machine can only do a very small set of activities to a high standard). I am skeptical that we will achieve AGI any time soon. I am also suspicious of claims that in the near-term we will build machines that think or feel or reason like human beings. But in the book I try to explain how even non-thinking machines, with impressive but narrow capabilities, will still have dramatic impacts on our working lives. So even as sort of an artificial-intelligence skeptic in terms of how these machines work, I’m far from skeptical about their impact on work.

    Well, you also don’t forecast some apocalyptic break in which jobs suddenly cease existing for humans. You instead redirect us to ongoing questions such as: what kinds of jobs, for which workers, bringing what degree of economic well-being, what sense of personal dignity, what broader prospects for social equality? And again you point to polarization in our labor markets, and to lean superstar firms (outcompeting rivals in part by relying on so few workers), as precursors for more systemic change to come. So here could we start sketching concerns of both present and future labor-market monopsony — in which not only will many workers see their individual skill-sets no longer possessing much monetary value, but in which a small number of increasingly concentrated and automated firms can treat ever more of us as surplus labor (the way that, for instance, certain marginalized social groups long have been treated)?

    That’s right. Anyone who picks up this book expecting an account of some dramatic technological big bang in the next few years will find themselves disappointed. I don’t believe that’s likely to happen. I expect that work will remain for some time to come. But I do worry that, as we move through the 21st century, more and more people will find themselves unable to make the kinds of economic contributions that they might have hoped to make in the 20th century. This might sound like a less alarming proposition, but it is still a very challenging one. And in my view, the main economic challenge it presents us with is a problem of inequality.

    It is no coincidence that anxieties about automation are intensifying at the same time that worries about inequality are growing. The two problems are closely related. Today the market, and in particular the labor market, is the main way that we share out material prosperity in society. For most people, their job provides their only source of income. But the inequalities we see emerging suggest that this traditional way of sharing out income is creaking. Some people already get far less than others. In my view, technological unemployment is just a more extreme version of this same story, one where the market mechanism breaks down completely, and some people get nothing at all. At times, it is tempting to dismiss worries about technological unemployment as this abstract threat, hovering at a distance, lingering in the future. But that’s a mistake — technological unemployment is closely related to the inequalities we already see today, with both rooted in an increasingly dysfunctional labor market.

    You also spoke of monopsony, and the relative power of workers versus those who employ them. I think this will be a critical issue in years to come. If technological progress leads to a decline in the demand for labor, then it will bring a corresponding decline in labor power. John Kenneth Galbraith coined the term “countervailing power” to describe the different forces that might hold concentrations of economic power in check. In the 21st century, as the countervailing power wielded by workers falls away, I believe the state ought to step in to help buttress and bolster that power.

    Questions of labor-market monopsony, frictional technological unemployment, and structural technological unemployment will of course take us to questions of redistribution. But before we get to redistribution, could we pause on possibilities for predistribution? When could or should we pursue policies that seek to keep extreme forms of economic concentration, occupational polarization, and workplace deskilling from ever taking place? What comparative insights might we draw, for instance, by considering not just different nations’ different social safety nets, but striking discrepancies in how automation has produced widespread deindustrialization in the UK or US, but much less in Japan or Germany? Do you see industrial labor markets in these latter two countries just corroding slightly more slowly than the rest? Or what lessons about proactively shaping such markets should we all take from their relative successes?

    To the extent that I explore predistribution in the book, I see it having a role in the capital market, rather than the labor market. We have to find new ways to give more people in society a financial stake in the forms of capital becoming increasingly valuable and important.

    In some sense, we already do this. Since the start of the 20th century, the state has tried to share out “human capital” (the phrase economists use to describe the skills that workers might apply in their jobs) as widely as possible. This is one basic point of mass education. Opening good schools and universities to all is an attempt to make sure valuable skills do not stay concentrated in the hands of a privileged, well-educated few. The argument I make in the book is that, as we move through the 21st century, we will need to share out “traditional capital” more widely as well. I am particularly interested in the idea of a “Citizens’ Wealth Fund,” where the state takes a stake in valuable capital on behalf of its citizens. We have precedents for this. Today, sovereign wealth funds (like the trillion-dollar fund owned by Norway, and the Alaska Permanent Fund, worth a more modest $60 billion) perform a similar role, as pools of state-owned wealth that, in effect, provide each citizen with a stake of valuable capital that many would not otherwise have.

    More broadly, your book posits that, while 20th-century market economies outperformed centralized state planning (better meeting challenges of providing for large populations through expanded production), our near-future challenges will cluster around questions of distribution. However, you also suggest that, perhaps especially in democracies, policymaking will need to address acute concerns not just of distributional justice, but of contributive justice. So here could you articulate why you see a conditional basic income (CBI), more than a universal basic income (UBI), ensuring each community member’s material well-being, while also proactively promoting social cohesion and a shared sense of fairness — particularly amid heightened prospects for internalized shame among the structurally unemployed, for intergroup and intergenerational resentments, for various tendencies towards exclusionary and/or zero-sum thinking?

    Again I do expect that the main economic challenge in decades to come will be that distributional one. In the 21st century, technological progress will make us collectively more prosperous than ever before. The challenge, if this is also a world with less work, will be how to share out this prosperity, even as our traditional mechanism for doing so (paying people for the work they do) becomes less effective.

    At first glance, a UBI offers a very attractive alternative mechanism. It provides a way of sharing out prosperity, without piggybacking on the labor market. And at least in this respect, it would solve the “distribution problem.” But there is this other problem that a universal basic income does not address. This is the “contribution problem,” the need to feel that everybody in society pays back in some way. Today, social solidarity comes in part from the fact that everyone who can is trying to pull their economic weight, contributing to that pot through the work they do and the taxes they pay. In a world with less work, it will no longer be possible for everyone to make these sorts of economic contributions. And a UBI does nothing to address this problem. Jon Elster put it firmly but sensibly when he said that the UBI “goes against a widely accepted notion of justice: it is unfair for able-bodied people to live off the labor of others.”

    So how can we re-create that sense of communal solidarity? A big part of the answer must involve attaching conditions to the basic income. This means we will need to find non-economic ways for people to pay into the collective pot, if they cannot contribute through the work they do. And that is why I propose a CBI, a basic income conditional on carrying out certain non-economic activities that particular communities consider valuable and important.

    This might sound quite radical, though even today there is a huge range of non-economic activities that we all consider socially valuable, but which get little or no return on the market through a wage — and which the CBI would recognize. In Britain, for instance, about 15 million people regularly volunteer, half as many people as do paid work. The economic value of this volunteering is estimated at 50 billion pounds a year, making it as valuable as our energy sector. But the market mechanism doesn’t value this work at all.

    And then for updated redistributive approaches to human capital, could you outline your general recommendations for education: for rethinking what we teach (prioritizing non-routine complementary tasks, but alas only as a temporary labor-market solution), how we teach (with digital interfaces providing significant opportunities for economies of scale, and for affordable one-to-one teaching structures), and when we teach (on an ongoing basis throughout workers’ lives, as unpredictable tech economies keep evolving)? And perhaps for one additional angle, where might we need to rethink why we teach (with tangible economic incentives no longer providing such a compelling rationale)?

    We should keep in mind that the time hasn’t quite yet arrived for a universal basic income or a conditional basic income. In my view, the challenge for the next couple of decades is not a world without enough work for people to do, but a world in which there is lots of work — though with more and more of us lacking the skills and capabilities to do it. This means that the most important policies to pursue will focus on education. And exactly as you said, we currently spend a lot of time discussing what we teach, which capabilities we value and want people to have. But we also need a more thoughtful conversation about how we teach (given that traditional classrooms today look remarkably similar to those from a hundred years ago), and when we teach (given the persistent cultural presumption that education is something you only do with great intensity and seriousness when you are young).

    But looking further into the future, to a world with less work, this approach might not be enough. Education experts are fond of quoting the Spartan king Agesilaus, who said that the purpose of education is to teach children the skills they will need when they grow up. Their point in citing this king’s fairly prosaic advice is that, right now, our education system often fails in this seemingly basic task. But when considering a world with less work, Agesilaus’s description of education’s purpose prompts a very different thought: that the skills needed to flourish in the future might look very different from those in demand today. So what might these skills be? I’m not sure we have a good answer. Today, we tend to conflate working and flourishing. To succeed at work means to flourish in life, and so the skills must be the same. But in the book, I argue that this particular correlation may unravel.

    Paradoxically then, even as omnipresent firms seek to consolidate their hold on societies, work itself (at least paid work, the all-encompassing pursuit of which, you say, has supplied so many of us with an opium-like diversion from broader existential concerns) might become much less important. So how might a Big State help to fill this potentially problematic vacuum? How, for example, might this Big State best support possibilities for constructive (or at least not self- or socially destructive) leisure? And how might individual political leaders need to hone and to share their own moral vision for collective human flourishing — rather than just their technocratic knack for promoting material well-being?

    In part, I see the Big State as an unavoidable response to the economic challenges we’ve discussed today: how might we share out prosperity, if we can no longer rely upon the labor market? The answer, in my view, is through the state. But in calling for this Big State, I mean something very different from the sorts of monolithic states we saw in the 20th century. I do not have in mind teams of smart people, sitting in central government, poring over detailed economic blueprints, trying to command and control our activities, all to make the economic pie as large as possible. The 20th century showed quite clearly that this kind of planned economy is no match for the productive chaos of free markets. So instead, I have in mind a central role for the state not in production, but in distribution — not in making the economic pie larger, but in making sure everyone gets a slice.

    But then, as you said, I believe the Big State will need to take on other roles in addition to this distributive one. And one such role will be as a “meaning-creating” state. Today, many nations adopt labor-market policies, a spread of interventions that shape the world of work in a way that society thinks best. But I expect that, in decades to come, we will want to complement these policies with something different — leisure policies, shaping how people use their spare time. Again that might sound quite radical, but in the book I show how the state already does this. In the UK, for instance, we have an entire government department (the Department for Digital, Culture, Media and Sport) which shapes our spare time in many ways, from making sure that all children have the chance to learn to ride a bike and swim, to making sure the best museums in the country remain free, to banning the most beautiful works of art from being bought and taken overseas.

    Today, these interventions offer a relatively haphazard collection of minor intrusions on our spare time. But in a world with less work, we will need to think about such interventions in a more structured and thoughtful way. That’s likely to be a challenging task. To start with, it will mean a radical change in perspective. Our governments tend to treat leisure as a superfluity, rather than a priority — a low-hanging fiscal fruit to be cut and discarded with ease. Think, for instance, about President Trump’s attempts to eliminate the National Endowment for the Arts, the Institute of Museum and Library Services, and the Corporation for Public Broadcasting.

    We also tend not to conceive of our politicians as moral leaders required to take explicit positions on what it means to live a flourishing life. We more typically expect politicians to act as technocratic managers tasked with solving esoteric policy problems. Yet the challenge in a world with less work will not simply be how to live, but how to live well. All of us will need to reflect on what it really means to live a meaningful life. And we just may want our political leaders to help us find answers to these deeper questions.