Technology so dominates our sensibilities today that we want technological solutions to the problems of technology. There may even be some: what about an app to tell us how much social media junk food we consume — and by comparison how modest our intake of informative vegetables may be? But overall, we are at risk of forgetting how many of our troubles — and how much of what we really value — come from the social organization of our lives.
Technological change is one source of today’s deep challenges to democracy. But it does not simply determine outcomes, and the problems we face are not narrowly technological. Take transformations of work and employment. They are shaped not only by technology but by basic social conditions like inequality, the power of corporations and the erosion of unions, and habits of seeking perennially new consumer gratifications. Or take the weaponizing of technology and the consequent arms races. These are made possible by specific characteristics of technologies, but they’re also driven by deteriorating international institutions, escalations of international conflict, and the impacts of global economic rivalries on politics.
The dramatic new technologies that worry us as citizens and sometimes excite us as consumers are not neutral products of research or experimentation. They reflect investments. And they have been developed in an era when many Western countries, including the US, were systematically increasing corporate and market power over and above the claims of citizens, workers, or local communities. Likewise, this was an era in much of the West when conventional political parties and other institutions functioned less and less well. Of course, the political and institutional context of continued technological development is different in China and other countries. This may or may not be a source of solace.
The new technologies themselves, and the ways they are developed and deployed, reflect their context. Recent advances in artificial intelligence, for example, have come during an era of ascendant individualism. This means not only a celebration of private property that makes it easier for entrepreneurs and venture capitalists to build companies and fortunes. It also means that misleading ways of thinking about intelligence have flourished. Seeing intelligence as entirely an attribute of individuals distorts economics, weakens thinking about the public good, and misleads us about AI and its implications.
AI has seen remarkable advances and is deployed, often invisibly or unremarked, in a host of products. It makes possible both social media and the smartphones on which we use them. The biggest recent news has been the rise of machine learning. In essence, this involves training AI systems on the massive sources of data that are now available as by-products of other processes — notably market transactions but also GPS records of where we drive, business records of how we work, government records of whether and how we pay our taxes, and media records of what we download and otherwise consume (not to mention records of innumerable games of chess, and go, and bridge).
These data sets, the surveillance they imply, and the fact that they are largely proprietary all raise concerns. But they make possible machines (a term that here covers algorithms and self-instructing software systems) that are able to outperform humans at an increasing range of tasks. However, much thinking about this, starting even with labeling it “artificial intelligence” rather than “computational statistics,” is shaped by three distortions in our understanding of human intelligence.
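The “computational statistics” label can be made concrete with a toy sketch. The data and task below are hypothetical, and no AI library is involved: a “machine” simply fits parameters to by-product data and then performs one narrow task, which is what much machine “learning” amounts to at bottom.

```python
# A minimal illustration of machine learning as computational statistics:
# fit a model's parameters to recorded data, then use it for one task.
# The data here are invented stand-ins for the by-product records the
# essay describes (transactions, GPS traces, and so on).

def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Training data": records generated as a by-product of some other process.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

slope, intercept = fit_line(xs, ys)
# The "trained" system now performs exactly one task: predicting y from x.
prediction = slope * 6 + intercept
```

Scaling this up to billions of parameters and records changes what such systems can do, but not the basic logic: statistical fitting to data, yielding performance on tasks rather than anything like general understanding.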
First, there is the idea that intelligence is just a matter of task performance, and — at the extreme — that doing well at one task at a time counts as intelligence. The notion of “general intelligence” tries to address some of the limits in this, recognizing that human intelligence involves the ability to apply learning from one domain (or set of experiences) to another.
Artificial general intelligence (AGI) is now the holy grail. Researchers aren’t yet close to producing an AGI with the general intelligence of a rat. No matter: if you believe the hype — and the fears — superintelligence is just around the corner.
Second, there is the idea that intelligence resides entirely in the brain. This is what encourages any number of otherwise intelligent people to indulge in the fantasy that uploading the contents of their brains to silicon will grant them immortality. The fantasy is especially common among men linked to the tech industry.
Quite aside from whether we will be able to replicate the contents of thinly sliced, cryogenically frozen brains in computer-readable formats, there is the question of whether “we” are really in our brains. None of the tech industry alpha males really acts as if only his brain counts, so far as I can see (or judge by the recent #MeToo movement). Nor do reputable neuroscientists think so.
Brain stems anyone? How about the central nervous system? Isn’t thinking in the brain to some extent dependent on bodily experience (both ontologically and ontogenetically)? Among the most powerful imperatives of our brains is maintaining homeostasis in our organic beings, and thus responding to both internal and external stimuli. We are not just brains. And our brains are not us.
Third, and perhaps most important, thinking about AI commonly ignores the importance of culture and community to the development — and indeed the very existence — of intelligence. Among the most important things our brains (and bodies) equip us to do is communicate. Our intelligence is deeply rooted in this. Much of this communication is linguistic — and indeed numerate. Yet no single individual either invented or knows all of language or mathematics or, more generally, culture.
Sharing in culture is not entirely universal. It is always in part located. We speak various languages, not language in general. We find some art more meaningful, some music more moving. Participating in culture both depends on and facilitates particular communities and at the same time gives us capacity to reach beyond them.
We are intelligent beings in very large part because we participate in culture. Put another way, culture enables us to be intelligent in ways bacteria and bees, or even rats, cannot hope to be. Bacteria, bees, and rats are impressively intelligent in some ways, and indeed are seriously social, not just biological individuals. But they aren’t a patch on people.
Perhaps one day computers will have friends. This would bolster my optimism about artificial intelligence. But meanwhile, the extent to which the AI way of imagining intelligence is biased towards the idea of discrete, self-enclosed intelligences is both amazing and pernicious. We can too easily forget that our very ability to think depends on language and culture.
Happily, it’s not all AI. A growing number of investigations of distributed intelligence and even “intelligence in the wild” are noticing that thinking is not a discrete activity of separate brains or just a mastery of tasks. But too much of the discussion still forgets that dependence on language and culture, and forgetting it reinforces other ways in which we forget the social, including the social conditions on which democracy depends.
We are aware of problems in local communities, like the opioid epidemic. But we are not attentive enough to the devaluing and decline of community as such. We are aware of problems in institutions, like the way higher education has changed as it became centrally a matter of inequality and rankings. We are aware of problems in journalism as employment and support for in-depth reporting disappear. But we are only just awakening to the extent that these problems and others like them threaten institutional support for knowledge itself (which, by the way, is not the mere accumulation of data).
Aghast at some of the consequences of elections mediated by Facebook, Twitter, and other platforms, we call for technical fixes. We call for giant companies to be better citizens, exercising more “responsibility.” We want better filters to screen out the bad stuff (never mind the possible slippery slope to restrictions of free speech). We worry about fake news, hacking, and bots but forget to worry consistently enough about the state of our communities and our not-completely-online social institutions.
A co-evolution of humans and machines is likely (with lots of variation on each side of that divide). The category of machines will perhaps include some slightly human-like intelligences, possibly embedded in robots (though not mainly). Crucially, it will include a variety of complex systems that transform the very nature of human existence — just as our existence has previously been transformed by the rise of modern communications and transportation infrastructure and before that by the advent of language itself and the spread of literacy — not to mention agriculture, urbanization, improved diets, and the state.
New technologies have arisen as both face-to-face communities and larger institutions have deteriorated and as inequality has grown more extreme. This erodes the social conditions for democracy.
Democracy is challenged by the rise of social media and the invitation social media platforms give to malicious manipulation. It is challenged by technologically based transformations of work and employment. It is challenged by massive investments in large-scale systems that serve markets, corporations, and capital accumulation more than they serve ordinary people. Perhaps most basically, the ways we think about new technology undermine our grasp of how central being social is to being human — and being intelligent.
Craig Calhoun is President of the Berggruen Institute and Centennial Professor of Sociology at the London School of Economics.