AI Doesn’t Know How You Feel

    “Awkward giggles,” confusion, and nervousness: on October 21, Fujitsu Labs announced that its software would be able to detect all of these emotional expressions and feelings. It is not alone in its plans. In May, journalists reported that Amazon was working on a wearable device that will be able to detect its owner’s feelings. It will “sync with a smartphone app,” and supposedly be capable of detecting “joy, anger, sadness, fear, disgust, boredom…” Meanwhile, Affectiva is promoting the idea that its software might be able to tell you when you’ve hurt people’s feelings on social media, while the Dutch company Braingineers touts its alleged ability to discern how “users really feel.”

    These proposed inventions are based on shaky understandings of human psychology, which presume that emotions are concrete, detectable entities that can be reliably recognized and identified across time and space. Such a view comes from Darwin; it was resuscitated in the 1960s by the psychologist Paul Ekman, and it still circulates widely today (especially in Silicon Valley, as Rich Firth-Godbehere has incisively noted). Darwin, Ekman, et al. claim that there are universal emotions, hardwired into all people, that can be found in all cultures and eras, and that are expressed the same way everywhere.

    Such claims, however, have been undermined by path-breaking research in psychology, history, and anthropology, which has shown how much feelings vary in different times and places. Some feelings that Amazon and other Emotion AI companies promise to detect — like boredom — didn’t even exist in America until the 1850s. (That’s when the word was first invented.) Other feelings they promise to identify, like anger, have existed for far longer, but their expression and meaning have been dramatically reshaped over time — and are still changing before our eyes. For instance, 19th-century white men might brawl and fight to express their fury, or gather for “indignation meetings” in public squares. In contrast, 20th-century office workers seethed inwardly and learned to smile while keeping their anger under wraps, worried that their tempers might interfere with office collegiality and personnel department rules. And now, with the rise of social media, Americans are finding new ways to express the feeling, often by typing on their phones. Yet in interviews we conducted with a wide range of Americans, we found that many had styles of online anger that differed dramatically from one another. Some expressed anger at social injustice; others trolled; still others bit their tongues and held their fire. Which of these expressions would Amazon’s device term anger? How would it know what it was witnessing (or spying on)? Would it be able to differentiate between righteous indignation and, say, the feeling that comes from being “hangry”?

    Seemingly unaware of the complex cultural influences on feeling, companies that aspire to read emotions treat them as unchanging and primordial, so easily recognized and evaluated that even an algorithm can do it. Their efforts are worrisome not just because they may create expensive and largely useless technology, but also because they may end up training us to express our feelings in ways that conform to machines’ expectations, thereby homogenizing the rich array of emotional cultures that exist within the US and across the globe.

    If these devices were to tell us what we are feeling — or what they think we are — they wouldn’t just be reporting our emotions; they would be giving them new shape. The words a culture uses to define moods, and the connotations and values those labels carry, are all part of how we understand what we’re experiencing. A 19th-century American sitting alone, for instance, might have described herself as experiencing solitude, while a 21st-century American by himself might describe himself as lonely. The labels we (or, theoretically, our devices) apply to our feelings shape and define them.

    Beyond that, all technologies have embedded within them social norms and expectations. Facebook, for instance, suggests one should have hundreds if not thousands of friends, and in the process, has dramatically reshaped ideas of what it means to be isolated or connected, lonely or sociable. Meanwhile, Amazon suggests humans are — or should be — endlessly acquisitive, and teaches them how to be. These technologies are not neutral bystanders to our feelings, but instead exert powerful forces over them, normalizing some experiences, discounting others.

    A growing number of commentators grasp that many of the software platforms being created pose an emotional problem, and some have tried to address it. In Silicon Valley, the common wisdom is that more humane technologies will emerge when we better understand what humanity is and design technologies that cater to that humanity. This is the line of thinking that Tristan Harris — former Google design ethicist and co-founder of the Center for Humane Technology — has been espousing. But Harris, like most of Silicon Valley, assumes users are the same emotionally; that they all have the same feelings. As Harris sees it, social media companies like Facebook and Twitter have been bedeviled by a problem of emotional ergonomics: he uses the image of someone sitting in a chair to illustrate his point. When we sit in chairs, they either conform to our backs or they don’t. And when they don’t, they cause injury. Harris conceives of the relationship between our moods and our social media along similar lines. Instead of producing algorithms that appeal to users’ better natures, he contends, Facebook and Twitter have been “downgrading” our humanity by designing systems that encourage us to express anger and that appeal to our primordial, reptilian instincts.

    This is a welcome critique — we desperately need to revamp social media to foster a new and kinder emotional culture. But Harris’ model only takes us so far, for he assumes that there are universal emotions and that software fosters either good ones or bad ones.

    Ultimately, we need a more sophisticated understanding of the way human emotions and technologies interact. We aren’t static creatures with a fixed emotional essence that can be determined and around which some “humane” technology can be built. Instead, it’s better to think of humanity and emotions as plastic categories that evolve across time, place, and culture. We shape our technology, and our technology in turn reshapes our emotional lives.

    Thus conceived, tech doesn’t stand apart from who we are emotionally — it doesn’t just read us from afar. Instead it is — and always has been — an intrinsic part of our emotional destiny, shaping and reshaping our inner lives. It can amplify and deepen the astounding emotional variety that humans currently express. Or it can be deployed to extinguish or homogenize emotions and cultures, making feelings across the globe appear as standardized as the emoji we increasingly rely on to express them. Intentionally or unintentionally, Silicon Valley is taking this latter path. Embedded in its algorithms is a philosophy of conformity; its project is to put our emotions into uniform categories. In the interest of expanding rather than contracting the human experience, we should resist this project.

     

    Luke Fernandez and Susan J. Matt are authors of Bored, Lonely, Angry, Stupid: Changing Feelings about Technology, from the Telegraph to Twitter (Harvard University Press, 2019).