Getting Bias Out of AI

    For the Provocations series, in conjunction with UCI’s “The Future of the Future: The Ethics and Implications of AI” conference.

    Over a six-month period in 2013, recruiters hiring for positions ranging from banking to management to marketing across the country received 9,400 fake résumés, each belonging to an imaginary person. The résumés were identical except for the applicant’s demographic details, and each arrived under one of eight names. The experiment, conducted by economics professors, found that applicants with African-American-sounding names received 14 percent fewer requests for interviews than candidates with white-sounding names. In a similar study four years later, researchers for the Federal Reserve Bank of San Francisco sent out 40,000 fake résumés from young, middle-aged, and older men and women with otherwise identical qualifications. They found higher callback rates for younger applicants, and even fewer callbacks for older women than for older men.

    Recruitment and hiring processes remain tainted by human biases. Image recognition technology has labeled black people as gorillas. Predictive software used in courts can falsely single out black defendants as potential reoffenders. And seemingly sweet chatbots can turn into potty mouths spewing racist, homophobic, and sexist bile. AI behavior reflects the very contradictions within humanity that we grapple with and try to guard against at work, in politics, in education, and in our personal lives.

    How do scientists create intelligent, unbiased autonomous machine systems for every aspect of our lives, from hiring to space exploration to human services and war? Today we can deploy artificially intelligent robots onto planets to discover water, into Las Vegas casinos as mechanical concierges, into shipping warehouses to move goods, or into Syria to predict violent hotspots, but no one can control what those machines will encounter or come to understand along the way. And humans can’t seem to stop themselves from imbuing the artificial intelligence they create with their own conscious and unconscious prejudices. These biases are baked into the algorithms and into the very data that trains these systems.

    In states across the country, algorithms are now also used in crime prevention and policing, including one created by scientists at UCLA using Los Angeles Police Department data. “The only data we use are data on crime events themselves, what type of event is it, where did it occur, and when did it occur,” explains the program’s co-creator, Jeff Brantingham, a professor of anthropology who specializes in predictive policing. In crime analytics, there is an important distinction between predicting who will commit a crime and predicting where and when crime will occur, Brantingham said. “That’s really important from a civil liberties point of view.”

    But as Kate Crawford, a principal researcher at Microsoft, wrote in a New York Times op-ed, what many of these algorithms have in common is “a white guy problem.” Local predictive policing programs in the US, she writes, “could result in more surveillance in traditionally poorer, nonwhite neighborhoods… Predictive programs are only as good as the data they are trained on, and that data has a complex history.”

    Here’s a thought experiment: look at society from a child’s point of view. At first it may seem sorted into rigid categories, hard-and-fast wants and needs. But parents, teachers, and interactions on the playground introduce moral lessons, socialization, cooperation, and nuance into those experiences. Children cannot simply get what they want by beating up other children at school.

    Now picture a very different kind of baby, an artificially intelligent baby. Raised in a science lab, it learns by imprinting the experiences researchers code for it, or by experiencing the playground of the Internet, where it soaks up digital knowledge. This playground looks very different from a Montessori school. It is a place where the loudest, most persistent voices win. (One researcher, Mark Sagar at the University of Auckland, has actually created an “AI baby” based on computational neuroscience models and theories of emotional and behavioral systems. This AI baby looks freakishly human, and it learns from its environment, responding to and interacting with the people it sees.)

    Without controlled surroundings and careful, thoughtful parenting, the important and not-so-obvious distinctions that parents try to teach their kids could get lost for an AI baby (that is to say, for any AI technology) in the stream of information the machine digests. How could an “AI playground” be constructed to better educate and socialize technology?

    In breakout discussions at the 18th International Conference on Artificial Intelligence in Las Vegas, scientists discussed how to build an autonomous machine that can adapt to its environment even when no one knows what it might actually learn or discover there.

    Hundreds of artificial intelligence and machine learning experts had gathered over four days, and in most presentations, for every 25 (mostly white) men in a room there were only two women. For some discussions, I was the only woman and the only minority present. It’s not hard to see how avoiding bias might fall to the bottom of the list of priorities in such conditions, especially when the technology’s creators largely come from such a monolithic background. It is a dilemma that some at the Vegas conference acknowledged.

    “The problem we have is we always project from our own experience,” said an AI designer seated in a paisley ballroom chair. But one human’s history encompasses only a narrow scope of experiences. “We are all a product of our own personal history.”

    Some researchers are now working to recognize and account for the human biases internalized by AI, and not just after those biases become apparent. The hope is to head off discriminatory or unfair machine choices before the AI has a chance to regurgitate the very behaviors it picked up from humans.

    Kush Varshney, an IBM Research staff member, believes that scrubbing human bias from AI could help make machines tools for social good. To get there, he and his team have developed a method to curtail inequitable outcomes that are already built into the existing algorithms. If you know, for example, that predictions from data sets used for loan approvals or denials tend to exclude a group based on race, gender, or socioeconomic status, Varshney’s model would identify and anticipate those outcomes and correct for them in advance by tweaking other variables, such as zip code or age.

    “We can go back and change the predictions,” Varshney told me. The hope is that this approach, which could be expanded to other uses of AI, would level the playing field in advance, altering unfair predictions before they do harm.
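
    The article does not detail how such a correction is implemented, but the general idea of going back and re-thresholding a model’s predictions group by group can be illustrated with a short, hypothetical sketch. In the Python example below, the function names, the synthetic scores, and the choice of equalizing approval rates across a protected attribute are illustrative assumptions, not a description of IBM’s system.

```python
# Hypothetical sketch of post-hoc bias correction (not IBM's actual method):
# after a lending model produces risk scores, a separate approval cutoff is
# chosen for each group so approval rates are roughly equal across groups.
import numpy as np

def group_thresholds(scores, groups, target_rate):
    """Pick a score cutoff per group so each group is approved at ~target_rate."""
    thresholds = {}
    for g in np.unique(groups):
        group_scores = scores[groups == g]
        # The (1 - target_rate) quantile approves roughly target_rate of this group.
        thresholds[g] = np.quantile(group_scores, 1 - target_rate)
    return thresholds

def adjusted_decisions(scores, groups, target_rate=0.4):
    """'Go back and change the predictions': re-threshold scores group by group."""
    thresholds = group_thresholds(scores, groups, target_rate)
    return np.array([s >= thresholds[g] for s, g in zip(scores, groups)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic scores in which group "B" is systematically scored lower, an
    # artifact of biased training data rather than of true creditworthiness.
    groups = np.array(["A"] * 500 + ["B"] * 500)
    scores = np.concatenate([rng.normal(0.6, 0.15, 500),
                             rng.normal(0.5, 0.15, 500)])

    naive = scores >= 0.6                      # one global cutoff
    fair = adjusted_decisions(scores, groups)  # per-group cutoffs

    for label, decisions in [("single threshold", naive),
                             ("per-group threshold", fair)]:
        rate_a = decisions[groups == "A"].mean()
        rate_b = decisions[groups == "B"].mean()
        print(f"{label}: approval rate A={rate_a:.2f}, B={rate_b:.2f}")
```

    In practice, equalizing approval rates is only one possible fairness criterion; which criterion to enforce, and how to audit the corrected decisions against the originals, is itself a design choice.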

    Francesca Rossi, a research scientist at IBM’s T.J. Watson Research Center, believes AI systems can learn to behave in a way that is aligned with human values if creators embed ethical guidelines and principles into their models. But she recognizes that another challenge remains, one that’s equal parts human and technical: Not every culture shares the same values. Teaching machines to understand differences in moral codes adds another layer of complexity. A system being built for use in China, France, and the US, for example, “requires a multicultural discussion,” Rossi said. A service robot might be expected to treat an elderly person in Japan with customs and actions that show respect in ways that are unfamiliar to Americans.

    Building fairness into AI systems is vital as our lives become increasingly (and alarmingly) guided by algorithms. But if scientists and designers cannot fully understand why an artificially intelligent system responds to its environment in a certain way, or how it makes a decision (like when a cute social bot suddenly turns sexist, making a surprise comment that shocked its creator: “Your mom is like a bowling ball. She’s always coming back for more.”), how are the rest of us supposed to trust any of it when it comes to our own health, transportation, justice systems, elections, and everyday decisions?

    We average humans can’t fully detect the built-in biases, because the “mind” of the machine remains so obscure, even to its own creators. Transparency in the creation of AI — who is producing it and how, and whether or not these creators represent the diversity within society — could begin to make some of these mechanisms less mysterious, opening the door to discussions on how to curb bias before it begins.

    Those working in the field, from smaller-scale projects to revolutionary ones in big tech, must continue to address potential biases head-on as they build and improve their own algorithms. Engineers and tech designers can develop methods to anticipate inequitable outcomes and account for them within the algorithms, which is what Kush Varshney and his team at IBM, along with other groups in the AI community, are attempting to do. AI developers must make a more concerted effort to counteract bias, instead of contributing to it.

     

    Erika Hayasaki is an Associate Professor who teaches in the Literary Journalism Program at UC Irvine. She has written about science and technology for WIRED, Foreign Policy, MIT Technology Review, The Atlantic, The New Republic, Slate, and Newsweek.

     

    Image Credit: Soul Machines