Misogyny Online: Death by a Thousand Cuts

    One year into the Trump administration, malice and fear-mongering have become normalized rhetoric, facilitating the jump to full-on hate speech, which has found fertile ground online. From nationalist chat rooms to conservative evangelical groups, the Other is both a perpetual threat and a source of thrill. Muslim Americans should be deported; kneeling NFL players who protest the police belong in jail, preferably along with Black Lives Matter and Antifa activists; North Korea should be nuked straight away; transgender people belong in jail, not in the army; Dreamers need to go.

    Above all, the Other is female. In the virtual “locker room,” banter targeting women provides a gateway to other forms of intolerance: racism, anti-Semitism, and xenophobia. Demeaning jokes about women slide easily into racist comments. While an accused wife-beater held a prominent White House position for months, vindicated by our President, the subjugation of women is the common denominator among involuntary celibates (“incels”), pick-up artists (PUAs), men’s rights groups, Trump supporters, alt-righters, and neo-Nazis alike. Platforms like Reddit’s Purple Pill forums, Discord, and Return of Kings, recently fueled by the backlash against the #MeToo movement, advocate aggression toward “femoids,” “feminazis,” and “SJWs” (social justice warriors). As misogyny binds men into a “manosphere,” it allows friendships to develop that are subsequently channeled into an ideology of white male supremacy.

    Though it serves as a doorway to the logic of ethnocentrism and racism, sexism online has received less attention from politicians, lawyers, and public policy makers. While the targeting of women who challenged the gaming industry (Anita Sarkeesian, Zoe Quinn, Brianna Wu) and cases of cyber-bullying that ended in suicide (Amanda Todd, Brandy Vela) have been widely covered in the media, casual comments, “controversial humor” (a blind spot of content moderation), offensive memes, and non-consensual photo-sharing continue to thrive on social media. Considerable attention has been paid to a range of criminal behaviors online (hacking, online fraud, identity theft), but technology-facilitated sexual violence (revenge pornography, cyberbullying and stalking, malicious impersonation) often remains in a twilight zone when it comes to policy making and content moderation.

    While we might be outraged by certain Reddit users, we are equally capable of forgetting that misogyny doesn’t reside only in the hostile content they produce. This content is repackaged and perpetuated by search and social media algorithms owned and patented by Silicon Valley. It is stored, replicated, and categorized by machine learning through “big data” aggregations and statistical calculations. As a result, sexism becomes anonymous, high-speed, and ever-present, hard to address through private, legal, and state measures. N. Katherine Hayles’ book My Mother Was a Computer reminds us that the noun “computer” originally referred to a woman who performed calculations by hand. In today’s tech industry run by men, however (women currently hold 27% of senior leadership positions at Facebook), women have become a vulnerable minority. Companies like Facebook and Google, which capitalize on the myth that the technological solutions they design are neutral, value-free, and transparent, have ultimately given us a world where we cannot feel safe online.

    This is particularly perceptible in the case of search engines: U.S. commercial engines like Google, Yahoo!, and Bing hold the power to define how information is indexed, prioritized, and retrieved. Our clicks hold some sway over the results, but advertising revenue and the companies’ economic pursuits take priority, creating a layering of control within the cybersphere. For example, in my own search for “women” on Google, the highest-ranking pages invariably accord with sexist preconceptions: dating sites, soft-porn websites, rankings of “hot celebrities,” shadowy right-wing “news sites,” and Donald Trump’s misogynist quotes. Google search also often automatically recodes “women” as “girls,” leaving me wondering about the cultural implications for both age groups.
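
    To make that layering concrete, here is a deliberately toy sketch in Python. Nothing below is Google’s actual code; the fields, weights, and pages are invented for illustration. It only shows the mechanism at issue: once historical clicks and commercial value are blended into the ranking score, a sensational page can outrank a merely relevant one, and the clicks it then attracts entrench its position.

        # Toy ranking sketch: all names and weights are invented,
        # not any real engine's system.
        from dataclasses import dataclass

        @dataclass
        class Page:
            url: str
            relevance: float   # topical match to the query, 0..1
            click_rate: float  # accumulated clicks from past users, 0..1
            ad_value: float    # revenue potential for the platform, 0..1

        def rank(pages, w_rel=0.3, w_click=0.3, w_ad=0.4):
            # Click feedback and ad value together outweigh relevance here,
            # so pages that pander to existing preconceptions keep rising.
            score = lambda p: (w_rel * p.relevance + w_click * p.click_rate
                               + w_ad * p.ad_value)
            return sorted(pages, key=score, reverse=True)

        results = rank([
            Page("encyclopedia-entry",   relevance=0.9, click_rate=0.2, ad_value=0.1),
            Page("hot-celebrities-list", relevance=0.4, click_rate=0.8, ad_value=0.9),
        ])
        print([p.url for p in results])  # the sensational page comes first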

    As search results are fed to us ad nauseam, often imperceptibly, they can seem like casual micro-aggressions rather than intentional acts of violence. But these aggressions are still capable of exacting tangible damage in real life, as they recreate social dynamics that mirror offline patterns of oppression and marginalization. In 2013, an ad campaign developed for the UN showcased this discrimination using genuine Google autocomplete suggestions. It featured photographs of women’s faces with autocomplete results for terms like “Women shouldn’t…” and “Women need to…” placed over their mouths, replacing their own words and effectively silencing them. Despite Google’s declaration that it would “clean up” the search engine, the silencing remains. While the autocomplete function can be disabled (particularly for what Google deems “sensitive categories”), a quick look at the two highest results for the search term “Women should” returned “Women should always obey” and “Should women really be able to vote?” Carrying Google’s highest endorsement, they seem disquietingly legitimate.
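
    The mechanism behind such suggestions is not mysterious. A minimal sketch, assuming the standard approach of ranking completions by their frequency in past query logs (the log below is invented), shows how an autocomplete system reproduces whatever its users type most often:

        # Minimal autocomplete sketch: suggestions are just the most
        # frequent logged queries sharing the typed prefix.
        from collections import Counter

        query_log = [
            "women should always obey",
            "women should always obey",
            "women should always obey",
            "women should be able to vote",
        ]

        def autocomplete(prefix, log, k=2):
            matches = Counter(q for q in log if q.startswith(prefix))
            return [q for q, _ in matches.most_common(k)]

        print(autocomplete("women should", query_log))
        # ['women should always obey', 'women should be able to vote']
        # Frequency, not truth or decency, decides what gets suggested.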

    My further searches combining “women” with any non-white ethnicity return equally skewed results, including “The Ultimate Guide to Getting Laid with Latina Women Using Tinder” at number three, even though I do not use Tinder, do not date, and have never been in a relationship with a Latina woman. Other results oscillate between pornography, fetishization, and hate speech. In cyberspace, opportunities for perpetuating misogyny rely heavily on the algorithmic intersections of race and gender. As Safiya Umoja Noble argues in her book Algorithms of Oppression: How Search Engines Reinforce Racism, the top results of a page-ranking system significantly shape the cultural image of “Latinas,” “Asian women,” or “Black women.” In doing so, they also fail to provide an accountable cultural context for how women of color have historically been discriminated against.

    The Facebook algorithms that determine the nature of our news feeds are no kinder to women than Google’s. Rather than presenting itself as a source of universal knowledge, Facebook is founded on the logic of emotional solicitation: we are encouraged to like, wave, tag, smile, and share as part of our online engagement. But the positive emoticons have a flip side, as derogatory language runs parallel to the symbols, targeting women through explicit threats, non-consensual image use, and photo alterations that represent the opposite register of the affective economy. The platform provides a distribution network for hyperlinks to sites and profiles; among these, however, are sites that discredit, defame, and threaten women, through real or fake accounts. While Facebook’s algorithms strive to edit out offensive words and pornographic images, they combine abstract mathematics with everyday cultural bias.
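
    A tiny sketch suggests where that bias enters. Assuming, purely for illustration, a naive blocklist moderator (no platform publishes its real filter; the list and names here are invented), coded manosphere terms pass untouched simply because the people who wrote the list never thought to include them:

        # Naive blocklist moderation sketch; placeholder entries stand in
        # for whatever slurs the list's authors happened to know.
        BLOCKLIST = {"slur1", "slur2"}

        def allowed(post: str) -> bool:
            words = (w.strip(".,!?") for w in post.lower().split())
            return not any(w in BLOCKLIST for w in words)

        # Coded terms like "femoid" sail through: the math is exact, but
        # the word list encodes its authors' cultural blind spots.
        print(allowed("femoids do not deserve rights"))  # True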

    As someone firmly located in Facebook’s “progressive” echo chamber, I encounter unsolicited, explicitly misogynist content daily; it circumvents both image-matching software and human moderation. Recent encounters include crude #MeToo jokes by people in my own network; fat-shaming posts about Charlottesville activist Heather Heyer (as mentioned above, the deployment of sexism and retrograde misogyny is central to advancing white supremacy); and radical messages issued by hate communities listed by the Southern Poverty Law Center (only some of which have been removed from the platform). Because of my past affective engagement with such content (clicks, comments, sad or angry emojis), and because Facebook’s algorithms prioritize emotional impact over factual content, I am more likely to see a sexist meme of Kamala Harris than a link to a New York Times op-ed. Since Facebook announced that it will now prioritize posts from “friends” over journalism, this is unlikely to subside any time soon.
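
    A toy feed ranker makes the dynamic visible. The weights, field names, and reaction counts below are invented, not Facebook’s actual model; they merely assume, as this paragraph describes, that strong emotional reactions are rewarded more than quiet informational value:

        # Invented engagement weights: anger and sadness drive
        # interaction, so they count most toward a post's feed score.
        def feed_score(post):
            return (3.0 * post["angry"] + 3.0 * post["sad"]
                    + 1.5 * post["comments"] + 1.0 * post["likes"])

        posts = [
            {"name": "sexist meme", "angry": 40, "sad": 25, "comments": 60, "likes": 10},
            {"name": "NYT op-ed",   "angry": 2,  "sad": 1,  "comments": 5,  "likes": 30},
        ]
        for p in sorted(posts, key=feed_score, reverse=True):
            print(p["name"], feed_score(p))  # meme scores 295.0, op-ed 46.5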

    In light of this, we women are left with the question of how to manage amid this algorithmically assisted aggression. Our responses to algorithms are a tacit series of self-adaptations: we enunciate carefully when speaking to Alexa; we formulate our queries in search-engine-friendly terms; we use hashtags to make our social updates more machine-readable. The algorithms are ostensibly designed to create a secure space for sharing, yet the reality that emerges is one not of care but of precarity.

    As the algorithms provide us with information and lend a sense of emotional connection, their reach extends even wider: they count votes, approve loan applications, target citizens or neighborhoods for police surveillance, select taxpayers for IRS audits, and grant or deny immigration visas. When this power is coupled with automated decisions that can be incorrect, unjustified, or unfair, the impact extends far beyond the platforms themselves.

    Within this system, our trust, as both producers and consumers of knowledge, becomes complicit. In response, women and their allies working at tech giants and small start-ups can and should target prejudice, hate, and fear of women, whether through hiring policies, procedural regularity, alternative data inputs, accountability audits and post-hoc reviews, or critical collaboration between computer scientists and policymakers. If we want algorithms that treat women and minorities in ways that align with social justice, we must design them that way.
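
    To gesture at what one of those measures could look like in practice, here is a bare-bones sketch of an accountability audit (the data, threshold, and function are invented, and real audits are far more involved): it compares a system’s approval rates across groups and flags gaps beyond a chosen threshold.

        # Bare-bones disparity audit over a decision log; everything
        # here is an invented example, not a production auditing tool.
        from collections import defaultdict

        def audit(decisions, threshold=0.1):
            # decisions: (group, approved) pairs from the system's log
            totals, approved = defaultdict(int), defaultdict(int)
            for group, ok in decisions:
                totals[group] += 1
                approved[group] += ok
            rates = {g: approved[g] / totals[g] for g in totals}
            gap = max(rates.values()) - min(rates.values())
            return rates, gap, gap > threshold

        rates, gap, flagged = audit([("men", 1), ("men", 1),
                                     ("women", 1), ("women", 0)])
        print(rates, gap, "DISPARITY" if flagged else "ok")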