Patterns of Life: AI and “Actionable Data” in Warfare

    For the Provocations series, in conjunction with UCI’s “The Future of the Future: The Ethics and Implications of AI” conference.

    The past two decades have brought two interrelated and disturbing developments in the technopolitics of US militarism. The first is the fallacious claim of precision and accuracy in the United States’s counterterrorism program, particularly for targeted assassinations. The second is growing investment in the further automation of these same operations, as exemplified by the US Department of Defense (DoD) Algorithmic Warfare Cross-Functional Team, more commonly known as Project Maven.

    Artificial intelligence is now widely assumed to be something, some thing, of great power and inevitability. Much of my work is devoted to trying to demystify the signifier of AI, which is actually a cover term for a range of technologies and techniques of data processing and analysis, based on the adjustment of relevant parameters according to either internally or externally generated feedback.

    Some take AI developers’ admission that so-called “deep-learning” algorithms are beyond human understanding to mean that there are now forms of intelligence superior to the human. But an alternative explanation is that these algorithms are in fact elaborations of pattern analysis that are not based on significance (or learning) in the human sense, but rather on computationally detectable correlations that, however meaningless, eventually produce results that are again legible to humans. From training data to the assessment of results, it is humans who inform the input and evaluate the output of the algorithmic system’s operations.
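
    To make this concrete, here is a deliberately minimal sketch of the kind of “learning” at stake. It is an illustration under invented assumptions rather than a description of any deployed system: a handful of numeric parameters are repeatedly adjusted in response to an error signal computed against human-assigned labels, and the resulting score still awaits human interpretation.

```python
# A minimal sketch, not any military system's actual code: "learning" here is
# nothing more than repeated adjustment of numeric parameters to reduce error
# on human-labeled examples. All data below is invented for illustration.

# Human-supplied training data: feature vectors and human-assigned labels.
examples = [([0.2, 0.9], 1), ([0.8, 0.1], 0), ([0.3, 0.7], 1), ([0.9, 0.2], 0)]

weights = [0.0, 0.0]   # the "relevant parameters"
learning_rate = 0.1

for _ in range(100):                      # repeated exposure to the same data
    for features, label in examples:
        score = sum(w * x for w, x in zip(weights, features))
        error = label - score             # externally generated feedback
        # Parameter adjustment: nudge each weight to reduce the error.
        weights = [w + learning_rate * error * x
                   for w, x in zip(weights, features)]

# The output is a correlation-derived number; deciding what it means, and
# whether it is good enough to act on, remains a human judgment.
new_input = [0.25, 0.8]
print(sum(w * x for w, x in zip(weights, new_input)))
```

    Nothing in such a loop resembles understanding; it is curve-fitting to whatever correlations the human-labeled examples happen to contain.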

    When we hear calls for greater military investments in AI, we should remember that the United States is the overwhelmingly dominant global military power. The US “defense” budget, now over $700 billion, exceeds that of the next eight most heavily armed countries in the world combined (including both China and Russia). The US maintains nearly 800 military bases around the world, in seventy countries. And yet a discourse of US vulnerability continues, not only in the form of the so-called war on terror, but also more recently in the form of a new arms race among the US, China and Russia, focused on artificial intelligence.

    The problem for which algorithmic warfare is the imagined solution was described in the early 19th century by the Prussian military theorist Carl von Clausewitz, and subsequently became known as the “fog of war.” That phrase gained wider popular recognition as the title of director Errol Morris’s 2003 documentary about the life and times of former US Defense Secretary Robert McNamara. In the film, McNamara reflects on the chaos of US operations in Vietnam. That chaos made one thing clear: reliance on uniforms to signal the difference between “us” and “them” marked the limits both of the logics of modern warfighting and of efforts to limit war’s injuries.

    Efforts to limit injury in war are inscribed in International Humanitarian Law (IHL), a body of law codified in the aftermath of WWII in the Geneva Conventions and elaborated in customary international law. Rule 1 of customary IHL is the Principle of Distinction, which states that “The parties to the conflict must at all times distinguish between civilians and combatants. Attacks may only be directed against combatants. Attacks must not be directed against civilians.”

    Within US military circles, the enduring dream of finding a technological solution to the fog of war is now invested in expanded data gathering and analytics. But the ability to accumulate massive amounts of data has been accompanied by the debilitating challenge of rendering data into “actionable” information. Inspired by the usefulness of predictive data analytics in finance, marketing, and consumer behavior, a growing number of companies now offer technologies for what is called “pattern of life” analysis. These rely on techniques that define life as patterns of change in machine-readable signals over time, thus reducing human experiences, relations, and cultural practices to a series of phenomena nicely tuned to the capacities of data analysis.
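
    What that reduction looks like in practice can be suggested with the schematic sketch below. The signal, window, and threshold are invented for illustration and reproduce no vendor’s actual method: a daily routine becomes a series of numbers, a “pattern” becomes a rolling baseline, and an “anomaly” becomes any day that strays from it.

```python
# A schematic sketch of life rendered as "machine-readable signals over time."
# The data, window, and threshold are invented; no actual "pattern of life"
# product is reproduced here.

from statistics import mean, stdev

# Hypothetical daily counts of some observed signal (e.g., trips from a
# location): a human routine flattened into a numeric series.
daily_signal = [4, 5, 4, 6, 5, 4, 5, 4, 5, 6, 4, 5, 14, 5, 4]

WINDOW = 7          # days used to establish the "normal" pattern
THRESHOLD = 3.0     # standard deviations beyond which a day is flagged

for day in range(WINDOW, len(daily_signal)):
    history = daily_signal[day - WINDOW:day]
    baseline, spread = mean(history), stdev(history)
    deviation = (daily_signal[day] - baseline) / spread if spread else 0.0
    if abs(deviation) > THRESHOLD:
        # In an analytics pipeline this flag, not the life behind it,
        # becomes the "actionable" datum.
        print(f"day {day}: signal {daily_signal[day]} flagged "
              f"(baseline {baseline:.1f}, deviation {deviation:.1f} sd)")
```

    Everything that matters about the person behind the numbers, including why the routine changed and what the flagged day meant, sits outside the computation; what the pipeline delivers as “actionable” is the flag itself.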

    Referred to under the mystifyingly singular term “AI,” these technologies promise to increase the ability to recognize actors on the ground accurately. But the evidence shows quite the opposite. The Bureau of Investigative Journalism, for example, analyzed the “precision” air strikes carried out by US and Coalition forces in Pakistan from 2004 through 2015, tallying the 3,341 fatalities that resulted from every known attack during that period. Of those deaths, 190 (or 5.7%) of the victims were positively identified by the study as “children,” and 534 (or 16%) were identified as “civilians.” Along with these deaths coded as “collateral damage,” another 52 (or 1.6%) were positively identified as so-called high profile or high value targets, which is to say the actual targets of the attacks. And finally we are left with the remainder: the 2,565 people (or 76.7% of those killed) categorized simply as “other.”

    Were the timeline extended beyond 2015, the problem would only intensify. US “counter terror” airstrikes have doubled since Trump’s inauguration, particularly targeting Somalia, Yemen, and Afghanistan. In March of 2017, parts of both Somalia and Yemen were declared areas of “active hostilities,” exempting them from targeting rules introduced by President Obama to prevent civilian casualties. At the same time, the level of secrecy around these targeted assassination campaigns has increased.

    It is in this context that, in April of 2017, the DoD announced plans to develop its flagship AI program, Project Maven. At the time, then-Deputy Secretary of Defense Robert Work asserted the urgent need to incorporate “artificial intelligence and machine learning across [DoD] operations.” The plan includes an initial project on labeling data within video images generated by US drone surveillance operations. In June of 2018, the DoD launched the Joint Artificial Intelligence Center, directed by Lt. Gen. John “Jack” Shanahan. “For fiscal year 20,” Shanahan said, “our biggest project will be what we are calling ‘AI for manoeuvres and fires.’”

    The urgent question posed by Project Maven is what criteria are being used to designate targets as imminent threats. The claims of precision that justify new investments in automated targeting systems rest on a systematic conflation of the relation between a weapon and its designated target, on the one hand, with the identification of what constitutes a (legitimate) target, on the other. No amount of improvement in the precision of hitting a designated target can address the growing uncertainties and obfuscations about what constitutes a legitimate target in the first place. Insisting that AI is able to make such distinctions is part of a campaign to deny US military culpability as it increases reliance on ever more questionable forms of stereotypical categorization of who constitutes a legitimate target and, at the same time, expands the temporal and spatial boundaries of what comprises an imminent threat, as we saw in the recent killing of Qassem Suleimani and his entourage.

    The promotion of automated data analysis under the rubric of artificial intelligence, and in the name of accuracy, can only serve to exacerbate military operations that are demonstrably discriminatory in their reliance on profiling, and indiscriminate in their failures to adhere to international laws of war. Rather than support further acceleration of the speed of warfighting, Americans need to challenge proclamations of an inevitable AI arms race and redirect our tax dollars to innovations in diplomacy, social justice, and environmental protection that might truly de-escalate the clear and present threats to our collective and planetary security.


    Lucy Suchman is a Professor of Anthropology of Science and Technology at Lancaster University in the UK.