Military Algorithms and the Virtues of Transparency

From Jotwell:

For all the justifiable concern in recent years directed toward the prospect of autonomous weapons, other military uses of automation may be more imminent and more widespread. In Predicting Enemies, Ashley Deeks highlights how the U.S. military may deploy algorithms in armed conflicts to determine who should be detained and for how long, and who may be targeted. Part of the reason Deeks predicts these near-term uses of algorithms is that the military has models: algorithms and machine-learning applications currently used in the domestic criminal justice and policing contexts. The idea of such algorithms being employed as blueprints may cause heartburn. Their use domestically has prompted multiple lines of critique about, for example, biases in data and lack of transparency. Deeks recognizes those concerns and even intensifies them. She argues that concerns about the use of algorithms are exacerbated in the military context because of the “double black box”—“an ‘algorithmic black box’ inside what many in the public conceive of as the ‘operational black box’ of the military” (P. 1537)—that hampers oversight.

Predicting Enemies makes an important contribution by combining the identification of likely military uses of algorithms with trenchant critiques drawn from the same sphere as the algorithmic models themselves. Deeks is persuasive in her arguments about the problems associated with military deployment of algorithms, but she doesn’t rest there. She argues that the U.S. military should learn from the blowback it suffered after trying to maintain secrecy over post-9/11 operations, and instead pursue “strategic transparency” about its use of algorithms. (P. 1587.) Strategic transparency, as she envisions it, is an important and achievable step, though likely still insufficient to remedy all of the concerns with military deployment of algorithms.

Deeks highlights several kinds of algorithms used domestically and explains how they might parallel military applications. Domestic decision-makers use algorithms to assess risks individuals pose in order to determine, for example, whether to grant bail, impose a prison sentence, or allow release on parole. Even more controversially, police departments use algorithms to “identif[y] people who are most likely to be party to a violent incident” in the future (P. 1543, emphasis omitted), as well as to pinpoint geographic locations where crimes are likely to occur.
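To make the domestic model concrete, here is a toy sketch of what such a risk-assessment tool does at its core. The features, weights, and threshold below are invented purely for illustration; they do not describe any actual instrument, and real tools fit their weights to historical case data:

```python
# A toy illustration (hypothetical features and weights, not any real tool)
# of how a pretrial risk-assessment algorithm works at its core: a handful
# of inputs are combined into a score, and the score drives a decision.
import math

def risk_score(prior_arrests: int, age: int, failures_to_appear: int) -> float:
    """Return a 0-1 'risk' probability from a simple logistic model."""
    # Illustrative weights only; real tools estimate these from case data.
    z = -2.0 + 0.4 * prior_arrests - 0.03 * age + 0.8 * failures_to_appear
    return 1.0 / (1.0 + math.exp(-z))

# A decision-maker might compare the score to a policy threshold.
score = risk_score(prior_arrests=3, age=24, failures_to_appear=1)
print(f"risk score: {score:.2f}")
print("recommend detention" if score > 0.5 else "recommend release")
```

Everything contentious in the policy debate (which features enter, how the weights were fit, where the threshold sits) lives inside those few lines, which is why opacity about them draws so much critique.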

. . . .

As compared with domestic police departments or judicial officials, militaries using algorithms early in a particular conflict are likely to have far less, and far less granular, information about the population with which to train their algorithms. And algorithms trained for one conflict may not be transferable to different conflicts in different locations involving different populations, meaning that the same problems with lack of data would recur at the start of each new conflict. There’s also the problem of applying algorithms “cross-culturally” in the military context, rather than “within a single society” as is the case when they are used domestically (P. 1565), and the related possibility of exacerbating biases embedded in the data. With bad or insufficient data come inaccurate algorithmic outcomes.
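Deeks’s transferability point is, in machine-learning terms, a distribution-shift problem. The sketch below (entirely synthetic data and hypothetical features, unrelated to any real system) trains a simple classifier on one simulated population and scores it on a second whose feature distribution and feature-outcome relationship both differ:

```python
# Minimal sketch of distribution shift on synthetic data: a model fit to
# one population degrades badly when transferred to a different one.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_population(n, mean, weights):
    """Synthetic population: 2 features; labels follow a noisy linear rule."""
    X = rng.normal(loc=mean, scale=1.0, size=(n, 2))
    logits = X @ weights + rng.normal(scale=0.5, size=n)
    return X, (logits > 0).astype(int)

# Population A: the conflict the model was trained for.
X_a, y_a = make_population(5000, mean=[0.0, 0.0], weights=np.array([1.5, -0.5]))

# Population B: a different conflict; both the feature distribution and the
# feature-outcome relationship have changed (covariate and concept shift).
X_b, y_b = make_population(5000, mean=[1.0, -1.0], weights=np.array([-0.5, 1.5]))

model = LogisticRegression().fit(X_a, y_a)
print(f"accuracy on population A: {model.score(X_a, y_a):.2f}")  # in-distribution
print(f"accuracy on population B: {model.score(X_b, y_b):.2f}")  # transferred
```

On toy setups like this the transferred accuracy typically falls to chance or worse. The example is only directional, but it mirrors the problem Deeks identifies: the data shortage recurs at the start of each new conflict.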

. . . .

Deeks argues that the military should “fight its institutional instincts” (P. 1576) to hide behind classification and limited oversight from Congress and the public and instead deploy a lesson from the war on terror—that “there are advantages to be gained by publicly confronting the fact that new tools pose difficult challenges and tradeoffs, by giving reasons for their use, and by clarifying how the tools are used, by whom, and pursuant to what legal rules.” (P. 1583.) Specifically, Deeks argues that in pursuing transparency, the military should explain when and how it uses algorithms and machine learning, articulate how such tools comply with its international law obligations, and engage in a public discussion of costs and benefits of using algorithms. (Pp. 1588-89.)

Link to the rest at Jotwell

PG notes the original article was published in The Virginia Law Review and is available here.

PG is inclined to think there is virtue in secrecy for algorithms used in military conflicts. At a minimum, disclosure gives potential future opponents a head start in developing the same sort of artificial intelligence. Disclosure could also assist a future enemy in developing counter-weapons that defeat or reduce the effectiveness of the original algorithms.