Technologies of the Post-Racial

Algorithmic Governance and its Invisible Biases

Short statement
Workshop/conference “Race After the Post-Racial”
Organised by Françoise Vergès and David Theo Goldberg

Collège d’Études Mondiales/La Colonie, Paris
6 – 7 Dec 2016

[Special thanks to Eyal Weizman]
[Image: facial recognition. Credit: Police Oracle]

Anti-racist struggles have long been articulated around a cultural effort to present racism as an atavism, as a retrograde thought and behaviour that has no place in an advanced society; and, as a corollary, to shame “the racists” for their idiocy and lack of culture. Tentatively, we can identify a two-fold problem arising today that may bring this anti-racist strategy into crisis: first, that racism is re-appearing, not as a thing of the past, but as an effect of cutting-edge technological advances of our times; and second, that, increasingly, racism doesn’t need racist subjects to operate.

_

The context within which I would locate these shifts is that of our “colonial present”, which is articulated primarily through the ‘war on terror’ – as Derek Gregory, Achille Mbembe, and many others have argued. Boundless and permanent, this war against a diffuse ‘terrorist’ ‘other’ has also blurred the boundary between war and peace. Among the many consequences of this blurring, one is the convergence of military and police practices under the overarching discourse of security. If I may speak in very schematic terms, for the sake of time: on the one hand, we have militaries conducting policing operations in unruly post- or neo-colonial frontiers of the Global South; and on the other, we see a militarisation of the police in the urban heartlands of the Global North.

Through the research conducted at Forensic Architecture, we have examined the former phenomenon in several contexts, such as the activities of the Israeli military in the Occupied Palestinian Territories, or US drone warfare in Pakistan, Afghanistan, Somalia, and Yemen. For example, our work on the platform Where The Drones Strike, in partnership with the Bureau of Investigative Journalism and the UN Special Rapporteur on Counter-Terrorism – which mapped and analysed the most comprehensive database of reported drone strikes in Pakistan – was an attempt to reverse-engineer the logic of pattern recognition that informs and triggers such strikes, as we know from a number of studies, including Grégoire Chamayou’s Drone Theory. This deciphering task was later facilitated by a series of leaks of classified “drone papers”, which further revealed the inner workings of the US assassination program, especially with regard to the process of detection and selection of targets. Without going into the details here, this process relies heavily on signals intelligence, or ‘SIGINT’, produced through big data analytics of an enormous mass of electronic signals resulting from the general activities of the population under surveillance.

There are two ways in which we can, in our turn, ‘detect’ racism at work here. On the one hand, the algorithms programmed to detect threat patterns are biased by cultural ignorance of local habits, and end up producing highly deadly ‘false positives’. We can take the example of an infamous drone strike that took place in March 2011 in Datta Khel, FATA, Pakistan, which we analysed as part of our report. A drone fired a missile at a ‘jirga’ – a traditional gathering of tribal elders and community leaders – killing upwards of 43 civilians. Whether it was algorithmic detection that wrongly identified a large gathering of men of fighting age, in an open field, as a meeting of militants, or whether it was simply a trigger-happy drone operator and his misinformed supervisors, we will probably never know. But that leads us to the second way in which racism operates here: it is only because, at the end of the day, the targets are perceived as generic, racialised ‘Muslims’ living thousands of miles away from the American heartland that such decisions about life or death are allowed to be taken so lightly, on the basis of such an experimental technology. Diffused and distributed along a chain of human-technology interactions that spans vast territories, such deadly acts of racism are difficult to perceive as such – let alone to stand up against.
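To make the notion of a ‘false positive’ concrete, here is a deliberately crude sketch – not any actual targeting system, and with all fields and thresholds hypothetical – of how a rule over coarse metadata can classify a civilian jirga exactly like a gathering of militants:

```python
# A deliberately crude, hypothetical "threat pattern" rule over coarse metadata.
# It encodes nothing about local social practice, so it cannot distinguish a
# jirga from a militant gathering -- the structure of a false positive.

def flag_as_threat(event):
    """Flag any large open-air gathering of adult males as suspicious."""
    return (event["group_size"] > 30
            and event["share_adult_male"] > 0.9
            and event["location_type"] == "open_field")

jirga = {  # a traditional assembly of tribal elders and community leaders
    "group_size": 45,
    "share_adult_male": 1.0,
    "location_type": "open_field",
}

print(flag_as_threat(jirga))  # True: flagged, despite being a civilian assembly
```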

_

On the other hand, we have what Foucault called the “boomerang effect” of colonial practices: how experiments and techniques elaborated on the colonial frontiers are then re-imported into the metropolis. In our case, this translates into the implementation of military technologies and strategies by police forces in the Global North.

For example, we are witnessing the massive adoption of predictive policing tools and big data analytics by police agencies in the US and in Europe – the rationale being that of optimising the allocation of resources by defining expected crime “hot spots” on which to concentrate policing. In contrast with the rush towards this technology, most of the studies emerging on the topic show that not only does the use of such tools not result in any reduction of crime rates, but, above all, it tends to reinforce existing biases in policing practices. This is easily explained as soon as one looks at how such predictive tools work: put simply, they take existing police records to define primary ‘targets of interest’ and run a light version of pattern analysis that ‘reveals’ the social networks of previously arrested persons. Now, there is no need to further demonstrate the racial bias in the disproportionate number of arrests and incarcerations of people of colour in the US. This means that, under the guise of a neutral, scientific, objective method of policing, predictive policing ends up regurgitating “insights” that are just as racially biased as the data it is fed. Thereby, what could still be denounced yesterday as racism within police and judicial institutions is somehow laundered through technological fetishism, and re-emerges as a set of objective, impartial, state-of-the-art security practices.
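To make this feedback loop concrete, here is a minimal, deliberately simplified sketch – not any vendor’s actual product, with all districts, numbers, and thresholds hypothetical – of how a ‘hot spot’ score derived from historical arrest records can amplify an initial bias: the most-arrested district attracts more patrols, which generate more recorded arrests, which further raise its score.

```python
import random

# Three hypothetical districts with identical underlying crime rates;
# district "B" simply starts with more recorded arrests (past over-policing).
arrest_history = {"A": 20, "B": 60, "C": 20}
true_crime_rate = {"A": 0.3, "B": 0.3, "C": 0.3}  # equal by construction

def predict_hotspot(history):
    """The 'hot spot' score: rank districts purely by past recorded arrests."""
    return max(history, key=history.get)

def patrol(district, history, intensity=10):
    """More patrols in a district yield more *recorded* arrests there,
    regardless of the (equal) underlying crime rate."""
    new_arrests = sum(random.random() < true_crime_rate[district]
                      for _ in range(intensity))
    history[district] += new_arrests

random.seed(0)
for _ in range(50):
    target = predict_hotspot(arrest_history)  # always the most-arrested district
    patrol(target, arrest_history)            # ...which then accumulates even more

print(arrest_history)
# District "B" keeps pulling further ahead; the initial bias is amplified,
# never corrected, and is then read back as an objective "insight".
```

Even though the underlying crime rates are identical by construction, the initially over-policed district ends up with by far the most recorded arrests: the bias in the input data is reproduced and amplified, not corrected.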

Facial recognition is another technology that is being massively adopted and deployed by police forces around the world. And again, studies have shown that most facial recognition algorithms present a racial bias. For example: new software equipping special surveillance cameras in Los Angeles can recognise individuals up to 600 feet away and match them against a list of suspects. While this list, again, will contain a disproportionate number of African Americans, a study has shown that the software performs ‘worse’ precisely on this demographic; as a consequence, African Americans are more likely to be mis-recognised by facial recognition systems, and thus to be investigated or even arrested for a crime they have not committed. Why does such software present a racial bias? There are a number of possible factors: a training dataset, used to develop the algorithm, that contains a disproportionate number of faces from a given ethnic group; a programming focus on facial features that may be more easily recognised in a given ethnic group; reliance on previous scientific papers on facial recognition that themselves contained a racial bias; and so on. What is perhaps most frightening is the fact that such technology does not need to undergo public scrutiny before being deployed. For that reason, until such time as an independent critical study is undertaken, such algorithmic racial biases can remain hidden within the black box of technology.
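By way of illustration, here is a minimal auditing sketch, assuming only a hypothetical list of labelled recognition results rather than any real system: it disaggregates the error rate by demographic group, which is the kind of independent analysis that makes such a disparity visible in the first place.

```python
from collections import defaultdict

def misidentification_rate_by_group(results):
    """`results` is a list of dicts with keys 'group', 'true_id', 'predicted_id'
    (hypothetical output of running a recognition system over a labelled set)."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for r in results:
        totals[r["group"]] += 1
        if r["predicted_id"] != r["true_id"]:
            errors[r["group"]] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Toy records, for shape only -- not real evaluation data:
sample = [
    {"group": "white", "true_id": 1, "predicted_id": 1},
    {"group": "white", "true_id": 2, "predicted_id": 2},
    {"group": "black", "true_id": 3, "predicted_id": 7},  # a misidentification
    {"group": "black", "true_id": 4, "predicted_id": 4},
]
print(misidentification_rate_by_group(sample))
# {'white': 0.0, 'black': 0.5} -- an aggregate accuracy figure would hide this gap.
```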

Finally, it is important to mention that the penetration of big data analytics within society, and the kind of policing practices this technology lends itself to, are far from limited to police forces in the strict sense. For example, a series of studies have warned against a re-emergence of “redlining” in the digital age, whereby complex data analytics are used by companies to assess a customer’s creditworthiness and risk, and to offer ‘targeted’ services at indexed prices, or to deny such services altogether – as well as “reverse redlining”, with specialised data brokers splitting a market of potential customers into micro-categories that are proxies for race and class, and marketers using this data to target the most vulnerable ones with exploitative financial products. This is just one example among many.
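A minimal sketch of the proxy mechanism behind such digital redlining, with entirely hypothetical data and a hand-tuned, ‘race-blind’ pricing rule: the protected attribute is never an input, yet a correlated proxy – here a postal code – carries it back in.

```python
# Hypothetical applicants: 'race' is never given to the pricing rule,
# but 'zip_code' is strongly correlated with it in this toy data.
applicants = [
    {"zip_code": "10001", "race": "white", "income": 60},
    {"zip_code": "10001", "race": "white", "income": 40},
    {"zip_code": "10456", "race": "black", "income": 60},
    {"zip_code": "10456", "race": "black", "income": 40},
]

# A 'race-blind' risk score, learned or hand-tuned from historical data that
# happened to treat 10456 as a 'high-risk' area.
risk_by_zip = {"10001": 0.1, "10456": 0.4}

def quoted_rate(applicant, base=5.0):
    """Quoted interest rate = base rate + surcharge driven by the ZIP-code proxy."""
    return base + 10 * risk_by_zip[applicant["zip_code"]]

for a in applicants:
    print(a["race"], a["income"], quoted_rate(a))
# Applicants with identical incomes receive different quotes, split exactly
# along racial lines -- without 'race' ever appearing in the rule.
```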

_

So, to go back to the broader problem of racial bias and new technologies: in the examples I have briefly sketched here, it may seem that we don’t really have “racism without racist subjects”; rather, the problem is that racial biases and ingrained forms of structural racism influence the production and functioning of increasingly widespread profiling technologies; and, what is more, once embedded within increasingly incomprehensible software and algorithmic networks, such racial biases – which still originate in human subjects – may be widely diffused and amplified by technology, at the same time as they are made invisible, imperceptible.

This, in itself, seems to me to constitute an important field of questions to be addressed by critical race scholars and computer scientists – hopefully together, through new forms of collaboration. Activism around “opening the black box” of widely used technologies, or calls to “make algorithms accountable”, may prove increasingly relevant as we move forward into a hyper-mediated condition of algorithmic governance.

But could it be – and I will end on this speculative, and still too abstract, point – that contemporary technology isn’t just a new vector for the silent spread of racisms; could it be, instead, that digital technology carries some uncanny structural affinities with racism? “The binary logic of racism [(us/them)] resembles the [binary logic of the digital (one/zero)] in that it copes with difference, exception, and indeterminacy by reducing such remainders to the nearest whole, or identity”. So perhaps it is legitimate to ask: to what degree could the pervasive use of digital technology as a collective infrastructure of perception and communication, thought and action in the world, inconspicuously also support the proliferation of racisms, and other binary conceptions of the political today?