One of the most difficult challenges of governing responses to the pandemic has been the development and deployment of infection tracing and tracking platforms, which may include not just apps on phones, but tens of thousands of social workers checking and reconstructing infection vectors and enforcing quarantines face-to-face and hour-by-hour. Generally, in the West these efforts have been both chaotic and contentious. This is not the case everywhere. Having gone through SARS-1 in 2003, Taiwan had a system prepared, and was able to do effective tracing through a combination of high-tech and high-touch, and to support quarantine efforts simply through cell tower triangulation. This required coordination and compliance that may be culturally impossible and thus politically impossible in other countries.
The controversies about tracing in many Western countries were based on concerns that private platforms would use the cover of the emergency to get their claws deeper into the private worlds of their users. This led to both serious debates on who should be building the digital means of production and unserious shouting matches based on rumor and insinuation. The sad story of the United Kingdom’s bungling attempts to build and deploy an app for this purpose is a minor scandal in itself. In the United States, Google and Apple, makers of the two most used mobile operating systems, collaborated on a contagion tracer app based on encrypted Bluetooth signals, but the federal response and fifty state responses were each so “sovereign” that, as of this writing, it was not even possible to get caseloads to a level where coordinated app-assisted contact tracing could help. The system never got itself together enough for the app to be useful. Post–General Data Protection Regulation expectations as to the dynamics between American cloud platforms and European tech sovereignty were scrambled when Google and Apple refused various European government requests to make their app less anonymous because that would expose their users to privacy vulnerabilities.
Here the trade-off is not just between a system that works well as an epidemiological technology and one that a public will actually download and use. In this case, we also observe a disconnect between legitimate if costly suspicions, concerns, and refusals concerning how the technology actually works and those concerning how people imagine the technology must work. The latter, based on thematically consistent if also untrue misconceptions, are often supported by both mainstream and fringe media: the tracing apps are directly linked to your email account, or passport, or they use your phone’s camera somehow, or they add your name to a special permanent government database, etc. Add this to the real underlying chaos of the rollouts, and no wonder people are unenthusiastic. The dizzying variety of tracing apps in use throughout the world would make it impossible for anyone but the most dedicated tracker of trackers to have a clear picture.
Nevertheless, when public perception is based on cultivated paranoia, the hypothetical becomes the actual. Policy follows. To be sure, if the public debate orbits around not the merits and flaws of actually existing technologies but their mythological alter egos, then the result is that there isn’t enough direct and open interrogation of the actual technologies as they are, and the public is left to argue with bogeymen.
Among these tall tales are not only domestic scenarios but also fantastically misinformed references to how similar approaches have been used in, specifically, Asian countries. In the West, lessons of comparative governance, and of how responses were realized in countries like China, Taiwan, Singapore, and Vietnam, are also being viewed through a distorted lens. The insistent tropes that the virus itself is “Chinese,” that it may have been grown deliberately in a Chinese biotech laboratory, that it may have something to do with 5G, that it demands that we wear hazmat suits from a sci-fi movie, that “in China” the government responses are as dystopian and inhumane as you can dream up, and so on, are not confined to the fringes of Western biopolitical discourse. They are close to the center. “China” serves as the cautionary tale, but of what? For its part, Chinese state media has constructed its own wall of propagandistic nonsense, congratulating itself and blaming everyone else.
In the West, China is now so deeply associated with technology that anxieties about technology are projected onto anxieties about China, and to an extent vice versa. Old Orientalist notions offer fake legibility and comfort for a Western dreamworld where the flipside of Sinofuturist fantasies is that fear of technology becomes fear of Asia. The hardcore version of the narrative is the evolution of the old “yellow horde” into new waves of emotionless robots. Add a genuinely psychologically destabilizing event like the pandemic, and the disturbances of a clear sense of agency and subjectivity in relation to epidemiological demands, and this outcome is not incomprehensible.
Such misapprehensions should be rejected by post-pandemic politics, even if the prospective demands for a positive biopolitics are not without controversy, legitimate and otherwise. The negative biopolitics is based on a too-common sense that all forms of society-scale sensing, modeling, and governance are, in essence, forms of pernicious “surveillance” and so should be resisted on those terms. Concurrent libertarian positions are based on the idea that individuals exist first in a state of self-sovereign freedom and are only later “captured” by technology or collective representation. Many responses, however, are based in wholly appropriate critiques of large private platforms whose manipulated models of society present it as composed merely and solely of hyperindividuated user profiles and predictions of their next desire, click, vote, or purchase.
While libertarian attitudes may be at home in different pockets among both the political Left and Right, they are antithetical to the epidemiological model of society that operates not at the level of individuals but of what connects them. It is unlikely that the equitable and effective biopolitics that is needed for post-pandemic politics can emerge if wide-scale sensing and modeling is dismissed and resisted entirely as unwarranted “social control.” The overinflation of the term “surveillance” is both politically debilitating and intellectually dishonest.
The pandemic has very likely changed some of the ways that we define, interpret, discuss, deploy, and resist sensing and modeling. Perhaps not. As the virus first reached Europe, another theorist of technology, based in Germany, argued to me that people should resist being tested for the virus and contributing to epidemiological models, because acquiescing to this regime would in the long run only encourage the invasive normalization of “big data biopolitics,” which he went on to claim is, ultimately, inseparable from eugenics, the colonial-era slave trade, AI bias, and the torture of Uighurs. He proudly said that he even told his students that they should all refuse testing and, having recently checked in on him, he maintains this position still.
In previous years he had a bigger audience for this line of thought, but fewer would see it his way now. Many are recognizing not that “surveillance is actually good,” but rather that the suppressed positive potential of the technologies on which it depends demands attention and overdue realization. We could be using these technologies for much more important things than, for example, racial profiling and modeling individual consumers so as to predict which video they’re likely to click on next. For post-pandemic biopolitics, the more necessary functions of planetary-scale computation that have been ignored or suppressed or sidelined come to the fore once again.
The epidemiological view of society and the agency of model simulations is changing the conversation about these matters. The debate isn’t made easier by pandemics, but it is opened up in important ways. It is a mistake to reflexively interpret all forms of sensing and modeling as “surveillance” and all forms of active governance as punitive constraint. We need a different and more nuanced vocabulary.
In addition to the right to reasonable privacy there is also a right and responsibility to be counted. For post-pandemic biopolitics, inclusivity is essential. As noted, equitable systems depend on the accuracy of models because the risk is always collective. However, as current controversies over citizenship and the census reveal, certain neighborhoods, certain populations, certain types of inhabitants, certain less visible persons, places, and processes are undercounted, under-measured, under-accounted for, even unnamed, and are functionally invisible in terms of what is due to them. To address this honestly, it is worth asking questions about the critical discourse of “anti-surveillance.”
At times it would appear that the term “surveillance” has ascended to the status of an almost sacred negative concept not only for libertarian-anarchist idealists but also for Western political culture in general, across the political spectrum, including the complacent center. It can function as a tenet around which entire worldviews of technology, globalism, urban planning, and the fictive autonomy of self-identity are made to orbit. Notions of “resistance” to surveillance, at once both fuzzy and axiomatic, credentialize entire art biennials, software movements, streaming documentaries, and political parties, and I am actually sympathetic to the bigger goals of many of these. But when the master concept is inflated and amplified to explain so very much about what’s what and why, it is inevitable that the surveillance bubble will soon pop as more precise concepts appear to make sense of sensing, modeling, and prediction as a social technology.
An old joke might apply here: in the West, as a teenager you read either Nineteen Eighty-Four or Atlas Shrugged, and your politics are thereby cemented. Both of these works are Cold War novels about a self-sovereign individual who regains his stolen agency at the expense of the unwanted observation and interference from the larger social Other. Freedom means freedom from supervision, and so the heroic individual resists the oppressive and pervasive societal manipulation, finally realizing his solitary existential triumph. The themes are ingrained and built into the grammar of Western political common sense across the ideological spectrum. No wonder that when the United States built planetary-scale sensing and modeling infrastructures, its script for how they would be designed and received was so clear.
If we were to add to this list the one Foucault book with which every undergraduate claims familiarity, Surveiller et Punir / Discipline and Punish, then philosophy’s specific contribution to this tale becomes more clear. An intricately told history of the formation of the prone subject position of the prisoner as object of carceral logistics has devolved over successive retellings into the infamous meme: “factories are like prisons! schools are like prisons! prisons are like prisons!” To be sure, post-pandemic biopolitics must include, not exclude, people whose public persona is based on their deep concern about algorithmic bias (quite real) but who nevertheless cannot name a single algorithm (A* search algorithm? Fast Fourier transform? Gradient descent?). When they don’t really know what an algorithm is, but they are willing to take to the streets to demand that it be “fucked,” then even if their instincts are not entirely wrong, the discourses available are failing them.
Like any such big vague idea, “anti-surveillance” degrades, with diminishing returns, into increasingly dull-edged beliefs. A pro forma sterility of the critique becomes arduous and demoralizing. Handwringing op-eds “about what technology is doing to us,” simultaneously po-faced and hysterical, obsessive and ignorant, radical and reactionary, echo a tedious and incredulous moralizing about contemporary digital etiquette. These are fixated less on weighing cumulative outcomes than on ensuring that one is not personally complicit with remote evils, based on self-evidently shoddy ideas about how relays of transitive harm between the moment of data capture and subsequent uses actually work. It is a politics of resentment oriented on the propriety of personal identity convened by what Polish writer Bogna Konior calls the “Anglo-Saxon luddite aristocracy.” Another project is possible.
Exchanging manipulated attention capital for infrastructural access is not even remotely an ideal way to organize computation for a planetary society. But as the pandemic has very clearly demonstrated, robust and inclusive models are how any capable society comes to comprehend itself and compose itself. For a post-pandemic biopolitics, the circular reasoning and vicious cycle of “surveillance vs. anti-surveillance” frameworks must be replaced by a more polychromatic vocabulary so that a very different means of modeling production can be realized.
From The Revenge of the Real: Post-Pandemic Politics by Benjamin Bratton. Used with the permission of the publisher, Verso Books. Copyright © 2021 by Benjamin Bratton.