Introduction
On 24 November 2010, having recently begun my doctoral research, I joined a student tuition fee protest in the centre of London. Both this and later protests opposed the government’s plans to increase student tuition fees and implement spending cuts to United Kingdom (UK) higher education but were ultimately unsuccessful. Although predominantly peaceful, much of the media coverage of the protest focused on the Metropolitan Police’s controversial use of the ‘kettling’ technique to temporarily detain around 3000 protestors, including myself, in an area of Whitehall for around 10 hours (NETPOL, 2012).
Two years later, on 24 November 2012, I participated in an antifascist protest in Berlin that marked the 20th anniversary of the death of an activist called Silvio Meier. That year, the annual commemorative protest, a case study within my doctoral research, garnered more attention than usual. This was not only because of the major anniversary but also the controversies surrounding the recent discovery that a neo-Nazi terror group had killed nine people of Turkish, Greek and Kurdish descent between 2000 and 2006 across Germany without being detected by the police. That year, protestors marched under the portraits of some of these individuals (Merrill, 2017).
The day after the London protest, a friend told me they had seen me in a photo of it printed in a national newspaper. While I could never confirm this, soon after the Berlin protest, I identified myself in some of the local news coverage of the protest that had been uploaded to the internet even before the protest had ended. Of course, my involvement in these protests had likely been captured in numerous digital photos and videos, not least those taken by the police surveillance teams that were present throughout both protests.
That such photos and videos still likely exist without my knowing resonates with how the masses of relatively mundane mediated traces of individual activists have rarely, if ever, been considered with respect to the focus of this special issue,
I start by recounting existing understandings of the memory-activism nexus and positioning the state therein. Thereafter, I draw on Scott’s (1998)
Re-stating the memory-activism nexus
A tripartite understanding of the memory-activism nexus has become popular across social and cultural memory studies since Rigney (2018) introduced the concept to grasp the complex interplay between
The state was central to the second-wave memory research of the 1980s and 1990s, primarily via national frames of analysis. At this time, the ability of states to monopolise memory to serve national identities while silencing others was partly connected to their control of archives (Olick and Robbins, 1998). As such, efforts to handle the epistemological consequences of recognising that ‘archives always belonged to institutions of power’ became characteristic of memory studies in general (Assmann, 2008: 102). Some suggested reading archives ‘against the grain’ via their contents, while others argued they should be studied ethnographically ‘along the grain’ in relation to their creators (Schwartz and Cook, 2002; Stoler, 2009).
More recently, the state’s conceptual importance within memory studies has been eroded by a third wave of research that has displaced national frames of analysis in favour of transnational perspectives (De Cesari and Rigney, 2014). However, states persist and ‘progressively attempt to administer memory’ (McQuaid and Gensburger, 2019: 125). Recognising this, McQuaid and Gensburger argue that while the state has long featured in memory studies, insufficient attention has been paid to how states are organised and structured and how they shape public memory via policy and bureaucracy. According to McQuaid and Gensburger, memory researchers have become fixated on the content of public forms of state remembrance and commemoration while ignoring the workings of the state that lie behind these phenomena.
These shortcomings arguably also apply to existing research on the memory-activism nexus. The state is often inferred but rarely foregrounded. With regard to research on memory activism, the state is often conceived as a target and something to be avoided (Partridge, 2023). Gutman and Wüstenberg (2023), for instance, define memory activism as ‘the strategic commemoration of a contested past to achieve mnemonic or political change by working outside state channels’ (p. 5). The state also features within the memories that memory activists promote, particularly those linked to state violence. A similar relationship to the state tends to characterise the study of memory
I start to do this in this article while trying to avoid the trap of conceiving the state as a monolithic, unitary actor. States are, of course, multifaceted, multi-scalar and multi-sited – ‘encountered in many forms and diverse spaces’ (Wüstenberg, 2017: 27). They are, as Joyce and Mukerji (2017) write, ‘assemblages of the active agencies of many and often conflicting people and objects in a myriad of sites’ (p. 1). They are also hybrid institutions whose agencies intermesh with civil society actors including activists (Wüstenberg, 2017). In fact, they are doubly hybrid because they and their agencies are also held together ‘at particular key sites and through the actions of key actors and processes, both human and non-human’ (Joyce and Mukerji, 2017: 1; Merrill, 2024). States, in other words, have materiality: ‘from transport infrastructures to post offices to legal archives’; they exist in ‘different configurations of people and things’ (Joyce and Mukerji, 2017: 2).
My use of ‘the state’ in this article does not mean to mask these complexities. Still, the key state actors and non-human material formations that I am primarily concerned with are the police (and, to a lesser extent, the private companies they contract with) and their databases – themselves hybrids of human and technological agency. In focusing on police databases, I am not seeking to understand the public state administrations of memory promoted by McQuaid and Gensburger (2023). Instead, I am interested in the even more hidden workings of the state to record activists with the potential to remember them in the future. As such, I target an alternative type of state remembrance to that which has commonly concerned memory studies scholars. In short, I am interested in the state’s secretive shadowed places of memory.
Stoler (2009) writes,
Shadowed places are what states create, emblematic conventions of the archival form. . . State sovereignty resides in the power to designate arbitrary social facts of the world as matters of security and concerns of state. (p. 26)
As this quote illustrates, these ‘shadowy’ places of memory (alongside their more illuminated counterparts) contribute to how the state (via its many agencies) carries out statecraft and exerts power. They connect to Weberian ideas of state power as bureaucracy, legal order and the monopolisation of the legitimate use of force within a given territory (Weber, 2004) as well as to Foucauldian theories of governmentality that suggest the most efficient means by which a state can ensure its survival is often through the production and control of its population (Foucault, 1991; see also O’Neil, 1987). Likewise, they underpin the logistical power of states – an impersonal ‘form of material practice that extends across its multiple sites’ (Joyce and Mukerji, 2017: 2). All these ideas of state power inform my perspective. However, here, I rely primarily on the compatible work of Scott (1998) to highlight in more detail how the state exercises power by
Seeing and remembering like a state: from high modernism to dataism
In
To provide an historical example, the Nazi state rapidly created a surveillance society to increase the legibility of its subjects and to identify those that its exclusionary ideology deemed to be political enemies of its state and racial enemies of its people (Fritzsche, 2005). Within this task, the Gestapo’s spy network and the national censuses of April 1933 and May 1939 were decisive. The censuses established ‘a comprehensive archive of who was German and who was not’ that would eventually inform the systematic oppression and murder of Jewish, Roma and Sinti people (Fritzsche, 2005: 196; Luebke and Milton, 1994). However, in early 1933 the Nazis’ most immediate threat came from political opponents on the left (Wachsmann, 2008). Illustrating this, using repressive legislation introduced the day after the Reichstag fire of 27 February 1933, the Nazis had, by June 1933 and in Prussia alone, placed 25,000 people – the majority of whom were Communists – in custody without legal proceedings (Wachsmann, 2008). The Nazi state did not only rely on the creation of new state simplifications. It also identified its political opponents by remembering pre-existing simplifications. Analysis of the surviving card catalogue of the Gestapo’s Osnabrück branch has shown how a 1933 wave of arrests of Communist and Social Democrat activists was based on the recollection of information compiled from 1929 onwards by the local Prussian police (Bondzio, 2021). The Prussian police created around 3000 personal files based on the observation of, among other things, Nazi and Communist protests. While they did this according to existing regulations and laws and rarely used the files for police measures, the Gestapo rapidly turned the files towards Nazi political goals (Bondzio, 2021).
Developments in digital technology and computation since 1945 mean that contemporary states now see and remember in new ways. Fourcade and Gordon (2020) highlight this through reference to ‘the dataist state’. They write,
it is no longer necessary to flatten society to make it legible (as high modernism required); instead, ubiquitous data capture (both acknowledged and secretive) means that identification and legibility can be produced algorithmically – that is, categories emerge organically from regularities observed in the data. (Fourcade and Gordon, 2020: 80)
Dataism, as an ideology, has replaced high modernism. Likewise, statistics as a flattening mode of state simplification has been superseded by algorithms as a modelling mode of state simplification (Fourcade and Gordon, 2020; Van Dijck, 2014). Individuals are no longer governed by being statistically flattened; they are dissected and modelled – ‘seen through slices of data’ taken from different aspects of their lives like health, finances and education (Fourcade and Gordon, 2020: 80). Dataist state power is thus tied to the ability of states ‘to access more and more private data as they strive to develop more ambitious forms of population management’ (Fourcade and Gordon, 2020: 90).
The algorithmic governance that underpins dataist statecraft overlaps with what Zuboff (2018) calls surveillance capitalism. Scott (1998) already acknowledged that capitalist entrepreneurs were as responsible for the simplification of subjects as the state, albeit motivated primarily by financial gain.2 However, in the dataist state, the entangled relationship of governance and commerce has become accentuated, with the two collapsing into one another in unprecedented ways (Fourcade and Gordon, 2020). Corporations may have long been involved in the delivery of state services, but now they ‘are designing the very categories (the bits of information) through which both states and markets now apprehend the world’ (Fourcade and Gordon, 2020: 81). These developments have placed further strain on the liberal democratic belief ‘in a private sphere of activity in which the state and its agencies may not legitimately interfere’, something Scott (1998) named as one of the reasons why ‘the authoritarian temptations of twentieth-century high-modernism’ (p. 101) could be resisted. While the well-intentioned liberal democratic motivations of dataism find expression in, for example, the automation of welfare state provisions, ultimately, the dataist ideology creates the conditions for states not only to reward but also to punish their citizens based on ‘what parts of their lives show up in its databases’ (Fourcade and Gordon, 2020: 87).
It is these databases, specifically police surveillance databases, that I turn to in the next section. But first, it is important to acknowledge, as Bowker (2005) does, that ‘databases are not a product of the computer revolution; if anything, the computer revolution is a product of the drive to database’ (p. 109). The attention I pay to databases throughout this article should thus not be misread as suggesting technological determinism. As the character of Scott’s four conditions for disaster and the example of the Nazi state highlight, technology itself is neither the sole nor necessarily the main determinant of such negative outcomes. It provides the state with new opportunities, but the politics of the state, specifically whether it is authoritarian, and the general drive to see, classify and remember subjects in order to exert state power play a decisive role in whether the outcome is desirable or disastrous.
Surveillance databases, predictive re-membering and the epoch of potential memory
The high-modernist state primarily relied on archives – conventionally defined as locations where records and documents are preserved – to store its state simplifications. What Assmann (2008) calls political archives were, in other words, the original shadowed places of state memory. But within the dataist state, archives (at least as conventionally conceived) have been superseded by databases – large, structured collections of data designed for rapid computational search and retrieval tasks – and new shadowed places of memory have emerged, including police surveillance databases. The broader technological developments that this shift indexes have, in turn, seen the terms ‘archive’ and ‘database’ become increasingly contested when theorising memory (Manoff, 2010: 386; see also Rigney, 2024). While the former, in a general sense, can be conceived as including the latter and databases are essentially one sort of archival technology (Bowker, 2005; Ernst, 2012), there are also important differences between the two. These differences are highlighted by considering ‘database’ with respect to the double meaning of ‘archive’ as commencement and commandment (Derrida and Prenowitz, 1995).
In terms of commencement, Manovich (2001) highlights that databases are commonly less sequential than conventional archives. Early hierarchical and network databases partially followed a linear logic by relying on a limited number of predefined pathways to reach data. However, later relational and object-oriented databases broke from this logic (Bowker, 2005). As Manovich (2001) writes, ‘the database represents the world as a list of items, and it refuses to order this list’ (p. 225). These sequential differences are also somewhat suggested by the contrasting content of former police archives and contemporary police databases. Previously, as the example of the Nazi state reveals, police archives were typically composed of information in the form of sequential reports written by spies or informers (Bowker, 2005). Today, surveillance databases instead contain data that ‘pull people apart along multiple dimensions’ only to later be sequentially reconfigured as information (Bowker, 2005: 30). Bowker (2005), accordingly, states that when it comes to databases, ‘narrative remembering is typically a post hoc reconstruction from an ordered, classified set of facts that have been scattered over multiple physical data collections’ (p. 30). While Bowker and Manovich’s equation of the sequential logic of conventional archives with narrative arguably relies on an overly simplistic understanding of the latter and fails to acknowledge how such archives also foster mutable post hoc processes of remembering, their arguments suggest that a deepening of this mutability has accompanied the uptake of databases as an archival technology.
In terms of commandment, both conventional archives and databases, as already suggested, are central to a state’s governance of its population. The data in contemporary police surveillance databases serve dataist states in broadly similar ways to the information previously archived by high-modernist states, namely, to identify and locate subjects, including activists. Today’s data servers may have replaced yesterday’s card catalogues, but the possible repressive consequences for activists are similar. The governing possibilities of the archive and database thus relate to their mnemonic potentiality. Assmann (2008) stresses the latency inherent to the archive – ‘it stores materials in the intermediary state of “no longer” and “not yet”, deprived of their old existence and waiting for a new one’ (p. 103). Meanwhile, Bowker (2005) discusses databases as extending an ‘epoch of potential memory’ (p. 30). In this sense, both the information held in archives and the data held in databases can be considered objects of latent memory that might come to support future acts of remembering (Kuhn, 1995; Smit, 2024). The mnemonic latency of the archive and database also relates to processes of forgetting insofar as the acts of remembering they support are usually the exception rather than the norm. In the terminology preferred by Assmann (2008), the passive remembering of storing information and data is not far removed from the passive forgetting of thereafter neglecting or disregarding this information and data.
Yet, the potentiality of the archive and the database also differ. Echoing the general distinctions between high-modernist and dataist states, Bowker (2005) stresses that with the arrival of databases, the critical question no longer concerned ‘what the state “knows” about a particular individual . . . but what it can know
Ultimately, in an epoch of potential memory, it is not enough to ask what a state remembers but rather what it could remember. The answer to this question relies on recognising that the information modelled from, for example, police surveillance databases is determined far more by computational agencies than the information inferred from their predecessors, police archives. This potentiality is not only anticipatory but also predictive. Surveillance databases support new dataist regimes of algorithmic governance premised on promises of so-called ‘predictive policing’. These promises in turn have encouraged police agencies to become increasingly dependent on private tech companies to collect, store and analyse data using proprietary algorithms (Brayne, 2020; Reigeluth, 2014). Critically, in a dataist state, any slice of an individual’s digital data record can lead them to be predictively re-membered as an activist according to the algorithm applied in any specific context and can be used, in turn, to justify their further surveillance or even detention. Likewise, other competing or complicating slices can be algorithmically forgotten.
In addition, in common with the compression of time and space brought on in part by developments in digital technology (Harvey, 1989), these algorithmic decisions can be applied across multiple linked databases, and the relevant base data and generated information can be shared rapidly between different police agencies but also between states. Here is the assemblage that is the state in all its multi-scalarity. It may be the police agencies of the state (and, to some extent, their corporate partners) that do the actual seeing and remembering, but they do so in a more fragmentary manner. It is the state that abstractly gains from the more total vision and memory that comes from aggregating these police agencies’ sights and recollections alongside those of other agencies that sit in its assemblage. This aggregated vision and memory connects, in turn, with the adoption by dataist states of predictive, Big Data–based approaches. These approaches have not only amplified the police’s use of directed surveillance techniques but also encouraged the adoption of dragnet surveillance, lowering ‘the threshold for inclusion in police databases’ (Brayne, 2020: 14). After all, ‘indiscriminate surveillance is easier to implement than carefully targeted surveillance’ (Fourcade and Gordon, 2020: 85). The consolidation of such large amounts of data in the hands of the state, sometimes via corporate intermediaries, raises fears of Scott’s (1998) third condition coming to pass with a slide towards totalitarianism and authoritarian regimes of political control. However, so long as Scott’s fourth condition – a prostrate civil society – is avoided, this is not inevitable, and the actualisation of such regimes will ultimately depend on factors such as legal safeguards and the ability of activists to hold states accountable (Fourcade and Gordon, 2020).
Digital activist traces and the repressive potential of mediated prospective memory
But what slices of activists’ lives show up digitally in contemporary police surveillance databases? Today, activists leave all sorts of digital or digitalised traces – that is, digitally mediated personal data – that can be of interest to the police. These include, but are not limited to, private digital communications, digital CCTV footage, web traffic, mobile phone data and biometric data (Brayne, 2020). They can also include social media posts and those digitally mediated traces that activists leave during protests – like the digital photos and videos taken of me protesting in London and Berlin – including not only those that end up in the mass media or on social media platforms but also those captured by the police themselves. Ultimately, almost all digital data related to activism can be considered an activist trace because, as Reigeluth (2014) has summarised, digital data are essentially the simplification of personal details and behaviours to the lowest unit that a computer can process and store – a bit. Collected and held in police surveillance databases, these digital activist traces provide the state with something akin to what Tenenboim-Weinblatt (2013) calls mediated prospective memory.
Compatible with the idea of an epoch of potential memory, the concept of mediated prospective memory relates to those mnemonic media practices that are orientated towards the future in referring to ‘collective remembrance of what still needs to be done, based on past commitments and promises’ (Tenenboim-Weinblatt, 2013: 92). Already used in relation to the memory-activism nexus, the concept has mainly been deployed to understand how primarily progressive, but also reactionary, activists curate digital data and information, including their own mediated activist traces, so as to create a prospective memory that might help them pursue their political and social promises in the future (see Moroz, 2020; Schwarzenegger and Wagner, 2023; Smit, 2020; Smit and Van Leeuwen, 2024). Beyond the opposing ideologies of left and right, the concept can also be adaptively applied to the way a state governs its subjects. As the earlier discussion of the Nazi state illustrated, mediated prospective memory can support state repression and oppression. In this application, the promise of mediated prospective memory is partially replaced by threat insofar as it allows states of the future to both exercise control without the use of force and justify the use of force (see Foucault, 1991; Weber, 2004).
Respecting distinctions between repressive and oppressive state violence,
Repressive and oppressive mediated prospective memory are entangled with forms of progressive and reactionary mediated prospective memory (as well as other forms of mediated prospective memory). For example, in contemporary protest settings, numerous actors simultaneously produce mediated prospective memory according to different motivations. It is not only the police that digitally record protests. In recent years, protests have been increasingly characterised by a ‘spiral of surveillance and counter-surveillance’ (Wilson and Serisier, 2010: 170). The development of this spiral foregrounds that states do not see and record everything. They are not perfect archivists. They passively and actively look away and forget, particularly in cases of their own culpability. The footage of police bodycams, for example, goes missing (see Fan, 2017). In turn, the police have become increasingly scrutinised by an array of monitoring groups, including Liberty (founded 1934), Statewatch (founded 1991), The Network for Police Monitoring (NETPOL) and Big Brother Watch (both founded in 2009).
All these groups have been further spurred on by the wide-reaching ramifications of the 2013 ‘Snowden leaks’ regarding the United States National Security Agency’s global surveillance programmes. These leaks publicly foregrounded the workings of the dataist state, broadened awareness of data-driven surveillance and triggered new forms of data activism – that which interrogates the socio-political consequences of datafication (Van Dijck, 2014). The work of these monitoring groups and their data activists breaks through the secrecy and shadows surrounding police surveillance databases that hinders their academic study. Returning to my memories of protesting in London and Berlin, in the following section, I draw on the reports of these groups in combination with journalistic accounts, legal decisions and police documents to foreground the risks of remembering like a dataist state.
The risks of remembering like a dataist state: London, Berlin and beyond
What traces of my involvement in that 2010 London protest might be left in police databases? When we were finally permitted to leave the kettle, we had to file through a mobile police station. As we watched others go before us, we speculated about what data might be taken from us and discussed our legal rights. In the end, nothing was directly asked of us, and no cameras were immediately visible inside. However, other protestors during this wave of protests reported that as they left kettles, they were arrested, had their personal details recorded on film and were then ‘de-arrested’ (Green Party, 2010). Thus, it was clear that our dispersal was not only an exercise in physical and psychological control but also in data collection. As NETPOL (2018) has noted, until a 2013 UK High Court ruling found it unlawful, kettles were often used to gather data. Protestors were often required to provide a name and address and have a photo taken before being allowed to leave. In the past, kettles also led to mass arrests, which in turn provided police with the opportunity to collect data from protestors’ mobile phones. The 2013 ruling concerned protestors who were filmed when leaving a kettle, again in London, this time in November 2011. The court decision reveals that the film footage was to be retained for 6 years based on the ‘limitation period for civil actions in respect of false imprisonment and malicious prosecution’ (Mengesha v Commissioner of Police of the Metropolis, 2013). However, the retention of this footage for any period was judged unlawful because it was collected unlawfully. The decision also noted that ‘no policy, let alone any rules, has been devised or published setting out the circumstances in which such images and details may have been retained’. 
Given the ad hoc manner by which these protestors were filmed without a clear policy on retaining the generated data, it is not hard to imagine that digital traces of my and others’ involvement in the 2010 protest may remain in police databases as latent memory.
In the UK, legal debate has also surrounded the Forward Intelligence Teams (FITs) – groups of police officers, deployed since the early 1990s, who gather data during protests. In 2008, Liberty brought a judicial review against the use of FITs, which was initially decided in favour of the police (Wood v Commissioner of Police for the Metropolis, 2008). However, a 2009 Court of Appeal hearing ruled that the retention of FIT photographs and footage had to be justified on a case-by-case basis and that those relating to people who had not committed a criminal offence could not be retained (Wood v Commissioner of Police for the Metropolis, 2009). As a result, according to one report, the Metropolitan Police’s public order unit was forced to delete 40% of the protestor photos it held, indicating how legal measures can force states to ‘forget’ (Lewis and Laville, 2009). FITs use so-called spotter cards to target known individuals, but it has been estimated that recording one individual at a protest leads to a further 1200 others being photographed (Danezis and Wittneben, 2006). These photos are stored in CRIMINT, a database run by the Metropolitan Police since 1994. CRIMINT is an intelligence database, not a crime recording system: it stores data about criminals and suspected criminals but also about protestors. As of 2005, the database, which stores data for at least 7 years, held seven million information reports and 250,000 intelligence records. Despite the 2009 ruling, and as various freedom of information requests have proven, the identities of many protestors who have not committed any crimes are still often included among these records.
Furthermore, a new UK law, enacted in 2022, significantly lowers the threshold for the arrest of protestors, meaning that the digital records of a greater number of protestors may be lawfully retained. This is because more will have committed newly defined crimes, which include, for example, failure to follow police instructions and, of relevance to the memory-activism nexus, criminal damage to memorials (Police, Crime, Sentencing and Courts Act, 2022). The act arguably contributes to Scott’s fourth condition for disaster. In addition, the fear that these records might be turned to both oppressive and repressive uses in the future is not allayed by the recent independent review of the Metropolitan Police that found it to be institutionally racist, misogynistic and homophobic (Dodd, 2023a). While these circumstances encourage ongoing legal efforts to ensure the police regularly delete unlawfully collected or obsolete data, instances of accidental data loss during official data removal processes, such as when 400,000 crime records were deleted due to a coding error in January 2021, can spark public concern (Dodd et al., 2021). States must, it seems, balance processes of passive remembering and active forgetting (see Assmann, 2008).
Similar police databases exist in Germany, both at the national level and at that of its federal states. As of 2010, according to a report by Statewatch, Germany’s Federal Criminal Police had more than 200 databases containing entries on over 18 million people (Topfer, 2010). These databases fall into three categories: joint databases, which are run centrally but also fed with data from the police forces of Germany’s 16 federal states and are widely accessible through the German Police Information System (INPOL); central databases, which are run and fed centrally and occasionally made available to other state security agencies; and, finally, office databases, which are operated and accessed exclusively by the Federal police. The largest databases are those used for identification purposes (Topfer, 2010). Relevant in terms of my attendance at an antifascist protest in 2012 are the so-called ‘violent offenders’ joint databases that were established in 2001. Among these is the ‘Violent Offenders–Left’ database, which, according to a later Statewatch report, contained entries on around 1600 people in 2018 (Monroy, 2018). The name of this database, itself a state simplification, is misleading because simply being stopped in the vicinity of a protest might lead someone to be added to the database. The Federal police also preside over a database of ‘politically motivated crime’ that uses the categories of left, right, foreign ideology, religious ideology and others. In 2017, around 500 individuals were recorded under the left category, but again, these people did not necessarily have criminal records. For example, registering a protest with the authorities, something that is a legal requirement, might lead to an individual being added to this database (Monroy, 2018). Police forces in Germany also maintain different databases of individuals tailored to political orientations or specific protest events, including those related to anti-globalisation activism and the G8 and G20 summits (Monroy, 2018).
Such data have been remembered by the state and used to prevent people from crossing national borders and to justify the revocation of journalist accreditations (Monroy, 2018).
Besides sometimes relying on discriminatory and stigmatising labels, German police databases are often poorly managed, with data frequently retained longer than legally permitted. A 2012 Federal data protection review of the ‘politically motivated crime’ database, for instance, found that the longest retention periods were often selected and that deletion deadlines were often missed. It found that 90% of the database’s records should have already been deleted. An audit of those under the left category, carried out between 2012 and 2015, required a reduction of entries from 2900 to 331. Some entries were found to be up to 10 years old, and others were erroneous. Again, legal measures forced the state to forget, but a further review of INPOL in 2022, albeit based on a small audit, found that the failure to delete unlawful data persists (Monroy, 2022). By then, INPOL alone held around 6.7 million portrait images of 4.6 million individuals. Freedom of information requests show that in 2022 about 1.5 million images were added to INPOL. A total of 400,000 images were deleted, but this was only half as many as in the previous year, an apparent shortfall left unexplained (Monroy, 2023). It is unlikely (but not impossible) that my digital traces ended up in one of these databases, but such systemic weaknesses and structural discrimination do not bode well for any activist traces that I and others may have left in the local databases of the Berlin police back in 2012. During that protest, organisers implored participants to turn off their mobile phones to prevent the police from acquiring locational data from them via so-called tower dumps (see Blinder, 2021).
This and the decision of many protestors to cover their faces – something prohibited by law in Germany since 1985 and thus, ironically, sufficient cause for an individual to be added to the databases already mentioned – are clear indications of activists’ long-trialled counter-surveillance strategies and their awareness of state efforts to see, if not remember, them.
The arrival of new surveillance technologies and forms of prospective mediated memory now further complicates protestors’ efforts to avoid state identification. A case in point is facial recognition technology (FRT), especially that which is automated via artificial intelligence (AI). FRT is becoming one of the sharpest tools in dataist states’ surveillance toolkits and is increasingly being used to monitor protestors. FRT encompasses various techniques, including the facial matching procedures – which check isolated images of faces against sets of existing images held in databases – that were already widespread 10 years ago. But of greater relevance here are newer automated procedures whereby FRT-connected cameras are used to scan public spaces and crowds so as to identify individuals in real time by comparing their faces against those in a database. In this procedure, any face that passes the camera is scanned and then analysed, thereby subjecting the individual to a biometric identity check (Big Brother Watch, 2018). Automated FRT works by identifying a face in an image, converting that face into a mathematical model based on the position, size and shape of facial features, and then algorithmically comparing that model with those created from a database of faces. In turn, it matches faces according to a percentage of corresponding features. The increasing use of AI means that many FRT technologies now ‘learn from the millions of faces they process in order to improve the accuracy of their matches over time’ (Big Brother Watch, 2018: 5). Thus, the prospective mediated memory that underpins FRT is composed not only of digital photographs and videos of faces but also of the mathematical models of these faces, which dataist states can then remember via algorithmic comparison.
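The matching step at the core of this procedure can be sketched in general terms: a detected face is reduced to a numerical vector (the ‘mathematical model’), and that vector is compared against a database of labelled reference vectors, with a match declared when similarity exceeds a threshold. The sketch below is illustrative only: the function names, toy three-dimensional vectors and threshold are hypothetical assumptions, and real systems derive their face embeddings from trained neural networks rather than hand-set numbers.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_face(probe, database, threshold=0.9):
    """Compare a probe embedding against labelled reference embeddings.

    Returns the label of the best match above the threshold, or None
    when no stored face is similar enough.
    """
    best_label, best_score = None, threshold
    for label, reference in database.items():
        score = cosine_similarity(probe, reference)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical reference database; real systems hold vectors with
# hundreds of dimensions produced by a trained model.
database = {
    "person_a": [0.9, 0.1, 0.3],
    "person_b": [0.2, 0.8, 0.5],
}

probe = [0.88, 0.12, 0.31]  # a face scanned from a camera feed
print(match_face(probe, database))  # → person_a
```

The design point this illustrates is the one made above: each database entry is a latent ‘recognition opportunity’, and lowering the threshold trades fewer missed matches for more false ones, which is where the misidentification harms discussed below arise.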
By its own records, London’s Metropolitan Police trialled live FRT 10 times between 2016 and 2019, capturing a total of around 180,000 faces or ‘recognition opportunities’ (National Physical Laboratory and Metropolitan Police Service, 2020). These were collected over a combined duration of around 69 hours and used in combination with watchlists containing between 42 and 2401 individuals, including at the 2017 Remembrance Day and Notting Hill Carnival events, where around 12,800 and 101,000 faces were scanned respectively (National Physical Laboratory and Metropolitan Police Service, 2020). Groups like Big Brother Watch (2018) have stressed FRT’s oppressive qualities, not least its disproportionate misidentification of non-male and non-White faces, which reinforces the over-policing and racial profiling of ethnic minorities and can often be attributed to the datasets on which the AI software is trained or to the configuration of the camera hardware used. FRT’s repressive potential – in terms of its application to activists – has also recently been illustrated by concerns that its use during the 2023 coronation of King Charles would support a police crackdown on protest during the high-profile ceremonial event (Dodd, 2023b).
In Germany, the Hamburg police started to use automated FRT following the protests surrounding the G20 summit in 2017. A special group was created that built a database populated with material taken from the Federal Criminal Police, local police video surveillance, footage from public transport providers, private content uploaded by citizens, and video and image content taken from media sources (Raab, 2019). The Hamburg police then used FRT to create and store mathematical models of the faces in this material. This was subsequently deemed to be unlawful by the city’s Commissioner for Data Protection and Freedom of Information (Hamburg DPA), who issued an order in late 2018 to delete the reference database and the mathematical models that were trained on it (Raab, 2019). However, this order was rejected by the Hamburg Administrative Court without ruling on the lawfulness of the Hamburg police’s data processing procedures. Furthermore, the Hamburg Senate initiated steps to reduce the powers of the Hamburg DPA (Raab, 2019). Ultimately, the Hamburg police announced that it had deleted the database in June 2020 as it was no longer required in connection with the G20 protests under criminal law. While the Hamburg DPA welcomed the deletion, they noted that nothing prevented the Hamburg police from using FRT in this way again and demanded clarification regarding the legality of its use (Identity Week, 2020).
Some clarification in this respect will likely be provided by the new European Union (EU) AI Act, which will include provisions on the use of automated FRT within policing. So far, negotiations surrounding this act have highlighted the ambiguous position of the German government. The strength of the German data privacy lobby and criticism of the trial of FRT at Berlin-Südkreuz railway station between August 2017 and July 2018, including by data activists who highlighted that the trial involved the collection of more data than permitted, seem to have at least partly influenced the German government’s position (Raab, 2019). It has called for banning real-time FRT in public places while allowing the technology to be used retrospectively (Hersey, 2023). This reveals a desire for stricter safeguards surrounding the police’s use of FRT than those proposed by the EU and many other member states, but it leaves open the possibility for the technology to be used within dataist regimes of remembering.
FRT continues to spread via state and market conduits. It continues to be developed and refined to the extent that, accelerated by the face-mask mandates of the Covid-19 pandemic, it is becoming increasingly easy for the police to identify even those protestors who cover their faces. It continues to be fed with ever larger databases of digital material, drawn from an increasing array of sources. Each entry in these databases has the potential to become a digital activist trace, a data slice that, when modelled according to a chosen algorithm, will help identify and ‘re-member’ an activist. In short, the police surveillance databases, which only 10 years ago were limited (although they did not feel like that back then) mostly to the photographs and video footage generated by police agencies themselves and the facial matching opportunities these presented, can now be expected to swell exponentially and increasingly feed automated FRT in real time. As they do so, the well of mediated prospective memory will deepen, the epoch of latent memory will take further hold and the risks of repressive state remembering will increase.
Conclusion: the right and fight to be forgotten
I was never able to find that photo of myself from the London protest in the newspapers, and I can no longer locate the local news coverage of me at the Berlin protest. However, thanks to these and any number of other digital traces, my face may very well be somewhere in those surveillance databases that are accessed by police agencies across the world. That face, photographed, videoed and now mathematically modelled, sits as prospective mediated memory with the potential to become a digital activist trace that might be exploited for the repressive ends of a mostly ignored form of state remembrance.
Police surveillance databases and FRT have far more damaging consequences and potentials for many other kinds of people than they do for me. But through these technologies, our fates are entangled, and we are all implicated in these technologies’ outcomes (see Rothberg, 2019). A match is generated because another is not. The mathematical model of my face, along with millions more, feeds the AI that helps identify someone else’s. This implication leads to an imperative to address the risks of prospective mediated memory in the age of dataist states. These risks are tied to the discriminatory tendencies, procedural weaknesses and lack of oversight that surround the UK and German police databases that I have discussed here. They also relate to the rapid and unregulated roll-out of automated FRT by police forces across the world. These risks, in turn, further legitimise the need to be aware of how activists are seen and recorded by the state in the present because, ultimately, this will shape how they might be remembered by the state in the future. As Bowker already wrote in 2005 (albeit in connection to the very different empirical context of biodiversity databases),
What we are doing now–globally, willy-nilly–is setting the agenda for what the world will be based on our understanding of what the world has been . . . it is critical at this juncture that we pay close attention to the ontologies and politics of our databasing of life. (p. 127)
These risks indicate one underexplored way in which the state contributes to the complexity of the memory-activism nexus and, in doing so, foreground the necessity to further problematise this powerful actor’s role in that nexus. They also justify the critical efforts of police monitoring groups and data activists to bring legal challenges against the unlawful retention of digital activist traces. The extent to which these groups might draw on the past and memory within their activism demands further research but – especially when stressing the connections between data, information and memory – they also contribute to the memory-activism nexus. For one, their efforts influence the sort of state response and remembrance that all forms of activism, including memory-oriented activism, might expect to face in the future.
The efforts of these groups and activists also connect with broader debates about the right to be forgotten. Emerging alongside greater public awareness of the dataist state, since roughly 2010, the idea of the right to be forgotten has guided debates about the legal rights that individuals hold regarding the removal of their private data from internet searches and other digital directories. The right to be forgotten is not simply a legal instrument for the protection of individual data and privacy rights but has much wider consequences for social memory (Tirosh, 2017). In the context of this article, it represents a response to the growing pervasiveness of latent memory, and a legal instrument that might pre-emptively limit the repressive use of mediated prospective memory.
At the same time, legal action related to the right to be forgotten rarely results in the complete deletion of the digital data in question, but rather in amendments to the indexes on which search engines rely to return results. The data often remain but become more hidden. By extension, uncertainty regarding the digital activist traces that (may) endure in the databases that the police use (whether their own or those of big tech companies), as indicated here via my own personal reflections, serves the dataist state as a form of power in a Foucauldian sense. Meanwhile, in a Weberian sense, the law itself is state power, and it is important to distinguish between the realms of law, justice and morality. Ultimately, the courts, like the police, are an instrument of the state. As states become authoritarian, their laws change, and those activist traces that may have been collected and created according to more democratic legal restrictions can be turned to other, more oppressive and repressive ends (as history has shown us).
These changes can happen quickly, quicker, for instance, than many of the limitation periods used to guide the retention and deletion of data in legal settings. The question, then, might become not one of how best to ensure the right to be forgotten but of how to pursue the fight to be forgotten or, perhaps more appropriately, the fight to not be remembered. To formulate this question differently: if Scott’s third condition for disaster cannot be avoided, what can be done to avoid the fourth? Such questions invoke the possibility that legal activism and challenge alone may not be enough. It may be equally important that activists develop and pursue tactics that prevent them from being seen, and thus potentially remembered, by the state in the first place.
At a time when many parts of the world are witnessing the rise of right-wing populism, the reduction of civil liberties, wars and economic uncertainty, it becomes even more pressing to consider the potential consequences that the data collected and stored by states about activists today may have for those same activists tomorrow, when it is remembered by a future state, especially in possible scenarios where an authoritarian state gains power, or a ruling state becomes authoritarian. Time will tell if these dystopian speculations come to pass or not. The efforts of state and police monitoring groups, along with the developing tactics of activists both in the streets and online, will influence any outcome in this respect. Their work is critical in slowing the progress of contemporary state surveillance and acquires further significance when we look to the possible dangers of the future. These dangers first need to be identified in order to be avoided. Thus, we must be alert not only to the risks associated with contemporary states’ obsession with
