Introduction
Over the last 20 years, a new, and increasingly dominant, business model of the internet has emerged. As powerfully charted by Zuboff (2019), this new internet business model provides services in return for the right to impose detailed surveillance on users. In a novel twist on existing theories, Zuboff (2019) highlights that those on social media are neither the customers nor the product (see Smythe (1977) and Fuchs (2017)); rather, their actions are the raw material from which predictions of future behaviour are manufactured and sold.
Zuboff's (2019) diagnosis of many of the discontents of the current digital economy is impressive. It makes an important contribution to an already existing literature, identifying how the current business model of the internet exploits and disfigures existing social relations (Crary, 2013; Pasquale, 2015; Couldry and Mejias, 2019). What Zuboff (2019) and these other analyses share is a picture of the world in which big data social media companies increasingly impose an order of rationalisation on our social and private lives. Undoubtedly, this is an important insight. To restate these claims in Habermasian (1984, 1987) terms, the current business model of the internet has led to an unprecedented invasion and instrumental rationalisation of the lifeworld based on (capitalistic) system imperatives. 1
Yet, while systems definitely do impose their ordering on social and material life (Habermas, 1984, 1987), and while the imposition of this highly instrumental and intrusive order poses an important threat to existing social and material life (Zuboff, 2019), these new systems are also subject to crises and breakdowns that can themselves threaten social and material life (see Polanyi (1957), Habermas (1975), and Beck (1992)). As Beck (1992: 22, original emphasis) highlights, in advanced modernity ‘the social production of wealth is systematically accompanied by the social production of risks’.
How the increasing imposition of order can in turn create greater risks of systemic disorder is well instantiated by surveillance capitalism: its attempts to impose control create greater risks of breakdown due to the specific underlying preconditions of a surveillance capitalist economy. The core of surveillance capitalism is the competition of digital platforms to maximise ‘behavioural surplus’. The result is an imperative to collect ever more data and, with it, an imperative to connect ever more users, devices, and activities to digital networks.
One advantage of this critique is that it builds on, in a creative way, one of the criticisms of Zuboff's (2019) book, which is that the gap between surveillance capitalists’ intentions and realisations may be greater than Zuboff allows (see Kapczynski, 2020). In this way, this paper highlights the gap between the attempts of surveillance capitalists to control social and economic life and their ability to do so. Analysing the implications of this gap can further support the claim that the imperative to collect and connect tends to generate systematically important but hugely fragile digitalised infrastructures of everyday life that can threaten the functioning of social life.
This paper proceeds in three steps. First, it briefly outlines Zuboff's (2019) account of surveillance capitalism and the risks it does and does not focus on. Second, the paper outlines how the search for behavioural surplus generates both the imperative to collect and the imperative to connect. Last, the paper highlights how the imperative to collect and connect generates systemic digital risk.
Surveillance capitalism: The risks it focuses on and the risks it does not
Zuboff (2015, 2019) has identified a new stage in ‘informational capitalism’ as ‘surveillance capitalism’. In surveillance capitalism, users of internet services from oligopolistic data companies become the ‘raw material’ for the development of services that predict the future behaviour of individuals. A key to the shift in the dominant internet business model is a transition towards the generation of profit through the maximisation of ‘behavioural surplus’ (Zuboff 2019: 129–131). The rise of ‘behavioural surplus’ signifies a shift from collecting data on users to improve services to collecting data on users so as to provide services that classify and predict the behaviour of their users. Zuboff (2019: 74–77) specifically identifies Google's pivot in 2000 towards funding its search service through selling services to advertisers based on the information Google collected on users as a key, early exemplar of surveillance capitalism. For Zuboff (2019: 74, 162–163), Google, Facebook, and increasingly Microsoft have become three paradigmatic cases of surveillance capitalist companies. Rather than the previous model of selling hardware or software for use, they have all come to exemplify a model in which they provide users services in return for appropriating their data.
Employing the metaphors of ‘dispossession’ and ‘rendition’, Zuboff (2019: 138–139, 233–234) develops a powerful critique of this process of data extraction from users. Not only do these digital environments violate our privacy – they are also increasingly designed to maximise the extraction of data that can be used to predict behaviour and to force us into social and economic situations in which our behaviour is more easily and reliably predicted. The growing inescapability of the necessity of using surveillance capitalist-based services so as to function in social and material life leads to the dominance of ‘instrumentarian power’ (Zuboff 2019: 434). For Zuboff (2019), instrumentarian power is the increasing dominance of the instrumental rationalisation of social life for the purposes of data extraction. Ultimately, for Zuboff (2019), these processes generate threats to social life, to private life, and to public and democratic life. 3 There have, however, been several important critiques that question the ability of the surveillance capitalism framework to illuminate contemporary capitalism. Before proceeding to build on this account and further examine the side effects of the pursuit of surveillance capitalism, these critiques need to be addressed.
Addressing critiques of Zuboff's (2019) theorisation of surveillance capitalism
Zuboff's (2019) theorisation of surveillance capitalism has been subject to four main sets of critiques.
The first of these critiques is that Zuboff (2019) neglects the problems with other forms of digital capitalism, such as the monopoly pricing power and labour exploitation power of Apple (Morozov, 2019; Breckenridge, 2020: 934; Kapczynski, 2020: 1474–1475). A second set of critiques revolves around the claim that Zuboff's (2019) critique of surveillance capitalism involves overstatement in terms of its impact, such as ‘loss of the right to a future tense’, and that data is ‘dispossessed’ (Morozov, 2019; Cuellar and Huq, 2020). A third set of critiques is that Zuboff (2019) potentially overstates the ability of surveillance capitalist corporations to predict and control individuals (Morozov, 2019; Breckenridge, 2020: 930; Kapadia, 2020: 342; Kapczynski, 2020: 1473–1474; Jansen and Pooley, 2021: 2845). Lastly, there have been suggestions that Zuboff (2019) has overstated the importance of surveillance capitalism to contemporary capitalism as a whole (Kapczynski, 2020: 1472–1473).
In terms of addressing the first critique, it appears legitimate to concede that there are many fundamental problems with digital capitalism outside of the appropriation of behavioural surplus. In particular, Apple's practices of labour exploitation, the way it uses its monopoly position to accumulate massive profits (Fuchs, 2017), and how it instigates social practices that intensify social exclusion for those who are unable to acquire its products (McGee, 2023) raise doubts about the valorisation of Apple as a suitable emancipatory alternative. Nevertheless, despite raising important questions about Zuboff's (2019) analysis of alternative dimensions of digital capitalism, this in itself does not undermine the analytical value of her critique of surveillance capitalism.
In responding to the second critique, there are questions that can be raised about whether all of the key terms of the critique have been sufficiently evidenced. Nevertheless, even if we are unsure whether one's data is ‘dispossessed’ – without a more extensive argument for the rights to control our data than the book includes – the book still powerfully outlines a new business model. In this way, Zuboff (2019) brings together a series of different processes into a powerful framework (Cohen, 2019: 240; Cuellar and Huq, 2020: 1284). As such, the key insights revolving around the emergence of a new business model and its new form of power remain intact.
In terms of the third critique, there are important questions regarding whether we can equate the surveillance companies’ existing plans to predict and control with their actual power to do so (Kapczynski, 2020: 1473–1474). Nevertheless, while this question is important for the full extent of the ability of instrumentarian power to control behaviour, it is clear that Zuboff (2019) has not overstated the ever-growing reach of the extraction architecture and the systemic imperatives behind this drive. That is, even if surveillance capitalism companies may not have the power to predict and control society as thoroughly as Zuboff (2019) fears (though as Kapczynski (2020: 1473–1474) acknowledges, we should not simply dismiss this risk), their ability to increasingly mediate every aspect of social practice still raises fundamental risk questions that need further investigation.
The last critique raises important questions about the scope of surveillance capitalism. That is, the challenge may be raised that if it is just a few social media companies engaging in this business model, then it is hard to view this as the dominant feature of capitalism. Firstly, it should be conceded that traditional industries producing natural resources still occupy a key role in contemporary capitalism. Nevertheless, it can be argued that digital capitalism is becoming a hegemonic model (Srnicek, 2017: 5) and that this is clearly manifested in the increasing ‘smartness mandate’ across all of society (Halpern and Mitchell, 2022). Moreover, within the broader rubric of digital capitalism, the quest to extract and control immense amounts of data is the primary business model (Srnicek, 2017: 6). Furthermore, as Zuboff (2019) has highlighted, even companies in sectors that are not traditionally digital, such as insurance, healthcare, finance, transportation, and retail, are shifting towards data extraction as a core element of their business models.
As such, despite some of the important points of potential critique that have been identified in Zuboff's (2019) analysis, the concept of surveillance capitalism remains a powerful framework for analysing the dominant business model of the internet and the risks that it generates.
Behavioural surplus and the imperative to collect and connect
The core of the business model of surveillance capitalism is the appropriation of behavioural surplus. This involves learning ever more about individuals’ actions so as to be able to predict and potentially control future behaviour. Yet, it involves much more than simply collecting data from existing activities. To collect these data, digitally networked companies increasingly format interactions and social practices to maximise data collection. This involves formatting environments of interaction so as to create situations in which action is more easily and reliably predicted.
Achieving the maximisation of behavioural surplus based on the principles of understanding, predicting, and controlling behaviour generates two key imperatives: the imperative to collect and the imperative to connect.
This imperative to connect – flowing from the imperative to collect – works in several key ways. First and foremost, it involves an imperative to connect as many users and as many of their activities as possible to digital networks. As has been highlighted in terms of big data, quantity is fundamental, but it is not the only key – the diversity of data points and the velocity of the data are also particularly important (Mayer-Schönberger and Cukier, 2013: 199). Kitchin (2014) has also highlighted other key dimensions of big data (including resolution, relationality, and exhaustiveness), though velocity, volume, and variety remain fundamental characteristics alongside these. These other dimensions also strengthen the imperative to collect and connect, though for the sake of exposition they are not analysed individually here.
This process of making more and more of social and material life susceptible to the collection of data so as to maximise the effectiveness of big data and the predictive algorithms built on them has been thus described as ‘datafication’ (Mayer-Schönberger and Cukier, 2013; van Dijck, 2014). As such, datafication entails not only the intensive collection of data but also the intensified interconnectedness between all of these different points of data collection. The result of this imperative to collect is thus ‘an ecosystem of connectivity where all online platforms are inevitably interconnected, both on the level of infrastructure as on the level of operational logic’ (van Dijck, 2014: 204).
As should be emphasised, though, this imperative to collect is not simply an idiosyncratic preference of individual corporations; rather, it is built into the logic of surveillance capitalism. The competitive dynamics of surveillance capitalism mean that platforms that fail to maximise behavioural surplus risk being outcompeted by those that do.
This imperative to collect is thus an imperative for firms to survive and thrive in surveillance capitalism. It has in turn generated an imperative to connect so as to gain as large and as fine-grained data sets as possible across a variety of domains of social life. This imperative to connect, imposed on individuals as consumers, as participants in political deliberation, and as workers, can be seen across society. 6 As Zuboff (2019) highlights, it is not only in social media use by private individuals that the imperative to connect functions. In fact, it is in our activities as workers that many of the most extreme cases of invasive data collection and connection occur, both to enable employers to control their workers and to further develop these technologies. 7 As Zuboff (2019) highlights well, the social imperative for connection via social media use has become a key prerequisite for achieving ‘effective life’ (Zuboff, 2019: 53; see also Vaidhyanathan (2018) and Wu (2018)). As the costs of attempting to avoid social media continue to grow, so the intensity and extension of the connection of our lives with social media networks grow.
The internet of things (IoT), covering both consumer goods and the industrial internet of things (IIoT) and associated uses of cloud robotics (see Greengard (2015); Schneier (2015, 2018); and Couldry and Mejias (2019)), likewise presages a massive intensification of data collection and connection. In the case of consumer IoT, while some product rollout is driven by consumer convenience, much appears to be driven primarily by companies seeking to collect ever more detailed information on consumer practices. For at least some products, such as TVs and automobiles, it is now almost impossible to find new products that are not also digitally connected data collection devices. 8 Despite its already impressive reach, the extension of IoT for private consumer goods still has much further to go. IoT is also increasingly embedded in environments where it is not even nominally voluntary, such as work environments, but also urban environments, in ‘smart cities’ (see Townsend (2013) and Albino, Berardi, and Dangelico (2015)). In these environments, the aim is increasingly to pursue real-time governance based on big data-trained predictive algorithms, which are then used to optimise urban governance systems. Again, most of the systems that are developed and used are developed and owned by private corporations (for a recent example, see Curran and Smart (2021)).
Ever more intensive data collection and various attempts to impose financially beneficial control over the choice situations of individuals in surveillance capitalism lead to a massively interconnected network that ineluctably collects data and then feeds them into proprietary algorithms. As such, the imperative to collect data and the imperative to connect result in immense interdependencies with digital networks. The following section discusses the logic of the imperative to connect and how this relates to the potential systemic risks that can emerge from these digital networks.
Imperative to collect and connect and the growing vulnerability of digitally interconnected networks
There are three primary principles of cybersecurity: confidentiality, availability, and integrity (Schneier, 2018). Violations of confidentiality include cases in which hackers are able to gain illegitimate access to confidential data (see the Sony, Target, Equifax, and Capital One hacks), as well as when systems are not secured and encrypted so that confidential data are unintentionally left on the internet open to be accessed by others (see Facebook (ENISA, 2019: 66)). Availability is the characteristic of a system in which its functionalities are available to authorised parties when needed (Pfleeger, Pfleeger, and Margulies, 2015). Integrity involves only authorised parties making changes to a digital system. Integrity violations can involve modifications of an existing data set, as well as modifications to the functioning of software (Schneier, 2018). 9
As argued above, surveillance capitalism and its imperative to collect likewise generate an imperative to connect. These twin imperatives tend to massively increase the systemic risk emerging from digitally interconnected networks. This intensification of risk occurs in two key ways. First, it increases the vulnerability of the digital networks. Second, insofar as these digital networks grow in importance and replace other modes of social provisioning, 10 then society is more dependent on and hence vulnerable to any digital network failures. Processes associated with surveillance capitalism, including the increasing dominance of social media in social life, the growing reach of IoT and ‘smart cities’, and the more general networked digitalisation of life, consequently mean that breakdowns in digital systems pose a key threat to ‘effective life’.
The argument that the imperative to collect and connect intensifies systemic risk is pursued below by identifying the key factors that these dual imperatives intensify and that in turn intensify systemic risk. However, before addressing these key factors, a brief discussion of the concept of systemic risk is necessary.
Systemic risk
The following section builds on literatures on factors identified as intensifying systemic risk (Perrow, 2007; Haldane, 2009; Vespignani, 2010; Haldane and May, 2011; Goldin and Mariathasan, 2014) and on the key principles of cybersecurity and insecurity (Schneier, 2003, 2015, 2018; Nunan and Domenico, 2017; Clarke and Knake, 2019; Schiller et al., 2022). It shows how the dual imperatives to collect and connect increase the risk of digital networks experiencing systemic failure due to four key factors: complexity, target richness, the weakest link, and ‘eggs in one basket’.
Complexity
As has been emphasised in both the systemic risk (Haldane, 2009) and the cybersecurity literature (Schneier, 2018), increased complexity tends to amplify potential cascading risk. The surveillance capitalist imperatives of collection and connection increase the number of nodes in the networks that collect data while also massively increasing the size as well as variety and velocity of the data collection practices. These all tend to increase the complexity of these networks and, with it, their vulnerability to cascading failures.
The massive spread of networks associated with the ethos of collect and connect exponentially increases the ‘attack surface’ of digital networks, thus making it much more difficult to effectively defend the network. Highlighting the importance of this development, a recent leading cybersecurity report identified as its number one trend in cybersecurity that the ‘attack surface in cybersecurity continues to expand as we are entering a new phase of the digital transformation’ (ENISA, 2020: 10). This increase in the attack surface in turn intensifies vulnerability to security failures of our increasingly digitally networked infrastructure of everyday life (ENISA, 2020; see also Schneier (2018)). The size and vulnerability of the attack surface are in particular magnified as surveillance capitalist imperatives become intertwined with IoT, smart cities, and the general networked digitalisation of life (see Greengard (2015), Schneier (2018), and Curran (2020)).
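The scaling at issue can be illustrated with a simple, hypothetical calculation (a toy model, not drawn from the sources cited above): in a network of n interconnected nodes, the number of potential pairwise links, each a possible point of attack, grows quadratically, far outpacing the growth of the network itself.

```python
# Toy model (hypothetical figures): potential pairwise links among
# n interconnected nodes grow as n choose 2, so the attack surface
# expands far faster than the node count itself.
def potential_links(n: int) -> int:
    """Number of distinct node pairs: n * (n - 1) / 2."""
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, potential_links(n))  # 45, 4950, 499500
```

A hundredfold increase in nodes thus yields roughly a ten-thousandfold increase in potential links to defend, which is one simple way to see why defenders of expanding networks face a structurally worsening position.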
Complexity also increases the number of points of interface between humans and computers (see Greenfield (2017)), which makes computing systems much more vulnerable to ‘social engineering’ – the accessing of systems through the mistakes of individuals rather than through coding mistakes, such as zero-day vulnerabilities. 14 The ‘collect as much as possible’ ethos of surveillance capitalist firms creates situations of complexity in which, even when no one is actively seeking to access these data sets, they may be unintentionally made available for access on the web, as has happened with both Facebook and Twitter (ENISA, 2019: 66). In the case of Facebook, a weakness in the ‘Search’ capability led to the platform exposing approximately 2 million persons’ data, while for Twitter, a glitch in the password handling procedure meant that all users’ passwords (about 330 million) were potentially accessible in plain text in an internal log (Al-Heeti and Ng, 2018; ENISA, 2019: 66). In this way, confidentiality, availability, and integrity risks are all intensified by the complexity amplification associated with the dual imperatives of surveillance capitalism.
Target richness
The increasing size, variety, and intimacy of the data that are collected by surveillance capitalist firms generate an increasingly target-rich environment for attackers.
Larger networks provide ‘economies of scale’ in returns for attackers. This is particularly true for data breaches. Despite the large hacks of Capital One, Equifax, and Target in the 2010s, large-scale breaches continue to occur. In August 2021, a sophisticated cyberattack led to the theft of 7.8 million records of current T-Mobile customers, as well as 40 million records of former or prospective customers (ENISA, 2022: 105). In this case, simply being a former or prospective customer of the company was enough for one's personal data to be stolen.
Weakest link
The combination of the amplified complexity, dynamism, and size emerging from the dual imperatives of collect and connect also increases the threat of what might be called the ‘weakest link’ principle of networks (see Haldane (2009), Schneier (2018), and Schiller et al. (2022)). As has been previously emphasised, from a security perspective, a system is not as strong as its strongest link, but rather as strong as its weakest (Schneier, 2003: 103–106). Attackers do not have to achieve unauthorised access at all of a system's points, but only at its weakest (Schneier, 2018). Undoubtedly, defence in depth is an important principle of cybersecurity (Schneier, 2003); nevertheless, additional layers of security still leave what might be called a ‘weakest path’ of defence, which is most vulnerable to attack, even if more than one failure is necessary. The greater the size and complexity of digital networks, the greater the number of links and paths to unauthorised access, and thus the greater the number of weak links and paths. In this way, the increasing size and complexity of digital networks vastly increase the number of parts of a system that must function for the whole system to function properly. Size and complexity interact in many ways to intensify vulnerability, as increasing size leads to the greater likelihood that some of the parts of the network will not function as intended, while enhanced size and complexity together also make it increasingly difficult for network protectors to diagnose where these vulnerabilities actually are. In fact, as Haldane (2009: 16) has emphasised in a different context, the complexity and uncertainty associated with heightened interconnectedness can lead to a situation in which ‘spotting the weakest link became impossible’, thus making existing networks even more fragile.
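The underlying arithmetic can be sketched as follows (an illustrative toy model with hypothetical figures, not drawn from the cited literature): if a network requires all n of its components to function, and each functions correctly with independent probability p, then whole-system reliability decays as p to the power n, so even near-perfect components yield a fragile whole at scale.

```python
# Toy model of the 'weakest link' effect: a system that needs all n
# components to work has reliability p ** n, which collapses as n grows
# even when each individual component is near-perfect.
def system_reliability(p: float, n: int) -> float:
    """Probability that all n independent components function."""
    return p ** n

# With components that each fail only once in 10,000 uses:
for n in (10, 1000, 10000):
    print(n, round(system_reliability(0.9999, n), 3))
```

At ten components the system is almost perfectly reliable; at ten thousand, it works barely a third of the time, which is the arithmetic behind the claim that every added link multiplies the necessary conditions for the whole network to function.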
An illustrative example likewise can help clarify the role of this principle. Equifax, a paradigmatic surveillance capitalist company, is a large firm that trades in the personal data of individuals. In September 2017, Equifax acknowledged that months-long access to its credit report databases by hackers had led to the breach of the personally identifiable information of over 143 million people (Fleishman, 2018); in total, 148 million people's confidential information was accessed. Despite the vast amount of data accessed, it was the failure of a single Equifax employee, who had left one internet-facing web server with out-of-date software, that enabled the successful hack (Fleishman, 2018). It took Equifax 76 days to realise it had even been hacked (U.S. House of Representatives Committee on Oversight and Government Reform, 2018). The prodigious complexity and size of its database, combined with the ‘weakest link’ principle, enabled such a large hack of personal information. While the single employee undoubtedly did make a mistake, the primary failure here is that of designing systems that are so important and so fragile that single mistakes can lead to catastrophic consequences (Perrow, 1984, 2007; see also Vaughan (1999)).
Eggs in one basket
The size, variety, and intimacy of data collected not only make cyber breaches and breakdowns more likely; they also significantly increase the damage they cause. In what might be called the ‘eggs in one basket’ principle, the creation of ever larger data sets and the networks of collection and connection that they rely upon not only increase the likelihood of errors – they also increase the likelihood of these errors generating catastrophic damage across society (see also Perrow (1984, 2007) and Schneier (2018)). Insofar as digital systems replace other means of social provisioning and reproduction, then when they do malfunction or fail in some way, they cause much greater damage due to the increased social dependence on these systems. Insofar as these digital systems increasingly become the necessary condition for more and more effective social life, then failures in these systems are much more likely to create a ‘domino effect’ 15 of damage across society. 16
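A minimal sketch (with hypothetical figures, not drawn from the sources above) of the ‘eggs in one basket’ point: the blast radius of any single incident scales with how concentrated the data are, even when the total amount of data held across society is the same.

```python
# Minimal sketch of the 'eggs in one basket' principle: a single
# compromised store exposes everything it holds, so concentrating
# records in one place maximises the damage of any one incident.
# Figures are hypothetical and assume an even split across stores.
def single_incident_exposure(total_records: int, stores: int) -> int:
    """Records exposed when exactly one store is breached."""
    return total_records // stores

print(single_incident_exposure(148_000_000, 1))    # one central database
print(single_incident_exposure(148_000_000, 100))  # split across 100 stores
```

The expected number of records exposed over time may be similar either way, but only the centralised design makes a single error, like Equifax's one unpatched server, a society-wide catastrophe.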
The scale of the impacts of the systemic threat of confidentiality breaches is still somewhat difficult to understand. As Zuboff (2019) highlights, building importantly on Beck's (1992) discussion of risks, sometimes, when we create new systems, they create new risks that cannot be easily integrated into our existing frameworks of understanding. As such, we cannot adequately apprehend the harm to social life that they pose. In this way, the drip, drip, drip of aggregated numbers of confidentiality breaches of Yahoo, Target, Sony, Facebook and Cambridge Analytica, and healthcare records, alongside the individual, seemingly idiosyncratic cases of identity theft and phone hacking, does not easily generate a determinate picture of harms, even if it is clear that damage is being done.
In the first half of 2018, data breaches compromised 4.5 billion records (ENISA, 2019: 65). Still, the cumulative impacts of such risks may be much more significant than any single figure can convey. Ultimately, massive damage is done to people's lives. The reflexive, continued perception of the need to ubiquitously use digital systems that contain our most intimate thoughts and details – but which we fundamentally cannot trust – threatens to create the risk of a new ‘iron cage’ of harm, social mistrust, and fundamental damage to privacy, which is core to human development and interaction (Zuboff, 2019; see also Arendt (1958) and Bauman (2000)). The growing emphasis on ‘zero trust’ in cybersecurity (ENISA, 2021: 65, 86) again reflects social changes driven by the increasing ‘eggs in one basket’ infrastructural imperatives of surveillance capitalism. Digital systems that are so fragile yet so foundational increasingly make ‘zero trust’ a necessity, as even small, inevitable errors can have catastrophic consequences.
Likewise, with violations of availability and integrity, the tendency of surveillance capitalism towards an ever-constant replacement of social functions provisioned by non-digitally networked means with digitally networked means not only increases the likelihood of failures; it also exponentially increases the costs of these failures. Illustrative examples may aid in concretising some of these risks. The WannaCry cyberattack of 2017 affected over 100 countries worldwide. It was based on the identification and exploitation of a single key vulnerability in Microsoft software (Larson, 2017). It had a series of significant impacts across the world: one-third of the UK's National Health Service (NHS) was rendered inoperative, over 1,000 computers at Russia's interior ministry were disrupted, and businesses such as FedEx and Telefónica were also hit. In total, it is estimated that over 230,000 computers were infected by WannaCry (Thomas, 2019), and the costs of the attack are estimated at somewhere between $4 and $8 billion (Greenberg, 2018).
The following year (2017), the NotPetya ransomware was unleashed. It is considered the most costly attack yet, with estimates that it caused over $20 billion in damage while also shutting down key infrastructures (Clarke and Knake, 2019: 18). Exemplifying well the above principles of complexity, target richness, and weakest link, it was the exploitation of vulnerabilities in the update servers of a Ukrainian software company, Linkos, that provided a back door to thousands of computers in Ukraine and then, through Ukraine, to the rest of the world (Greenberg, 2018). The exploitation of this single vulnerability ended up causing approximate damages of $870 million to Merck, $400 million to FedEx, $384 million to the French construction company Saint-Gobain, $300 million to Maersk, $188 million to Mondelēz, and $129 million to Reckitt Benckiser (Greenberg, 2018). The case of the global shipping company Maersk is particularly instructive. The damage to its operations was extremely extensive, with the malware destroying all of the data containing the inventory of its ships around the world. While this ended up causing several days of delay in its global shipping processes, it could have been much worse. Luckily, earlier in the day, Maersk's Ghanaian office had gone offline due to a blackout and so was not on the internet when the attack struck, leaving a single copy of the company's domain controller unaffected by the malware. Over the following days, Maersk flew this backup to the UK and slowly rebuilt its record of what was in its shipping containers (Greenberg, 2018; Curran, 2020). This case is particularly instructive because what limited further damage was having a non-digitally interconnected backup, an alternative path to ‘effective life’. And it is these alternative paths to effective life that surveillance capitalism's business model continually works to undercut.
Nevertheless, irrespective of how significant the costs of the failures of digital networks have been so far, this is just a fraction of what is intended in terms of digital interconnection and dependence. Core to the imperatives of surveillance capitalism is the ever-deeper integration of digital networks into necessary and fundamental aspects of our lives, and this involves creating new needs and new ways to mediate every aspect of life with digital interconnectedness. The growth of IoT and of digitally networked cities, in terms of ‘smart urbanism’, in particular presages much greater potential costs of systemic digital risk (see also Kitchin and Dodge (2019)). The recent growth of IoT has been exponential, and these trends are expected to continue. In 2020, there were approximately 31 billion IoT devices globally, a number estimated to more than double, to 75 billion devices, by 2025 (Schiller et al., 2022: 2). One major company involved in IoT, Cisco, has identified a final goal of having 99% of existing devices become digitally connected, data-collecting devices (Greengard, 2015: 13). Though it is always necessary to distinguish between a corporation's intentions and eventual outcomes, even this goal may in one sense underestimate the potential proliferation of IoT, because it neglects how these new technologies not only colonise existing goods but also proliferate needs for new objects and functionality (for colonisation, see Couldry and Mejias (2019)).
Conclusion
In light of Elon Musk's attempts to develop X, the ‘everything app’ (Bradshaw, 2022), the rise of surveillance capitalism and its associated ‘infrastructural imperialism’ (Vaidhyanathan, 2011), that is, its attempt to secure for itself the position of the indispensable infrastructure of everyday life, shows no sign of abating.
In light of the 2008 financial crisis and the COVID-19 pandemic of 2020, we have seen first-hand the immense risks that come with highly interconnected social systems. Yet, social learning from systemic risk has not been adequately transferred to other domains, in particular in terms of more effective regulation and governance of the risk potential of digitally networked industries. The primary business model that provisions our increasingly dominant modes of communication and coordination is beholden to surveillance capitalist imperatives. Yet, again and again, there have been little more than cosmetic efforts to do security better, which, while important, are nowhere near sufficient to address the systemic, structural risks emerging from the basic business model of surveillance capitalism. As Zuboff (2019) has presciently shown, insofar as surveillance capitalism embodies powerful business imperatives, self-regulation or light-touch regulation is completely inadequate to address the massive risks embodied in the dual imperatives of collect and connect (see also Schneier (2018)).
A recent case illustrates this well. A cyberattack in 2021 on Colonial Pipeline led to the shutdown for several days of a series of pipelines that provide 45% of total oil supplies to the US East Coast (Sanger and Perlroth, 2021). The results were significant: panic buying led to long lines in many states, price spikes, a state of emergency declared by President Biden, and a $4.4 million ransom paid (Morrison, 2021), all of this from one leaked password (Kelly and Resnick-Ault, 2021). This new ‘blended threat’ of cyberattacks that affect both computers and critical infrastructure is still only in its infancy, yet it has already shown itself to have potentially massive consequences (Sanger and Perlroth, 2021). 17 Moreover, in this case, it appears that much of the damage was not intended by the hackers: they had only tried to apply the ransomware to the billing system, but because of the complexity of Colonial Pipeline's digital network, the company felt the need to shut down the whole network. In a system as complex and fragile as our digital networks are becoming, consequences can cascade out of control beyond the intentions of either those who design the systems or even those who seek to exploit them (117th Congress, 2021; Morrison, 2021).
Yet, despite clear warning signs and other near misses (Perrow, 1984), the system of surveillance capitalism proceeds undiminished. This type of systemic digital risk therefore demands the kind of sustained regulatory and governance response that systemic financial risk received after 2008, before the breakdowns it portends become catastrophic.
