I Introduction
Algorithms oppress (Noble, 2018); algorithms are violent (Safransky, 2020); algorithms can be ‘mad’, ‘aberrant’ and ‘unreasonable’ (Amoore, 2020); algorithms work, algorithms anticipate, algorithms assess risk, algorithms have social power (Beer, 2017); algorithms match us – to partners, services, properties – algorithms do many things. These busy algorithms have not escaped the attention of geographers, who have looked both at the pernicious effects of algorithmic acts and, to some extent, at the potential benefits they may bring. Much of this work on algorithms has emerged as part of a critical ‘digital turn’ in geography (Ash et al., 2018). The recognition that technologies, software and algorithms do ‘work’ in the world is evident across the discipline: from mediating and governing cities, to producing space (Kitchin and Dodge, 2011), to influencing markets (Fourcade and Healy, 2017), nature (McLean, 2020; Adams, 2019), housing (Fields, 2019) and the everyday, algorithms are working for, with and against us.
Despite this work, Del Casino et al. (2020: 606) argue that less attention has been paid to thinking through ‘the reimagination of human-nonhuman relations, subjectivities, and potentialities that come to be possible’ in algorithmic life. Indeed, geographers have predominantly attended to the more harmful effects of algorithms, producing necessary and important critiques, yet in doing so have overlooked the many generative and exciting possibilities of algorithms, such as their potential for care or for alternative ways of seeing. Geographers have yet to engage as deeply with the epistemological effects of algorithms as related fields such as design anthropology, Human Computer Interaction (HCI) and the digital humanities have done, and so we continue to overlook the more generative algorithmic potentials, practices and epistemes. While Rose (2017: 789), referring to smart city research, identifies a lack of openness in geography to other posthuman forms of agency, comparatively few inroads have been made in this direction. This should be rectified: a deeper engagement with algorithmic effect, and with the questions of intelligence and agency this entails, will, as Lynch and Del Casino (2020: 388) note, increase the relevance of geographers to important contemporary debates, including those on AI, machine learning and robotics.
Thus, this paper progresses human geography in two significant ways. First, by moving attention on algorithms beyond harm to practices of care, we can reframe our relationship with algorithms in ways that are potentially generative. Paying attention to care in the context of algorithms as relational opens discussion of our own responsibilities in relation to algorithms, the way we care through them and the reciprocity of care they themselves demand. What does caring for and with an algorithm look like? Decentring our relations of care in this way is necessary because we still predominantly conceive of care as something done by humans, but care is not a ‘human-only matter’ (Puig de la Bellacasa, 2017: 2). This reframing of relationships is also central to decentring the biocentrism of thought that underpins the second contribution this paper makes: an algorithmic epistemology, which also has implications for methodology.
Building on contributions in the digital humanities (Hayles, 2017; Parisi, 2021), I suggest that our engagements with algorithms increasingly reflect more than just ‘working with or on them’; they mark an epistemological and methodological rupture. Decentring our understanding of cognition and sapience, algorithmic thought – in a generative and speculative sense – becomes a possibility. The notion of taking algorithmic perspectives seriously may seem fanciful, but there are key philosophical and scientific underpinnings of thought and cognition that make it relevant. Hayles’s work on nonconscious cognition as performed by both humans and machines, as well as Parisi’s challenge to the biocentric nature of thought, are instructive here. Both show that intelligent machines are not merely extensions of humans but are capable of unthought as a form of sapience.
Now is an appropriate time to start embracing algorithmic ruptures. The role of algorithms in geographical work has been heightened by the pandemic forcing us to rely more heavily on technologies to do research as our access to the field, workplace and beyond has become increasingly restricted. But more than just a technological tool, algorithms can present a methodological rupture – we can conceptualise algorithms as co-researchers (Maalsen, 2020: 1544–1545). This has thrilling conceptual and methodological implications, granting us access to new ways of seeing the world. Algorithmic thought, in its refusal of biocentrism, allows us to speculate in new ways, ask different questions and importantly assist in critiquing the colonial and capitalist properties of techno-science epistemologies (Parisi, 2021: 17).
The paper proceeds as follows. First, it situates the algorithm as used in geographical work, which predominantly takes a relational approach that productively conflates technical terms. Next, I identify three ways in which algorithmic agency is informing our work: algorithmic power and harm; algorithmic care; and algorithmic knowledges. The first theme is well established in the literature, so this section briefly summarises the key arguments before moving to a deeper analysis of algorithmic care. The third theme, algorithmic knowledges, introduces insights from design anthropology, HCI and the digital humanities to situate algorithms as co-researchers or co-performers. Bringing this perspective into geographical work is novel, generative and can invigorate geographical methods. Rather than treating algorithms as tools or extensions of human agency, I instead advocate an algorithmic epistemology in which human and algorithmic agency unite to produce new knowledge. Reconceptualising algorithms as co-researchers, co-ethnographers and collaborators can help geographers understand spaces in new ways and encounter new spaces. It also engages us with debates on care – for others via algorithmic technologies, and in our own duty of care towards algorithms and their consequences. Organising work on algorithms along these three lines opens the space between algorithms as a political technology of control and the potential for more speculative and care-full futures, as well as pioneering new fields and ways of doing research.
II What are we talking about when we talk about algorithms?
In its technical form, as a programmer or computer scientist conceptualises it, the algorithm is a set of instructions or a calculative procedure used to solve a well-defined problem or accomplish some end (Introna, 2016: 21; Safransky, 2020: 200). Yet algorithms and their effects are increasingly discussed in disciplines beyond computer science, including geography, which conceptualise algorithms in a relational rather than technical manner. Dourish (2016: 2), in one of the most considered interrogations of this topic to date, refers to the work of Wirth, whose influential formula, algorithms + data structures = programs, raises important questions for thinking about algorithms beyond computer science. Dourish makes two main points: that algorithms and programs are separate entities, although programs implement algorithms; and that algorithms should be understood as relational, animated only by the wider data structures and computational forms within which they sit (2016: 2). Following Kitchin (2017: 14) and Gillespie (2014), all digital technologies constitute ‘algorithm machines’ because software is composed of algorithms. There is, Dourish argues, ‘within Wirth’s formula, an analytic warrant for a relational and differential analysis of algorithm alongside data, data structure, program, process, and other analytic entities’ (2016: 2). This parallels Crawford’s argument that understanding algorithms as merely calculative machines and autocratic decision makers limits more complex interrogation of the ‘political spaces in which algorithms function, are produced and modified’ (2016: 79).
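To make Dourish’s distinction concrete, consider a minimal sketch (my own illustration, not an example from Dourish or Wirth): the same abstract procedure computes nothing until it is implemented over a concrete data structure, at which point it becomes a program.

```python
# Wirth's formula in miniature: algorithms + data structures = programs.
# The algorithm (binary search) is an abstract procedure; only when written
# against a concrete data structure (a sorted list) does it become a program.

def binary_search(items, target):
    """Repeatedly halve the search interval until the target is found."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid             # found: return the position
        elif items[mid] < target:
            low = mid + 1          # discard the lower half
        else:
            high = mid - 1         # discard the upper half
    return -1                      # target is absent

# The data structure that animates the procedure: an ordered list of values.
postcodes = [2000, 2006, 2010, 2016, 2042, 2050]
print(binary_search(postcodes, 2042))  # -> 4
```

The procedure is meaningless without the sorted list it presupposes, which is precisely the relational point: the algorithm is animated only by the data structures and computational forms within which it sits.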
It is through this relational approach, rather than through technical processes, that algorithms are spoken about in critical geography. Liu and Graham (2021: 11) refer to this as an ‘algorithmic sociotechnical assemblage, engaged and entangled with diverse sociomaterial actors that contribute to its ontological status through their performances, beliefs, and interpretations’. Such a relational approach has allowed geographers to problematise the ‘black boxed’ nature of the algorithm and the implications this opacity has for the world. This has prompted arguments to ‘open the black box’ by focussing on the institutions, politics and human actors behind the algorithm (see Pasquale, 2015; Kitchin, 2017; Seaver, 2017) or, alternatively, to focus on the ‘potentiality, slipperiness, and movement’ of algorithms via practices of counter-mapping, tracing and proxying (Fields et al., 2020: 462). A similar argument for embracing the ‘opacity, partiality, and illegibility’ of algorithms rather than striving for an impossible transparency is advanced in Amoore’s work on cloud ethics (Amoore, 2020: 8–9). Understanding the partial and opaque nature of algorithms as part of their ethicopolitical life, Amoore proposes three routes into understanding algorithms: as arrangements of propositions; as aperture instruments; and as giving accounts of themselves (2020: 10). These three pathways into understanding algorithms emphasise their relational nature – for Amoore, ‘algorithms come to act in the world precisely in and through the relation of selves to selves, and selves to others’ (2021: 7).
The relational context is also apparent when geographers talk about algorithms in the space of platforms or ‘smart’ things such as smart cities and smart homes – the ‘algorithm machines’ Gillespie (2014: 14) refers to. This reflects the black-boxed nature of the algorithm, in that we see not so much the algorithm as its effects, but it is also because these are the sites where algorithmically produced reconfigurations raise important questions. In doing so, we also conflate ‘algorithmic’ with ‘automated’ and similar terms, although I argue that this is a productive conflation because it enables us to ask important questions of their effects and allows us to think of them agentically.
Algorithmic epistemologies are still, for now, co-produced with humans but offer us generative ways of thinking about digitally mediated life. The agentive capacity of algorithms is situated in their ability to cognise nonconsciously (Hayles, 2017) and to do so in a way that produces an algorithmic knowing that may be an artificial mentality, but one which is not diminished by biocentrism (Parisi, 2021; Maalsen, 2020). For Hayles, the rapid transformation of a planetary cognitive ecology requires us to rethink cognition, especially because of a recursivity between human and technological nonconscious cognition. Hayles critiques the notion that machines cannot truly think because they do their work in programmed, nonconscious ways. Humans are always already engaged in nonconscious thought, which is integral to higher-level cognition, and increasingly we see technologies performing nonconscious cognition in complex and social interconnections with other nonconscious technical systems (Hayles, 2017: 214–215). Hayles’s argument lies in breaking the link between thought and cognition, acknowledging that nonconscious cognition is already in play in human cognition, meaning that external cognisers (such as algorithms and computation) are performing tasks that humans also unthinkingly do – for example, pattern recognition, data inference and decision-making about ambiguous or conflicting stimuli (2017: 215).
Parisi’s (2021) provocation is based on a challenge to the biocentrism of sapience. She eschews both Heideggerian views that position technology as a means of capital extraction and cyborgian metaphysics that neutralises and reterritorialises technologies as extensions of human agency. Instead, by continuing to challenge the biocentric nature of thought, Parisi proposes a techno-sign that ‘coincides with a fractality of know-hows that open algorithmic functions to a transcendental condition for meanings of another kind’ and which can take up the opportunity ‘to radicalize the critical epistemologies that refuse the Promethean myth for which machines are merely extensions of Man’ (Parisi, 2021: 16; Chude-Sokei, 2015). Such a perspective moves debates beyond positioning technology and algorithms as extensions of human agency; instead it decentres the biocentrism of sapience and acknowledges ‘unthought’ as forming an artificial mentality (Parisi, 2021; Hayles, 2017). We see such a ‘non-biological intelligence’ referred to by Cugurullo (2021: 189), when he questions the implications of trusting an AI’s non-human logics to make decisions about cities, threatening the notion of urbanism itself.
When thinking about algorithmic activity in this way, we start to refer to algorithms as possessing agency, as doing things, as being ‘magical’ and ‘possessing a level of agency that humans would have previously seen as a form of sorcery’ (Kavanagh et al., 2015: 6). There are links here with new materialism, which has inspired work on the agential and ‘lively’ capacities of digitally networked objects (see, for example, Lupton, 2018; Sumartojo and Lugli, 2022), but in putting forth an algorithmic epistemology, I want us to take seriously the idea of an artificial thought that can generate potentially novel ways of seeing the world. A trajectory of digital epistemological possibilities has led to the current moment for theorising algorithms as epistemological and agentive. The rise of cybernetics in the 1950s propelled the ascent of knowledge produced via computing technologies and systems analysis – producing ‘“information” and an informatic worldview’ which ‘displayed an ambivalent relation to the material world’ (Galloway, 2021: 120). The ‘cybernetic hypothesis’ (Tiqqun, 2001) is a precursor to algorithmic epistemologies, one in which ‘systems or networks combine both human and nonhuman agents in mutual communication and command…and has come to dominate the production and regulation of society and culture’ (Galloway, 2021: 120).
Cybernetics gave way to the ‘networked age’, in which the dematerialisation associated with cyberspace was reconsidered to show that information networks were facilitated by material and sociotechnical relations (Luque-Ayala and Marvin, 2020: 10). This brought attention to the way that software and code sorted and produced space (Kitchin and Dodge, 2011) and highlighted the ubiquity of computers in everyday life.
The ubiquity of computing that emerged in the networked age facilitated the shift to big spatial data analytics as a way of understanding and knowing the world. This underpins what Kitchin (2014) refers to as the data revolution that is radically transforming data collection, storage and analysis practices. Such data practices enable the ubiquitous calculation that governs and makes specific futures present (Anderson, 2010: 783–784). The entanglement of digital technologies and praxis with knowledge production has therefore had a long and colourful history, but we are currently facing the next big moment, with the influence of AI and increasingly pervasive networked digital objects in everyday life, which leads me to posit the emergence of algorithmic epistemologies.
In what follows, I show how geographers have attributed both harm and care to algorithmic actions but argue that, as yet, the affective capacity of algorithms and their arrangements has been underutilised in the way we think about doing research with algorithms.
III Algorithmic effects: Power and harm
Geographers are attuned to the work that algorithms do in the world and take interest in algorithmic effect. In a special issue on the social power of algorithms, Beer (2017) and his fellow contributors elaborate on the complexity of the debates on power and algorithms as decision makers. Algorithms are influential across social, environmental, political and economic life. In social media news feeds, algorithms decide what is visible and to whom, shaping experiences and world views (Beer, 2017: 6; Bucher, 2012; Willson, 2017; Gieseking, 2017). Neyland and Möllers show how algorithms of surveillance technology enact a form of distributed agency, where the conditions and consequences of the IF… THEN algorithmic logic are brought into being through associations between people and things (2017: 46). It is these algorithmic associations through which Neyland and Möllers see algorithms as having social power (2017: 46).
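The IF… THEN statement itself is disarmingly simple; a deliberately toy rule (with invented names and thresholds, not drawn from Neyland and Möllers’ study) shows how little of the decision actually lives in the code, and how much resides in the surrounding arrangement of cameras, operators and institutional definitions:

```python
# A toy IF... THEN surveillance rule. What counts as 'loitering' and what
# follows from being flagged are set outside the code, by the sociotechnical
# arrangement the rule sits within; the threshold below is an invented example.

LOITER_THRESHOLD_SECONDS = 120  # an assumed institutional choice, not a fact

def assess(track):
    """IF a tracked person lingers beyond the threshold THEN alert an operator."""
    if track["seconds_in_zone"] > LOITER_THRESHOLD_SECONDS:
        return {"id": track["id"], "action": "alert_operator"}
    return {"id": track["id"], "action": "ignore"}

print(assess({"id": "p17", "seconds_in_zone": 141}))
# -> {'id': 'p17', 'action': 'alert_operator'}
```

Everything consequential – who is tracked, where the threshold sits, what an ‘alert’ sets in motion – is decided through the associations between people and things that Neyland and Möllers describe.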
Power is also seen in the way algorithms assess risk (Amoore, 2011, 2018; Liu and Graham, 2021); in the way they make decisions on environmental management with algorithmically underpinned technologies and analytics generating new understandings of nature and new ways of managing conservation (Lockhart and Marvin, 2020; Adams, 2019); and how they automate and influence banking (Fourcade and Healy, 2017). Algorithms set things in motion and become doubly agentive, both constructing meanings as well as being themselves shaped by meanings (Roberge and Melançon, 2015: 308). This becomes even more complicated when we start to think about AI that is beginning to generate unintended or unpredicted outcomes, the aberrant, mad and unreasonable algorithms Amoore (2020: 108) refers to. In making decisions, Amoore claims algorithms become part of ‘an “enlarged community” of posthuman knowledge comprising an amalgam of humans and algorithms as “knowing subjects”’ (Amoore, 2018: 150; Haraway, 1997).
The COVID-19 pandemic has also heightened our awareness of algorithmic power and potential. The increased surveillance and social sorting enabled by the algorithms responsible for contact tracing, quarantine enforcement and the monitoring of movement have led to concerns not only about privacy but also about the uneven experience of infringement and punishment, and have set the stage for future surveillance creep (Datta, 2020a, 2020b; Chen et al., 2020; McElroy et al., 2020; Kitchin, 2020). Contemporaneously, there is evidence of algorithms enabling people to cope with pandemic life, whether through new patterns of working and learning from home (Maalsen and Dowling, 2020; Burns, 2020), sharing stories, supporting and maintaining connections via social media and digital storytelling (McLean and Maalsen, 2021; Maddrell, 2020), or providing ‘just in time’ pandemic modelling to inform policy and responses (Brunsdon, 2020).
Notwithstanding these algorithm-supported coping capacities, most of the existing work on the power or effect of algorithms is broadly concerned with their harmful implications. Onuoha (2018) posits ‘algorithmic violence’ as a term that reflects digital and data-driven inequity. It is ‘the violence that an algorithm or automated decision-making system inflicts by preventing people from meeting their basic needs. It results from and is amplified by exploitative social, political, and economic systems, but can also be intimately connected to spatially and physically borne effects’ (Onuoha, 2018). This follows a line of thought put forward by feminist scholars on the discriminating nature of algorithms, which code in the existing prejudices of the social, economic, racial and political structures within which they are produced (Noble, 2018; Eubanks, 2017).
The underlying basis of these arguments is that algorithms treat people unequally because of the bias coded into them via the individual and institutional ideologies behind them. Because computer programming and technology are middle-class, male-dominated professions, the algorithms produced ‘see’ the world in similar ways and thus perpetuate the same discriminatory practices, further amplified by the entanglement of technology with economies and governance. Understanding algorithmic power and harm therefore requires acknowledging the uneven power relations within which algorithms are produced and which they in turn perpetuate.
IV Algorithms that care?
Algorithms do not only hinder but can also offer many benefits. In a recent article, Koch and Miles (2021) show how digital technologies are influential in mediating stranger intimacies, creating new geographies of digital encounters. In my own life, algorithms and platforms have helped me find housing, connected me to flatmates and eventually even matched me with my partner (although there were many dubious algorithmically mediated encounters before him). In facilitating these encounters, could we conceptualise algorithms as helping and enabling care?
Care is a productive lens through which to look at algorithms. My use of care is heavily informed by feminist approaches which situate care as a relational practice that sustains life but which also recognise that care is not always benevolent or beneficial (Tronto, 1993; Power et al., 2022: 10). Indeed, care can harm both those who give and those who receive it, and it is not a practice that we should accept uncritically or romanticise (Power et al., 2022; Puig de la Bellacasa, 2017). More than that, care manifests in a multitude of ways – it can be labour and maintenance, expressed as affective and ethical engagements, and enacted as a politics – which sometimes exist in tension and which make it a nebulous thing to define (Puig de la Bellacasa, 2017: 5). What we can say about care, however, is that it is characterised by entanglements of humans and non-humans – we care for and with other things – and it is from this position that I situate care as a practice, politics and ethics that is bound up in our encounters with algorithmic technologies and which has implications beyond ourselves. Thus, we can care for algorithms and algorithms can care for us in ways which can be politicised. In geography, we see this primarily emerge in feminist and more-than-human engagements that try to understand our relationships with technology and our environments.
Politicising care is something feminist geographers have been practising for a considerable time, and they have been using digital technologies to illuminate this. D’Ignazio and Klein’s (2020) work on data feminism shows how we can politicise care at the intersection of the digital by making care work visible through digital media. Similarly, D’Ignazio et al.’s (2020) work on improving data collection around femicide in Latin America, using machine learning to partially automate collection and monitoring practices, is situated within data feminist approaches. While acknowledging its limitations, the project is a response to the ‘missing data’ on gender-based violence which hinders policy and advocacy efforts. Using algorithms and digital media to visualise care in this way is also utilised within feminist GIS practices (see Kwan, 1999, 2002a, 2002b).
There is another reason that thinking about algorithms within a framework of care is useful, and that is in addressing the negative effects of algorithms. As Puig de la Bellacasa notes, ‘politics of caring have been at the heart of concerns with exclusions and critiques of power dynamics’ (2011: 86), and this is central to thinking about redressing some of the biases and harm that algorithms have been shown to enable. Care does not mean the absence of critique. Instead, thinking with care requires ‘knowledge construction without negating dissent’ and acceptance of the ‘unavoidably thorny relations that foster rich, collective, interdependent, albeit not seamless, thinking-with’ (Puig de la Bellacasa, 2012: 205). Caring for and with algorithms may not be easy, but it may offer ways forward.
As a more-than-human entanglement, care and care work are, as Wiltse (2020: 14) notes, part of daily life and something we often delegate to objects. Objects support us in our daily work and projects, and sometimes they engage us emotionally, but as Wiltse observes, ‘to create things that can do care work for us in a way that feels caring to us, we must care for them as well’ (2020: 14). To illustrate this potential, I focus on two areas which have both seen significant digitalisation and attracted the attention of geographers: the urban and housing.
1 Urban care
In the city, those same platforms and technologies that harm can also fulfil a social function. Barns (2018) and Leszczynski (2020), while acknowledging the importance of work that critiques the dystopian attributes of platforms, remind us that in practice platforms are sites of ‘mundane connectivity and interaction’ rather than only sites of ‘value extraction and capital accumulation’ (Leszczynski, 2020: 190). Algorithms help us connect, mediate relationships and even form relationships with people we have not met. Smartphones become our ‘digital companions’, accompanying ‘their users throughout the day, ever ready to fulfil tasks’ (Carolus et al., 2019: 915). In what follows, I outline examples of geographical work that illuminates everyday urban digital encounters as care, primarily facilitated by the algorithmic affordances of smartphones, platforms and bottom-up collective responses.
Platforms that collect and share data on incidents of harm – making it easier for victims to report, access support and identify unsafe spaces – are doing caring work, and they are a common way in which we see algorithmic urban care materialise. One way this is enabled is through volunteered geographic information (VGI) systems and applications that report and map crime. For example, Nicolosi et al. (2020) discuss the hate incident reporting system app (HIRS), recognising the possible benefit such apps can have in filling data gaps around hate incidents while also being cognisant of their limitations. A claimed substantial benefit is that using an app to report hate incidents ‘eliminates political and police-level barriers to hate crime reporting’ and potentially reduces ‘victim level barriers, such as fear of police, feeling that the incident is too trivial, and the time and effort associated with filing a police report’ (2020: 7). Such apps are limited, however, by concerns about the accuracy of crowdsourced data, geographical coverage and funding (Nicolosi et al., 2020: 7–8). Additional concerns are epistemological, with algorithmic ways of seeing critiqued for their disembodied and disaggregating nature – a scopic approach that conceptualises bodies as modular packages of data upon which risk assessments are made (Amoore, 2011; Elwood and Leszczynski, 2018: 632).
Datta (2020a) elucidates this more problematic relationship between the technological fix of safety apps and the experience of violence. Critiquing the smart safe city approach in Delhi, Datta shows how the techno-solutionism of smartphone safety apps entangles women in the algorithmic surveillance of the smart city, yet fails because of its incompatibility with the temporalities and experiences of violence, and because of unreliable infrastructure (2020a: 1320). Slow download times, app crashes and a lack of infrastructure compromised the use of the apps, compounding the existing inaction of police (Datta, 2020a: 1331). The immediacy and connectivity of the mobile phones themselves, however, were preferred, offering instant and direct connection to family and friends as well as a tool to access and curate safe space (Datta, 2020a: 1331). As our digital companion, the smartphone used to report these incidents is there for us, representing the closeness, preoccupation and trust that Carolus et al. (2019) describe as characterising our relationship with our phones. The phone becomes not just an algorithmic machine but an algorithmic friend.
The algorithmic friend is also visible in the sociotechnical relations that underpin infrastructures of care in post-apartheid South Africa during the COVID-19 pandemic, as Nancy Odendaal observes. The communication platform WhatsApp was central to community action networks, facilitating assistance at the neighbourhood level through the collective mobilising of care – a bottom-up response that is a legacy of the relations between the State and the local produced by South Africa’s colonial past (Odendaal, 2021: 394–395).
The bottom-up affordances of digital technologies also enable many forms of citizen sensing. Gabrys et al. (2016) show how low-cost digital technologies are used by citizens to enact environmental care. Discussing citizen tracking of air pollution, they show how the ‘just good enough’ data generated by citizen sensing activities is not only useful for monitoring and accountability but also provides a different way of framing environmental problems so that they can be collectively addressed, potentially realising environmental and social justice (Gabrys et al., 2016: 12). Here, the algorithms that underpin the data collection and analysis have democratised not only the collection of data but its analysis as well, providing a counter-narrative to the extractive nature and unwieldiness of big data.
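As a crude sketch of what ‘just good enough’ data can look like in practice (the readings and guideline value below are invented for illustration), even noisy low-cost measurements can sustain a collectively legible claim once reduced to simple exceedance counts:

```python
# 'Just good enough' citizen sensing, in miniature: hypothetical hourly PM2.5
# readings from a low-cost sensor are compared against an assumed guideline
# value. The result is not laboratory-grade, but it makes a pattern visible.

HOURLY_GUIDELINE = 25.0  # micrograms per cubic metre; an assumed reference value

readings = [12.1, 18.4, 31.9, 44.2, 27.5, 9.8, 52.0]  # invented sensor data

exceedances = [r for r in readings if r > HOURLY_GUIDELINE]
print(f"{len(exceedances)} of {len(readings)} hours above the guideline")
# -> 4 of 7 hours above the guideline
```

The analytical work here is trivially simple, which is the point: the democratising force lies less in computational sophistication than in who holds the sensor and what the counts are used to demand.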
Algorithms and digital technologies also care for non-humans. Adams advances the concept of ‘conservation by algorithm’ to describe how the tracking and surveillance capacities of digital technologies are revolutionising conservation, automating data collection and monitoring as well as identifying new kinds of data (Adams, 2019: 343). In the context of conservation, algorithms are able to do surveillance, monitoring, analysis and identification work alongside, and beyond, their human fellow conservationists. In turn, they support conservation policy, practice and decision-making, and are becoming increasingly integrated within climate change governance (Adams, 2019: 344; Scoville et al., 2021). Their growing influence in conservation and care makes the politics of their propriety even more important (Adams, 2019: 346). Considering algorithms as things that we can care with and through, therefore, helps us to expand our discussions of the work algorithms do in the urban beyond a threatening and violent presence, to one that also considers the way they can protect and reassure.
2 Caring in the home
The home has long been a site of care, and increasingly this care is being given by algorithms. Reid (2021a: 86), for example, shows how smart devices are ‘implicated in care and caring practices’. The benefits of assisted living devices have real consequences, enabling people to stay at home with a level of independence for longer and providing a level of care that was previously not accessible in the home. Smart home technology has the potential to enable people to age in place, affording them independence and autonomy for as long as possible and providing a feasible, less expensive and often preferable alternative to institutional care (Carnemolla, 2018: 2; Reid, 2021a, 2021b). The ability to be ‘algorithmically cared for’ at home has become more accessible with the proliferation of smart home devices. The automation of care has implications for what constitutes caring subjects and for the production of caring spaces, as Del Casino (2016: 852) notes in relation to robotic technologies (see also Schwiter and Steiner, 2020: 7).
Although there are limitations arising from the disconnect between some technologies and older people’s lives, support needs and the technologies and services available (Carnemolla, 2018: 2), there is evidence that such devices can provide security and care in new ways. For example, Carnemolla shows how automated lights used in conjunction with a handrail made moving from the bedroom to the bathroom safer for a person with declining balance, and how installing a video doorbell for someone with mobility issues changed the way care could be provided, by allowing a family member to remotely screen people at the door (Carnemolla, 2018: 13). The technological mediation of informal care practices among families caring for older relatives is discussed by Reid (2021a, 2021b: 85), who notes the uptake of easily accessible technology-enabled care devices, such as Google Nest, but highlights the tension that exists between care and risk given the capacity for surveillance these devices entail (see also Maalsen and Sadowski, 2019).
Being cared for by technology in our homes is not limited to those who need assistance to live independently. Many of us are cared for by household technologies in our daily lives. For example, personal assistants such as Alexa and Siri are also used to help in the home and are on call ready to respond (Strengers and Kennedy 2020), and Carolus et al. (2019) have conceptualised smartphones themselves as digital companions, helping us through our daily lives.
The care enacted by our smart devices is not necessarily a one-way street and can influence our own caring behaviours. Indeed, Michelfelder (2020: 44) suggests that ‘Alexa users have an opportunity to develop their own capacities as caring individuals by caring for Alexa in a distinctive way: namely, by helping to train her algorithms’. Because Alexa learns through machine learning algorithms, each time we ask her a question she becomes more familiar with pronunciation, content and context. According to Michelfelder, ‘we can care for Alexa in a moral sense when we help her to learn and so to get better at caring for us’, which further ‘opens up to us the possibility of cultivating a broader array of care-related virtues’ (2020: 49).
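Alexa’s actual training pipeline is proprietary, so the sketch below is no more than a toy analogy for the feedback loop Michelfelder describes (all names and numbers are invented): repeated interaction is what allows an assistant to become ‘more familiar’ with its user.

```python
# A toy stand-in for learning-from-interaction: each repeated request makes
# the assistant more 'confident' about a phrase. Real voice assistants update
# statistical models; this counter is only an analogy for that process.

from collections import Counter

class ToyAssistant:
    def __init__(self):
        self.heard = Counter()  # how often each phrase has been encountered

    def ask(self, phrase):
        self.heard[phrase] += 1
        n = self.heard[phrase]
        confidence = n / (n + 1)  # crude proxy: familiarity grows with exposure
        return f"Interpreting '{phrase}' with confidence {confidence:.2f}"

assistant = ToyAssistant()
print(assistant.ask("play the news"))  # confidence 0.50 on first hearing
print(assistant.ask("play the news"))  # confidence 0.67 after being 'trained'
```

However crude, the sketch makes the reciprocity visible: the user’s patience in repeating and correcting is itself the care work that trains the device.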
The ways in which algorithms care can also be seen in the more mundane use of common communication platforms. Maalsen (2021, 2022), for example, has posited that platforms such as WhatsApp and Messenger are central to building and maintaining relationships within the home. Relatedly, Horst et al. (2020) show how social media platforms including WhatsApp, WeChat, Facebook and LINE enact a form of friendly surveillance. These platforms mediated intimacy and kinship among family members and friend groups across time and distance. Drawing upon Marwick’s (2012) concept of social surveillance, characterised by reciprocity and by decentralised, micro-scale interactions between individuals, Horst et al. (2020) show that platforms and common technologies such as the smartphone were central to acts of care and, at times, over-care. Locative social media technologies were a conduit to care, facilitated closer relationships between family and friends and were used for the mutual monitoring of safety (Horst et al., 2020: 70). Yet they also had the potential for ‘over-caring’, allowing for a constant monitoring and expression of care that was perceived as intrusive and interrogative by those being over-cared for (Horst et al., 2020: 72).
Thinking about algorithms through a lens of care – across both the urban and the home – opens us up to the more generative possibilities and the reimagined subjectivities prefaced earlier. First, it helps us to widen debates beyond the predominant focus on harm; second, considering that algorithms can enact the ‘species activity’ of care that Fisher and Tronto (1990: 40) describe brings us closer to the posthuman agency that Rose (2017: 789) laments geographers have shown little openness to. In turn, thinking about caring with, through and for algorithms provides a conceptual bridge to wider possibilities of algorithmic agency, with broad implications, including for our academic work of knowing the world via research.
V Algorithmic knowledges: An algorithmic epistemology?
What would happen if we thought of algorithms as our colleagues, collaborators or co-researchers? What if we were interested in their perspectives and the ways they see the world? We can learn from algorithms just as algorithms can learn from us. The aim of this section is to posit the algorithm’s perspective as valuable and to highlight the novel contributions it can make to methodology. The fields of design anthropology, digital humanities and HCI have been engaging with these questions for some time, and I draw on some of this work here, yet similar questions are not being asked to the same extent in geography. Perhaps this lack of curiosity is an artefact of the view, which Rose (2017) identifies, of human and digital agency as supplemental to each other rather than already co-produced. This is a missed opportunity, as asking such questions offers exciting possibilities for looking at algorithms and digital geographies in new ways. In some sense, geographers have started to identify algorithmic outbursts and misbehaviour as sites for understanding digital impacts – the glitches (Leszczynski, 2020), the madness (Amoore, 2020) – but rarely do the opportunities these events present get taken up as a way of working.
This is more than a flattening of ontology; it is an ontological reworking which, as Giaccardi et al. (2016) note, is especially relevant for the networked objects associated with the Internet of Things – objects that ‘acquire perspective and agency through the data they collect, the stories they reveal, and the interventions they make in the lives of the people that use them’ (2016: 237; McVeigh-Schultz et al., 2012). Similarly, Lupton’s (2018) work on lively data illuminates the agency of the personal data generated through such connected devices, data which are inherent in the human-data assemblage. Despite this, a focus on human-centredness means that we often overlook the mutually constituted ‘dialogue’ between digital things and their users, an interaction which could be generative of ‘new relationships and value’ (Giaccardi et al., 2016: 237). Rather than algorithms being extensions of humans or distributors of human agency, we can entertain the idea of a new type of relationship with them.
There are real benefits here for research. In most academic positions, finding time for months and years in the field is increasingly challenging and, as COVID-19 has shown us, there are limits on the type of fieldwork we take for granted (Howlett, 2021). Stay-at-home mandates and travel restrictions, combined with increased access to digital technologies, allowed many of us to rethink and access the field via a digital lens, or to shift to online field sites – something which had been the focus of many digital geographers for some time prior to the pandemic (Ash et al., 2018). But what is less often considered in this adoption of digital methods and sites is valuing the ability of an algorithmically underpinned technology to communicate unique insights from its own perspective.
AI is an obvious place to start because of its ability to learn and translate data and logics into a series of ‘intelligent’ outputs. We can ask things of it, to which it can respond in a manner that we understand. In an exploratory work, Pereira and Moreschi (2020) ran an image-identifying AI through an art collection to see how the AI interpreted the artworks. Based on the assumption that these types of AI have been trained in a product-focussed and profit-driven sociotechnical system, they understood that its responses would be informed by its training background (Pereira and Moreschi, 2020: 1). This, they argue, also makes the AI an untrained viewer of art, and as such it can offer an innovative institutional critique of the art world and of what is considered art (Pereira and Moreschi, 2020: 1). Understanding, interpreting and valuing art is inherently entangled in habitus and cultural capital (Bourdieu, 1984), both foreign to this particular AI. Pereira and Moreschi’s argument, however, rests on this ‘untrained eye’ revealing the ‘inner workings of the arts system through its glitches’ (2020: 1). The glitches they refer to here are the incorrect readings the AI makes of artworks – identifying them instead as objects it is familiar with from its training, such as windows and door frames.
The AI’s misreadings of art are productive. Harnessing ‘their “glitchy” capacity to level and reimagine’, Pereira and Moreschi (2020: 1) argue that these misinterpretations ‘can also serve as a new way of reading art’. Being interested in the way the AI views art brings a ‘fresh denaturalised set of “eyes”’ to art and, because AIs can only interpret the world based on their experiences, could its interpretations of art tell us something about those experiences (Pereira and Moreschi, 2020: 2)? By asking what it sees, we can potentially glimpse inside its black box but, more than that, we are also offered a different perspective.
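The basic gesture is easy to reproduce. The sketch below is not Pereira and Moreschi’s actual pipeline but a minimal approximation of the exercise, assuming an off-the-shelf ImageNet classifier from torchvision and a hypothetical input file, artwork.jpg:

```python
# Ask a pretrained classifier what it 'sees' in an artwork. Its top labels,
# drawn from the everyday object categories of ImageNet, are the 'glitches':
# a canvas may be read as a window screen, a door frame, an envelope.

import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()

image = Image.open("artwork.jpg").convert("RGB")  # hypothetical input image
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

top = probs.topk(5)
for p, idx in zip(top.values, top.indices):
    print(f"{weights.meta['categories'][idx.item()]}: {p.item():.2%}")
```

What matters methodologically is not the accuracy of the labels but the record they leave of a particular, situated training history: the model can only answer with the world it was shown.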
Pereira and Moreschi (2020) are not the first to position the glitch as a means of seeing in different ways. The glitch is at the core of Russell’s (2013, 2020) ‘glitch feminism’, which reframes glitches as errata or corrections – ‘happy accidents’ that resist existing structural binaries. For Leszczynski and Elwood (2022), the glitch is an epistemological vector for engaging with and producing knowledge in digitally mediated cities. Considering that Leszczynski (2020) sees in the glitch an entry point into a more hopeful platform politics, we can also contend that glitches offer opportunities for algorithmic care. Glitch feminism acknowledges the simultaneous capacity for error and erratum in digitally mediated formations. Essentially, each rupture offers an opportunity to correct for a different and better outcome. The glitch, like Haraway’s (1988) trickster, becomes revolutionary.
In Pereira and Moreschi’s work, the AI’s glitchy perspective illuminates different conceptualisations of the world that can productively unsettle ‘the relations between what we see and what we know in new ways’ (Cox, 2017: 14; Pereira and Moreschi, 2020: 3). Here, Pereira and Moreschi have essentially used the AI as a co-researcher, asking how it sees art, and by extension, how its perspective could help us see the world in different ways. As a co-researcher, it resists the real/virtual dualism to provide valid insights. The AI’s answers critique the art world and its institutionalisation of value, but we can apply this logic more broadly to think how digital things may provide us with different perspectives than our own. It is a conduit to the different perspectives that may, as suggested in Puig de la Bellacasa’s (2012) work, contribute to dissent, but which are necessary for collective knowledge.
Algorithms aren’t only useful for glitchy points of view – they contribute interesting insights when behaving as programmed. For example, Pip Thornton has leveraged the capacity of algorithms to ‘see’ in order to critique digital capitalism. Thornton’s artistic intervention, {poem}.py, a union of Python code and the Google AdWords Keyword Planner, calculates the price of poems (including work by William Stafford) word by word at their advertising value, making visible the way search engines monetise language.
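Thornton’s actual script queried Google’s Keyword Planner for live prices; the sketch below substitutes an invented cost-per-click table (all prices and words are placeholders) to show the shape of the gesture – a poem re-rendered as a priced receipt:

```python
# In the spirit of {poem}.py, with invented prices rather than AdWords data:
# each word of a poem is looked up in a cost-per-click table and the poem is
# printed as a receipt, exposing the advertising value latent in language.

CPC = {"dark": 1.41, "road": 2.73, "river": 0.87, "deer": 0.34}  # assumed £/click
FLOOR = 0.05  # assumed price for words with no recorded bid

def price_poem(text):
    words = text.lower().split()
    for w in words:
        print(f"{w:<10} £{CPC.get(w, FLOOR):.2f}")
    total = sum(CPC.get(w, FLOOR) for w in words)
    print(f"{'TOTAL':<10} £{total:.2f}")

price_poem("dark road river deer")
```

The output is banal by design: the poem’s words are worth whatever advertisers will pay for them, which is precisely the critique.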
In a wonderfully evocative piece, Giaccardi et al. (2016) employ the humble kettle, equipped with an autographer, as a co-ethnographer (an autographer is a camera that feeds information from its inbuilt sensors to its algorithm, which then decides when to take a picture (GitHub, n.d.)). Programmed to take pictures at automated intervals, the ‘kettle’s eye view’ revealed a new perspective on the relationships between people and things and, they argue, such views can ‘ultimately present new ways of framing and solving problems collaboratively with things, which have different skills and purposes from humans’ (Giaccardi et al., 2016: 245). As our devices become increasingly networked, we can ask further questions of the world from their view.
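The autographer’s actual trigger logic is not documented here, so the following toy loop (with invented sensor names and thresholds) only illustrates the general form of such a device: sensor readings stream in, and a simple rule decides when a moment is worth photographing.

```python
# A toy autographer: decide from sensor readings when to 'take a picture'.
# The sensors, thresholds and rule are assumptions for illustration, not the
# real device's firmware.

import random

def worth_capturing(reading):
    # capture when the scene changes: movement plus a shift in light level
    return reading["motion"] > 0.6 and abs(reading["light_delta"]) > 0.3

for t in range(5):
    reading = {"motion": random.random(), "light_delta": random.uniform(-1, 1)}
    if worth_capturing(reading):
        print(f"t={t}: capture frame")  # the kettle's eye blinks open
    else:
        print(f"t={t}: stay idle")
```

Even in this caricature, the ‘decision’ about what is worth seeing belongs to the thing, which is what makes its resulting archive a perspective rather than a mere recording.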
Giaccardi et al.’s (2016) research raised some interesting questions. Not only did the autographer’s viewpoint show things that the human participants didn’t mention during interviews – in some cases even contradicting what the participants reported – it also revealed that things were creating and making visible different temporalities (Giaccardi et al., 2016: 242). The way things created time, variously seen as people filling in time, prompts us to consider who is involved in the creation of temporal structures (here it is not the human) and allows us to trouble the human-centredness of phenomenological approaches to the feelings created by ‘empty’ time, by also paying attention to the thing’s perspective (Giaccardi et al., 2016: 242). Paying attention to the thing’s perspective highlights the co-constitution of the relationship between people, thing and place, and is an invitation to consider who is making whom do what at different times – filling time while waiting for the kettle to boil is a point at which the kettle is compelling the person to do something else (Giaccardi et al., 2016: 242).
The data that the kettle provides through its autographer is ethnographic. It is undoubtedly participating and observing, and it is adept at showing the discrepancy between what people say they do and what they actually do – a classic demonstration of the value of ethnographic observation. The kettle also revealed a broader ecosystem of relationships and practices between people and things in the kitchen space which would otherwise have gone unnoticed, such as multitasking (Giaccardi et al., 2016: 243). For Giaccardi et al., enrolling things as co-ethnographers made them question their original goals by providing ‘access to data worlds we have never accessed before’, letting them ‘see what we could not see’, and calling ‘attention to what we thought was marginal or irrelevant’ (Giaccardi, 2021: 124). Employing things in this way does not make the researcher redundant; rather, things work with the researcher to provide an additional point of view. Here, things are colleagues who ask important questions and give us answers different from those we expected.
The examples elaborated here – Pereira and Moreschi’s (2020) work, which aimed to see art from an AI’s perspective, and Giaccardi et al.’s (2016) and Giaccardi’s (2021) work on things as co-ethnographers – show the value of ‘working with algorithms’ and taking their viewpoint seriously for research. There is more work in this area, but my selection of these examples highlights the possibilities for geographers. Of course, we can ask questions of many objects, not just digital technologies underpinned by algorithms, but the value of this perspective for algorithms and networked objects is apparent, as Giaccardi et al. (2016) note, in the data they collect, the perspectives they bring, the stories they can tell and the way they communicate with us. There is also a difference in the accelerating use of networked devices and the range of spaces in which they exist, which makes them more broadly useful than many non-networked devices that help us in our research. The pervasiveness of networked devices therefore makes this a critical moment to think about the potential of algorithmic thought in our work.
There are practical benefits to enlisting algorithms and digital technologies as co-researchers and collaborators, especially in the current climate where our access to fields in which to work, and the time in which we can stay in them, is restricted. We can send an algorithm into the field, utilise a platform to assist our research and ask what our devices saw. Of course, this raises ethical issues, but researchers already use ethnography apps, sensors and other devices to gather data. Taking this epistemological reconfiguration seriously, we would also have to ask what is ethical for the technology involved. What would life look like from the point of view of your smartphone, and would it want to share that with us?
Are there certain things we should not ask it, just because it can find out? Much of the critique of algorithms and their harmful effects is based in questions of inherent bias and the algorithm’s aura of objectivity. As Powles (2018) notes, however, diverting all our attention to bias can have perverse consequences. Addressing bias by training algorithms on under-represented groups, often minorities, can leave those groups more readable by machines and further harmed by algorithmic judgements. Powles uses the example of facial recognition systems that have difficulty identifying women of colour because they are under-represented among system designers and in training data – retraining the systems to ‘see’ these women could lead to worse outcomes (Powles, 2018). Working with algorithms in ways that care requires thinking critically about what subjects we are asking them to interrogate.
Should we re-evaluate our expectations of algorithmic collaborators and treat their insights as we would those from our human colleagues, including recognising their fallibility? We can look to Amoore’s (2020) cloud ethics for guidance, particularly in the acknowledgement of the doubtful and partial subject – a subject that is never fully recognisable – meaning that algorithmic decisions are never wholly beyond doubt (Amoore, 2020: 151–152; see also Haraway, 1988). Relatedly, there is a need to stay with the opacity of algorithms, which, following Haraway (1988: 586), Amoore argues does not limit the accountability accorded to transparency but rather holds us to account for that which we learn to see – ‘we’ in this sense being the composite subject produced through human and machine learnings (Amoore, 2020: 166; Haraway, 1988). Perhaps we need to ask our algorithmic collaborators better questions.
Working with algorithms is not always going to be easy. Returning to Puig de la Bellacasa’s (2012: 205) thoughts on thinking with care as encompassing unavoidably thorny relations, we must prepare ourselves for the debates and negotiations that working with algorithms will entail. These tensions will run between people and algorithmic collaborators alike, but we need to be open to dissent and conflict if we are to generate new knowledge and more hopeful algorithmic futures. Our relationships with algorithms exist on a continuum, and just as we can critique them for their harms and extractive capabilities, we can also benefit from our interactions. An algorithmic epistemology opens us up to the speculative efforts that Puig de la Bellacasa (2017: 16) observes are necessary if we are to decentre care relations towards more-than-humans without being confined by traditional humanist categories of thought. Situating algorithms as collaborators with their own, albeit imperfect, knowledges does some of this speculative work, encouraging us to think about relations of obligation and reciprocity beyond human-centred techno-science.
VI Conclusion
Geographers are well equipped to deal with the various roles, behaviours and personalities of algorithms. As illustrated above, there is comprehensive and excellent work addressing both the harmful and, at other times, beneficial outcomes of algorithms. To bring us back to Del Casino et al.’s (2020: 606) observation, however, we could spend more time reimagining the subjectivities, relations and potentials of algorithmic life. Paying attention to these lively potentialities, I have identified algorithms that ‘work with us’ as an emergent area of geographical study, one that can be inspired by work being done in design anthropology and HCI. Acknowledging the effect of algorithms can help us to reframe our dealings with them, think differently about them and perhaps ask new questions of them. In doing so, we can gain new and exciting insights into the way they shape the world. The idea of algorithms as collaborators and co-researchers opens possibilities for understanding the entangled practices that produce either harmful outcomes or, alternatively, the outcomes more attuned to care discussed above. Given this, I want to sketch out two ways in which thinking about algorithms as collaborators, and acknowledging both the potentially harmful and the careful work they can do, can progress geographical thought and the debates it can contribute to.
First, turning our attention to practices of care, what if reframing our relationship with algorithms demanded a reciprocity of care? The resurgence in research on practices of care in geography (see Power and Mee, 2020; Power et al., 2022) offers a hitherto underexamined approach to understanding algorithms. We could apply this perspective to our relationships with algorithms across a broad spectrum. We know, for example, that AI is resource intensive and that our practices of production and consumption of algorithmic objects are unsustainable (Crawford, 2021). What opportunities would there be to counter this if we truly cared for our algorithmic co-workers (and the things that make them)? Although changing a culture of resource extraction and consumption underpinned by planned obsolescence is a mammoth task, as individuals we could show care by running updates, looking after and repairing our devices, so that we do not need to replace them as frequently.
Both harm and care are entangled, and this also comes into focus in the way we train algorithms to learn and in the questions we ask of them. As Michelfelder (2020) observed, our interactions with digital assistants help them to better help us, so how could we devise better lessons and ask better questions of them? Would this help us negate some of the harmful impacts?
Second, and perhaps most thrilling, are the epistemological implications and methodological opportunities that reframing our relationship with algorithms brings. By thinking of algorithms not just as tools but as co-ethnographers or collaborators, we are given access to a world previously not open to us (Giaccardi et al., 2016). What situated knowledge can the algorithm bring? This is more than a reframing; it is an epistemic shift that demands we decentre our perceptions of cognition as conscious and sapience as human. Doing so opens us to exciting new perspectives and ways of seeing the world. It becomes part of a bigger project on algorithmic thought that refuses biocentrism and can generate space for speculative statements that not only allow us to ask different questions but can help to further critique and actively decolonise techno-science epistemologies (Parisi, 2021: 17). Related to this is a question of ethics: how much of its worldview should we ask and expect an algorithm to share with us? Geographers should endeavour to ask these questions in their work and to consider where they are best placed to bring on an algorithmic collaborator. As mentioned earlier, our responses to continuing research during the pandemic have created ample space for working with algorithms.
Finally, and perhaps grappling most with the concerns around harmful algorithmic effects, what can we gain from thinking about algorithms as being as interesting, and as fallible, as our human colleagues? The partial subject sketched out by Amoore’s cloud ethics is instructive. We don’t expect our colleagues never to be wrong but, at the same time, we don’t seek to ‘unblack box’ them to explain why they made a mistake, even if their imagined black box could tell us. Accepting that our algorithmic colleagues bring a particular situated viewpoint, and realising that this is only one perspective, we can add theirs as an additional voice to our empirical material. In doing so, we also push back against claims of algorithmic objectivity.
Perhaps we should not ask questions of algorithms that we expect will help us argue why they are biased, good or bad. Rather, acknowledging that they too have knowledge that is situated means that sometimes they cannot answer, or that the answer is not the one we want. And we must become comfortable with that.
