Introduction
The site description of the YouTube channel @FilmThe RobotsLA asserts it has been ‘Putting delivery robots in their place since 2022’. Following small four-wheeled delivery robots in Los Angeles neighbourhoods, each machine with its own individual name and front-facing lights designed to look like eyes, the narrator speaks to them in teasing tones. In one post from late 2023, titled ‘Maximus lacks intelligence’, a robot is reminded of the time ‘he flipped himself over’ trying to mount a curb. A flashback to the incident shows Maximus trying to roll up a step while the commentator attempts to dissuade the machine. ‘Maximus, your wheels cannot get you up that step … look at him, his wheels are spinning … you better reverse out of there …’. Maximus reverses and tries again: ‘You’re going to try again?! It’s not going to work!’, and as the hapless Maximus tips back onto his rear side, the narrator says ‘Maximus, what the hell did you do?! He looks like a f**king dead beetle looking up at the sky’ (@FilmThe RobotsLA, n.d.). The video ends with a shot of Maximus lying in the street, wheels spinning helplessly.
The scrapes that Maximus and other delivery robots get into illustrate some of the challenges that robots face as they begin to enter the complex, unpredictable, inhabited and material spaces of our shared cities. They also display the range of affective registers with which people might respond to robots – the disdain and ridicule in @FilmThe RobotsLA shows how robotic function is not a straightforward matter of the technologies ‘working’ or ‘not working’, but rather a complex and emerging tangle of material, digital, spatial and affective qualities and forces. For one author, watching Maximus being teased by the narrator clearly highlighted that the autonomy of the machine was vulnerable to the conditions of its operation.
Accordingly, in this article we propose a concept of contingent autonomy.
To do so, we think with robots, a form of technology that is becoming embedded in automated and autonomous infrastructures of many different types. We treat robots as networked, semi-autonomous machines that can sense something of their surroundings and respond independently to those conditions, and we focus on robots that are present (or that will be present) and often mobile in our shared urban spaces. This is to distinguish them from the many other uses of automated and autonomous technology (such as chatbots) that are also referred to as robots, a distinction we make to foreground the material and physical presence of robotic bodies and the spatial effects and affects they can prompt. While we do not develop an in-depth ontology of robots in this article (although see Sumartojo and Lugli, 2021), we treat them here as physical, spatialised objects and as material participants in complex relations enacted through digital, algorithmic, spatial and socio-cultural forms of connection and power.
This article develops an account of what we call robots’ contingent autonomy.
Our work is on the spectrum that runs from automated technologies that have little autonomy, even if they have some sensing and response capacities (such as factory manufacturing or processing machinery), to much more autonomous technologies with artificial intelligence (AI)-driven decision-making and action (such as autonomous driving systems). However, we do not seek to limit our arguments to either fully automated or completely autonomous applications, but rather offer the concept as one that can be used in multiple settings.
We have developed our concept of contingent autonomy through a 2020–21 study of robots in public space (Sumartojo et al., 2022; Tian et al., 2021). In this interdisciplinary project, we combined an ethnographic focus on what had happened (or was happening) in people’s lives, with a design orientation to speculative and inventive futures through processes of making robot behaviours by programming them. We explored people’s thoughts, feelings and ways of imagining the future of robots, a feature of how design ethnography can attend to ‘the anticipatory and future-focused modes through which we live … [and] the uncertain, experimental and experiential characteristics of our … engagements with the world’ (Pink et al., 2022: 2).
This approach was well suited to surfacing people’s feelings and understandings of robots (in our study’s case), along with people’s interactions, manipulations and negotiations with these technologies. That is, by programming simulated and real robots for various scenarios, participants were able to learn about robots and the extent of their capacities, and the research team was able to reach insights about how people understand robots, and how these technologies might be productive of the spaces where they are deployed. By locating our inquiry in an imagined public space, we sought to move away from the more controlled or limited settings where robots have historically performed the best, such as factory floors. Indeed, new generations of robots are now in use in much more complex and less predictable settings, and so must manage a more uncertain and contingent world. For example, recent research has explored their roles as and in infrastructure at sites including airports (Lin, 2021) and city centres (Cook and Valdez, 2021), in industrial applications such as mining (Bissell, 2021), and as part of how care is enacted in health settings (Del Casino, 2016).
The study took place during strict coronavirus disease 2019 (COVID-19) lockdowns in Melbourne, Australia, which meant that the study needed to take place online because of restrictions on the movements of both research participants and researchers. Moreover, because robots are not yet common in the lives of our participants, the approach we adopted relied on speculation: for most people, robots are still understood through imagination rather than practical, first-hand experience (Sumartojo et al., 2021). Indeed, the uncertainties that came with the COVID-19 pandemic, and the multiple ways that it played out in people’s working and family lives, health, mobility and feelings about the future, only highlighted the contingency of the world in which robots already or will exist.
In the next section, we bring together understandings of technological autonomy to show where the insights from our robot study might contribute to this scholarship. We then discuss some of the ways that our research participants reckoned with aspects of robotic autonomy, the limits and capacities they explored, questions of responsibility, and the terms in which they described this aspect of robots. In the final section, we return to a notion of contingent autonomy, and show what it implies for our relations with automated infrastructure.
Agency and autonomy
The COVID-19 pandemic has been a powerful example of how uncertainty can play out in myriad ways from the individual to the global scale. Its unsettling of many previous certainties about our everyday lives included highlighting the precarious and contingent qualities of infrastructure. As planes stopped flying, state borders closed, and healthcare systems struggled, these systems were revealed as contingent, shaky and unpredictable. The pandemic offered both a reminder and an example to pay attention to the function and design of the often-invisible infrastructures that organise and shape our everyday lives, even when they are not at the forefront of our attention.
As an example, throughout much of 2020 and 2021, Australia’s national borders were closed, which made international travel outside some limited exemptions almost impossible. Those people who could enter the country were subject to quarantine rules that differed depending on the point of entry. The new system required new physical and technological infrastructure of isolation, as well as the provision and surveillance of new arrangements – including proposals to use robots to monitor hotel corridors where recently arrived people were quarantining. These arrangements changed quickly and unpredictably depending on the spread of COVID-19 and its virulence in different parts of the country. Along with being a social, personal and emotional strain, this was felt as an unsettling of infrastructures previously taken for granted.
Our arguments for a notion of contingent autonomy grew from the profound and heightened uncertainty that typified the experience of COVID-19 for many people around the world. They also are inspired by accounts such as Rose’s (2017) that consider how AI and other automated technologies are ‘intelligent’ specifically in terms of their agency. Rose (2017: 781) asserts that ‘digital technologies in cities are radically reconfiguring agency both technological and not’, highlighting the importance of posthuman agency that shows how it is ‘always already co-constituted with technologies’. Indeed, the notion of posthuman agency highlights the entanglement of technologies with people, where ‘machines have made thoroughly ambiguous the difference between natural and artificial, [and] mind and body’ (Haraway, 1991: 152). Braidotti (2013: 91) likens this to a ‘machinic vitality’, and advocates for a vision of machine-human relations that foregrounds ‘becoming and transformation’ rather than ‘inbuilt purpose or finality’. Focusing on built environment design, Parisi (2013: x) discusses the use of algorithms as similarly enlivening the design process with their own forms of agency: ‘automated processing is not predeterminate, but rather tends toward new determinations.’
Drawing together these conceptual traces, we can sketch an account of technological agency that is entangled with human (and non-human) activity and thought; that is apprehendable through how it touches and is articulated in actual places like cities; and that holds open the possibility of ongoing change, rather than a final goal or pre-determined end game. Moreover, we make a sideways move from agency to autonomy, conceptualising autonomy and the perception of it as, first, contingent on the spatial contexts in which robots are situated.
Taking this approach, we extend notions of robotic autonomy beyond those that rely on the inbuilt and designed properties of the robot itself, akin to what Suchman (2002: 92) describes as ‘a view of objective knowledge as a single, asituated, master perspective’. If autonomous technologies are conceptualised abstractly outside the spatial contexts or moments in which people and robots encounter each other, then we risk missing the complexity of what is happening in these moments, and what people think and care about what is happening. Indeed, Sumartojo and Lugli (2021: 8) argue that ‘autonomous or smart technologies … are always reliant on and indeed constituted by their contexts’. Moreover, and as COVID-19 reminded us, certainty about what might happen in the future is always impossible. It follows that uncertainty plays out and is experienced spatially, and therefore our account of robotic autonomy is contingent on the things that happen in place.
Second, we concur with the recent focus on the unique moments of encounter with digital systems, challenging often implicit assumptions that digital systems function the same across time and space. This helps reframe perceived intelligence not as a trait of a system but as a contingent and relational achievement within an encounter.
Together, this work unsettles purportedly ‘smart’ technologies, arguing that their apparent intelligence is not an abstract quality, but rather is always a matter of how, when and where they are encountered. Rather than operating in seamless or uniform ways – a perspective that centres the technology itself, by prioritising the conventional narratives of programmers, designers and engineers that laud technological capacity – automated and autonomous technologies instead are located and produced in the specific moments when and where we encounter them. The circumstances and conditions of these encounters also shape how we make sense of robots as automated, autonomous, or somewhere in between. This approach, as Bissell (2021: 368) points out, can help develop ‘more multiple and transitional understandings of automation, rather than something that is fixed or known.’ The frame of encounter also allows us to consider the different subjectivities at play in such encounters and reflect on ‘whose perceptions of AI systems get to matter in dominant discourses’ (Lynch, 2022: 2). Encounter also offers rich methodological potential, and while we do not discuss methodology in-depth in this article, the research that we draw on was intended to enable direct encounters between our research participants and a robot that people would seldom encounter in their everyday lives in the context where the study took place (for a detailed discussion on methodology, see Tian et al., 2021).
This gestures towards our third point: responsibility.
Bringing this to an account of robotic autonomy, who or what is responsible for what robots do, and the outcomes of those actions, is similarly contingent. That is, responsibility depends on multiple factors that emanate from the conditions of robots’ design, production and programming to the spatial contexts in which they are situated and where we encounter them (and where they encounter the world), and the computational orders that structure their behaviours. An implication is that, as robots become more autonomous, the accumulative processes by which the algorithms that help to govern them act will become diffracted through the ongoing actions of the algorithm, and the increasingly massive data sets that help them make decisions. Moreover, this form of responsibility unspools over time, as actions accrete, or new understandings are brought to past activities through algorithmic writing and rewriting. Add to this the varied environments and encounters that feed the data sets that extend algorithms into the world, and the contingency increases.
Since we cannot possibly predict or anticipate everything that robots will make actionable, risk reduction approaches that seek to prefigure the world (Kinsley, 2012) are unreliable strategies for accountability. A more complex rendering of responsibility relies on and emanates from the other two factors we are interested in: the contexts where robots operate, and the conditions of the encounter that bring them into proximity with people. We are not suggesting that it is impossible to take responsibility for robotic actions, that responsibility will inevitably be diluted over time or through the algorithmic hunger for new data points, or that no-one or nothing can be held accountable. However, who or what can be held responsible for robot actions, as we will show, cannot be understood in isolation from the emergent contexts and encounters through which they operate and are made sense of.
In the next section, we turn to the research materials that have helped us compose this account of contingent autonomy. We explore what forms of relationality robots introduce, and in particular how people understand them as somewhere on a spectrum of automated and autonomous. First, however, we offer a brief explanation of the study’s methodology.
A speculative co-design workshop for investigating robots
This article is based on an interdisciplinary project that investigated how people make sense of robots encountered in public space (Tian et al., 2021). We used a range of digital interfaces (see Figures 1 and 3) – Zoom-based video, a new robot programming interface and a simulated environment – and then a real Pepper robot (Figure 2) in later phases of the project, asking our research participants to engage with these remotely.
Figure 1. A robot programming interface (L) run by one of the researchers in response to participants’ instructions, and the Pepper robot (R) responding to the programming. This screenshot is from the researcher view as we tested the workshop set-up.
Figure 2. A Pepper robot.
Figure 3. An interview with a research participant with their screenshot of a Pepper robot in a simulator. The programming interface is on the right side.
The phase that this article considers was an online workshop during which 12 research participants worked in groups of 3–4 to imagine a public space scenario where they might encounter a robot and decide on a role for it within that scenario. Participants included university staff and students in different disciplines, most of whom had little experience with robots or programming. We used a Pepper robot (Figure 2) as an example to think with, because the team had the requisite programming expertise. Pepper is a 120 cm-tall semi-humanoid robot, made of white, shiny plastic, with expressive arms and hands and a mobile, rolling base. Designed to be ‘friendly’ in its interactions with people, it has voice and face recognition capabilities and a touchscreen display on its chest. Pepper is designed as a platform technology – an interactive robot that can be readily programmed and used in a range of experimental settings.
Examples devised by the research participants included cleaning tables at a busy cafe or helping to reunite lost children with their carers in a shopping centre. With one of the research team acting as a ‘translator’ to quickly implement their ideas, participants used a simplified programming interface, specifically created for people with little programming knowledge, to realise the roles they had imagined. They watched their programmed behaviours in an online simulator and, across several iterations, refined their programmed behaviour. In a later phase (see Figure 1) their programmed actions played out on a real Pepper robot, but in a university lab that was remote from our participants – that is, they could see the actions on a real robot, but were not in the same room. While this required a lot of imagination on their part, because the simulator environment was a simple grid, and the material and sensory qualities of the robot were distanced in later workshops, it enabled rich conversations about what could possibly happen if a robot was deployed in a real public space setting.
The workshops were intended to probe participants’ feelings and assumptions about the robot’s capabilities and to explore what they thought it should be used for. Since most people did not have first-hand experience with robots, they were developing and articulating their views on robots as they emerged through the making process. That is, programming-as-making helped people explore the robot’s abilities within particular imaginative contexts and also consider what they felt were appropriate robot tasks and behaviours, as well as how they felt when the robots did not do what they expected.
During this process, we asked participants to take a handful of screenshots at moments during the workshops that they thought were most interesting, and then interviewed them via Zoom a few days later, using their screenshots as prompts. In these follow-up interviews, we asked them why they took the screenshots and what the images showed, and this prompted a discussion about what they found out about robots during the workshop. We chose this approach to explore how participants thought about the possibilities of robotic capacities as they developed behaviours for them, rather than finding out their existing understandings of robots. These interviews form the basis of the discussion in this section (see Figure 3).
The robot programming resulted in autonomous behaviours that research participants could sequence together, and the programs made use of machine learning or AI in the object and face recognition components. Therefore, despite the simulator environment, this helped us understand the contingency of autonomy: most people were not aware of the actual capacities of robots, so the extent to which robots are autonomous was not clear to them. Instead, this autonomy was made sense of, as we will show, through the contexts in which robots are situated and encountered.
The intention of this research design was to see what would happen when people were asked to ‘make’ robot behaviours. Inspired by practice-based design research approaches, it built on the principle that making is a form of creative and unpredictable coming-to-know about the world, where people engage directly with the raw materials of things (in this case, programming codes, computer screens, simulated and real robots and a group Zoom call) and learn through practices of creation. Ingold (2013: 6) describes this as an ‘art of inquiry’ where ‘the conduct of thought goes along with, and continually answers to, the fluxes and flows of the materials with which we work.’ In this sense, the technologies that we used to conduct the project were productive of what was able not only to be articulated, but also imagined. They were a crucial aspect of the encounter with robots that research participants engaged in, and the insights we were able to reach as we joined them in that encounter.
In our workshops, speculation about possible futures through programming-as-making enabled new forms of relation to come to light – where the making process introduced possibilities that may not have been thought of before. Speculative making here is not an outcome of positivist pre-figuration, where we think about what might be ‘out there’ in the future, and then materialise or visualise it. Rather, it is a way of going along with the world to find out how the future might feel, if things come together in particular ways. Here, speculative making is not akin to prediction, but to open experimentation with possibilities. The looseness of speculation as a stance towards the future, following Dunne and Raby (2013: 2), ‘is to use design as a means of speculating how things could be … to create spaces for discussion and debate about alternative ways of being, and to inspire and encourage people’s imaginations to flow freely.’ The somewhat improvised nature of the on-screen encounter that participants engaged with – a mash-up of a simulator and programming interface within a Zoom screen-share where participants could also see each other – resonates with Ash et al.’s (2018: 167) definition of an interface as ‘relational systems composed of multiple parts, each of which communicate with one another and the user to create a range of affective, habitual and often un-reflected upon responses’. While the experience of the interface is not the focus of this article, the design of the workshop certainly relied on screen-based engagement that was spatially, materially and affectively distinct from the ‘real-life’ encounters we had planned before COVID-19 restrictions scuppered our initial research design.
Our exploratory methodology, where participants decided on the best robot behaviours and then sought to enact them in the simulator or a real (but remote) robot, supported the emergence of the arguments in this article about contingency precisely because it worked in a speculative mode. In this way, we learned that even lab-based experiments, if designed carefully, can generate wider conceptual insights. By leaning into uncertainty – when the robots ‘failed’ or the programming did not perform as participants expected – we were able to think with forms of contingency that are folded into everyday life and our mundane encounters with multiple forms of technology (see also Pink et al., 2022: 172).
Encounters with robots
The participants in our workshops had a range of experiences with robots and programming. For some, this was the first time they had tried programming, and to help with this, we used a new interface called RIZE (Robot Interface from Zero Expertise) that presented on the screen as a set of interlocking pieces that could be built up into a sequence that drove the robots’ actions (see Figures 1 and 3) (Tian et al., 2021). One member of the research team with robotics programming expertise was assigned to each breakout group as a translator, to help quickly move through the programming steps.
This interface was the process through which participants came to understand the robots’ capacities. In Ingold’s (2013) terms, it was an important material with which people could think about what a robot was capable of, and what those capacities depended on. For example, one participant, John, explained what he had learned about robots from this process: I think the act of trying to build programs really did a good job of drilling in just how much you can’t assume in building out these behaviours. Because they really are these discrete step by step ‘if/then’ algorithmic actions. And so it requires really thinking about every building block of what might otherwise be a really simple action. What triggers it, what happens in what sequence, all those kinds of things.
Another way that our research participants explored this idea was by discussing how they encountered the robot through programming. As Daniele put it: I was kind of impressed by how much we have to break down the actions when we’re programming the robot. So something that for humans is basically understood as a single action, like someone comes for information and you give information, actually if you’re talking to a robot the robot has to ‘robot raise your head and look at the person’, ‘robot say hi’, ‘robot do this’, ‘robot do that’. So this is something that caught my attention. There were a lot of minor actions that were part of the bigger action. And those small things can only be perceived once we ran [the simulation].
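The decomposition that participants describe – a single human-scale action unfolding into discrete, ordered, trigger-driven steps – can be sketched in code. The article does not reproduce RIZE’s actual block syntax, so the following Python sketch is a purely hypothetical illustration: every function and step name is our own invention, not part of RIZE or Pepper’s software.

```python
# Hypothetical sketch of how 'give information' - a single action for a
# person - decomposes into a sequence of discrete micro-actions, each with
# its own name and place in the order. Names are illustrative only.

def make_step(name):
    """Wrap a named micro-action so the sequence can be logged and run."""
    def step(log):
        log.append(name)  # record that this micro-action fired
    return step

def give_information():
    # What a person experiences as one action becomes an explicit,
    # ordered list of 'if/then'-style building blocks.
    sequence = [
        make_step("raise_head_and_look"),
        make_step("say_hi"),
        make_step("listen_for_question"),
        make_step("speak_answer"),
    ]
    log = []
    for step in sequence:
        step(log)  # each block runs in strict order
    return log

print(give_information())
```

Running the sketch surfaces exactly what Daniele noticed: the ‘minor actions that were part of the bigger action’ only become visible once the sequence executes.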
Daniele went on to discuss how her understanding of robots changed during the workshop, and this is where the matter of autonomy came up explicitly. Maybe [the workshop] changed more my perception about people who make robots than the robots themselves. Because it’s really up to people to determine the way that they’re gonna act. And the fact that you don’t necessarily have experts in social interaction, designers, or anything like this, programming these robots, maybe the flaws in interaction, they come from people not considering this minutiae, so I wouldn’t blame the robot. I would always blame the people who told the robot it was ok. You don’t blame the kid, you blame the parents if it’s misbehaving.
However, the trajectory of this dynamic – programming based on some necessary level of prediction, deployment in an uncertain world, and the potential for diminishing of capacities as the world it is in changes – does not necessarily result in ‘failure’ as understood by the people who interact with the robots. As above, Daniele likened robots to children, in the sense that when they misbehave, the parents take the blame. Thus, she does not necessarily see the robot as failing because of some inherent flaw, but crucially, because it is not ultimately responsible for its actions. Instead, she holds the programmers accountable. An implication is that the robot’s autonomy is therefore understood as contingent on the programmer’s decisions, even though the programming often occurred before the deployment of the robot.
Moreover, that programming was also understood as specifically limiting. As Sue explained: We found it quite limiting because Pepper was in the humanoid form, and very much constrained by this tight set of behaviour patterns and expectations … it was a bit disappointing that we weren’t able to express ourselves fully through Pepper, and express what we wanted to. Researcher: What did you want to express that you couldn’t? I think anything beyond just the basic behaviours. So, I guess it’s that question of is it a robot if it doesn’t think for itself. And in that way Pepper’s got senses so she can sense when people are there and respond to certain questions and pick up on certain things. But it’s similar to those smartbot chats that you can have where you can ask them a question but if it’s anything outside of their comfort zone they won’t be able to respond to you.
Contingent autonomy
Together, the discussions with John, Daniele and Sue offer insights into how the autonomy of robots – and by extension the autonomy of other automated technologies – plays out in encounters amongst people, robots and sites, such as the speculative scenarios in our study. They show how, while programming relies on a logic of predictability driven by computational and engineering processes, in the real world these processes will inevitably not always play out perfectly, and indeed people have ways of understanding this as acceptable. As robots have moved away from performing fixed tasks in highly controlled settings, the contingencies and unpredictabilities of their environments have increased. Robots can potentially learn from previous experiences and make improvements, changing the limits of their ‘comfort zone’ (as Sue put it) with new encounters. However, the unlimited diversity of possible experiences means they will keep encountering novel situations where their capabilities are limited.
People can accept a robotic affordance of not always ‘working’, even if it is accompanied by a sense of frustration, confusion or disappointment. Robotic failure might feel like a let-down, but it is also broadly acceptable, as when Daniele compared robots to ‘kids’. However, while in a research setting such glitches might not be taken too seriously, the acceptability of such failures becomes more fragile with the development of autonomous technologies such as self-driving vehicles.
This suggests that, rather than thinking of robots as succeeding or failing, we instead need to find ways to think about their autonomy in the context of a world in process. This contingent autonomy begins in the imaginations and skills of programmers and designers, is stretched by the affordances of the technologies themselves, is articulated in the specific settings where robots operate, and unfurls in encounters with people in all their complexity. Not least, this underscores Suchman’s (2002) critique of developers’ ‘view from nowhere’ that shapes the capacities of autonomous technologies. It also is an argument that is made possible by thinking along with the materials and processes by which robots are made and work (Ingold, 2013). As we discuss in the final section, it implies that current conventional ways of designing and programming robots could use the notion of contingency to shape their efforts.
As we have been arguing, thinking about robots as contingently autonomous means attending to how they are situated or emplaced, how people encounter them, and the ways in which we might hold them responsible (or not) for their actions. Their autonomy is contingent in terms of their surroundings, which means we must recognise the emergence of the ongoing worlds that they inhabit along with other people, living species, technologies, forms of matter, and more. The perceptions of people who encounter them are also important, and these people might test their capacities in various ways, as in a study where children gradually attacked a robot in a shopping mall (Brščić et al., 2015), or even our own experiences as researchers standing in front of robots to see if they can sense us and to find out what they will do when confronted by a sudden obstacle. Indeed, not knowing what an object can do invites experiments with our own bodies, routines and imaginations, all of which emerge as entangled with the autonomy of robots in the moment of encounter. This is because technologies we encounter for the first time have ambiguous and unfamiliar capacities, so we often do not even know the extent of their ability to act autonomously; for Sue, above, this felt disappointing.
The point here is that the autonomy of robots does not sit outside how we encounter and perceive them. It is a relational and dynamic extension of emplacement and encounter, rather than a fixed capability inherent in the robot, its programming or networks. A version of this is recognised, for example, by people who design and develop autonomous vehicles, who rely on extensive ‘situated’ data collection, and do this through pilot deployments of vehicles to collect data and learn how to operate in a world inhabited by people, other cars and more, although the ethical, legal and environmental implications of gathering massive amounts of data are highly contested (Bhuiyan, 2023; Dave, 2023). However, at the same time, most of these research programs do not examine the perceptions of people who encounter these autonomous vehicles, beyond observing them as obstacles to avoid.
It follows that we can think of robots as subject to forms of ‘machinic vitality’ (Braidotti, 2013), where they are always becoming in and with the settings they inhabit, rather than somehow ‘finished’ when they leave the factory or even when they are updated by programmers. Indeed, as we have argued elsewhere: ‘In seeking to anticipate what the machine will encounter, and to be ready with a response to those conditions, robot programming treats the world's becoming as able to be known in advance – and in so doing, calls particular futures into being by defining at least some of the parameters of what can happen’ (Sumartojo et al., 2022: 61).
In this sense, our research participants understood the limits of robotic responsibility not simply in terms of robotic capacity, but stretched these limits to include how people should also be held responsible in two distinct ways. First, people are responsible for the limits of what they can imagine the robot will encounter, and therefore what they program it for. Second, people might also be part of how the robot operates, as unpredictable encounters between people and robots generate new understandings of robotic capabilities. Together, these reinforce our arguments for context and encounter as important aspects of how robots ‘work’.
Implications
This project was an interdisciplinary collaboration between colleagues with backgrounds in cultural geography, design and engineering, who together improvised an experimental research design in response to COVID-19 distancing rules in Australia. It was intended to investigate robots in public space and to find out how people respond both to the robots themselves and to the possibility of their presence in the future. The opportunity to work in an emergent speculative mode, exploring technological possibilities and adapting the research design as we went, reflects the very contingency that we discuss in this article. Indeed, our original research project, designed and funded before COVID-19 struck, was a more straightforward qualitative study of people’s responses to robots in public space. However, our iterative, future-focused and making-based approach made contingency apprehendable where more conventional or rigid research approaches may not have.
One implication of the work is that it shows how a speculative research mode is not only about materialising or stabilising possible futures, but instead a use of creativity to feel our way forward towards the forms of relation we want for the future. For speculative designers such as Dunne and Raby (2013), as in this work, speculation is less a positivist end point about future states, and more a means of opening up the kinds of relations and futures we might want.
Accordingly, and to conclude this article, we want to reflect on some implications for the design and development of robotic technologies. We build on the argument that abstracted and decontextualised ways of designing technology are ‘closely tied to the goal of construing technical systems as commodities that can be stabilised and cut loose from the sites of their production long enough to be exported en masse to the sites of their use’ (Suchman, 2002: 95). This stabilisation reduces technology to a standardised set of functions that promise to perform in the same way, no matter the context, which disregards both the reality and the crucial importance of context in how people determine whether technology actually ‘works’.
Moreover, automated technologies cannot fulfil the promise of perfectable efficiency, replicability or predictability, which so often characterises how people think about them or want them to work. This is an important point because it contradicts how robots are conventionally designed, where the settings in which they are intended to work are generally understood and defined solely in terms of the intended task of the robot. Working in a ‘simplified reality’ helps robots ‘to perform the right behaviour’ according to their intended aim, but it does not account for much of the complexity of the world (Babuska, n.d.). For example, in the lost child scenario developed by our study’s research participants, parts of the robot behaviours were designed to amuse a child while human help arrived. However, these behaviours did not account for where in the shopping mall it might be, such as near a loud and distracting food court, or what else might be happening in the space. Although this is a simplistic example, it illustrates how difficult it is to account for the complexity of the contexts in which robots operate, and shows the simplification of the world that can be reflected in robot programming. We are not arguing that this is a result of programmers’ limited imagination, but rather of the impossibility of this task; this is why autonomous technologies work best and most safely in controlled environments. Moreover, prediction forecloses and limits possibility, and despite the promise of predictive analytics, even if such prediction were possible, it would not be desirable.
However, treating autonomy as contingent helps open new possibilities for meanings, use and design of technology. Even when designers or programmers have to artificially freeze the world, or design to address a set of known problems, designing for contingency can keep open the possibility of meanings and uses that exceed what was anticipated.
Although this study was focused on robots’ immediate responses to a speculative scenario, with the use of more powerful AI in autonomous decision-making, a complex notion of contingency as layering over time is also required. Amoore’s (2020: 22) notion of the ‘authorship of the algorithm’ touches on this when she posits that it is ‘multiple, continually edited, modified, and rewritten through the algorithm’s engagement with the world.’ Over time, therefore, loops of writing and rewriting accumulate and create their own contingencies of how algorithmic processes related to autonomy are laid down. As robots become better at learning about their surroundings, and the writing of AI becomes more powerful in their operation, the extent of what their autonomy is contingent on will only grow.
We also question whether robots can be enrolled into existing regimes of accountability, or if they require a new rendering that is located in contingency. Could we explore a way of thinking about our interactions with robotic and other technologies as a form of ongoing negotiation, of processes of understanding and insight, but also of confusion and frustration – not a stable relationship, but one of constant change? This could include, as Bissell (2023: 1) avers, a sensitivity to ‘Dispositions [that] form more subconsciously over time through repeated encounters that give rise to specific bodily sensations, rather than being deliberately weighed up in conscious thought’. While this is quite different from the step-by-step programmatic logics (and productivist narratives) that most robots are currently governed by, as these technologies become more and more a part of our everyday lives and our everyday places, we need ways to think about them that do not bow to these logics but rather recognise robots as participants in the world without denying its uncertainty.
Robots are an example of how technology can pre-figure the world through programming, and thereby subtly work to bring a world that suits them into being – exemplified by proposed urban modifications such as dedicated lanes for autonomous vehicles. We instead need versions of robotic autonomy that do not seek to predict or control, but that take the world as it actually is. In discussing a feminist version of objectivity, Haraway (1988: 583) reminds us that the work we must do is about ‘limited location and situated knowledge, not about transcendence and splitting of subject and object. In this way we might become answerable for what we learn how to see’. This hints at the approach that we argue for.
Foregrounding spatial context in thinking about the design or function of robotics, and remaining open to the unperfectability of technology within those contexts, is important in relation to robots in public space precisely because such spatialities are themselves always in emergence. At present, robotics design is understandably focused on the function of robots, with an implicit assumption of some level of predictability about the contexts in which they will operate. However, we need robots that can operate amid the contingency of such spaces, rather than in spite of it.
Acknowledgements
Thank you to the research participants who shared their time and insights with us.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The project was funded by the Monash University Data Futures Institute.
