This article is part of the special theme on Analysing Artificial Intelligence Controversies. A full list of all articles in this special theme is available at: https://journals.sagepub.com/page/bds/collections/analysingartificialintelligencecontroversies
Introduction
In March 2023, the San Francisco-based start-up OpenAI released its latest large language model (LLM), Generative Pre-Trained Transformer 4 (GPT-4), to its subscribers, including an accompanying document warning about the potential negative consequences of this technology, 1 which led to a wave of media articles highlighting the real dangers of this ‘controversial’ technology for society. 2 In previous years, OpenAI had issued similar statements: in February 2019, the company initially opted against releasing the GPT-2 model (despite ultimately making it available several months later), due to ‘concerns about LLMs being used to generate deceptive, biased, or abusive language at scale’. 3 Similar concerns were made public in May 2023 by the Chief Economist of Microsoft, OpenAI’s principal corporate backer, who went on record to state that ‘I am confident that AI will be used by bad actors, and yes it will cause real damage’. 4 Such affirmations of the dangers, harms and costs to society of this new wave of so-called generative AI by its developers and backers can be called ‘disruptive’ in their own right, as they break with unwritten conventions that have underpinned public communications about controversial science and innovation in the last few decades: in notable technoscience controversies of the late 20th and early 21st centuries, concerning nuclear power, climate change and genetically modified (GM) food, the companies involved consistently denied, rebutted, deflected or severely qualified claims advanced by civil society actors about the harms and risks that their products created for society. What is the significance of these seemingly irresponsible affirmations by AI developers and backers of the disruptions, risks and harms created by their innovations, and what are the implications for the role and status of public controversies about contemporary AI?
One of the striking features of recent corporate ‘warnings’ about the dangers that AI poses for society is that they echo warnings made by AI critics over the last five years. When Geoff Hinton, the ‘Godfather’ of AI, went public with his concerns about AI in May 2023, citing these as a reason to end his employment at Google, it was pointed out by many, in newspaper editorials and on Twitter, that he had failed to show support when, in 2020 and 2021, the computer scientist Timnit Gebru and others on Google’s Ethical AI team were fired or left the same company citing closely related concerns, namely that Google’s LLMs present a source of significant harm to society. 5
In this regard, Hinton’s actions can be interpreted as a discursive strategy of appropriation—a way of disarming the criticisms of AI voiced by actors in society by adopting a modified version of those criticisms as one’s own, and mobilising one’s own authority as a credible spokesperson of science and business to occupy the channel of ‘public concern’ with AI, to the exclusion of out-group critics. 6
Indeed, this public communications strategy can be understood as reflective of wider, institutionalised relations between industry and state, in which Silicon Valley tech-company lobbyists seem assured that their ‘confidence’ that AI will cause widespread societal harm will not be sufficient cause for the state to restrict its development in ways that are adverse to their interests (McGoey, 2021). However, even if the affirmations of AI’s harmfulness for society by its developers and backers have featured prominently in recent public debate, we should not assume that public controversy about AI has thereby lost its capacity for inclusion and problematisation: whether it retains this capacity is what this article sets out to investigate.
To this end, we have worked with UK-based experts in ‘AI and society’ to identify, qualify and evaluate the most important and possibly overlooked controversies about AI of the last 10 years.
How to uncover the capacity of AI controversies for problematisation across the science/non-science binary?
Recent scholarship has drawn attention to the appropriation of critical discourse about technology and society by the contemporary tech industry. Phan and colleagues (2022) describe how, when AI scientists at Google like Timnit Gebru publicly contested the company’s lack of commitment to addressing the societal harms and risks associated with the LLMs it was developing, the effect was not to endanger the company’s reputation. Instead, issues of risk and harm became ‘ethicised’ and corporatised: they were reframed by the company as concerns that could be dealt with via internal processes centred on values rather than regulation. Appropriation of the vocabulary of societal risks and harms of technology can also be discerned in contemporary scientific discourse on AI. For example, the computer scientist Percy Liang observed during his introduction to the ‘Foundation Models’ workshop at Stanford University in 2021: ‘what I am seeing today is the very beginning of a paradigm shift, [in the area of large language models] and I think this paradigm shift is going to have profound implications […] there will also be heavy social consequences that will result from each technological decision’. 7
Such affirmations by AI proponents that LLMs, and indeed subsequent generative AI models, create new societal risks and harms point towards a radicalisation of a recent argument by Geiger and colleagues: that in contemporary capitalist societies, controversy about science and technology increasingly serves as a promotional resource.
In Geiger et al.’s account, techno-scientific controversies are increasingly deployed towards promotional ends. In the last decade or so, controversy has acquired a ‘positive’ association with the tech industry's notion of disruption (Geiger, 2020): the Schumpeterian idea that innovation done well is destructive of existing societal arrangements and will challenge social conventions. An example of this is Google Glass, which was widely reported in the media as controversial because of the fundamental concerns with privacy and surveillance raised by these camera-equipped networked wearables and the prospect of facial recognition. Reviewing these controversies, the tech magazine
In other words, the strategic use of AI's controversiality threatens the role of public controversy as a force for the democratisation of research and innovation, and risks undermining its capacity for inclusion and problematisation. The reasons for this are surely complex. For one, AI research has been marked by unusually close entanglements between academic and corporate actors.
This situation not only has implications for our understanding of AI controversies as occasions for the democratisation of knowledge; it also has consequences for the methodology of controversy analysis.
How, then, to analyse public controversies about AI under these conditions? In this context, we propose, the task of controversy analysis is not to ‘describe’ the positions and relations of actors involved in AI controversies, but rather to undertake an active search for the problematisations that AI controversy may still be giving rise to today. That is, our hypothesis is that AI controversies today continue to provide opportunities for the democratisation of science and innovation, by expanding the range of actors involved in AI debates beyond a narrow circle of insiders and by enabling the shared articulation of complex problems. However, such problematisations seem today in many cases crowded out in public discourse by promotional deployments of AI's controversiality, which use the language of AI harms and risks to consolidate the authority of techno-science and the reality of AI. We ask: how can we uncover controversy’s capacity for the problematisation of AI across the science/non-science binary?
Methodology: From the observation to the elicitation of AI controversies
It was clear to us, then, that if we wished to attend to societal processes of the problematisation of AI, we would need to diverge from predominant approaches in controversy mapping. Contemporary AI controversies do not fit the model of a generative empirical occasion: prominent media and expert debates about AI cannot be relied upon—in and of themselves—to bring to the surface qualifications of the societal consequences of AI or to make explicit actors’ relations and interests. The usual research strategy of controversy mapping, which is to query online media for controversy terms such as ‘nuclear waste’ or ‘GM foods’, and then to document the positions that actors take on the issue on Web pages and in social media, does not work in this case (see also Munk et al., this volume). Indeed, Twitter data that we collected using the query ‘AI’ between December 2018 and March 2021—which came to 36 million tweets—seemed to us unsuitable for controversy mapping: on initial exploration, this data set appeared to contain a great deal of publicity and little debate. If we were to use this data to create a ‘controversy map’, there was a real risk our analysis would merely end up reproducing solutionist and doomsaying claims about AI—and their associated authority and reality effects—without surfacing articulations of AI as a collective problem. To locate the latter, we decided, it was necessary to shift from the description to the elicitation of AI controversy. Declining to adopt the empiricist posture of patiently tracing contestations about AI wherever they emerge, we instead drew on design research methodology to devise strategies to actively elicit a distinctive type of controversy about AI: controversies with the capacity to problematise AI across the science/non-science divide. 9
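As an aside on scale: a first volume check of this kind can be run before any collection. The following minimal sketch is ours, not the authors’ code; it assumes academic-track access to the Twitter v2 full-archive counts endpoint as it existed at the time, and the query string and time window merely mirror the parameters reported above.

```python
# Sketch: gauging the volume of 'AI' tweets over the study window via the
# Twitter v2 full-archive counts endpoint (academic research track, as it
# existed at the time). Query string and window are illustrative.
import os
import requests

resp = requests.get(
    "https://api.twitter.com/2/tweets/counts/all",
    headers={"Authorization": f"Bearer {os.environ['TWITTER_BEARER_TOKEN']}"},
    params={
        "query": "AI",
        "start_time": "2018-12-01T00:00:00Z",
        "end_time": "2021-03-31T23:59:59Z",
        "granularity": "day",
    },
)
resp.raise_for_status()
body = resp.json()
# Each bucket covers one day; pagination via body['meta']['next_token']
# is elided here, so this sums only the first page of buckets.
print(sum(bucket["tweet_count"] for bucket in body["data"]))
```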
Design researcher Donato Ricci offered a helpful summary of this methodological re-orientation of controversy mapping: If we [typically] spend a great deal of time to seek the sources, detail the protocols and produce visual manipulation[s] to identify “
To structure this process of controversy elicitation, we adopted the following three-step research design: first, we actively configure a context for AI controversy elicitation, by inviting a specific community of experts to assist us in the identification of relevant controversies. As it is our objective to surface problematisations of AI across science and non-science, we decided to consult UK-based ‘AI and society’ experts. In a second step, we qualify AI controversies: we analyse how the controversies identified in the consultation unfold on Twitter, establishing their levels of disagreement, actor composition and forms of engagement. In a third step, we evaluate AI controversies: we invite experts to assess the ‘shape’ of the selected controversies in a design-led participatory workshop.
Convening an extended expert community in the UK: What are the most important and possibly overlooked controversies about AI in the last 10 years?
To initiate our analysis of AI controversies, we began, as is customary in controversy mapping, with a ‘query’ (Rogers, 2017). However, our initial query did not involve selecting and submitting a set of keywords to an online platform API, such as Twitter's, but rather took the form of an invitation email that we sent to 250 UK-based experts in ‘AI and society’. Our decision to focus on this community was in part informed by (1) our role in the international
Our respondents included civil society advocates and activists (Amnesty, EDRi, Article 19), civil servants from AI-related government units (Information Commissioner’s Office, Centre for Data Ethics and Innovation, NHSX), academics with backgrounds in digital humanities, social science and computer science, and participants from industry (AstraZeneca, DeepMind), the arts (Serpentine Galleries; Ambient Information Systems) and journalism (BBC, TechCrunch). That is also to say, our research design did not specifically target institutional outsiders, but rather focused on actors who are involved in the articulation of AI harms, risks and benefits in ways that cut across the science/non-science binary. To counteract authority effects, our consultation explicitly encouraged respondents to give their own perspective on what makes AI controversial, asking, among other things: (1) ‘What are the most important and possibly overlooked controversies about AI in the last 10 years?’
We identified three different types of responses to our consultation, which broadly correspond to the three questions we asked. A first set identifies (1) ‘controversial topics’, broad areas in which controversial AI developments take place, such as ‘Algorithmic decision-making’, ‘AI warfare’ and ‘Environmental costs’ (see Figure 1). A second set of responses mentions specific sites, incidents, events and objects of contestation: concrete instances in which AI caused disruption, trouble and, in several cases, demonstrable harm in society (Amazon’s biased hiring tool; data transfers without consent between the Royal Free Hospital and DeepMind; the persecution of Uighurs in China). We named these latter instances (2) ‘frictions’. Building on Meunier et al.’s (2021; see also Shaffer Shane, 2023) work on ‘algorithmic trouble’, we use the term ‘AI friction’ to denote instances of AI-related harms occurring in specific environments in society (roads, hospitals and schools), as distinct from more abstract and de-localised societal risks. In a third and last set of cases, respondents offered (3) problematisations of AI, which we loosely define as answers to the question ‘what is the problem with AI?’ and, more specifically, understand as attempts to articulate underlying contestations, difficulties and suffering associated with AI that arise at the limit of discourse (Barry, 2021) and which it may be challenging to name (two examples from our consultation: ‘The claim that AI exists today’; ‘Deploying non-transparent systems to make decisions that directly affect people’s lives’).

Figure 1. Frequency of occurrence of AI controversy topics in the UK expert consultation (Autumn 2021).
Especially striking in our results was the prominence of the aforementioned ‘AI frictions’ among the responses. While our consultation asked respondents to identify ‘controversial developments’ and ‘controversies’ in and about AI, a significant number (41 out of 53 respondents) identified concrete incidents involving AI in society: traffic accidents involving automated vehicles, the use of racially biased facial recognition by the South Wales and Metropolitan police, the downgrading of GCSE marks by algorithms during the UK ‘exams fiasco’. This suggests that in the area of AI and society, ‘controversy’—in the sense of the staging of public disagreement about a specific knowledge proposition—may not be the main form of contestation of AI, with the emphasis placed, instead, on the demonstration of concrete instances of AI-related harm to specific groups and institutions in society (students, BAME communities, the National Health Service (NHS)). Also striking, we found, was the comprehensive range of societal domains identified in the responses, from law enforcement, health, transport, the welfare state, education, media, democracy and the economy (including recruitment and work) to sports. The breadth of these application domains suggests that AI-related frictions now arise across virtually all sectors of society.
Our initial consultation results suggested to us that AI may today qualify as a super-controversy: AI controversies do not only take the form of public contestation of specific techno-scientific propositions, but arise from the linkage of specific instances of harm and risk arising in society with technical propositions and wider, structural concerns. However, it is important to note that the consultation also surfaced several knowledge controversies in the more familiar sense of the term, that is, public contestations of expert claims. The most frequently mentioned controversial development was facial recognition, a topic which gave rise to public disagreements in the UK from 2018 onwards, as civil society and academic experts asserted that facial recognition systems in use by the Metropolitan Police failed to meet accuracy standards, while the Metropolitan Police released an evaluation report which concluded that its facial recognition systems were accurate and not racially biased. 12 Mentions of other notable topics in the consultation, such as tracking and targeting, data, and corporate research culture, equally include references to expert disagreement, as in the case of the legality of data transfers without consent between the NHS and DeepMind, and to debates about controversial research papers, such as the Stochastic Parrots article by Bender, Gebru et al. discussed below.
Next, we made a distinction between controversy topics in terms of the degree to which they elicited detailed problematisations in the consultation, as opposed to being indicated only as a keyword, such as ‘surveillance’. What stood out for us here is that, while some respondents focus on naming broad topic areas, others articulate in detail what they take to be the underlying problem with specific AI developments.
To pursue the exploration of AI as a ‘super-controversy’, we proceeded to delineate AI controversies by identifying couplings of (1) frictions and (2) controversy topics (see Figure 2). This yielded five controversies:

Correctional Offender Management Profiling for Alternative Sanctions (COMPAS): a controversy about algorithmic discrimination in judicial systems, sparked by the ProPublica report ‘Machine Bias’ (Angwin et al., 2016).

NHS + DeepMind: a controversy about data sharing between UK public sector hospitals and big tech, sparked by the Powles and Hodson (2017) paper on Google DeepMind research in the Royal Free.

Facial recognition (Gaydar): a controversy about the use of machine learning-based image analysis to predict sexual orientation, sparked by Wang and Kosinski (2018).

LLMs (Stochastic Parrots): a controversy about bias in large neural network models for encoding and generating text, sparked by Bender et al. (2021), ‘On the Dangers of Stochastic Parrots’.

Deep learning (DL) as a solution for AI: a controversy about the capacity of DL—the use of trained multilayer neural networks with large numbers of weight parameters—to sustain the claims of artificial intelligence research (Marcus, 2018).

Figure 2. Friction-topic couplings identified in the expert consultation ‘What is controversial about AI?’ (Autumn 2021).
AI and society controversies on Twitter: Levels of disagreement, actor composition, forms of engagement
Having identified these five controversies, our next step was to analyse whether and how these controversies enabled the problematisation of AI across the science/non-science binary. Follow-up interviews with participants in the online consultation offered some indication of the relevance of this focus. In relation to the controversial collaboration between the NHS hospital the Royal Free (London) and Google subsidiary DeepMind, one respondent noted: ‘I think that's what happened with the NHS thing… [i]t sort of broke out of the confines of people who were interested in AI and privacy into something which had more general currency’ (Interview 14EH); another noted that ‘[T]he Google DeepMind Royal Free thing applies to my university. And I've spoken with people at my university here and I speak to people at DeepMind, basically, they don't see what the fuss was about… [s]o there's a really interesting… important point about who actually does learn lessons from this stuff’ (Interview 22JS). To ensure we captured the unfolding of AI controversies across societal domains, we decided to conduct the next step of our controversy analysis with Twitter data. Not only is Twitter a tried and tested setting for controversy mapping (Burgess and Matamoros-Fernández, 2016; Madsen and Munk, 2019), but at the time we conducted this research it was still a notable site of intersection between academic, journalist, activist and industry debates. Our interviewees confirmed this understanding of Twitter as a prominent forum for tech controversies during the relevant period (2012–2022), alongside the discussion forum Reddit and the instant messaging platform Discord. They referred to Twitter as a site that ‘has enough experts, it’s open, [and] general’ (Interview 12SL); ‘[i]t gives you a feel for an issue’ (Interview 14EH), although we were also reminded that ‘there’s a huge amount of virtue signalling in terms of what you argue with people about [on Twitter]’ (Interview 13CR).
We created tailored Twitter data sets for each of our five controversies, by designing Twitter API queries to capture discussions about the controversial publications identified through the consultation (see Figure 3). 14
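To give a concrete sense of this step, here is a minimal sketch of how such per-controversy collections might be implemented against the Twitter v2 full-archive search endpoint (academic research track, as available at the time). It is our illustration, not the authors’ code: the query strings, field selections and date ranges below are assumptions.

```python
# Sketch: collecting tweets about one controversy via the Twitter v2
# full-archive search endpoint. Queries keyed to the sparking publication
# are illustrative, not the authors' actual query strings.
import os
import requests

SEARCH_URL = "https://api.twitter.com/2/tweets/search/all"
HEADERS = {"Authorization": f"Bearer {os.environ['TWITTER_BEARER_TOKEN']}"}

# Hypothetical per-controversy queries.
QUERIES = {
    "stochastic_parrots": '"stochastic parrots" OR "on the dangers of stochastic parrots"',
    "compas": 'COMPAS ("machine bias" OR recidivism OR ProPublica)',
    "gaydar": '(gaydar OR "sexual orientation") (AI OR "deep learning" OR kosinski)',
}

def collect(query, start, end, pages=5):
    """Page through full-archive search results for one query."""
    tweets, token = [], None
    for _ in range(pages):
        params = {
            "query": query,
            "start_time": start,
            "end_time": end,
            "max_results": 500,
            "tweet.fields": "conversation_id,created_at,author_id",
        }
        if token:
            params["next_token"] = token
        resp = requests.get(SEARCH_URL, headers=HEADERS, params=params)
        resp.raise_for_status()
        body = resp.json()
        tweets += body.get("data", [])
        token = body.get("meta", {}).get("next_token")
        if not token:  # no further pages
            break
    return tweets

parrots = collect(QUERIES["stochastic_parrots"],
                  "2020-12-01T00:00:00Z", "2022-12-31T23:59:59Z")
```

Queries anchored to a sparking publication (its title, authors or URL) keep a collection close to the controversy while still capturing the surrounding debate.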
Next, to make sure our data sets were pertinent to our selected AI controversies, we manually classified ‘conversations’ in the datasets as either in scope or out of scope, retaining only those conversations that engaged with the controversy in question for further analysis.

Figure 3. Twitter data collection for five AI research controversies.
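The scope classification just described was done by hand; purely to illustrate the bookkeeping involved, a sketch along the following lines could group tweets into conversations and round-trip them through a spreadsheet for manual coding (file and column names are ours, hypothetical):

```python
# Sketch: grouping tweets into conversations (via conversation_id) and
# exporting them for manual in/out-of-scope coding.
import pandas as pd

tweets = pd.read_json("parrots_tweets.jsonl", lines=True)

# One row per conversation; first tweet text serves as a preview for coders.
conversations = (
    tweets.sort_values("created_at")
          .groupby("conversation_id")
          .agg(n_tweets=("id", "size"), preview=("text", "first"))
          .reset_index()
)
conversations["in_scope"] = ""  # to be filled in by hand
conversations.to_csv("parrots_to_code.csv", index=False)

# ... after manual coding ...
coded = pd.read_csv("parrots_coded.csv")
in_scope_ids = set(coded.loc[coded["in_scope"] == "yes", "conversation_id"])
tweets_in_scope = tweets[tweets["conversation_id"].isin(in_scope_ids)]
```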
We began by coding conversations for controversy topics, which first of all revealed significant shared concern across all controversies with ethics, knowledge and social justice. 16 Other frequent themes are political economy as well as data and data protection (see Figure 4). Generally speaking, this topic distribution is not dissimilar from our consultation findings. On Twitter, too, the impacts of AI deployments on disadvantaged communities featured as an especially prominent topic, as did the corporate ownership of AI infrastructures and related barriers to the evaluation and regulation of AI in society. Data featured prominently both in debates about rights (privacy) and regulations (including GDPR), as did epistemic challenges of reliability and transparency of data processing within AI-based infrastructures. Epistemic concerns about the quality of outputs of AI systems, and a lack of adherence to standards of scientific rigour in AI research, also featured in many conversations, topicalising scientific quality. All controversies, finally, gave rise to society and justice debates, which define harmful impacts of AI on society in terms of discrimination, entrenched privilege, racism and societal inequality.

Figure 4. Most frequent themes across the five Twitter controversies.
We next tried to establish the degree of controversiality of the identified topics, by coding all in-scope conversations for their level of disagreement and then assigning these codes to the topics addressed in these conversations (see Figure 5). 17 Perhaps unsurprisingly, our analysis shows a general trend towards disagreement: topics in all controversies tend to be placed towards the more contentious rather than the non-contentious end of the spectrum. We note that the level of disagreement is correlated with the volume of tweets produced on a certain topic: in COMPAS, the three largest topics by volume—bias, prediction and racism—tend to appear in conversations with higher levels of contention. To further qualify the controversiality of topics, we considered the ‘levels of engagement’ for each conversation: the degree to which conversations elicited long and/or wide threads of replies on Twitter (more about this below). We found that contentious topics do not automatically correspond to a high level of engagement. It is true that in COMPAS, Parrots and DL as a solution for AI, the top topics by engagement tend to lean towards higher levels of contestation. However, we found the highest levels of disagreement, first, for science-related topics that belong to the theme of knowledge (light green in the visualisation) and, secondly, for topics relating to society and justice (in black), political economy (in blue) and bias (in orange).

Figure 5. Distribution of the most frequent topics per controversy according to their level of disagreement.
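The summary statistics behind such a figure are straightforward to compute once conversations have been hand-coded. A minimal sketch, assuming a table with one row per in-scope conversation and illustrative column names (‘topic’, ‘disagreement’ as an ordinal 0–3 code, ‘n_tweets’):

```python
# Sketch: summarising hand-coded conversations per topic and checking
# whether disagreement tracks tweet volume, as observed for COMPAS.
import pandas as pd

coded = pd.read_csv("parrots_coded.csv")

per_topic = coded.groupby("topic").agg(
    n_conversations=("disagreement", "size"),
    volume=("n_tweets", "sum"),
    mean_disagreement=("disagreement", "mean"),
)

# Rank correlation between topic volume and mean disagreement level.
print(per_topic[["volume", "mean_disagreement"]].corr(method="spearman"))
print(per_topic.sort_values("volume", ascending=False).head(10))
```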
Next, we examined the extent to which our AI controversies on Twitter facilitated interaction among heterogeneous actors, that is, exchanges across the science/non-science binary. To this end, we categorised the Twitter accounts contributing to each controversy using basic occupational categories (see Table 1). 18 We found that only a very small number of Twitter accounts appear in more than one controversy, suggesting that our five controversies mobilised different Twitter ‘communities’, which is noteworthy in light of the high topic similarity between AI controversies on Twitter in Figure 4. Considering the size of actor categories across controversies, we can see that some controversies lean more towards the ‘activist/media’ side, such as COMPAS, others more towards the ‘scientific/research’ side, such as DL as a solution for AI, and some are hybrid, such as the Parrots and Gaydar controversies, which bring together researchers and activists. NHS/DeepMind is the one controversy where professions (health) are prominent (see Table 2), and policymakers are relatively absent in all the Twitter controversies. As such, these findings demonstrate a truism from the sociology of knowledge, namely that the content of a controversy aligns with actors’ positions in society (Barnes, 1977); more on this below.
Table 1. Actor categories across controversies according to a zero-shot actor classification using a GPT-3.5-series model (see Footnote 18).
Table 2. Main topics of disagreement, main actors and overall form of engagement in selected AI research controversies on Twitter.
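As a rough illustration of the zero-shot actor classification reported in Table 1: a call of the following kind could assign an occupational category to each account from its profile description. The category list and prompt wording here are our assumptions; the authors’ actual setup is documented in their Footnote 18.

```python
# Sketch: zero-shot occupational classification of Twitter accounts with
# a GPT-3.5-series model via the OpenAI API (openai>=1.0). Categories and
# prompt are illustrative, not the authors' exact prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CATEGORIES = ["researcher", "activist", "journalist", "industry",
              "policymaker", "health professional", "artist", "other"]

def classify_actor(display_name: str, bio: str) -> str:
    prompt = (
        f"Classify this Twitter account into exactly one of these "
        f"occupational categories: {', '.join(CATEGORIES)}.\n"
        f"Name: {display_name}\nBio: {bio}\n"
        f"Answer with the category only."
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic labels for coding purposes
    )
    return resp.choices[0].message.content.strip().lower()

print(classify_actor("Jane Doe", "PhD student in NLP; ex-journalist"))
```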
Finally, the calculation of ‘levels of engagement’ for conversations enabled us to explore the extent to which AI controversies on Twitter enabled a widening of engagement with the issues at stake (inclusion) and the degree to which they instigated processes of problem articulation (problematisation). We took the ‘width’ of conversations (the number of direct replies that tweets in a conversation attract) as an indicator of inclusion, and their ‘depth’ (the length of reply threads) as an indicator of sustained problem articulation.

Figure 6. Forms of engagement: the ‘shapes’ of AI controversies on Twitter.
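Such conversation ‘shapes’ can be derived from reply relations alone. The following sketch shows one way to operationalise width and depth for a single conversation; this operationalisation is our assumption rather than the authors’ exact measure.

```python
# Sketch: deriving the 'shape' of a Twitter conversation from reply
# relations. Width = maximum number of direct replies to any tweet;
# depth = longest reply chain from the root tweet.
from collections import defaultdict

def conversation_shape(replies: dict[str, str], root: str) -> tuple[int, int]:
    """replies maps each tweet_id to the tweet_id it replies to."""
    children = defaultdict(list)
    for tweet, parent in replies.items():
        children[parent].append(tweet)

    width = max((len(kids) for kids in children.values()), default=0)

    def depth(node: str) -> int:
        return 1 + max((depth(kid) for kid in children[node]), default=0)

    return width, depth(root)

# A root tweet with two direct replies, one of which receives a reply.
example = {"t2": "t1", "t3": "t1", "t4": "t3"}
print(conversation_shape(example, "t1"))  # (2, 3)
```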
As to the overall findings of our Twitter analysis, we are especially struck by the strong emphasis on societal problems (racism, inequality) and political economy (market concentration, data appropriation) in the selected AI controversies, which aligns with the findings of our consultation. However, on Twitter, we also found a strong engagement with epistemic issues, in the form of concern with scientific integrity and the science/politics tension (research vs. advocacy) in AI research. The focus on these topics correlates with the social positions of participants in the controversy: epistemic concerns are topicalised in controversies with prominent participation of researchers, while controversies with strong activist engagement are more concerned with issues of regulation, ethics and justice (Table 2). While this alignment between discursive content and social position is something social studies of scientific controversy would lead us to expect (Barnes, 1977), a different aspect of AI and society controversies on Twitter does not align with this approach. Controversy analysts in STS have argued that techno-scientific controversies surface complexity and thereby disrupt received scientific and societal problem definitions (Callon et al., 2011). But AI and society controversies on Twitter rather seem to mobilise established scientific and societal issue frames (transparency, pseudo-science, racism, inequality). Should we conclude that AI and society controversies on Twitter consolidate entrenched problem definitions? In the conclusion, we will reflect on the significance of this for our understanding of AI and society as an area of super-controversy.
Materialising AI controversies: shaping controversies with design-led participatory methods
We cannot forget, of course, that Twitter analysis provides only a partial view of AI and society controversies.
A central feature of our methodology of controversy elicitation is to mobilise the standpoints of a situated community of experts to activate a collective process of problem articulation, which is a key affordance of controversy. In a design-led collaborative workshop that we organised in March 2023 in London (Figure 7), we took this process of situated elicitation of controversy one step further, inviting 35 UK-based AI experts from science, government, industry, activism and the arts, many of whom had responded to our initial consultation. Drawing loosely on methods of evaluative inquiry (Marres and de Rijcke, 2020), we designed a diagnostic exercise, in which we worked with participants to assess the five selected AI controversies in terms of inclusion and problematisation: the extent to which they offered opportunities for participation, made visible problems with contemporary AI, and enabled shifts in the balance of power within the wider domain of AI and society. During the workshop, the invited experts worked together in five small groups to evaluate the five controversies, supported by material props.

Figure 7. Shifting AI controversies: shaping and reshaping AI with experts in AI and society, Friends House, London, 10 March 2023.
After having been introduced to the five controversies, participants worked with a diagnostic tool that we had designed specifically for this purpose—dubbed the ‘controversy shape shifter’—to determine the ‘shapes’ of the selected AI controversies. Asking participants to cut paper strips to different lengths that corresponded to values assigned to controversy parameters, we invited them to determine: the relevance of the issues addressed, the degree of participation, the location of the controversy (situatedness), and the allocation of responsibility for addressing the problem (power and solvability) (see Figures 8 and 9). We offered the participants an explicitly normative overall framing for this evaluative exercise, by asking them to tell us: is the controversy in question in good or in bad shape? To support their evaluations, we provided participants with controversy ‘dossiers’, which included a timeline of events, an actor list, key documents and the Twitter analysis for each controversy. On this basis, they determined the controversy’s ‘shape’ by assembling long and short strips of cardboard guided by the evaluative grid.

Figure 8. The evaluative grid composed of five parameters: relevance, situatedness, power, participation and solvability.

Figure 9. Shaping of the ‘DL as a solution for AI’ controversy by AI and society experts.
Participants were also encouraged to add notes to their shapes—annotating the cardboard with pens—summarising their evaluations based on either the dossiers provided or their own knowledge and experience.
For the purposes of this paper, we want to highlight one important result of this process of shaping AI controversies. Many participant annotations drew attention to the fact that AI and society controversies operate on two levels at once. On the one hand, they uncover highly specific issues raised by AI systems (e.g., a flawed data sharing agreement between public sector and industry organisations; a problematic internal approval procedure for academic publication in the tech industry); on the other hand, they expose major structural problematics (the politicisation of science, abuse of power and inequality). While the COMPAS case, for example, focused on the bias of this scoring software and its underlying data against ethnic groups, the controversy equally exposed how the use of algorithmic systems in the public sector amplifies entrenched racial inequalities. The NHS + DeepMind case dealt with technical requirements on the legality of data sharing agreements (which were not adhered to in this case), but simultaneously flagged a major structural challenge of AI to both national and international public policy communities, namely the appropriation of public sector data by private companies, as well as Big Tech’s growing control over the creation of public sector data infrastructures.
During the workshop, debates about DL as a solution for AI highlighted the role of everyday publics in the creation of training datasets for neural networks through their digital participation. Moreover, what appears as a technical debate about the ‘architecture’ of predictive models turns out to have massive legal ramifications regarding the applicability of contemporary copyright law to data infrastructures in society. The Gaydar controversy about the use of neural networks to predict sexual orientation raised a major issue of societal harm by demonstrating that AI analytics could be used to expose people's vulnerabilities, but also highlighted that scientists seek to take advantage of specific ‘hype’ dynamics to attract attention to their publications. Finally, the Stochastic Parrots controversy was perhaps the most dense in its articulation of intersecting societal problematics, highlighting ecological impact (the energy cost of training LLMs), the marginalisation of ethical considerations in research debates, and the roles of women in corporate AI research. This suggests to us that AI and society controversies, while not necessarily unsettling established problem definitions in science and society, nevertheless articulate unsettling connections between specific technical propositions, situated frictions and entrenched societal problems.
Conclusion
Our examination of AI and society controversies via an online expert consultation, Twitter analysis and an evaluative workshop has enabled the identification, qualification and evaluation of the most important and possibly overlooked AI controversies of the last 10 years.
We tentatively draw the following conclusions regarding the capacity of AI and society controversies to facilitate inclusion and problematisation across the science/non-science binary. On the one hand, the AI knowledge controversies that we identified on Twitter demonstrate a degree of inclusion, insofar as they mobilised experts across science, journalism, industry and activism on Twitter. However, representatives of affected communities did not feature prominently in them. It seems that it is especially through the type of controversy topics articulated in the controversies in question that the expert/non-expert boundary is crossed: these range from issues of social justice to scientific methodology, politics of knowledge, ethics and political economy. We note that classic societal problem definitions, such as racism and inequality, remained prominent across these debates.
We therefore conclude that AI, despite the specificity of its techno-scientific arrangements and many of the associated problems, has acquired the status of a ‘super-controversy’ through AI and society disputes. Contrary to the expectation that public controversies about techno-science surface complexity and disrupt received problem definitions (Callon et al., 2011), AI and society controversies tend to link specific technical propositions and situated frictions to entrenched, society-wide problems.
In such a super-controversy, problematisation does not just proceed through the creation of heterogeneous associations between specific actors and entities, as actor-network theory suggested, but arises from the forging of connections between techno-scientific propositions (AI), situated troubles and entrenched societal problems. The demonstration of AI frictions seems to play an enabling role in this. In this regard, their significance should not be understood solely in terms of the particularisation of AI-induced harm, by associating harms with specific persons, experiences, deployments and environments. Equally distinctive about AI frictions is their connective capacity: frictions demonstrate to wider publics how ‘AI’, as a complicated techno-scientific domain of application, is nevertheless closely and intimately connected with society-wide phenomena, such as structural inequality. Through controversy, the firing of a single researcher can topicalise complicated connections between the political economy of knowledge production, social justice issues and epistemic questions of what constitutes scientific rigour. This is also to say that, in the face of the strategic affirmation of the controversiality of AI by its developers and backers, advocates and experts in AI and society have undertaken the problematisation of AI according to a different logic. We are tempted to call this logic ‘sociological’, insofar as it involves the demonstration of connections between specific technical propositions, contextual frictions and structural problems across social domains.