This article is a part of special theme on Analysing Artificial Intelligence Controversies. To see a full list of all articles in this special theme, please click here: https://journals.sagepub.com/page/bds/collections/analysingartificialintelligencecontroversies
[P1]
[P2]
[P3]
After a short pause:
[P4]
[P5]
This discussion between artificial intelligence (AI) practitioners took place during a workshop we organised in Paris in 2022. Despite its somewhat amusing tone, it gives a good idea of how diverse and divergent conceptions of AI can be. From a global entity that can be ‘shut down’ or ‘unplugged’ – if it even exists? – to local associations of human and non-human entities that should be democratically assessed – with what moral compass?
It is a good snapshot of the tensions we witness on a global scale. From the pause suggested by tech giants (Pause Giant AI Experiments, 2023), to the permanent international scrutiny of both breakdowns and successes of computational technology innovations, we are witnessing a great cacophony in AI accounts and we endure a continuous flow of conflicting news surrounding AI. Even academic studies of AI put us in a somewhat schizophrenic position, both affirming and denying its reality (Jaton and Sormani, 2023). In this noisy context, how to make sense of AI controversiality, and from whose perspective? This paper investigates how to reclaim ownership over the framing of AI-related problems to bring forth meaningful and pressing issues.
As part of the ‘Shaping AI’ project,1 our research is interested in ways of participating in the development of AI technologies. Although AI studies have taken a ‘participatory turn’ (Delgado et al., 2023), we argue that participation as a
To grasp the interplay between the different scales and sites of AI development, we chose to ground our study in the French context. In addition to easy access and intimate knowledge, we believe that France offers a valuable standpoint to pluralise the problematisations of ongoing AI developments, since many AI studies focus on North American settings. While France is not a leading country in the production of AI models, it is known for its high standards in mathematics education, with a deplored brain drain towards North American companies, as recent success stories attest (e.g. Hugging Face or Mistral AI). To maintain ‘national sovereignty’, President Macron, who praises the start-up model of Silicon Valley, launched a national strategy in 2018 (now in its second phase). This strategy fostered and financed AI research clusters made up of public and private organisations, and has equipped the country with a supercomputer (called ‘Jean Zay’, housed at the French National Centre for Scientific Research (CNRS)). There is a strong political push to experiment with AI in all sectors, which reconfigures workplaces and economies at a fast pace, starting with state services. Lastly, the whole French tech ecosystem actively took part in the European effort to draft the EU AI Act.
With France as a point of departure, we used a two-fold methodology to account for AI problematisations. First, relying on issue mapping methods applied to a corpus of news articles spanning over ten years, we inspected how French media narratives frame AI problems. Our results led to the identification of four typical forms of media accounts and we discuss their performativity at different scales.
Then, to gain plural perspectives on AI situations, we designed an ‘accounting
We conclude by discussing how this two-step study contributes to (a) re-equipping a multi-scalar understanding of AI developments and (b) discussing how a participatory turn in the study of AI could enable a genuine reopening of its trajectory.
How media narratives account for and perform AI
‘AI is always inherently accompanied by narratives, fantasies, and promises. (…) For instance, the narrative I carried out as a journalist was that AI is powerful – whether positively or negatively. An example: an article I wrote in 2017 about the final match of AlphaGo, where I mentioned that DeepMind would focus on new challenges like curing diseases, reducing energy consumption, and inventing revolutionary new materials. (…) However, over the course of my career, I have seen the media’s treatment of AI change direction. We started to see topics like algorithmic biases and AI replacing or eliminating jobs. So, the narrative I participated in is one of AI’s power and people’s fatalism in the face of that power.’ (L., journalist)
Studies on the media coverage of AI argue that media play a major role in setting the agenda and shaping public opinions as they define expectations and issues associated with emerging technologies (Fast and Horvitz, 2017; Chuan et al., 2019). They contribute to perceptions of AI, by smoothing or roughening technological or scientific details, and amplifying voices and tropes while silencing others (Bareis and Katzenbach, 2022; Hansen, 2022). In line with these works, we started to investigate how media narratives account for AI and its problems. How is AI depicted and what entities are associated with it? How are they configured in practice and what agency do they have over concrete AI developments?
The term ‘AI’ encompasses multiple technologies, and it gained traction because of its ability to bring together such a wide range of entities in a single discourse (Katz, 2017). Inspecting AI in this sense means being able to identify, and disentangle from its floating and monolithic interpretation, the plethora of novel objects endowed with agency and capable of exerting influence in a specific social context (Latour, 2007).
We applied key principles of issue mapping (Rogers, 2013; Marres, 2017; Venturini and Munk, 2022) to analyse how AI has been both accounted for and constructed by the French press over the last decade. The national press is a well-defined initial source that constitutes a relevant proxy to define how and where different interest groups stabilise the nature, functions and future of techno-scientific objects.
Inspecting themes and their tonality
We created a corpus of press articles3 using the 300 sources available on the Europresse service, comprising a broad variety of newspapers (general and specialised, national and local), and spanning the period 2011-2021.
Previous research on technological innovations shows that media tend to report and amplify both the positive and negative impacts of AI (Nguyen and Hekman, 2022; Cave and Dihal, 2019). We took advantage of this amplification effect to generate a clearer picture of predominant AI issues in media accounts, according to both frames. So, we made a preliminary categorisation of our articles in terms of their tonality.
The corpus reflects 11 main themes, analysed with a co-occurrence network6 (Figure 1, top part). A topological analysis shows that most themes (8) are closely connected, with the theme linking AI to society at the centre. In contrast, themes related to health, education and justice are more peripheral.

(a) Semantic network (6,802 nodes (n-grams); 75,432 links (co-occurrences)). (b) Cluster projection by tone.
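The construction of such a semantic network can be sketched as a simple article-level co-occurrence count. This is a hypothetical illustration, not the pipeline actually used in the study: the function and variable names, and the choice of counting co-occurrence at the article level, are assumptions.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_network(articles, vocabulary):
    """Count how often pairs of retained n-grams appear in the same article.

    `articles` is a list of token lists; `vocabulary` is the set of n-grams
    kept after frequency filtering (both hypothetical names).
    Returns a Counter mapping sorted term pairs to edge weights.
    """
    edges = Counter()
    for tokens in articles:
        # Each unordered pair of distinct terms in an article adds one link
        terms = sorted(set(tokens) & vocabulary)
        for a, b in combinations(terms, 2):
            edges[(a, b)] += 1
    return edges

# Toy usage: two short "articles" sharing the terms "ia" and "sante"
docs = [["ia", "sante", "hopital"], ["ia", "sante", "justice"]]
net = cooccurrence_network(docs, {"ia", "sante", "justice", "hopital"})
# The pair ("ia", "sante") co-occurs in both documents -> weight 2
```

The resulting weighted edge list can then be loaded into standard network-visualisation tools for the kind of topological and cluster analysis described above.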
In terms of tonality, each term of the network was assigned a score, projected onto the graph, based on its relative frequency in the ‘promises and benefits’ or ‘critiques and threats’ sub-corpus7 (Figure 1). We observe a significant polarisation of our corpus: themes associated with critique mainly occupy the right side of the network, plus a few localised pockets in other parts of the graph. Critical themes are primarily related to education, justice, defence/security, but also to ethical issues regarding the development of AI in society. In contrast, themes on the left side of the network are more closely associated with promissory narratives, mainly related to AI R&D in general as well as in specific sectors, i.e., healthcare, art, research, commerce, and finance.
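The projection of tonality onto terms can be illustrated with a minimal sketch. The exact scoring formula is not specified in the article, so the normalised frequency difference below, like all names in the snippet, is an assumption.

```python
from collections import Counter

def tonality_scores(promise_tokens, critique_tokens):
    """Score each term by its relative frequency in the two sub-corpora.

    Returns a value in [-1, 1]: positive means the term leans towards the
    'promises and benefits' sub-corpus, negative towards 'critiques and
    threats'. One plausible choice among several.
    """
    p, c = Counter(promise_tokens), Counter(critique_tokens)
    n_p, n_c = sum(p.values()) or 1, sum(c.values()) or 1
    scores = {}
    for term in set(p) | set(c):
        # Compare within-sub-corpus relative frequencies, normalised to [-1, 1]
        fp, fc = p[term] / n_p, c[term] / n_c
        scores[term] = (fp - fc) / (fp + fc)
    return scores

scores = tonality_scores(
    ["startup", "innovation", "innovation"],
    ["biais", "surveillance", "innovation"],
)
# "biais" only appears in the critique sub-corpus -> score -1.0
```

Colouring each node of the semantic network by such a score yields the kind of polarised left/right picture described above.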
These results, both in terms of themes and tonality, are in line with similar studies (Cools et al., 2022; Chuan et al., 2019), and overall we note a widespread emphasis on the positive impacts of AI. More precisely, when compared to the Anglophone press (Crépel and Cardon, 2022), themes such as digital labour or automated weapons are prominently criticised in France as well, while others, such as autonomous vehicles or health applications, take on a positive tone. In addition, the peripheral position of some themes (health, law and education) may be a good indicator of controversies and issues specific to French cases. But, more interestingly, we complemented this first inspection with an analysis of the entities present in each cluster, which led to identifying genres of media narratives.
Four genres of media narratives staging diverse entities
Pursuing further the investigation of our corpus, we extracted and manually annotated the entities present in each cluster. The following categories emerged through an iterative coding process: Technical entities, data entities, people, public figures, companies, institutions and topics.
Although the clusters are not homogeneous in the way they account for AI, analysing the arrangements of the entities in each one revealed four typical narratives of AI, i.e. four genres. These four genres are organised on a two-axis matrix, from projection to realisation and from negative to positive.

Four genres of artificial intelligence (AI) narratives present in French media, organised in a matrix (from projection to realisation and from negative to positive). The genres are described according to their clusters, topics and narrative structures.

The performativity of four genres of media narratives.
Excerpts from articles in our corpus that illustrate the four genres of AI narratives present in French media.
Now in the bottom part of the matrix, narratives focus on specific technologies deployed in society and detail how they reconfigure local entities in richer and more complex ways.
On the right side, we identify a genre encompassing themes related to health, web tech and finance, in which detailed narratives are built around innovative solutions and applications brought to the market that optimise domain-specific activities through computing technologies. In the case of health, for example, many individual practitioners are staged promoting current AI developments (mainly devices or products from French start-ups), highlighting their benefits for specific individuals (‘patients’, ‘clients’). This genre relies on the authoritative voices of well-defined groups of specialists (French hospitals and research centres under the control of national institutions), while specifying the data entities (‘sugar levels’, ‘insulin’, ‘hormones’) that feed AI models, justifying their collection and exploitation with regard to the alleged benefits.
With similar attention to specific local configurations, but focused on actual denunciations of technological negative effects, the last genre provides extremely rich and structured arrangements. Prominent in isolated themes in the network, constituted around specific French controversial cases (Education, Justice, Labour), such narratives account for the rare occasions of concrete
Synthesising media narratives’ performativity
The top of the matrix (Figure 3) characterises
Highly particularised human actors emerge once narratives account for
These genres, largely controlled by big-tech and media players, grant existence to particular entities and arrangements and perform AI both as a global issue (AI monolith) and as specific configurations (AI situations). Each genre develops its own agency and puts in motion different operations (Figure 3): A
Reading the matrix vertically (Figure 3), the analysis of the genres also suggests that the issues of AI are constituted and dealt with through two main modalities: either through
These widespread narratives create the main frames that account for AI issues in France, structuring the field of AI and stabilising its ‘thingness’ (Suchman, 2023). But such predefined and limited views offer little room for other players to take part, and for genuine bifurcations in the development of computational technologies. So, what if we were to develop a renewed grip over AI’s trajectory?
A third path to participation towards problematisation
‘I believe there is a kind of third wave in AI studies. Some, in the social sciences today, following a techno-critical approach criticise part of Latour’s successors and consensus conferences…What someone like Fressoz is saying is that this generation of STS researchers acts as if, since Ulrich Beck, the issue of socio-technical risks was addressed. Fressoz is reminding us that modern societies, at least since the 19th century, and in the face of industrialisation, have always been reflective. There were struggles; they were just completely invisibilised by history. In fact, highlighting them and narrating this history is a way to remind us of the contingency of the order we are in.’ (Q., researcher and activist)
Escaping the double-bind of participation in socio-technical developments
Participation has gained traction (Magassa et al., 2017; Young et al., 2023), whether it be in AI research (Rahwan, 2018; Birhane et al., 2022), AI systems developments (Martin et al., 2020) or AI governance (Gilman, 2023; Tabassi, 2023; Lee et al., 2019). Numerous initiatives creatively implemented meaningful participation to address public relations with AI, and several literature reviews synthesised these efforts (DataJusticeLab, 2021; Delgado et al., 2023; The use of public engagement for technological innovation, 2021) – with careful accounts of ‘participation washing’ mechanisms (Ahmed, 2022; Sloane et al., 2022).
But despite a call for ‘more participation’, ways of seeing participation remain largely informed by normative and instrumental traditions (Chilvers and Kearnes, 2020). Inherited from two distinct disciplinary streams (political science and STS on the one side, PD and interaction design on the other), participation is described and analysed primarily through external categories (‘democracy’, ‘society’, ‘technical systems’) that tend to replay the classical divide between the social and the technical. This binary vision ends up engaging different voices either in policy-making processes (referred to as public participation) (Callon et al., 2001), or in the design of technological systems themselves (Bødker and Grønbæk, 1991).
Recent convergences in STS and PD research tend to bridge these two approaches by reviving a pragmatist heritage. Focusing on socio-material practices (Marres, 2012), and insisting on the relational and co-productive dimensions of participation, which contrasts with conventional argumentative/deliberative perspectives, they aim at
AI is a textbook example of the continuous co-production of the social and the technical: not only does it reconfigure the relationships between science, technology, and society, but it ‘co-opts the world’ (Barocas, 2019). To understand this co-production, we advocate for an expansion of social science tools, in the vein of previous pleas (cf. Sociology of testing (Marres and Stark, 2020), Remaking participation (Chilvers and Kearnes, 2020)), towards a pluralisation of accounts. Collectively accounting for AI situations, examining and valuing diverse experiences with AI, will vary the forms and definitions of AI as an object of inquiry, casting issues in a new light. Questioning the consistency of an object along with its issues is what we refer to as problematisation.
The loop of AI practices: A heuristic tool for collective inquiry
Given a broad definition of AI – a computational problem-solving method (model) that transforms diverse inputs (data) into optimal outputs (instances) ‘to achieve goals in the world’ (McCarthy, 2007) – we chose to represent the diversity of data and computational practices that contribute to the continuous co-production process of the social and the technical as a loop (Figure 4).

The heuristic loop of data/compute-intensive practices.
The left-hand side of the loop (from the world, where data is extracted and fed into computational models) situates the practices related to the problem-formulation phase of AI and to data-intensive work. The right-hand side (from the models to their implementation into instances, such as products and interfaces, that feed back into the world) situates the practices associated with the deployment and integration of computational systems into the fabric of people’s activities, and also refers to actual experiences with these computational instances.
This representation echoes a framework proposed and used by institutions like the OECD (OECD Framework for the classification of AI systems, 2022; Tabassi, 2023). Here, it does not serve as a descriptive model, but as a heuristic tool to represent in a shared space a plurality of situations, practices and operations through which AI is realised and circulates. As part of our accounting
The soucis of AI: A situated problem space
Our second methodological operation consisted in a participatory inquiry to
Enrolling ‘soucieux practitioners’
One of our first challenges was to enrol co-inquirers. How to proceed? Who to target, using which criteria? Where participatory processes usually divide experts from non-experts, we were interested in engaging with ‘practitioners’ to focus on their experiential knowledge, without
In addition to listing individuals mentioned in our news corpus, we used Twitter as a probe to iteratively refine our criteria: We built an initial dataset of 236
Examples of two tweets showing engagement.
Several criteria guided our choices: Activities, profession and status, gender, background, type and level of concerns. Many of these individuals have multiple backgrounds and activities in different settings, which enrich AI accounts. As Chateauraynaud and Debaz (2017) argue, the more actors multiply their positions, the more they have a ‘grip’ on a problem: they control networks and master, even shape, tests and regimes of justification. But, we deliberately excluded individuals who already benefited from a significant degree of media attention.
We reached out to a total of 56 individuals, out of which 25 (19 men and 6 women) eventually engaged in the research process, as anchor points to ground, historicise and re-problematise AI. The co-inquirers’ practices, all related to AI, are characterised by a striking degree of heterogeneity, e.g. reporting on the digitalisation of public services or pushing it, creating deep learning models applied to sound, building a community of AI developers, optimising AI models to reduce their environmental impacts, or studying regulations of AI (Table 3 details three of their profiles).
Examples of co-inquirers’ profiles.
Grounding AI in practitioners’ experiences (first encounter)
According to a pragmatist perspective, our inquiry process sought the creation of a common object of inquiry (Zask, 2004). The first encounters consisted of two hours of individual, in-person, lively discussion about the co-inquirers’ personal histories. They started with a formal prompt: ‘Can you describe your activities related to AI and how you came to engage in them? How is AI realised within these activities?’ Some follow-up questions focused on co-inquirers’ attachments and concerns, to delineate current problematic situations. These conversations accounted for incredibly detailed life trajectories and, as the excerpts from Table 4 illustrate, biographical storylines revealed key episodes of the French AI history. They account for the entanglements of global and local elements interacting within
Excerpts from conversation with co-inquirers.
Our team extracted many concrete items from the transcripts: from events to papers; from projects to laws, and from data repositories to tweets and memes. We carried out extensive online research, looking for traces and evidence of these specific items, ending up with an archive of over 1000 documents that rematerialise AI history into concrete episodes. Figure 5 presents four of these documents to give a sense of their materiality and diversity.

(a) Blog post (Benesty, 2016) about judge impartiality. (b) Download section of the CamemBERT website, a French language model developed by INRIA and Facebook AI Research (Muller B., n.d.). (c) Portfolio of projects supported by the LabIA at Etalab (Portefeuille des projets - Etalab, 2001). (d) Personal certificate of an online course (MOOC) on Machine Learning proposed by Stanford University, taught by Andrew Ng. Source: personal documentation, date: 2016.
At this stage, we suspended any interpretation, focusing instead on the activation of these documents as ‘material to be used’ (Zask, 2004) for problematisation.
Problematising AI from practitioners’ soucis (second encounter)
For each co-inquirer, we printed out the documents gathered from their biographical accounts.

Setting of the second meeting at the Cité des Sciences et de l’Industrie, with the wall of documents and the working table.

Screenshots from 4 different video recordings, where we see the co-inquirers using (grouping, pointing, sorting out, discarding) the documents they picked from the wall of documents.
This setting (the wall, the documents, the table) offered the co-inquirers the opportunity to objectify elements that matter from their personal vantage point and address them as problems. We use the French word souci
Our co-inquirers discussed how their
Grounding AI in intimate stories, schemes of activity, workflows, organisational routines and norms, such accounting
We ended up with 62 hours of video material. After an iterative coding process using a grounded theory approach in video editing software,8 we distilled this material into 19 video montages.
Generating a problem space
How do co-inquirers problematise the indeterminate and ambiguous situations in which they participate and where AI is realised? Qualitatively, we listed all acting entities of our 19 video montages and identified recurring patterns that reveal conflicting ways of problematising AI, all rooted in specific practices. Making use of the loop (see Figure 4) to map these patterns, dividing lines separate

Coding scheme used on the videos – detail for the
Repeating the operation for all

French Problem Space of artificial intelligence (AI), derived from 19
The
The objective of developing computational technologies further is clearly shared and performed through practices that benefit from it – represented in the first two spaces,
The
Verbatim illustrating the second type’s fault line.
This opposition plays out in the three situated
The

A third space remains stable at the bottom of the loop. In strong opposition to the first two ways of problematising, it resists a hegemonic computational logic and advocates for other kinds of solutions carrying other values. For example, one co-inquirer mentioned Hito Steyerl’s artwork ‘
The
Verbatim showing the fault lines present in the third type (in the case of the souci ‘Working with AI’).
Lastly, the
Notion of ‘commons’ as discussed in the fourth type.
Our analysis of the four types of
Participatory Problematisation: A generative perspective
We now discuss our contribution to the fields of controversy and AI studies. In a way, our research mobilises participatory means to revisit a classic question anew: what and who contribute to problematising the development of technical objects – from accounting for their issues to framing them?
Thick accounts and power dynamics
Asking this question is especially relevant in the case of AI, where big players largely control the controversiality of AI by spreading both fear and fascination narratives while, at the same time, presenting themselves as willing – and best equipped – to handle the problems (Luccioni and Bengio, 2020). An empirical approach has the ability to de-exceptionalise AI by resituating its developments in concrete histories and courses of action. Our results suggest that rich and plural accounts of situated practices help renew and sharpen the types of questions that AI’s progressive constitution raises, thereby redistributing some grip to less prominent actors.
First, situated accounts (both from media narratives and from co-inquirers’ practices) avoid the reduction of AI issues to a set of off-the-shelf problematisations. Instead, they multiply the entities at stake and the situations in which they are embedded, adding layers of intertwined story lines. But in addition to a welcome ontological reconsideration of what constitutes AI, elicited
Indeed, each
Secondly, echoing the feminist saying ‘the personal is political’, we argue that a focus on AI practices offers both a
Here, we need to acknowledge that the demonstration of thick accounts’ potential faces one main limitation: the richness and density of each
Now, about the generative dimension. The problem space we drew is not case-specific but bridges and contrasts pluri-situated experiences. Collective inquiries, when designed to take advantage of these contrasts, have the potential to both reveal and reshape the network of perspectives that organises the very experiences of the problems at stake – thus redefining them. Being attentive to others’ standpoints, co-inquirers could engage in unexpected alliances, and thereby generate public problems that stand a chance to matter in socio-technical unfoldings. This leads us to our future work: ‘infrastructuring’ (Karasti, 2014; Le Dantec and DiSalvo, 2013) a public of concerned practitioners over time to go beyond short-term engagement and sustain vigilance, update AI problematisations and raise alerts about specific developments.
A shared and pressing participatory concern
Despite the fault lines of AI problematisation, a common concern emerged from all co-inquirers’ perspectives: the lack of ‘democracy’ or ‘participation’ in AI developments. It is a transversal
A case reported by one co-inquirer offers a striking illustration: In 2020, the Ministry of Justice announced the experimentation of an algorithm for assessing bodily injuries based on the automated processing of judicial decisions (‘DataJust’ project). It was first authorised by a decree from the Prime Minister and the Minister of Justice, but caused an outcry among legal professionals. These critiques remained unheard, and led to several associations filing an appeal before the Council of State, pointing out the violation of privacy laws among other issues. The Council of State validated the decree anyway, but the experimentation project was eventually abandoned by the Ministry itself due to a lack of technical and human resources to implement it.
Problematic indeed in terms of participation, this situation does not necessarily lead to similar focuses. Some are worried about tests being conducted in society without any form of consent or consultation, while others advocate for greater means to conduct meaningful tests (instead of limited or buggy proofs of concept). Some rather question the regulatory mechanisms themselves, which eventually, even when experiments are blocked at first, always extend further the legality of systems threatening fundamental rights. Although they point to different mechanisms, all bring to the fore pressing concerns about the loss of genuine democratic processes, i.e., the possibility to have a say regarding technological innovations and their heavy infrastructures. References are made to other technical innovations for which explicit and consensual refusals have hit a wall; for instance, the case of the 5G network in France, whose development was rejected by the
If experiments are presented by some as a risk-mitigation tool, others have documented their lack of evaluation. The opaque but important presence of large international consulting firms is seen as a concerning loss of public agency and adds to this institutional distrust. Pushing the development of computational technologies forward at an accelerated pace, these companies exert a major influence on AI trajectories, as they act simultaneously as lobbyists, strategic advisors to both public and private actors, educators, and solution providers (not to mention the widespread phenomenon of high-ranking civil servants joining them).
In line with previous work that analysed social acceptability as one crucial objective for actors who oversee the development of technologies (Angeli Aguiton, 2014), our co-inquiry documents that any form of reservation, resistance, or protest is invisibilised and disqualified as radical by dominant public discourses and public authorities. Even legitimate demands for information are very often dismissed. In France, we witness an increasing inefficiency of traditional counter-narratives and counter-actions (Chateauraynaud, 2022), reinforced by a severe repression of militants, and this general feeling of losing grip is very much present when it comes to AI developments (see, for instance, the unprecedented approval of facial recognition in France).
In the face of such a challenge, we are humble about the potential effects of our participatory endeavour. But this novel participatory experience in France has given some room to discuss anew the development of computational technologies among practitioners, ourselves included. It calls for a field of open counter-inquiries that will hopefully ripple outward, starting within the situations co-inquirers inhabit and that ultimately shape AI.
