Abstract
Introduction
Browsing the website of
Since algorithm-based digital platforms have become infrastructural elements of everyday life (e.g., Bucher, 2018), these types of questions have become common in studies concerned with the consequences of algorithmic systems for human behaviours and relationships, as these systems are continuously refined by companies to trap users in feedback loops of content consumption (Seaver, 2019a) and data extraction (Couldry and Mejias, 2019). In this scenario, scholars have highlighted that it is crucial to investigate how algorithms are designed in order to unpack the working activities and socio-cultural constructs underlying their production and, hence, their potential implications (e.g., Seaver, 2018).
This article is situated within this stream of research and focuses on the practices and relationships that surround the design of algorithmic models within a corporate environment. Specifically, it investigates the activities of tech workers, the hierarchical relationships between them, and how these individuals frame algorithms and contribute to their design. At the methodological level, this work draws on a multi-sited ethnography and takes the form of a case study, examining the activities of key professional figures involved in the production of algorithmic models at
Findings highlight the hierarchical organization of tech work and the subordination of operative figures, such as data scientists (DSs) and colleagues, to the goals imposed by business clients (BCs) and to both internal and external forms of control (e.g., Mumby, 2015; Dorschel, 2021). In particular, it is shown how the subalternity of tech workers is materially and discursively constructed, and how forms of causal, dispositional and facilitative power are exerted on them (Clegg, 1989; Rowlands and Kautz, 2022). In this environment, frictions, negotiations and concealing strategies emerge among tech workers regarding the design and meaning of algorithms, thus showing their cultural, contingent and multiple compositions (Seaver, 2017). Within the framework of Giddens’ structure/agency cycle (1984), I discuss how everyday working activities and relationships, naturalizing rituals, controls and routines contribute to the reproduction of hegemonic arrangements in the workplace (Mumby, 1997), and how these hegemonic arrangements are at the core of algorithmic production, thus playing a key role in the framing, construction and enactment of these systems.
The empirical contribution of this paper is an in-depth analysis of a case study that offers important insights into the production of algorithms. Specifically, this study aims to contribute to the growing area of critical algorithm studies, as well as to the research stream focused on digital labour and, more specifically, on tech workers.
The article is structured as follows. In the next section, I discuss the role of algorithms as cultural artefacts and the role of tech workers and organizations in their production. Then, the selected case study and the methodology adopted are described. The fourth section presents the results of the research, focusing on three key themes. Finally, the theoretical implications of the results are presented in the discussion section, and the contributions are further specified in the conclusions.
Beyond the black-box metaphor: Algorithms as cultural artifacts
The role and impact of algorithmic processes and the extractive models underlying digital platforms have received increasing interest from researchers within the interdisciplinary field of critical algorithm studies (e.g., Beer, 2017; Airoldi, 2022). Several studies have focused on different types of platforms, such as social media (Gillespie, 2018), search engines (Noble, 2018), streaming services (Seaver, 2019a), delivery platforms (Yu et al., 2022) and advertising services (Kotras, 2020), in order to understand how the technical workings of these architectures, their design and their uses play a key role in the construction and organization of social life. A commonplace regarding algorithms is that they can be considered ‘black boxes’, i.e., systems ‘whose workings are mysterious’ (Pasquale, 2015: 3). This tenet is favoured by the ways in which companies construct the opacity of these systems at both a technical and a narrative level, in order to protect trade secrecy and make algorithms seem inaccessible and impossible to scrutinize (Bonini and Gandini, 2020).
However, some scholars have started to argue that we need to go beyond the idea of algorithms as black boxes. Bucher (2016: 94) claims that this metaphor is ‘an epistemological limit’ that prevents researchers from focusing on key issues, such as the ways in which algorithms are designed and interpreted within the social realm, and how these processes embed socio-cultural values, norms and prejudices (O’Dair and Fry, 2020). Specifically, recent contributions have focused on the cultural context where algorithms are designed. As noted by Wajcman (2019: 1276), in fact, ‘all artefacts (…) reflect the culture of their makers’ as they are the outcome of particular decisions made by specific individuals in a specific space-time; hence, algorithmic technologies too can be considered ‘crystallizations of society: they bear the imprint of the people and social context in which they develop’.
In this regard, an interesting conceptualization of algorithms is that of Nick Seaver (2017). Drawing on a practical approach to culture (Mol, 2002), the anthropologist argues that researchers should consider algorithms ‘as culture’: in other words, cultural artifacts composed of several, multifaceted human practices and, more specifically, enacted, i.e., constantly brought into being at the material level by human activities, perceptions and interpretations (Seaver, 2017). In his words, algorithmic models ‘are not standalone little boxes, but massive, networked ones with hundreds of hands reaching into them’; hence, as scholars, ‘[w]e need to examine the logic that guides the hands’ (Seaver, 2019b: 419). This idea points directly to the human decisions, cultural assumptions and power dynamics involved in each stage of algorithmic design.
Following Seaver's approach, authors such as Bonini and Gandini (2019), Sachs (2020) and Kotras (2020) have investigated the work and culture of individuals designing and tuning algorithms. In a similar fashion, other empirical contributions have suggested investigating algorithms ‘in action’ (Geiger, 2017) or ‘in practice’, thereby addressing ‘the question of context by examining the work practices that surround algorithmic technologies’ (Christin, 2017: 11). Overall, these contributions argue that, to better understand the principles shaping digital platforms and their unfolding at the material level, it is necessary to investigate the people, settings and actions involved in their design. This tenet can be considered directly or indirectly in continuity with the tradition of media production studies (e.g., Banks et al., 2015), but the novelty is the focus on an emerging grouping of professionals, i.e., tech workers, ‘a new middle-class fraction’ holding ‘inscription power in rendering and encoding the digital technologies that shape the spheres of work and life’ (Dorschel, 2022: 1303). If algorithms are ‘opinions embedded in mathematics’ (O’Neil, 2016: 21), the specific cultural values and behaviours of this ‘coding elite’ (Burrell and Fourcade, 2021), of their superiors and of their companies, and their interrelations within the hierarchical setting of the workplace, play a role in each stage of algorithmic design and can shed light on how algorithms are enacted.
Critical organizational scholars (e.g., Deetz, 1992; Mumby, 2015) have highlighted for decades how the workplace is a privileged setting where power arrangements take shape and cultural schemes are reproduced through routines and rituals. More specifically, Dennis Mumby (1997; 1998) has applied the concept of ‘hegemony’ to interpret how uneven relationships and hierarchies are socially constructed and maintained through diverse discursive and nondiscursive practices in everyday professional activities. Drawing on Gramsci (1971), Mumby (1997: 344) defines hegemony ‘as noncoercive relations of domination in which subordinated groups actively consent to and support belief systems and structures of power relations that do not necessarily serve (…) those groups’ interests’. Within this framework, cultural beliefs, educational experiences, gender identities and socio-economic backgrounds can intersect in how hegemony is reified, especially in working settings (Mumby, 1998). Furthermore, the construct of hegemony relates to that of culture. Indeed, ‘culture is habit writ large’ (Markham, 2021: 388); in other words, hegemony is primarily reproduced through cultural, routinized, taken-for-granted activities. Despite the potential role of micro-level, work-related, hegemonic practices in how algorithms are molded, values inscribed, and power asymmetries reproduced, little attention has been paid to the relevance of these structural arrangements in algorithmic design. If human relations are involved in each phase of the construction of algorithmic systems, which never work ‘without a human in the loop’ (Seaver, 2018: 378), exploring how these relations play out in the workplace is crucial to better understand how algorithms are culturally enacted.
Finally, it should be noted that, within an STS perspective, algorithms can be examined not only as the result of human practices and relationships but also, in organizational terms, as emerging in relation to other human and non-human actants that participate in complex socio-technical arrangements (Lee et al., 2019; Dahlman et al., 2021). Although algorithms can be considered social agents that participate in society and are in turn shaped by it (see Airoldi, 2022), this paper does not focus on the relational properties of these technologies within complex socio-technical arrangements but rather considers algorithms as socio-cultural objects that are the result of specific human practices, sensemaking processes, organizational arrangements and values that can be examined. If ‘algorithms are not one thing but many and have to be understood as such’ (Dahlman et al., 2021: 3), this contribution adopts a material and cultural approach to focus on how algorithmic models, such as content recommendation systems, are framed, defined and produced within the setting of a tech/media company. By looking at how different professional figures relate to and participate in their production, in fact, I aim to shed light on how algorithmic design practices unfold within the socio-cultural and working context where they take place.
Live Tv: Case study and methodology
To explore how algorithmic systems are produced within a corporate environment, this paper follows a case study design. The case under scrutiny is
Methodologically, to investigate the practices and relationships surrounding algorithms within the company, I undertook what can be described as a multi-sited ethnography (Marcus, 1995), in a production studies fashion (Bonini and Gandini, 2019). To gain access to the field, I established contact through personal connections with a trusted ‘broker’ working at
In the end, a total of 148 pages of interview material and 109 pages of documentary material were collected. Then, I employed the software programme Atlas.ti to code and thematically analyze data. Specifically, open coding techniques, commonly associated with a grounded theory approach (Corbin and Strauss, 2008), were used to identify key categories and relations between them. Theoretical concepts were also integrated during the analysis, within a continuous and iterative interaction between empirical data and speculative elaboration (Charmaz and Belgrave, 2015).
Despite the aforementioned limitations connected with the access to the field, my methodological strategy follows the tenets of a multi-sited ethnography, which implies doing fieldwork not only on site but also ‘by telephone and email, collecting data eclectically in many different ways from a disparate array of sources’ (Hannerz, 2003: 212). This flexible notion of the field is typical of internet research in that it stops focusing on the field ‘as an object, place or whole’ to think ‘more about movement, flow and process’ (Markham, 2013: 438), in order to scrutinize ‘the circulation of cultural meanings, objects, and identities in diffuse time-space’ (Marcus, 1995: 96). Within this framework, ‘fieldwork is likely to be interview-centric’ (Seaver, 2017: 7). Thus, I followed the ethnographic tactics conceived by Seaver (2017) to examine ‘algorithms as culture’. Specifically, considering the different places and networks where data collection takes place as ‘entry points, rather than sites’ (Burrell, 2009: 190), Seaver (2017: 6–8) claims that access should not be considered ‘a precondition’ for the production of knowledge and thus encourages ethnographers to ‘scavenge’–i.e., to collect data from various sources in diverse manners; to ‘treat interviews as fieldwork’–i.e., as forms of cultural action which ‘do not extract people from the flow of everyday life, but are rather part of it’; and to ‘parse corporate heteroglossia’–i.e., to examine the contradictions and narratives emerging in interviews, corporate documents and public statements. Although these tactics and the overall methodological approach do not examine the role of nonhuman agents, they can be fruitful in analyzing the human practices and relationships around algorithmic production.
The production of algorithmic models
Drawing on the ethnographic material gathered over a year of research, for analytical purposes, I categorized the results into three themes: (a)
The hierarchical organization of tech work
Several figures participate in the design of an algorithmic system at …the actors who contribute to the development of the model are different. The first ones obviously are the business colleagues (…) [who] indicate what they would like to improve.
Figures and tasks in the production of an algorithmic model.
The workflow is strongly supervised, coordinated and directed by a project manager, a delivery manager and a senior business analyst, who decide priorities, coordinate workers and respond to the requests of executive business figures. Managers are helped by the so-called BTs, i.e., intermediary figures who help technically manage the projects, organize working activities and, especially, contribute to the ‘translation’ of business requirements into technical terms. The practical construction of the model and the database is then carried out by ‘operative’ figures, such as DSs and data engineers (DEs), with the help of data analysts, data/ML engineers and data architects. The adjective ‘operative’ discursively constructs the underlying subordination of these workers to the corporate goals expressed by BCs, who are ‘high-level’ figures producing ‘high-level’ requirements that will be fulfilled by the material work of figures ‘below’ within the corporate hierarchy. As this BT put it: …the people I interact with are more ‘high level’ (…), a little above compared to mere coding (…). These people conduct an analysis and then we send it to people ‘below’, more operational, who write the code as they see fit.
This hierarchical division of labour can be identified in different instances. First, business requirements cannot be changed on a qualitative level, as they reflect the decisions and business goals of the organization. Then, during the coding process, operative figures have to constantly showcase their work to BCs, demonstrating the capacity of the model to fulfil the initial requirements. Although there are differences and hierarchies also among the technical workers involved in the process, the feelings of subordination of DSs and colleagues are frequently reinforced by the impression of being at the end of the decision-making chain and by the obligation to follow a specific plan within the terms set by the business unit. … the analytical part (…) comes with a very high-level requirement very often. We can do very little with a few details and we can even imagine many things but at the end of the day the business client … requires us [to follow] this planning.
… there are many people who collaborate (…) on these projects (…). They must be governed, coordinated, left autonomous, as long as (…) they operate within shared frameworks.
As highlighted by a manager in this last excerpt, individuals have a certain degree of autonomy in their work, but they must respect the plannings, frameworks and objectives imposed by ‘high-level’ figures. These frameworks may become constraining for DSs and colleagues. Despite the increasing importance of analytical workers in the company and their growing engagement in the ‘refinement’ of business requirements, in fact, corporate goals are set before consulting them and before verifying the feasibility of a project. This puts operative figures in potentially difficult situations, with little room for manoeuvre and diverse technical problems to address. … the business idea (…) should be evaluated from the beginning also on a technical level. (…) IT must no longer be a simple accessory of the company but must be an active member in the decisions (…). In all the companies (…), [IT] is always seen as a cost, and therefore (…) we only consult IT at the end.
… if you have projects in progress, you have supervisory meetings with the business where our analytical machines, our data analysts de facto, draft the requirements …
DSs and colleagues are considered mere ‘analytical machines’ by their superiors, within a hierarchical structure that frames them mainly as instruments to be employed to attain business goals. The instrumental role of operative figures is highlighted in the meetings with BCs, in which their competencies are necessary to understand how the business goal can be better accomplished. Furthermore, the advancement of their operative work is constantly checked by managers in daily walk-through meetings, where potential issues with the projects must be reported and solved. Everyone is controlled, the controller is also controlled, but necessarily…
Thus, the control of people, of their work, of the timing of their activities and of the consequent achievement of corporate objectives is a crucial element of how the company works and is therefore maintained throughout the production process, also through metrics. For example, managers can check dashboards indicating the percentage of completion of each project on which data teams are working. Similarly, when the model is put into production, i.e., deployed as a live application extracting users’ data, its performance can be checked through key performance indicators (KPIs) by BCs, who therefore set and control the goals of a model at different stages of the production process. In this scenario, control is exercised both directly, through software programmes and meetings, and indirectly, through the discursive construction of subalternity.
Frictions, negotiations and concealing strategies
If hierarchies are clear to the people involved in algorithmic design, results highlight that, during the process, several frictions emerge between BCs and operative workers. The first site of conflict is language. As explained by these DSs: …we struggle with these things every day. (…) [we need] to lose some technical vocabulary and get closer to business vocabulary (…) to speak a language that they understand. Now they seem like aliens but sometimes we are really in trouble.
…the business’ idea must be translated into
Different vocabularies make communication between figures with different backgrounds complicated. BCs express their requirements and desired outputs in a non-operative, more qualitative jargon, which can be difficult for DSs and colleagues to translate into analytical terms and materialize in code. Despite the role of computational operations in the achievement of business goals, these objectives take shape following qualitative discursive constructions. Moreover, business requirements are put forward without prior consideration of their technical feasibility, thus making the tasks of operative workers more demanding and complex. … I'm talking to you about the personal relationship with non-analytical people which is the biggest clash that exists in the company for us, clashing with requests that (…) can be made, but you must always look at them technically and understand if they are feasible because it is not obvious.
… there is some complexity, they fail to understand pros and cons. (…) they will never understand the technical part, they don’t know what exactly you do behind it.
Businesspeople are described as incapable of understanding computational operations and complex processes with long-term goals. Thus, operative workers feel they cannot explain their activities properly. This lack of technical competency among people working in the business unit results in requests that are difficult to turn into a deliverable project, which can cause pressure and stress. Indeed, the discrepancy between business requests and what tech workers can actually do may generate uncertainty and the feeling of not being understood while being obliged to comply with the requests. This situation also undermines some technical aspects of the development of the project, such as the construction of the dataset for model training and the KPIs that will be made available to businesspeople. The fear of providing results not in line with the requirements may entail the production of superfluous indicators and the use of unnecessary data. … maybe tons of KPIs are launched in development without knowing which ones are priority (…), very often, for fear of making a mistake, we fill ourselves with a thousand performance indicators that all seem equivalent, when the truly fundamental ones are few …
The business client would tell you: it's obvious that I want to see it that way, but it's not obvious if you don't tell me. You (…) have been in the business world for twenty years and (…) there is another type of analysis which is obvious to me.
Tech workers also have doubts regarding how data outputs should be shown to BCs, who frame projects and goals differently. Here, language remains a key site of conflict, as operative workers have to adapt to a more business-oriented jargon and to the demands of the business unit, while BCs are not required to understand technical issues. In this scenario, the use of a certain language and the differences between business requirements, technical feasibility and the activities and frameworks of operative workers are explained in terms of competing mentalities. Specifically, BCs are deemed to carry an ‘old’ mindset that ill-fits the goals of a media/tech company. As this manager put it: … the requirements (…) continue to be ‘copy and paste’ of those made in previous years (…) if [business] clients do not change, they will always ask you the same requests because it is easier for them, they’ve been used to working like this. (…) Sometimes, (…) it's an odyssey, there are very long meetings in which everyone maintains their positions. (…) Then, you have to go to the boss to change his mind, it's very complex. This is somewhat the greatest friction: the client's mentality.
Maybe the business wants this, and the technical side says: okay, but they put restrictions, so you have to go back to the business: We have this type of restrictions, and the business: no, I absolutely don't want these restrictions. Let's say it's a bit of a compromise game …
If misunderstandings and negotiations are crucial elements in the production of an algorithmic model at If you talk to them about a regressor or something like that, they say: Regression? What is it? There are things that cannot even be discussed. There are words [and] speeches that cannot be done in certain places and must remain technical. I have to cast a veil on some things, (…) there is complexity, but it cannot be brought to all tables.
As this DS put it, there is a ‘veil’ which must be cast on certain discussions; in other words, the fear of not being understood and the risk of being criticized favour the concealment of some operations. Through these concealing strategies, some stages of the production of algorithmic models escape the control and understanding of BCs. Furthermore, within a post-industrial organization of work, DSs have the opportunity to coordinate their tasks internally. … each of us always has a predisposition for something. (…) we know it within the team, (…) if there is a model to be built, maybe there is that person who has worked more on that type of model, (…) it is more our shared internal knowledge, they are not figures really recognized in other parts of the company …
Within the team, skills in performing specific activities are recognized, given the members’ shared background. The resulting self-organization is a site where potential issues can be addressed outside of the control of BCs, and it reveals further spaces where algorithmic production is actualized and negotiated and, hence, where meaning is produced.
Multiple meanings enacting algorithms
If algorithmic models are the result of negotiations and frictions, it is in those instances that the multiple meanings that different workers associate with an artifact emerge. Contrasting meanings enact algorithms in diverse manners, revealing their multifaceted character, such as when workers frame the functioning of algorithmic models differently. In their accounts, in fact, interviewees reported that BCs are generally more concerned about the corporate goals that algorithms should help achieve, while operative workers are more concerned with whether and how those goals can be technically accomplished. Thus, business requirements intervene in and shape the functioning of recommendation systems, even in ways that DSs consider misleading. It is very difficult because business logics intervene. (…) Live Tv has to push its own production to you, (…) I invested money in it, so I push the new season of ‘Stranger Things’ to you. (…) However, pushing [contents] goes against what is the natural predisposition of the algorithm (…). You have these constraints that sometimes become (…) walls.
… editorial prerogatives (…) always put their hands on what the recommendation systems should be …
For operative workers, a content recommendation system is primarily a technical object that should be programmed to predict which products may be more appealing to specific clusters of customers, based on criteria such as prior behaviour and what similar customers watched before. However, they are always asked to prioritize editorial choices on users’ interfaces, i.e., movies and tv series whose rights have just been acquired by I have to build an algorithm that gives me the greatest chance of offering my contents, [and] a precise and reliable service to the customer.
… with the vastness of the film offering available, helping any of us to filter content is definitely a benefit.
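The tension described above, between a behavioural relevance ranking and business-imposed editorial priorities, can be illustrated with a minimal, purely hypothetical sketch. All names, scores and the boost weight below are invented for illustration; the source does not describe the company's actual implementation.

```python
# Illustrative sketch only: a hypothetical re-ranking step in which a
# business-imposed "editorial boost" overrides a purely behavioural
# relevance score.

def rerank(items, editorial_ids, boost=0.5):
    """Sort items by predicted relevance, lifting editorial picks.

    items: list of (item_id, relevance) pairs, relevance in [0, 1]
    editorial_ids: set of ids the business unit wants prioritized
    boost: constant added to editorial items, regardless of relevance
    """
    scored = [
        (item_id, relevance + (boost if item_id in editorial_ids else 0.0))
        for item_id, relevance in items
    ]
    return [item_id for item_id, _ in sorted(scored, key=lambda x: -x[1])]

# A low-relevance but editorially prioritized title outranks a more
# "naturally" relevant one: "new_season" (0.5 + 0.5) beats "doc_a" (0.9).
ranking = rerank(
    [("doc_a", 0.9), ("new_season", 0.5), ("doc_b", 0.7)],
    editorial_ids={"new_season"},
)
```

Even a small constant boost is enough to let a low-relevance editorial title overtake behaviourally relevant ones, mirroring the ‘constraints that sometimes become walls’ described by the interviewees.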
Both managers and operative workers consider their recommendation system a medium through which the contents produced and distributed by the company are delivered to the client, thus showing an instrumentarian understanding of the artefact. Within this framework, an apparent attention to the consumer resonates in accounts that consider consumer experience as the main goal of … the thing is to continue to release experiential value to the customer, [which] (…) is at the heart of our business …
The idea that the company is committed ‘to release experiential value to the customer’ can be considered a case of ‘corporate heteroglossia’, in which a manager attempts to align the different, potentially incoherent voices within a company into a single coordinated message. However, multiple interpretations constantly arise. Indeed, if algorithms are here depicted as an aid to customers, to whom the company ‘release[s] experiential value’, in other excerpts algorithmic models are described as tools designed to keep customers ‘loyal’ to the service. The recommendation system serves to give visibility to the commercial offer (…), [and] to maintain customers, not let them go away.
Within the company, one of the most common fears is the so-called ‘churn’, i.e., when customers unsubscribe from … the recommendation can be (…) something that is more useful to Live Tv. (…) it makes business decisions efficient.
The most important aspect of a content provider is clearly the (…) editorial content (…), then make it easily accessible to customers (…) and give them a unique experience (…) that tends to be a bit immersive, which creates almost addiction.
In these accounts, algorithms are considered business products used to implement corporate decisions. The ‘experiential value’ becomes the outcome of business choices and technical operations that will make users’ behaviours and, therefore, investments in products such as movies and tv series, financially valuable. Although the narratives promoted in certain statements resonate with the company's advertisements, it is well known among the informants that algorithms are designed to prioritize editorial contents, attract users’ attention, extract data that will be used to develop behavioural models, and favour a continuous consumption of contents through so-called ‘addictive’ mechanisms. Indeed, in corporate terms, ‘creating value’ means keeping the customer bound to the company. This interpretation is reinforced by the idea that algorithmic models are contingent artifacts that can be continuously retuned according to the needs and requirements of the business unit, thus making extractive processes even more precise and efficient. As this high-level manager puts it: …a model (…) is a living artifact, it is something that is re-adjusted, re-aligned, re-tuned, (…) and then subsequent versions of that model will be released. (…) our models are constantly being revised, optimized, realigned, but why? Because the contents, human behaviors, trends and editorial productions vary…
Thus, algorithmic systems are considered never finished but rather ‘living artifacts’ that ‘live’ to actualize business requirements, with DSs and colleagues continuously modifying the priorities, clusters and products of these artefacts to better suit corporate goals. This last excerpt illuminates how these systems are framed within a company that has to be constantly prepared for adaptation, within narratives of efficiency and rationality that seem to fit how algorithms operate. Furthermore, these words show how the ‘variable ontology’ of algorithms is well recognized in the company. Here algorithms emerge as ‘ontogenetic – in becoming’, ceaselessly adapting to ever-changing issues; within this framework, while ‘the problem of showing the most “interesting” content remains, what constitutes interestingness depends on the given context’ (Bucher, 2012: 1178).
Discussion
This study was designed to explore the practices and relationships surrounding the production of algorithmic models within the corporate environment of
Results showed the hierarchical organization of tech work and the crucial role of BCs in the elaboration of the goals pursued by algorithmic systems. Specifically, the process of algorithmic design is constantly supervised by BCs through managers and other intermediaries in charge of coordinating operative workers, who are the ones materially designing, programming and implementing algorithmic models. Within this framework, there is no horizontal relationship between the actors involved in the production of algorithms, but rather a condition of subordination of operative figures, who work within strict time constraints, with limited possibilities to change business requirements, with the necessity to adapt to business jargon, and with their results continuously controlled through dashboards, walk-through meetings and formal encounters with BCs.
This scenario corroborates the idea that ‘the design and spread of technologies, as well as the technologies’ expected outcomes’ are shaped by the authority of people holding specific ‘interests, goals, and perspectives’ (Bailey and Barley, 2020: 2). Then, the present results indicate that tech workers at
The findings of this study are consistent with those of Rowlands and Kautz (2022), who investigated how IT developers in a banking group perceived their relationship with BCs, and the information systems development methods through which they organized their activities. Similarly to the operative workers in my study, their interviewees ‘framed their answers in terms of a subordinate relationship with the client, and portrayed themselves in a cooperative, but submissive role’ (Rowlands and Kautz, 2022: 279). Drawing on Clegg (1989), their analysis shows how three different forms of power can be inscribed in the enactment of information systems development methods. The first and most explicit form is causal power: when an entity consciously gets another entity to perform a certain action. Dispositional power, instead, is exerted subtly through language, symbols and rituals, making another entity accept a specific role in the existing social order as something natural or unchangeable. Finally, facilitative power has a disciplinary nature and is internalized through practice and habits over time. All these forms of power were found in the interviews with the operative workers in my study. An example of causal power exerted by BCs on operative workers is the acceptance of strict deadlines or the imposition of coercive forms of control. Adapting to a business-oriented language or participating in walk-through meetings, which are ritual moments where working actions are legitimized, can be considered instances in which dispositional power is practiced. Finally, established working techniques, professional identities and timelines are forms of facilitative power, as they prevent the questioning of the status quo, resulting in embedded and naturalized everyday habits.
These different forms of power enable and structure the organization of tech work at
At
Within this scenario, DSs and colleagues are required to engage in ‘the practice of recognizing needs’ (Dorschel, 2021: 5) in a twofold way. On the one hand, they must be able to design algorithmic models capable of foreseeing and influencing the ‘needs’ of the customers that subscribed to
Given this scenario, I argue that, during the different phases of algorithmic design, forms of dispositional and facilitative power, which are not only exerted on individuals but also reproduced by communication practices, cultural actions and sensemaking processes as ‘common sense’, contribute to the development and maintenance of hegemonic relationships within the company and to the enactment of algorithmic models as socio-material artifacts. Specifically, operative workers are not only subjected to forms of power and control but rather participate in the reproduction of that ‘politics of common sense’ (Angus, 1992) for which ideology-laden activities and cultural schemes are taken for granted, thus remaining unquestioned. This seems particularly relevant for the study of ‘algorithms as culture’ (Seaver, 2017) as the culture underlying algorithmic production is mostly shaped, practiced and reproduced within everyday, unquestioned micro-communication dynamics (Markham, 2021). In this scenario, as Stuart Hall (1985: 105) would put it, the world is continuously experienced ‘in and through the systems of representation of culture’, in everyday symbolic exchanges where ‘we are most under the sway of the highly ideological structures of all-common sense, the regime of the “taken for granted.”’
Habits and routinized activities are key instances where hegemonic relationships are reproduced, especially in organizations (Mumby, 1997). Hegemony, in fact, partly functions through the neutralization of challenges to dominant dynamics and the normalization of particular behaviours. As mentioned earlier, hegemony ‘is habit writ large’ (Markham, 2021: 388). When practices in an organization become, over time, institutionalized ordinary activities, it is more likely that specific arrangements become hegemonic. Indeed, day-to-day working practices and communications build and diffuse power within an organization (Scott, 2001) and turn value-laden corporate arrangements into ‘unquestioned normative structures’ (Rowlands and Kautz, 2022: 301). Following this process, hegemonic relationships are naturalized by rituals and routines, such as walk-through meetings and briefings, which normalize working procedures as inevitable. Moreover, the use of business-oriented jargon plays a key role, as hegemony is ‘played out much of the time at the level of discourse’, which ‘creates subjectivities, structures experience, and constitutes what counts as meaningful and important’ (Mumby, 1997: 366). These hegemonic relationships are at the core of algorithmic design and play a crucial role in how these artefacts are framed, constructed and enacted.
However, hegemony is not totalizing, but rather part of what Anthony Giddens (1984) defined as the structure/agency cycle. In Giddens's structuration theory, structure and agency are relational to one another: on the one hand, individuals act under certain social structures; on the other, their activities recreate those same structures. This implies that ‘structure and agency imply each other. The structure is enabling, not just constraining, and makes creative action possible, but the repeated actions of many individuals work to reproduce and change the social structure’ (Giddens and Sutton, 2014: 56). By focusing on the implications of this mechanism for the cultural enactment of algorithms, this idea allows us to look at culture both as ‘a setting’ and ‘an
Finally, findings highlight the cultural and contingent nature of algorithms, which are the outcome of diverse orientations, practices and frictions, as well as organizational hierarchies, asymmetrical power relationships and hegemonic arrangements, which must be considered part of the complex ecosystem in which algorithms are enacted. Indeed, these dynamics have implications for how algorithmic models are constructed and on the values inscribed in their computational architecture. Furthermore, the technical configurations of algorithms are continuously updated and remodulated, thus confirming their ontogenetic nature (Bucher, 2012). Given this scenario, algorithms emerge as ‘multiples’, i.e., ‘manyfolded’ (Mol, 2002) artefacts, with a necessarily ‘emergent and inherently unstable’ shape (Sachs, 2020: 1700), which are constantly ‘enacted through the varied practices’ and sensemaking processes, reflecting potentially contrasting values, backgrounds and goals, ‘that people use to engage with them’ (Seaver, 2017: 5). The construction of these cultural artifacts takes place through complicated and lengthy negotiations between different individuals, within ‘a sort of constant battlefield’ (Hall, 1981: 233), where different actors, involved in asymmetrical power relations, relate and struggle. As previously shown, hegemonic arrangements play a key role in this context. This implies that digital platforms are not only socio-technical artifacts that favour the development of hegemonic relationships among end users through everyday trivial practices (Markham, 2021; Pronzato and Markham, 2023), but also the product of hegemonic arrangements within corporate environments that structure the design process and contribute to embed viewpoints and objectives into those systems.
Conclusion
As shown in the research material presented here, hegemonic structural arrangements, reproduced by specific socio-cultural values, everyday working activities and hierarchical relationships, play a key role in the production of algorithmic models. The functioning of these systems is ‘constructed, negotiated and adjusted by humans, themselves embedded in local knowledge regimes, organizations and cultures’ (Kotras, 2020: 10); hence, all their operations are inherently social. During the design process, various meanings are attached to the construction of these objects within the unfolding of uneven relations structuring the production environment, such as those between BCs and DSs, or between managers and operative people, which are crucial to how algorithms are constructed and enacted. In this framework, findings further support the role of algorithmic models as socio-culturally situated artifacts or ‘multiples’ (Seaver, 2017), i.e., the products of multifaceted practices, power-laden relations, needs, requests, negotiations, frictions and sensemaking activities, occurring in specific places at specific moments in time, that differently and materially enact these artifacts. In the workplace, these processes are constrained and enabled by corporate hegemonic configurations, which are themselves reproduced by tech workers’ actions, within the structure/agency cycle (Giddens, 1984). Then, once deployed, algorithmic models become social agents (Airoldi, 2022), continuously updateable by companies, which impose on end-users hegemonic relationships that are then reproduced by the same users through their everyday online activities (Markham, 2021; Pronzato and Markham, 2023).
All in all, this study adds to several research areas and suggests paths for future work. Primarily, it aligns with prior empirical inquiries, within the field of critical algorithm studies, into how algorithmic models are produced (e.g., Bonini and Gandini, 2019; Sachs, 2020; Kotras, 2020). Specifically, it shows how algorithms are designed and enacted within the corporate environment of an Italian television platform/subscription streaming service and sheds light on the internal relational dynamics whereby algorithmic media take shape. These findings also contribute to the stream of research focused on digital labour and, in particular, on the experiences of tech workers, i.e., the growing white-collar workforce in charge of programming algorithmic media (e.g., Dorschel, 2021; 2022). By examining a case study of how tech workers and their superiors participate in the production of algorithms, this paper has explored how the subalternity of DSs and colleagues is constructed and resisted within the corporate environment. In this regard, while prior research on digital labour has extensively explored how certain subjects, such as gig workers, undergo and resist algorithmic power (e.g., Yu et al., 2022), how the emergent group of tech workers experience their workplace and perform aid and resistance practices emerges as an interesting area that could be further explored in future research. This area can also be noteworthy for organizational studies. Indeed, this body of literature has shown increasing interest in how organizations monitor workers and how workers attempt to escape control mechanisms (e.g., Curchod et al., 2020; Cameron and Rahman, 2022). Thus, it would be interesting to focus on how these dynamics play out in media/tech companies, thereby paving the way for fruitful interdisciplinary research paths.
Some limitations should be considered. These findings are limited by the use of a case study design in the Italian socio-cultural context; other companies with different cultural histories and backgrounds may approach the same production processes in different manners. Indeed,
Finally, this paper also has implications at the political level, as it can remind scholars, policymakers and citizens that every technology is socially situated and the outcome of negotiations and power relations; therefore, its functioning is never neutral. Thus, public narratives describing algorithmic systems as autonomous and unbiased are extremely misleading and potentially harmful, as they hide the practices and relationships behind the production of these artifacts and do not hold the organizations and individuals involved accountable for their activities (Beer, 2017; Christin, 2017). How technologies work can be changed to better serve individuals, governments, or industries; hence, scrutinizing ‘the contexts in which, for what, by whom, and for whom are created and used’ (Capurro, 2019: 134) remains crucial.
