This article is a part of special theme on Analysing Artificial Intelligence Controversies. To see a full list of all articles in this special theme, please click here: https://journals.sagepub.com/page/bds/collections/analysingartificialintelligencecontroversies
Introduction
In recent years, numerous proposals have called for rethinking the ethical, aesthetic, and political frameworks used to imagine the future of artificial intelligence (AI) (Ali, 2016; Adams, 2021; Couldry and Mejías, 2021; Mohamed, Png and Isaac, 2020; Tironi and Valderrama, 2021). These works challenge colonial values embedded in AI—often presented as natural or inevitable—such as growth and digitalization. Global discussions are largely shaped by Big Tech companies, which monopolize production and define dominant imaginaries of desirable AI futures. These imaginaries, grounded in techno-capitalist and Anglo-Eurocentric epistemologies (Katzenbach, 2021; Natale and Ballatore, 2020), limit engagement with other geographies and cultural perspectives. As automation is framed as an inexorable sign of progress, cultural alterity becomes almost nonexistent. The coloniality of AI is evident not only in its concentration in the Global North (Adams, 2021; Zuboff, 2019) but also in narratives that present digitization as the solution to economic and ecological challenges. These narratives shape what can be imagined about AI and influence how technologies are ultimately designed and deployed. In this context, countries in the Global South are predominantly portrayed as “clients” of Northern technologies (Ricaurte, 2019) or as sites of data and resource extraction (Tironi and Valderrama, 2023). A central premise of the decolonial sensibility is that AI advances by erasing otherness, reinforcing the interests and values of its developers.
There is no doubt that decoloniality within AI requires focusing on new forms of participation, incorporating historically marginalized worldviews to build a more inclusive and just technology. This work also involves fostering collective and speculative thinking (Debaise and Stengers, 2015; Stengers, 2019). A key challenge is how to de-center techno-capitalist models and develop the conceptual and material tools for a technodiversity (Hui, 2021) that expands possibilities for AI futures. Using speculative design techniques, we sought to activate Latin American and situated imaginaries about AI futures. However, despite participants’ control over the technological elements intended to make AI “speak” differently, the workshop did not succeed in disrupting dominant AI imaginaries or opening a multiplicity of futures. This outcome underscores both the persistence of hegemonic visions and the challenges of unsettling them, offering a critical opportunity to reflect on the constraints of broader intellectual efforts—including our own—to introduce decolonial perspectives into AI debates. As we will show, we approach failure not merely as an error or deficiency, but as an exploratory rupture: a generative event that invites the reconfiguration of our critical repertoires and the imagination of alternative futures.
Amid the growing imperative to critique the colonial operations embedded in planetary AI systems (Couldry and Mejías, 2021; Crawford, 2021), this commentary highlights the importance of cultivating a research approach oriented toward failure as a generative event rather than a mere deficiency—one capable of opening space for plural and situated imaginaries of AI.
The decolonial issue in the AI debates
An increasing number of studies stress the urgency of challenging the colonial logics embedded in the development of AI. These works argue that digital infrastructures are shaped by the values and priorities of the Global North, perpetuating exclusion and oppression (Adams, 2021; Ali, 2016; Mohamed, Png and Isaac, 2020). Critical perspectives have explored the possibility of decolonial algorithmic systems (Milan and Treré, 2019; Tironi and Valderrama, 2021), non-Western ethical guidelines (Alegría, 2022; Lehuedé, 2024), and new forms of extractivism—both material (Crawford, 2021) and data-related (Couldry and Mejías, 2021)—that affect peripheral communities. Problematizing the sociotechnical imaginaries (Jasanoff and Kim, 2015) that guide AI implementation is essential in a context where algorithmic solutions are often portrayed as universal and objective. These imaginaries, dominated by private companies and market forces, prioritize economic logics (Mager and Katzenbach, 2021) and have been described as “inequality-related” (Sartori and Theodorou, 2022), “techno-optimist” (Natale and Ballatore, 2020), “solutionist” (Katzenbach, 2021), and even “colonial” (Adams, 2021; Mohamed, Png and Isaac, 2020). By privileging developmentalist goals and top-down perspectives, such imaginaries risk reproducing the very inequalities they claim to address.
Building on the above, multiple sociotechnical imaginaries can coexist in tension or in dialectical relationships, which makes it necessary to attend to how AI has been deployed from the Global South. Literature from the region recognizes AI imaginaries that “are taking place outside centers of power” (Amrute and Murillo, 2020: 1). Decolonial perspectives have been developed around the concept of the “coloniality of power” (Quijano, 2004), according to which contemporary forms of the colonial legacy persist in structures framed by dualistic logics and unequal power relations of gender, ethnicity, and knowledge (Escobar, 2019; Quijano, 2004). From this lineage emerges the “decolonial turn” (Couldry and Mejías, 2021) in technology studies, in which theories circulate with varying emphases between technology and its sociocultural impact. Adams (2021) argues that decolonial thinking allows us to critique and undo the logics and politics of coloniality that continue to operate in the technologies and imaginaries associated with AI. Critical AI studies “must resist the sublimation of decoloniality as another rationality that justifies and legitimizes AI” (2022: 16). Amrute and Murillo (2020) propose that decolonization “means returning to questions of control and redistribution of the way technologies are designed, developed, and disseminated” (2020: 7).
As technocentric imaginaries from the Global North perpetuate dynamics of exclusion and favor global elites, the decolonization of AI imaginaries necessitates engaging with non-Western forms of knowledge and recognizing the legitimacy of contextual knowledge for the development of fairer technological futures (Ali, 2016; Irwin, 2019). This means promoting a plurality of approaches, prioritizing the self-determination of communities, and challenging the power dynamics that shape global technological development. Strengthening technodiversity in AI development (Hui, 2020) means not only acknowledging the range of technologies that exist across different contexts, but also actively exploring new ideas, scenarios, and possibilities for engaging with AI in more diverse and situated ways.
Thinking and re-imagining the imaginaries of AI from Chile
As a way of activating and promoting situated technodiversity, a speculative workshop was held in Santiago de Chile with a dozen professionals working in the field of AI and smart cities. The aim was to explore, through practice, the possibility of generating alternative imaginaries of AI—visions that move beyond homogeneous narratives of technological futures. The workshop drew on speculative and decolonial design approaches (Ansari, 2019; Dunne and Raby, 2013; Forlano and Halpern, 2023; Sloane et al., 2022), employing them both as a research method and a creative practice. The workshop unfolded in two phases. In the first phase, participants engaged in creative practice using Midjourney, an AI-based platform that generates images from text prompts. The participants were asked to imagine what a “Chilean artificial intelligence” might look like, considering the local context. They were encouraged to develop prompts that incorporated references to landscapes, traditions, materials, and values distinctive to their lived environments. As a way to provoke a conversation, participants were given the following design brief: “The Government of Chile is asking decision-makers to design local and situated representations of artificial intelligence. In other words, what would a Chilean AI look like? How do we imagine the shape of artificial intelligence applied to our reality? What are its uses and aesthetic characteristics?”
For this purpose, participants were provided with a Midjourney account, allowing them to manage parameters such as image composition, quality, and references (concepts, textures, actions, objects, places, times, characters, etc.) in order to develop and obtain their “imagined result.” Although the objective of the workshop was to produce decolonial imaginaries of AI, an initial space was created for participants to freely discuss the prompts they would use before critical approaches were introduced to help them delve deeper into the search for local imaginaries. The aim of this assignment was to establish a vision for the future of AI while offering an opportunity to “defamiliarize the present” (Bardzell et al., 2014: 9).
As participants discussed appropriate prompts for projecting an AI from a Chilean imaginary, their first prompts drew largely on a mathematical-neuronal vocabulary—for example, “spine,” “brain,” “neurons,” “networks,” and “flows”—and produced a variety of visualizations (Figure 1). These initial images predominantly reflected conventional computational aesthetics: neural networks, brains, circuits, and flows. Rather than evoking an AI with a “Chilean” identity, the imagery tended to reproduce familiar computational and neural aesthetics.
Figure 1. Image generated by the workshop participants using Midjourney.
Engaging in critical reflection on AI
Following a discussion with the participants, the workshop transitioned into its second phase: an immersion phase centered on critical approaches to AI. As experts and advocates of decolonial perspectives, we anticipated that this immersion would enable participants to distance themselves from dominant discourses surrounding the development of this technology. In other words, through the use of critical tools, our aim was to provoke an epistemological rupture with common-sense representations and to foster a vigilant stance towards the colonial biases embedded in dominant AI models. With this objective in mind, we introduced concepts from philosopher Yuk Hui (2020) concerning cosmotechnics and technodiversity, alongside the imperative to imagine and develop technologies that do not replicate colonial hierarchies but instead incorporate diverse knowledge systems and realities. In line with these ideas, we emphasized the importance of a plural technological becoming—one that fosters a diverse future in the technological sphere and resists full subordination to the technocapitalism that currently dominates digital development.
Likewise, as a strategy to challenge the dominant imaginary, various representations from different Latin American cultures were presented. The idea was to demonstrate that technological development can go hand in hand with relations of coexistence with nature, beyond technology as an abstract force of control and domination. For example, the Nepohualtzintzin, a Mesoamerican calculation tool similar to the abacus, was introduced. Similarly, in Nahuatl, the number zero is represented by a closed fist or a shell, while the quipu, an Incan accounting system, is based on cords and knots. These representations led participants to a conversation about the possibility of an AI centered on cooperation, well-being, and harmony with nature, rather than on efficiency and optimization as the predominant criteria. As a more recent reference, the Chilean Cybersyn project was mentioned, highlighting a local effort to introduce computational intelligence for coordinating the economy of Salvador Allende's socialist government (Medina, 2006).
Our hypothesis was that the images produced by participants would reflect diverse or dissonant imaginaries in relation to hegemonic visual regimes. However, contrary to our expectations, Eurocentric references persisted throughout the workshop discussions, revealing a marked resistance to questioning prevailing modes of thought and perception. Consequently, the images selected by participants as the most emblematic of “the Chilean image of AI” reflected continuity rather than rupture with the worldviews and conceptions that the workshop had sought to destabilize.
Critical reflections for a pluralistic approach to AI
How should we interpret the failure of the experimental intervention? What does this situation reveal when examining decolonial approaches to AI? One tempting interpretation would be to view this failure as a reflection of our own insufficiency, as experts, to reveal to participants the structural power dynamics embedded in AI. The workshop design and the critical discourses we mobilized proved insufficient to provoke an epistemological and aesthetic rupture in the imaginaries we aimed to challenge. Alternatively, one might point to technical limitations—particularly the black-box nature of generative models such as Midjourney—which constrained our ability to produce decentered visualities beyond hegemonic canons.
While these interpretations are important, they risk locating failure either in the resistance of the tool or in the incompetence of its users. What they overlook is the need to interrogate the very assumptions that structured the intervention itself. Following Boltanski (1990), we note the prevalence of a revealing orientation: an asymmetry between the critical expert and the participant presumed to be in need of enlightenment. As Sloane et al. (2022) argue, participation alone does not constitute a critical or decolonial solution to technology design. In this sense, we interpret the failure of the workshop as a challenge to this revealing orientation, which aimed to highlight the gender and social biases of AI to the participants. The assumption that participants lacked critical resources—and that our role as researchers was therefore to raise their awareness—not only imposed predefined problems and questions regarding AI, but also neglected the concerns and critical competencies that participants themselves brought to the table.
The workshop adopted a problem-validating orientation, in which the questions and issues had been predefined prior to the intervention. As such, the design of the experience functioned less as a site of inquiry, experimentation, and problematization, and more as a mechanism for validating pre-existing knowledge about the decolonization of technologies. The task, however, is not merely to reaffirm the need for a decolonial approach to AI, but to generate friction and cultivate problem-making modes of thought that open up alternative perspectives on the subject. The failure of the workshop cannot be attributed solely to the methodology or to a misalignment between intention and reception; rather, it also exposed structural constraints—technical, epistemic, and institutional—that delimit which kinds of imaginaries can be conceived, articulated, and visualized within the realm of AI. Yet, while these constraints are both real and consequential, acknowledging them opens the possibility for reorientation.
What would have happened if, instead of imposing external problems and questions, we had focused our efforts on collectively finding the tools and approaches needed to reimagine alternatives to AI? We suggest that failure can be embraced as a slowing down—a rupture with the impulse to control outcomes or to master the conversation. The workshop's failure invited us to “stay with the problem” (Haraway, 2018), to rethink the questions, and to recompose the problem. In our case, the inability to decenter dominant imaginaries was not merely a technical failure, but what Haraway (2018) might describe as an invitation to remain within the difficulty rather than rush to resolve it.
Finally, the experience immersed us in the value of experimentation—not as a path to confirmation, but as a space of emergence, where unexpected problems surface and answers give way to further questions (Tironi, 2020). Moving toward a technodiversity of AI requires taking failure seriously and resisting standardized approaches that flatten the conversation rather than pluralize the differences inherent in any technological development. This case calls for the creation of listening devices that are genuinely receptive to the critical competencies and situated knowledge that participants themselves contribute—recognizing them not as passive recipients of expertise, but as active interlocutors in the co-production of meaning.
