Introduction
Artificial intelligence (AI) has become a prominent item on public agendas, revealing political, economic, and technoscientific differences between and within cultures. Public discussions primarily involve national governments, businesses, media outlets, and academia, all partaking in complex processes of construing, promoting, and contesting different visions for how AI should ‒ or should not ‒ unfold its perceived potentials (Mager and Katzenbach, 2021; Nguyen and Hekman, 2024). Perspectives on AI and its regulation are diverse, with political and economic elites often dominating AI debates. While prominent voices from tech entrepreneurship hold mostly technology-deterministic views that are broadly enthusiastic about unregulated AI development, other societal stakeholders advocate pro-regulation stances or even opposing, anti-tech positions. These varied perspectives compete for legitimacy in the public sphere. Different framings of AI give rise to ‘sociotechnical imaginaries’ (Jasanoff and Kim, 2009), that is, conceptualizations, perceptions, evaluations, and recommendations for technology's role in society in the present and future. Over time, sociotechnical imaginaries can persist and gain societal significance, solidifying into master narratives that structure long-term beliefs and values towards technology and, in turn, shape attitudes and actions.
Arguably, AI discourses influence how emerging technological trends are understood, assessed, envisioned, and put into practice based on identified benefits, potentials, and promises, but also limitations and risks. This concerns politically and socially consequential questions (Bareis and Katzenbach, 2022). More recently, the launch of generative AI (GenAI) services with new capacities in content generation has stimulated a controversial debate about technology regulation (Ferrari et al., 2023). A case in point is OpenAI's ChatGPT, which has made waves across societal sectors and industries for its alleged versatility, further feeding hype-centric discourses that steer collective visions of what AI can do as well as what it
Research on sociotechnical imaginaries, as proposed by Jasanoff and Kim (2015), tends to view them from a hierarchical perspective, in which institutions enforce visions top-down and citizens seemingly lack agency. Recent literature challenges this by emphasizing citizen involvement and the need for a better understanding of mediatisation processes around AI beyond elite publics (Sahakian et al., 2025; Soares Seto, 2025). Accordingly, authors such as Zeng et al. (2022) have explored social media's role in shaping AI imaginaries. However, few studies examine AI imaginaries in alternative discursive contexts such as communities of practice (CoPs; Wenger, 1999) and pioneer communities (Hepp, 2016). Previous research suggests that the formation of AI imaginaries may happen in diverse places across the digital sphere, where different communities share views and values about AI. Critically analysing alternative discursive contexts unearths whether diverging imaginaries form at all and how those dominant in elite discourses are received, adopted, or contested.
Accordingly, the present study focuses on a practice-oriented online community that engages with AI daily. It examines user discourses on AI that centre on concrete creative purposes in a collaborative setting. The main research interest is to understand how AI imaginaries in community discourses align with or differ from those in mainstream AI narratives.
Specifically, a Discord community around AI-generated livestreams serves as the empirical case for exploring AI imaginaries through a framing lens and how users’ perceptions, assessments, ideas, and visions are discussed within the community. Discord is a social media platform that facilitates the co-creation and distribution of media content among users sharing similar interests. This includes AI-generated live videos where users can steer automated content creation through prompts. It is a collaborative effort embedded within a specific community-oriented discourse culture shaped by platform affordances. The analysis zooms in on the specific sub-community around
The contribution of the present article is threefold: (1) it explores under-researched empirical territory by expanding the analytical lens from elite discourses to concrete CoPs and pioneer communities; (2) it proposes a stronger integration of non-elite discourses into theorization about how sociotechnical imaginaries emerge, expand, and transform across societal strata and cultural spectra; and (3) it illustrates how a combination of qualitative and computational methods can be productively applied to research sociotechnical imaginaries through framing analysis. The conclusion addresses the implications for understanding the role of online CoPs (Wenger, 1999) in shaping public imagination.
Technology discourses, framing and sociotechnical imaginaries
Mediatised discourses give rise to narratives containing different framings of technologies, that is, how they are given social meaning. A discourse can be considered as the clustering of contextually related communication about a topic or set of topics as the focal point of attention (Hepp, 2012). Discourses vary in focus, size, scope, and duration but heavily depend on digital media. These enable the formation of dynamic communicative networks through which participants engage in discursive practices and potentially form communities. Different cultural, social, political, and technological factors configure the exact composition of discourses, that is, who is partaking, under what circumstances, following which norms, and using which media formats.
Whatever the exact empirical configuration of discourses, they constitute ‘a particular way of talking about and understanding the world’ (Jørgensen and Phillips, 2010: 1). Discourses can be further broken down into narratives and frames as well as framing practices therein. A narrative is a relatively coherent story that promotes particular, often ideologically loaded, representations and suggested interpretations of issues. Through a narrative, individuals can ‘start questioning their own realities and identifying the socio-ideological influence of systemic and institutional discourses on their beliefs and practices, on their heteroglot conceptions of their worlds’ (Bakhtin, 1981; as cited in Souto-Manning, 2012).
This is achieved through different framing practices, that is, the selection as well as (strategic) exclusion of information, the placing of specific emphases, and the choice of vocabulary and imagery. For example, discourses on nuclear energy include narratives about clean energy but also existential threats, each consisting of different emphasis and valence framing practices (e.g., costs vs benefits, benefits vs risks, economic vs ecological considerations, etc.). Entman (1993) describes frames as ‘[highlighted] bits of information about an item that are subject of a communication’ (53). Frames are fundamental building blocks in the construction of social reality, enabling the inquiry into processes of producing and interpreting narratives within discourses (Pentzold and Fraas, 2023: 98). Pentzold and Fraas (2023) underline that frames should be seen ‘not as holistic categories but as selective compositions of coherent elements’ (101). They are not static and can change quickly, underscoring the dynamics of mediatised discourses.
Concerning technology, narratives and framing practices give rise to ‘sociotechnical imaginaries’ (Jasanoff and Kim, 2009). Jasanoff and Kim (2015) define them as ‘collectively held, institutionally stabilized, and publicly performed visions of desirable futures, animated by shared understandings of forms of social life and social order attainable through, and supportive of, advances in science and technology’ (4). Their framework captures the complex process through which technology, science, and society ‘co-produce’ (Jasanoff, 2004) social reality. Different social groups with varying degrees of influence articulate imaginaries through narratives that promote a desired culture of values, ideologies, and visions. These narratives mobilize a certain vision of how technology should be used, subsequently guiding social action, especially with an eye on desirable futures. How to use and govern technology in the future is often more important in sociotechnical imaginaries than the past or immediate present. Sociotechnical imaginaries are neither monolithic nor homogeneous but reflect diverse interests that aim to challenge or support master narratives. These master narratives define ‘what is possible or desirable, who relevant actors are, and what narrative to highlight’ (Guay and Birch, 2022: 3). By mapping sociotechnical imaginaries, it is possible to uncover which ones are dominant and how they constrain or reinforce ideas in public imagination around a given technology.
Conceptually, AI discourses can be broken down into framing practices and discourses that give rise to sociotechnical imaginaries (Figure 1), which in turn can be collectively reflective of prevalent master narratives about a technology's role in society. For example, AI may be framed as an opportunity to create jobs or even, more agentically phrased, a ‘job creator’ rather than a competitor to human labour. This particular framing links to other similar ones (e.g., ‘AI as a Motor of Growth’) that shape a sociotechnical imaginary centred on AI's potential to drive economic prosperity, which eventually guides policy-making and perceptions about emergent technologies. These master narratives, imaginaries, and framings are mutually influential. Distinguishing between these layers helps with analyzing how technology is framed through language and visuals, enabling critical exploration of meaning-making in technology discussions.

Figure 1. Conceptualization of framings, sociotechnical imaginaries, and master narratives in action.
Researching AI imaginaries
Public discourses on technological developments display ‘a rhetoric of prospective potentials that innovation sets free. This rhetoric not only enduringly frames the perception of businesses and customers for a technology but also creates an element of performativity’ (Bareis and Katzenbach, 2022: 860). Current AI discussions illustrate this point and several studies show that media discourses have become ‘sensationalized, industry-driven, and politicized’ (Brennen et al., 2018; Goode, 2018), presenting an over-hyped vision of AI by focusing on its potential and capabilities (Elish and Boyd, 2018).
Previous research observes that news media and governments engage in AI discourses that promote narratives framing the technology primarily with respect to economic benefits, technological progress, and geopolitical competition (Bareis and Katzenbach, 2022; Nguyen and Hekman, 2022a; Nguyen and Hekman, 2024; Zeng et al., 2022). Nguyen and Hekman (2022a, 2024) identify four major meta-frames focusing on economic, cultural, social, and political impacts of AI. Similarly, Scott Hansen (2022) finds several prevalent themes in AI news: AI as an autonomous entity that will develop past humans, machines and humans working in complementary ways, AI taking over humanity, humans being passive and excluded from AI development, and AI making positive contributions to humans and serving human development (2022: 65‒69). Zeng et al. (2022) highlight the transmission of dominant frames in the context of social networks and mainstream media in the Chinese AI discourse, showing that narratives and imaginaries disseminated by the Chinese government remain dominant, without relevant ‘counterpublics’ challenging these ideas. Concerning national policy, Bareis and Katzenbach (2022) explore imaginaries in national AI strategies of the USA, China, and Germany. They observe that the imaginaries dominating governments’ views promote the idea that AI is essential for national competitiveness and national security (868), portray AI as inevitable (869), and suggest that it could serve as a technological solution for social problems and challenges, ultimately enhancing the nation's overall well-being (869).
Previous studies reveal the considerable influence of elite narratives dominated by governments, media, and tech businesses, alongside a lack of critical and alternative viewpoints, in shaping public discourses on AI. Discourses do not only convey how stakeholders view a technology but also highlight the salient aspects that interest them. Imaginaries, Vicente and Dias-Trindade (2021) claim, ‘reside in the reservoir of norms and discourses, metaphors and cultural meanings’ (710). Importantly, previous research suggests that AI is undergoing a process of mediatisation. Mediatisation, as a ‘metaprocess’ (Krotz, 2009; as cited in Hepp, 2016), describes how societies evolve when ‘everyday practices increasingly rely upon media and become “moulded” by them’ (Hepp, 2016: 919). Different stakeholders mobilize their visions to shape future developments, regulations, and uses, operating in what Jasanoff and Kim (2009) describe as the ‘understudied regions between imagination and action, between discourse and decision, and between inchoate public opinion and instrumental state policy’ (123).
However, a limitation of previous studies is their focus on mainstream discourses. Expanding the analytical angle is crucial to understand how the sociotechnical imaginary landscape is configured and to recognize the role of alternative voices. This means not only that more research is needed on discourse cultures outside the ‘West’ by, for example, shifting focus to East-Asian or African AI discussions. There are also manifold discourse formations defying geopolitical categorisation in online communities, where people with varied backgrounds engage in the formation of sociotechnical imaginaries through daily practice. Analyzing alternative discourses can reveal the complex interplay between mainstream imaginaries in the broader public sphere and community-driven imaginaries among non-elite participants who share an interest in AI.
Alternative imaginaries among CoPs
In contesting elite-centric analyses of media culture, Kellner (2020) points to the relevance of smaller communities and individuals: [e]veryone can contribute to discussions, critique, and media creation through social media and new technologies and platforms (…) even create their own narratives, images, analyses and media artifacts as the tools of media production become part of the digital devices of everyday life. (2‒3)
This challenges critical research to engage with highly dynamic and diverse discursive formations in the digital sphere. Similarly, Bruns (2023) criticizes the idea of a unified or dominant ‘digital public sphere,’ suggesting that it is more accurate to think of a ‘fractured digital sphere.’ This fragmented landscape reflects the diverse nature of multiple social groups and online cultures existing in parallel. Accordingly, Vicente and Dias-Trindade (2021) argue that sociotechnical imaginaries have come to ‘incorporate greater intellectual plasticity through contributions that highlight a contested nature of sociotechnical imaginaries and the crucial importance of studying the (in)visibility of the more localized origins and circulation of alternative imaginaries’ (711).
CoPs (Wenger, 1999) that gather on online platforms around shared interests are one important example of these alternative discursive formations. What makes them insightful cases for studying AI imaginaries outside of mainstream narratives is their direct engagement with technology for specific goals. CoPs are often ‘pioneer communities’ (Hepp, 2016), exhibiting self-selecting and territorial traits. Examples of CoPs include collaborative groups that come together based on a shared interest in the interplay between society and the Internet of Things (IoT), self-measuring technologies and datafication, and open-knowledge movements (Hepp, 2016: 921‒923). These communities are media-related, that is, ‘they are constituted by technical means of communication’ (Hepp, 2016), utilizing the networking and creative potentials that digital media afford them. Often, CoPs play a ‘forerunner role’ in the adoption of and experimentation with emerging technologies, including AI (924). Communities around AI-generated livestreams are an illustrative example of CoPs exploring the capabilities of GenAI for specific creative purposes. These communities collaborate not only in creating content in line with their sub-cultural interests; they also socialize around shared norms, develop common vocabularies, negotiate and determine values, and curate AI imaginaries (Hepp, 2024). AI livestream communities can be considered ‘cultural intermediaries’ (924), bringing together members with diverse backgrounds, such as developers, content creators, journalists, and viewers. These communities are likely dominated by a demographic ‒ well-educated white men ‒ common in other online emerging-technology spaces, which is important to consider when evaluating the diversity of viewpoints.
Importantly, CoP discourses are considered ‘alternative’ vis-à-vis mainstream discourses because they take place among users who do not necessarily have a broad public audience in mind and may not apply strategic agenda-setting and framing in their communication; the assumption is that they are sharing genuine viewpoints among more or less equal discourse participants. Their views on AI can align with mainstream narratives but may also diverge in crucial respects. What makes them alternative is not that they per se represent marginalised positions in technology discourses but the context in which their discourse is performed.
Based on the discussion above, the present study investigates AI discourses within the Discord community of

Figure 2. ‘Nothing, forever’ being transmitted through the streaming platform Twitch.
The analysis is guided by the following research questions (RQs): What is the dominant AI imaginary present and circulated in the Discord community? And what benefits and risks do Discord community members associate with AI?
The present study aims to understand the influence and dynamics of CoPs in the development and dissemination of AI sociotechnical imaginaries as well as how alternative discursive formations relate to mainstream AI discourses.
Methods and data
The empirical analysis utilised a mixed-methods research design that incorporated both computational and qualitative steps for a frame analysis of chat messages from
First, AI frames were identified through unsupervised algorithmic clustering of words, revealing prominent frames based on topics and relevant keywords within the dataset. The individual frames were then manually grouped to identify broader themes and to explore whether they indicate the presence of dominant AI imaginaries, which in turn may point to a prevalent master narrative. The computational step served to determine the thematic scope of the community discourse, to explore quantitative differences in thematic emphases based on the distribution of frames, and, subsequently, to scope sociotechnical imaginaries. The resulting discourse map then served for zooming in on different sociotechnical imaginaries as the focal point of the qualitative analysis.
To this end, a selective sample of 600 messages was drawn from the 18 clusters and qualitatively analyzed to illustrate how framing practices evoke key notions of each broader imaginary. A minimum of 30 representative texts was reviewed from each cluster, selected for thematic relevance, clarity of expression, and variation in sentiment, allowing for a deeper interpretation of framing practices across the dataset. Importantly, this step allowed for critically interpreting messages within their concrete context. As such, the methodological approach combines computational-quantitative distant reading with in-depth close reading, where insights from one step inform the other (Lindgren and Krutrök, 2023).
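The per-cluster quota behind this sampling step can be sketched in a few lines of Python. This is an illustrative sketch only: the study selected texts purposively (by thematic relevance, clarity, and sentiment variation), whereas the snippet below uses a random draw merely to show the minimum-of-30-per-cluster structure, and the cluster data are fabricated.

```python
import random

random.seed(7)

# Hypothetical cluster assignment: cluster id -> list of message ids.
# 18 clusters of 40 messages each stand in for the real frame clusters.
clusters = {cid: list(range(cid * 100, cid * 100 + 40)) for cid in range(18)}

def quota_sample(clusters, per_cluster=30):
    """Draw `per_cluster` texts from each cluster (or all texts, if fewer)."""
    return {
        cid: random.sample(msgs, min(per_cluster, len(msgs)))
        for cid, msgs in clusters.items()
    }

sample = quota_sample(clusters)
print(sum(len(v) for v in sample.values()))  # 18 clusters x 30 texts = 540
```

In the study itself, larger clusters contributed more than the minimum, bringing the total to roughly 600 texts.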
Data retrieval and pre-processing
Via the open-source tool ‘Discord Chat Exporter’ (Tyrrrz, 2023), all chat messages and meta information were retrieved from ‘AI-discussion,’ resulting in a dataset of 7,934 individual texts posted by 847 unique users. The dataset spans from February 2023 to December 2023, covering all available data at the time of extraction. Different Natural Language Processing (NLP) methods in Python 3 were used for the computational-quantitative analyses. First, the texts underwent several pre-processing steps, including removal of HTML tags and URLs, lowercasing, expansion of contracted verbs and abbreviations, and removal of emojis, digits, punctuation, and special characters, as well as lemmatization and tokenization using spaCy (Honnibal et al., 2023). Next, part-of-speech tagging was applied to distinguish nouns from adjectives and verbs. This was necessary for more efficient text analysis where only one word type was considered relevant (e.g., topics based on nouns).
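The pre-processing chain can be sketched as follows, using only the standard library. The actual pipeline used spaCy for lemmatization, tokenization, and part-of-speech tagging, which is omitted here, and the contraction map below is a small illustrative subset rather than the full list used in the study.

```python
import re

# Illustrative subset of a contraction-expansion map (assumption, not the
# study's actual list).
CONTRACTIONS = {"don't": "do not", "it's": "it is", "i'm": "i am", "can't": "cannot"}

def preprocess(text: str) -> list[str]:
    text = re.sub(r"<[^>]+>", " ", text)       # remove HTML tags
    text = re.sub(r"https?://\S+", " ", text)  # remove URLs
    text = text.lower()                        # lowercasing
    for short, full in CONTRACTIONS.items():   # expand contractions
        text = text.replace(short, full)
    text = re.sub(r"[^a-z\s]", " ", text)      # drop emojis, digits, punctuation
    return text.split()                        # whitespace tokenization

print(preprocess("It's great! See <b>this</b>: https://example.com 🚀 2023"))
# ['it', 'is', 'great', 'see', 'this']
```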
Automated text clustering and network analysis
To identify frames, texts were clustered with term frequency-inverse document frequency (TF-IDF) vectorization. Methodologically, frames are considered clusters of texts that share similar words. For example, a cluster of documents frequently sharing words such as ‘risk,’ ‘ethics,’ ‘privacy,’ and ‘bias’ could be considered as the ‘AI Risks’ frame (Nguyen, 2023). The TF-IDF algorithm calculates the importance of a word in a document by measuring its frequency in that document and normalizing it based on how frequently the term appears across all documents. Those sharing distinct words are then grouped together via clustering such as
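The logic of treating frames as clusters of texts with shared distinctive vocabulary can be illustrated with a toy TF-IDF computation. The study used a full vectorizer and a clustering algorithm over thousands of messages; the three documents below are invented, and cosine similarity stands in for the clustering criterion.

```python
import math
from collections import Counter

# Invented toy corpus: two 'AI risks'-style texts and one scripting text.
docs = [
    "ai risk ethics privacy bias",
    "privacy risk bias regulation",
    "prompt script episode character",
]

def tfidf(doc_tokens, corpus):
    """TF-IDF weights for one document: term frequency times log(N / df)."""
    n = len(corpus)
    tf = Counter(doc_tokens)
    out = {}
    for term, count in tf.items():
        df = sum(1 for d in corpus if term in d)  # document frequency
        out[term] = (count / len(doc_tokens)) * math.log(n / df)
    return out

corpus_tokens = [d.split() for d in docs]
vectors = [tfidf(toks, corpus_tokens) for toks in corpus_tokens]

def cosine(a, b):
    num = sum(a[t] * b.get(t, 0.0) for t in a)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

# The two 'risk' texts score as more similar to each other than to the
# scripting text, so they would be grouped into one frame.
print(cosine(vectors[0], vectors[1]) > cosine(vectors[0], vectors[2]))  # True
```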
Next, a similar TF-IDF adjacency matrix was created to visualize word co-occurrences in Gephi. This matrix contained all nouns in a bi-gram disposition (e.g., ‘AI creativity’). The matrix was loaded into the Gephi visualization software and then spatialized via the Fruchterman-Reingold layout algorithm. Modularity was employed to identify and color-code communities within the network by assessing the concentration of edges relative to a random distribution of links across nodes. Communities exhibiting a high degree of similarity were manually merged. The resulting clusters were then compared with the TF-IDF clusters, facilitating validation of the clustering and enabling a visual exploration of the relationships between frames.
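The co-occurrence step can be sketched as building a weighted bigram edge list of the kind Gephi imports (Source,Target,Weight columns). The study restricted the matrix to nouns via part-of-speech filtering and used TF-IDF weighting; the sketch below uses raw counts over two invented messages for brevity.

```python
from collections import Counter

# Invented example messages; in the study these were pre-processed chat texts.
messages = [
    "ai creativity ai art",
    "game engine ai art",
]

edges = Counter()
for msg in messages:
    tokens = msg.split()
    for a, b in zip(tokens, tokens[1:]):  # adjacent word pairs (bigrams)
        edges[(a, b)] += 1

# Emit a CSV-style edge list that Gephi can import as a weighted graph.
for (a, b), w in sorted(edges.items()):
    print(f"{a},{b},{w}")  # first line: ai,art,2
```

Once imported, Gephi's layout (e.g., Fruchterman-Reingold) and modularity statistics operate on exactly this kind of weighted edge list.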
Close readings
Close reading is crucial to enrich computational analysis (Franzini et al., 2015). To this end, at least 30 texts per cluster (depending on cluster size) were manually analysed. This exploration of 600 full texts supplemented the labeling of frames next to the bag-of-words representation from the computational analysis. Simultaneously, it helped with further analysis of framing practices and sentiment towards AI. It is important to mention that the algorithmic text clustering does not account for the context in which words are frequently mentioned; the results merely provide orientation for the researchers to discern what issues and topics dominated the discussion and can be interpreted as frames. With the help of close readings, it becomes possible to characterize and reflect on each of the frames identified with the computational distant reading.
It is important to note that, while very insightful for an exploration of AI discourses and imaginaries within a specific CoP, the case of
Findings
Distant Reading: A tech-enthusiastic imaginary of AI as a tool for creative processes
A single sociotechnical imaginary presenting AI from an instrumentalist and mostly tech-enthusiastic viewpoint dominates the community discourse. This becomes apparent when investigating more closely the different themes that emerged from clustering the individual frames. The broader AI imaginary is nuanced within a tool-centric interpretative horizon but connects only to a limited extent to cultural and social questions.
The text clustering and manual inspection yielded 18 frames (Appendix 1), most of which centre on content creation and scripting. 77.4% of all user messages evoke the frame Interactive Narratives/Scripting about content-related decisions, followed at a considerable distance by Collaborative Art (3.2%), Script Feedback/Meta-Talk (3.0%), and GenAI Training (2.8%). Fewer discussions focus on reflections about Scriptwriting, AI-Generated Art, AI Livestream, and Technical Prompting (each accounting for between 1% and 2.6% of all messages), with only a few messages discussing AI for Media Production more broadly and Sensory and Expression (<1%).
The different frames were further grouped based on thematic similarities (Appendix 2), underscoring that most of the community discourse construes and expands a sociotechnical imaginary that mostly portrays
The strong focus on AI as a tool with creative potentials and an overall more technical perspective among users is further supported by semantic network visualizations of frequently co-occurring words and bigrams. This approach allowed for exploring framings that are commonly associated with one another, as well as the identification of issues that were not captured by the text clustering method.
The general network graph (Figure 3) indicates a strong presence of four key areas of debate. Two of these areas focus on technical discussions related to AI, such as machine learning, large language models (LLMs), and training data, as well as the use of AI in livestream production, including voice cloning, the video game engine used in the stream, and scriptwriting (Figure 4(a)). The other two domains are centered around discussions on content generation, streaming, scriptwriting, episode commentaries, and debates on art, AI implementation, and work (Figure 4(b)).

Figure 3. AI discussion network visualized in Gephi. Every node in the network represents word associations (bigrams and trigrams). Each cluster of words represents relevant discussions for the channel.

Figure 4. Relevant domains of discourse. (a) (Left) Represents technical and AI-related discussions about software, characteristics, and domain-specific knowledge. (b) (Right) Portrays various discussions about art, content generation, streaming, and scripts.
Altogether, these four domains of discussion support the findings from the text clustering and thematic grouping of frames. First, Figure 4(a) reveals the presence of the
Zooming in on specific nodes allows for further exploration of the impact and implementation of AI in everyday practices, supporting the idea of AI mediatisation. For example, the node ‘chatgpt write’ (Figure 5(a)) seems mostly connected to technical aspects of AI. However, words indicative of ethical issues and challenges, such as privacy, biases, authorship, copyright, or creativity, are noticeably absent. The partial network around the node ‘game engine’ (Figure 5(b)) implies that users perceive AI in the context of content production and the role of different trending technological developments (e.g., stable diffusion).

Figure 5. Word associations selected in Gephi. The left panel (a) shows every node connected to the bigram ‘Chatgpt write’. The right panel (b) shows relevant nodes connected to the bigram ‘Game Engine’.
Another prominent node based on frequency of occurrence and centrality is ‘ai generate’ (Figure 6). It appears primarily associated with terms about human-AI relationships, such as ‘people ai,’ ‘procedurally generate,’ ‘uncanny valley,’ ‘create ai,’ ‘generate model,’ ‘imagine ai,’ ‘love ai,’ or ‘art ai.’ These discussions demonstrate an understanding of the processes involved in AI-generated content but also of the potential misuse of AI in video, such as in ‘deep fakes.’ This is one of the few instances where a potential risk of AI is addressed (albeit not necessarily from a critical angle).

Figure 6. Word associations selected in Gephi. Every node connected to the bigram ‘ai generate’.
Accordingly, inspecting the network graphs prompts exploration of what issues and topics might be missing. For instance, there is a noticeable scarcity of word associations related to governance (e.g., regulation, governance, bill, law, policy), which suggests that the community may show limited interest in these areas.
In what follows, we further explore the four themes shaping the sociotechnical imaginary qualitatively in two subsections, based on their shared emphases: primarily on technological aspects, on the one hand, and on user-centric as well as social implications, on the other.
Close Reading of themes
Most discussions centre on AI's capabilities for audiovisual content creation and frequently link to technical questions, concerning both current possibilities and limitations and future potentials. In the latter respect, the community discourse reflects techno-optimist notions prevalent in mainstream discourses. Given the context of the Discord CoP, this is hardly surprising, as it arguably attracts users with an interest in technology, including individuals with a hands-on professional background in tech-related sectors. Indeed, the CoP is apparently considered a pool for recruiting AI expertise (as illustrated in one post: ‘Hi all - any devs with openai experience that have bandwidth and are looking for work? reply here and ill msg ya - cheers!’). Generally, the boundaries between the four larger themes are often fuzzy, as discussions about steering the narrative can quickly pivot to questions about the underlying technology and vice versa.
AI as a tool for creative practices and technical foundations
The largest theme Runs in peanut butter (note: prompt for the AI) we should utilize that kind of hyperbole for their characters i didnt watch much seinfeld but i saw enough to have loved it, was that plot device really that common because my memory is still quite fuzzy the animations being goofy and weird and random adds an important element to the experience, because your brain also fills in the blanks as to what the f they are doing visually as they talk
Creative goals and technical factors lead here to the emergence of a techno-artistic discourse where AI capabilities and limitations take centre stage. This includes meta-discussions about prompting, where users reflect on and debate strategies for writing effective prompts/prompting practices and link it to current technological capabilities: Without reading further, this sounds like someone “discovered” prompting. You can get ChatGPT to do the same thing if you tell it to respond as a given celebrity. The increase is nice though i really think though, if you could get some feedback from the chat, and try to use it to help “guide” the experience. well i think it would just allow for the system to continually keep improving upon itself. instead of remaining in a static state
There is generally a strong focus on various technical issues, both conceptually about AI more generally and specifically for concrete application purposes: We've basically modeled neural networks after the most basic algorithmic function of neurons, to be trained to perform a task with a fuzzy idea of “correct”. What do you mean? We've had Hatsune Miku a while. To be honest, if you want to create an AI popstar you can do so right now. Chatbot for text gen, NLP TTS or TTS synthesizer for voice generation dubbing, V-Tubing Tech for 3D Modeling, Game Engine Software for stage & putting it all together.
While this is a dominant characteristic of the general discourse, the technical outlook is particularly pronounced in the theme […]you will need to do modeling through Pixiv's V-roid, export to general 3D file then import to blender/unreal/unity to be able to do high resolution cross platform video output and sophisticated music videos[…]
Such comments indicate that certain sub-discussions require an advanced level of knowledge about AI and its underlying principles, which is characteristic of tech-centric pioneer communities. This includes critical voices that seem to bemoan hype-centric jargon in AI discussions. For example, some users ask others to ‘stop using big words to say that ai is a tool’ and argue that ‘ai is a tool and that is all.’ This extends to emphasizing the importance of human creativity but also of technical expertise and know-how in using AI: I think one of the key things that people new to this tech need to understand is that it needs hella human supervision and design. chiefly, you need to understand procedural generation. And if you want to make something that really makes an impact, an understanding of how people […] interpret narratives.
Importantly, AI is positioned as a means of achieving very specific and simple goals. While the technology is deemed useful and innovative, its perceived limitations are pointed out as well: AI is best used as a toy. I am going to write funny stories or make funny audio or generate funny images like basically just for entertainment or shitpost purposes […] It's not very good at anything else yet.
Despite this generally more sober outlook and a focus on concrete application, there is a line of reasoning that places emphasis on the optimization and eventual perfection of AI via technological progress, more data, less regulation, and sufficient funding: After all, a model can only be as good as the data it's trained on (I think). Regardless of investment or not, but in this case investment means larger models can be constructed because training them is extremely resource intensive.
Relatedly, others echo narratives about the inevitability of super-human AI and motifs of the so-called singularity as promoted by influential futurists in popular business-oriented tech discourses: Once AI supersedes the intelligence of all human beings combined, access to the AI superintelligence will become the most valuable resource.
Taken together, the exploration of the two closely intertwined dominant themes surfaces an AI imaginary that has a strong utilitarian outlook, appears overall enthusiastic about the technology, and positions it to some extent as an ‘insider topic,’ demanding sufficient expertise to understand it.
Human–AI interaction and ethics, labor and societal impact
The theme Human–AI Interaction captures how users relate to AI as an interactive counterpart and companion, as the following comments illustrate: Us KNOWING that the ai is the main character is what makes it entertaining because we can cheer it on, laugh at its problems, celebrate it when it gives us output that makes sense to us, etc. Do you think we all have soon an A.I. buddy as in something like a combination of best parts of Bing bot/ChatGPT/character.ai/digital people (by soul machines, URL above). Maybe not this digital people part exactly, since I guess openai/google etc are also working on AI generated video. I mean we already have the bots for use, but we just need a little refinements […] I would have given it (or them) some personalities and they'd proactively talk to me without me asking if I wanted to etc, and I could read their facial expression too if I wanted them to have varying mood.
More explicitly, human characteristics are attributed to AI in phrases such as ‘a computer's […]’, although other users push back against such anthropomorphizing: To be honest, the AI isn't sentient enough to get “Canceled”. It's just running lines of code that make it speak. It probably made an oopsie because it read a random line of code off and without the usual moderation in place it spit it out. The AI doesn't know what offensive means.
Overall, users tend to recognize AI as a valuable resource for generating creative output. However, some seem to hold an ambiguous view of AI's capabilities and potential future development, pointing to incremental progress but also to broader implications for its perceived value: If the AI got too realistic or started making too much sense i feel like it'd lose a lot of the charm it has currently I want to see the characters legitimately evolve and become smarter and make sense it would be so cool.
A smaller number of comments link community interests and AI practices to broader societal and ethical issues, albeit in a limited way. These form the theme Ethics, Labor and Societal Impact, capturing more critical views of AI's cultural and social implications. AI is seen as a transformative agent, with a shift toward mediatization, in which technology integrates into daily tasks and shapes practices across domains. Even here, discussions tend to downplay ethical concerns and ignore critical perspectives, while emphasizing AI's potential and sometimes exaggerating its capabilities. Recurring narratives include AI and art, job replacement, AI dominance, and the future of employment, often framed in techno-determinist terms: We're not even sure what human intelligence and consciousness is in discrete terms or whether it's strictly superior to AI. Do you all also feel advanced life on planet earth will become much more endangered due to AI this century?
Yet, at the same time, some users appear cautious about these developments and point to technological hurdles that need to be overcome first to unlock AI's full potential, especially regarding memory as both a computational resource and an ‘ability’: The biggest thing is that we need AI to have longer and more robust memories. And to understand how to sort what's important to remember or not itself so it can optimize that process without a human guiding it.
Others locate challenges on a social dimension by pointing to the ways tech businesses work and how that affects the creative sectors, but they also see concrete empowering potential for a wider user base: The problem is the economic system itself, not whether the artist is assisted by AI systems. The thing I like about AI is that it allows the average person with an idea to do projects that would take way too much time and specialized skill to ever complete without it.
Despite a shared enthusiasm for the technology, not all users are fond of the corporatization of these technologies. This resonates with ideals in the broader open source movement, where one main principle is ‘peer production,’ meaning that users support the idea of making source code, blueprints, and documentation freely accessible to anyone: I mean, a while back, Google wrote a big long thing about not wanting to grant open access to their AI art bot because of ethical concerns. Now they're out in the wild and we're all like “oops, jobs gone lol”.
Importantly, several frames associated with this imaginary focus on technical aspects of AI, such as training data and model development. They rarely address issues regarding developer ethics and critical perspectives on AI adoption and misuse. Similarly, issues such as copyright, environmental impact, monetization, and platform economies are discussed to a lesser extent. Although discussions about AI regulation are rare, some community members do address the issue, predicting that AI will be regulated in a reactive rather than preventive manner: All the laws and regulations will be preventative, addressing terrible things that have already happened well after they did.
Generally, AI is seen as ubiquitous, underlining its growing role and its expanding applications across various domains (e.g., content creation, production, automation, entertainment, and scriptwriting). It is noticeable, however, that the use of GenAI triggers a critical discussion about art and what its use implies for the future of creative content production: Expect a lot of messy legal battles over AI generated content in the future. the ruling that AI-generated art is not subject to copyright will be challenged multiple times. I hope it holds up, but we'll see how things go. There is a reason why nobody is hiring people doing AI art. They all look the same.
Discussion
The present study focused on the main research question:
Concerning SQ1
Some community members perceive AI as a companion, speculating about its capacity for independent agency. Similarly, themes of human enhancement and of continuous AI progress toward ever-growing potential and autonomy emerged as well. Even community members who appear more hype-critical do not necessarily contest the idea of ongoing AI advancement. The Discord CoP is overall tech-enthusiastic, and critical sentiments are primarily directed at current limitations, exaggerations, and the role of companies, rather than denying the potential of AI itself. The community discourse differs from mainstream discourses in its focus on specific AI applications for a concrete purpose, but it shares motifs and rhetoric that underline the technology's innovative potentials, which can be unlocked ‘if done in the right way’. In this respect, it shows similarities with narratives prevalent in tech business and governmental policy discourses, which frame AI as a transformative agent and catalyst expected to drive change across various societal domains (Bareis and Katzenbach, 2022), with profound implications for work, education, art, and social dynamics.
The prevalent AI imaginary in the Discord CoP further resonates with other research into mainstream media. The notion of
Relatedly, regarding SQ2
Noticeably, AI risks such as privacy intrusion and algorithmic biases are rarely, if at all, raised in the Discord CoP. Critical discussions aim for limitations and misunderstandings of the technology but do not center on ethical issues raised in mainstream discourses. This points to differences especially to critical stances towards AI in news media and politics that focus on calls for more governance (Nguyen and Hekman, 2022a, 2022b). The
In conclusion, it can be stated that the CoP builds its AI imaginary around the specific purpose for which it formed and for which it applies the technology. The backgrounds and interests of users play a similarly important role. In a sense, for CoPs, the one depends on the other: people with certain backgrounds flock together for a certain purpose. Creative-artistic views are of visible importance, but they are outweighed by technology-focused perspectives; where to draw the line is difficult, as creatives often care greatly about understanding and mastering their tools, especially in the context of digital media. Still, both creative-artistic and technical considerations largely shape the AI imaginary, which displays tech-determinist and even utopian notions. The findings demonstrate that online communities do not always function as ‘counter-public’ spaces where unique sociotechnical imaginaries emerge and disseminate. This resonates with research by Zeng et al. (2022), who observe that AI discourses in social media are not fundamentally different from mainstream discourses in news, business, and politics.
There are critical differences pertaining to the relative invisibility of ethical issues and critical reflections on risks, while commercial-economic gains are also not prominently featured. The CoP is ‘alternative’ in the sense that it construes AI by starting from the context of use, linking experiences and evaluations bottom-up to more abstract and fundamental implications of emerging technology for society. It is thus an insightful example of how practices and imaginaries feed into each other. However, it also illustrates how a narrow focus on a relatively limited application purpose poses the risk of ignoring potentially harmful effects of AI in unrelated domains, as immediate experiences seem to directly inform grander visions about AI's overall beneficial impact on society. Arguably, the shaping of the AI imaginary is determined, and potentially confined, by the purpose and related inclusion processes that shape the CoP in the first place: the people who join the community for a particular goal. This inevitably entails sociotechnical biases.
The
Conclusion
By zooming in on a CoP that actively and directly engages with novel forms of GenAI for content creation, the present study critically explored how a mix of pioneer enthusiasm and passion for technology, but also tech-deterministic sentiments and a disregard of ethical challenges, seems to dominate views on AI. While differences in focus are evident within the broader dominant AI imaginary, the general tenor within the community strongly resonates with other dominant tech-optimistic narratives observed in media, tech business, and politics. The findings imply that the concrete context of use determines to what extent AI's role in society and culture is addressed and, also, to what extent mainstream narratives are adopted, ignored, or contested.
There are several limitations to the present analysis. First, the corpus is limited to a single CoP around a very specific topic of interest, sampled from only one social media platform. The selected Discord community represents a highly specialized environment, which may overrepresent certain opinions on AI within the digital sphere. Hence, a broader sample of more diverse CoPs engaging with AI for creative processes should be investigated in future research to probe whether and where more diverse viewpoints and potentially contesting AI imaginaries exist. Second, the analysis is limited to an English-speaking community; comparative studies into different languages and cultural spaces may unearth similarities and differences in AI imaginaries. Third, the computational component is limited to relatively simple forms of word clustering, which were sufficient for the exploratory research goals in focus. More advanced methodologies may include transformer-based topic modeling and word embeddings to add depth to the quantitative text analysis.
To build on these findings, a comprehensive comparative study analyzing multiple online communities could offer a more nuanced understanding of how AI discourse is framed across diverse spaces. Such research could reveal complexities and differences that are obscured by the case-study nature of this research.
