Abstract
Introduction
With the rapid development of artificial intelligence (AI) technology, AI now plays a central role in social, economic, and political spheres worldwide. The global discourse and sociotechnical imaginaries surrounding AI have become significant across societies, particularly visible on social media platforms (Li et al., 2024). On different social media platforms such as Weibo and X (formerly Twitter), user discussions on AI reveal distinct sociotechnical imaginaries shaped by cultural context and platform-specific dynamics (Hine & Floridi, 2024; Nguyen & Hekman, 2022).
Sociotechnical imaginaries originate from the initial developmental trends of technology and profoundly influence subsequent technological trajectories (Wagner & Gałuszka, 2020). Social media platforms play a critical role in shaping these imaginaries by serving as primary venues for public discourse (Koliba et al., 2011). Different platforms have distinct user demographics, cultural contexts, and community norms (Park, 2013), which significantly influence how users collectively imagine and debate emerging technologies such as AI. For instance, a review of the internet’s evolution highlights a stark contrast in development philosophies reflected across social media user communities. In the U.S., the internet grew through market-driven innovation, heavily dependent on free-market principles and competition, with private enterprises and venture capital serving as the primary drivers of technological advancement (Curran, 2013). In China, by contrast, the internet’s development was state-led (Yang, 2012), with government policies and strategic plans directing the growth of the internet and high-tech industries (Hong & Harwit, 2020).
The years 2022–2025 represent a crucial period of the accelerated development of AI, during which time several key events and technological advancements have centered public and media attention on AI technologies. In 2022, China released the “Guiding Opinions on Accelerating Scene Innovation to Promote High-Quality Development through High-Level AI Application,” clearly setting forth policies to integrate AI applications with economic transformation. The same year, OpenAI’s launch of ChatGPT, a powerful tool for natural language interaction, attracted worldwide attention. In 2023, OpenAI unveiled the multimodal pre-trained model, GPT-4, which significantly enhanced AI’s capabilities in multimodal understanding and generation. By 2024, OpenAI’s introduction of Sora, a multimodal video generation model, marked the maturation of multimodal generative technologies. Concurrently, leading Chinese technology firms like Baidu, Alibaba, and Huawei released their own large-scale models, sparking extensive societal discourse. In early 2025, when the results of DeepSeek-V3 sparked worldwide interest, there was a clear sense of alarm across the U.S. and Europe: some referred to it as the “Sputnik moment” in AI, highlighting China’s AI breakthroughs that unsettled the West. This metaphor signals a shift in the geopolitical technological imaginary—framing AI as the new frontier of great power competition.
As representative social media platforms in China and the West, Sina Weibo and X share similar functions and levels of influence, yet they are rooted in vastly different sociocultural and political environments (Han et al., 2016). This comparability enables the present study to examine how cultural and institutional factors shape users’ sociotechnical imaginaries and technological discourses across platforms (Bolsover & Howard, 2019). Theoretically, differing cultural values—such as collectivism versus individualism—and distinct sociohistorical trajectories give rise to divergent expressive styles. Empirically, studies have found that Weibo users tend to engage more positively and align with mainstream narratives, while Twitter users are more likely to employ humor, satire, and openly critique official viewpoints (Kim et al., 2021). Therefore, a comparative analysis of Weibo and X is both feasible and meaningful for cross-cultural research, and this approach has been widely adopted in recent studies on international communication and public discourse.
This paper aims to explore whether and how sociotechnical imaginaries about AI differ across social media platforms situated in distinct sociocultural contexts, focusing specifically on user discourses from Weibo and X. The contribution of this study is threefold. Firstly, through a computational analysis of widespread and influential user discourses on social media, this study provides empirical evidence for the discussion of sociotechnical imaginaries. Secondly, through a comparative analysis of user-generated content on Weibo and X, it examines and extends the theory of sociotechnical imaginaries from a cross-cultural, platform-based perspective. Thirdly, it examines why the sociotechnical imaginaries of AI articulated by users on these two platforms differ, revealing a subtle and profound interplay between imagination, technology, and society.
Literature review
Interaction between technology and society: Conceptual interpretations of social imaginaries, technological imaginaries, and sociotechnical imaginaries
Before delving into the discussion on sociotechnical imaginaries, it is essential to clarify two main relevant concepts: “social imaginaries” and “technological imaginaries.” Social imaginaries are a broader concept. As Charles Taylor (2004) argues in Modern Social Imaginaries, social imaginaries refer to the shared values, norms, and expectations held by members of a society, which shape the social order and confer societal legitimacy. Taylor posits that social imaginaries are not merely theoretical ideas but collective beliefs reflected in everyday practices. Modern concepts such as democracy, market economy, and national identity are all manifestations of social imaginaries. Anderson’s (2005) notion of “imagined communities” is also an important case of social imaginaries, explaining how national identity is constructed through imagination. Castoriadis (1987) discusses how social imaginaries shape institutions and forms of social organization. Jasanoff and Kim (2013) further develop this concept, arguing that the role of social imaginaries in technological governance and policy-making cannot be overlooked. For instance, the promotion of clean energy is not solely based on technological possibilities but also on how society imagines a sustainable future.
Technological imaginaries primarily focus on how people understand, imagine, and anticipate the development of technology and its impact. This concept emphasizes that technology is not merely a collection of physical devices or algorithms but also encompasses the collective vision of the future of technology (Jasanoff, 2015). In Dreamscapes of Modernity, Sheila Jasanoff discusses technological imaginaries as cultural products that not only pertain to the functions of technology but also concern how society constructs the meaning of technology (Jasanoff & Kim, 2019).
Technological imaginaries reflect public imaginations of AI, such as expectations about superintelligence or fears of automation-induced unemployment, influencing the trajectory of technological development through policy, market forces, and academic research (Suchman, 2007). Although these imaginations are shaped by history, culture, and science fiction archetypes (e.g., the dichotomy of “technological savior” vs. “apocalyptic threat”), they are ultimately driven by socio-economic forces. More specifically, Mazzucato (2013) demonstrates how innovation narratives artificially construct technological economic value; Cave et al. (2018, 2020) reveal how media, policy, and popular culture actively mold public perceptions through divergent narrative strategies, such as inducing public panic or excessive optimism. This indicates that technological imaginaries are inherently socially constructed, yet compared to sociotechnical imaginaries, their analyses of societal agency remain instrumental rather than substantive.
Recognizing the social attributes of the imaginary of technology, sociotechnical imaginaries emphasize that technological imaginaries are not merely individual or collective fantasies but are closely tied to social institutions, policies, and cultural values (Jasanoff, 2015). Sociotechnical imaginaries represent institutionalized technological visions that influence the trajectory of scientific and technological development, as well as the governance models of states and societies. Jasanoff and Kim (2009) proposed the concept, showing how different countries’ approaches to nuclear energy governance are shaped by their respective sociotechnical imaginaries. Jasanoff (2015) further argues that sociotechnical imaginaries are co-created by states, corporations, and societies, influencing not only technological practices but also shaping laws, ethics, and public policies. In the AI field, Cave et al. (2019) point out that the sociotechnical imaginaries of AI are deeply influenced by sociocultural and historical narratives, and the policy-making regarding AI in different countries and societies is often guided by their respective sociotechnical imaginaries. Hajer and Pelzer (2018) note that sociotechnical imaginaries are not merely the products of the state or elites; social movements and civic organizations can also shape these visions. For example, social movements opposing 5G reflect a sociotechnical imaginary distinct from that of the official narrative. Pfotenhauer and Jasanoff (2017) discuss how, in the context of globalization, sociotechnical imaginaries circulate transnationally, influencing technological governance models in different countries.
Conceptually, social imaginaries present the broadest category, encompassing the fundamental understanding of the world by society members. Technological imaginaries are a subset of this, focusing on technology’s development and societal expectations. Sociotechnical imaginaries represent the intersection of both, addressing the conceptualization of technology and the interaction between technology, policies, and social norms. Theoretically, sociotechnical imaginaries bridge the analytical domains of technological imaginaries (focused on material and symbolic dimensions of innovation) and social imaginaries (grounded in collective norms and institutional practices). Jasanoff’s co-production theory further illuminates their operational logic: technological development is inextricably embedded within social institutions and power hierarchies, while social order is simultaneously reconstituted through technological interventions. For example, ethical debates surrounding AI transcend technical concerns over algorithms; they embody societal struggles to redefine the boundaries of human agency and moral responsibility. This bidirectional co-constitution underscores that technologies are not neutral tools but value-laden practices that reflect and reinforce societal priorities.
Furthermore, sociotechnical imaginaries possess a significant political dimension, as “imagination” functions as a form of political capital, shaping not only the design of technologies but also public expenditure and the inclusion or exclusion of citizens. Specifically, imaginaries of future technologies guide scientific research and engineering practices and determine the prioritization of technological development. The design and implementation of technologies often embed specific social values and political intentions, reflecting the power structures underlying technological designs. Annette Markham points out that imaginaries of technological futures are always closely connected to contemporary social and economic structures and cannot escape the constraints of existing power structures (Markham, 2013). This phenomenon is especially evident in emerging fields such as AI, where imaginaries of future technologies not only direct scientific practice but also shape societal expectations and design approaches.
In summary, the concept of “sociotechnical imaginaries” employed in this study goes beyond simplistic frameworks of technological determinism or social constructivism, instead foregrounding the co-evolutionary interplay between technology and society. By synthesizing existing scholarship, three core attributes of this concept emerge: collectivity, normativity, and dynamic contestation. Firstly, sociotechnical imaginaries are inherently collective, representing shared visions institutionalized through policy frameworks, media narratives, and public discourse. Secondly, they possess a normative dimension that extends beyond mere descriptions of technological possibilities; they actively define “desirable futures” through power-laden negotiations, shaping what societies prioritize and legitimize. Thirdly, these imaginaries are dynamically contested, as nations, communities, and individuals negotiate competing visions of technological pathways—a process deeply entangled with clashes between cultural values and political-economic agendas.
Methodologically, the collective and discursive nature of sociotechnical imaginaries legitimizes empirical investigations into public deliberation, such as analyses of social media texts (e.g., Weibo and Twitter posts). While fragmented narratives on these platforms may appear micro-level, they highlight both points of conflict and emerging consensus around macro-level imaginaries. A tweet about AI-induced unemployment, for instance, might encapsulate conflicts between labor rights and techno-utopian ideologies, while a viral Weibo post on AI-driven deepfakes could signal underlying tensions between digital sovereignty and global governance. By dissecting such texts, researchers can trace how sociotechnical imaginaries manifest in public discourse and permeate policymaking and technological design through everyday language. This “micro-to-macro” analytical approach serves as a critical interface for translating the abstract framework of sociotechnical imaginaries into grounded empirical inquiry.
Discussions on technology in social media
Social media plays a pivotal role in modern society by providing an open platform for discussions and imaginations about emerging technologies. Research indicates that it serves not only as a tool for information dissemination but also as an incubator for sociotechnical imaginaries, allowing technological imaginations to circulate and be debated widely. Brossard and Scheufele (2013) highlight that social media significantly influences science communication, especially when it comes to emerging technologies such as AI and big data. Through shaping public discussions and public opinion, social media helps to mold public perceptions of technology.
The platform attributes of social media allow people from diverse social groups, cultural backgrounds, and professional fields to participate in discussions about new technologies. This engagement fosters public understanding of technology and shapes the social significance and future imaginings of technological advancements. Williams et al. (2015) found that the interactivity and global reach of social media make it a critical space for public participation in scientific discussions, often accompanied by exchanges of emotions and values, which influence public attitudes toward technology.
Through social media, the decentralized nature of technological information dissemination promotes a more diversified sociotechnical imaginary. For example, discussions about AI cover a wide array of topics, from ethical concerns and social implications to the impact of AI on employment and the economy. These conversations, frequently discussed on platforms such as Twitter and Reddit, facilitate the construction of multi-dimensional sociotechnical imaginaries by the public, experts, and media alike. In this context, social media functions not only as a platform for disseminating technological information but also as a space where public opinion and technology policy intersect. According to Jasanoff (2015), scientific and technological discussions on social media often influence policymakers, as social media opens up technological controversies, making it easier for government and tech corporations to respond to public opinion.
Moreover, social media provides a globalized stage for technological imaginaries, enabling discussions about technology to transcend cultural and national boundaries (Markham, 2013). This decentralized nature of technological discourse on social media further enriches sociotechnical imaginaries, facilitating diverse and cross-cultural perspectives on emerging technologies (La Cava, Mandaglio, & Tagarelli, 2024).
Imagining “new technology” from a cross-cultural perspective
The imagination of new technology is always embedded in specific cultural contexts and social settings. The construction process of sociotechnical imaginaries exhibits significant cross-cultural differences. Existing research indicates that the shaping of technological visions is by no means a unidirectional projection of technological determinism but rather a dynamic negotiation deeply rooted in national institutions, cultural traditions, and historical experiences (Jasanoff & Kim, 2013).
Currently, research on technological imaginaries is undergoing a paradigm shift from a single cultural context to cross-cultural comparisons. Early studies primarily focused on the unidimensional construction of technological imaginaries (Rudek, 2022), treating technology as universally applicable while neglecting the role of cultural foundations in shaping imaginaries. This cognitive framework of “technological universalism” is particularly evident in recent technology research, such as the metaverse (Hennig-Thurau et al., 2023) and big data (Yaqoob et al., 2016), where researchers often assume that technological imaginaries possess cross-cultural homogeneity.
The turning point emerged in the global controversies surrounding AI ethics (Hagendorff, 2020). Cave and his team focused on the issue of Western-centric cultural hegemony embedded in AI technology, arguing that AI development and application have long been dominated by Europe and North America. This dominance has led to the monopolization of technical standards, ethical frameworks, and even aesthetic preferences by Western cultural values (Cave & Dihal, 2020). Such value-driven distinctions have prompted the academic community to re-examine the cultural embeddedness of technological imaginaries. Hoff and Bashir’s model of trust in technology demonstrates that collectivist cultures are more likely to establish “institutional trust” in technological governance, whereas individualist cultures rely on “process trust” (Hoff & Bashir, 2015).
This distinction directly shapes the sociotechnical imaginaries differently across different societies. Through computational topic modeling, Wang et al. analyzed mainstream media reports from China, the UK, and India between 2011 and 2022, revealing national divergences in technological imaginaries: British media emphasizes a liberal narrative of AI ethics and individual rights protection; China highlights a collectivist vision where technology serves national strategic goals; India presents a tension between technological empowerment and its impact on the labor market (Wang et al., 2023). This finding validates the “institutionalization” of sociotechnical imaginaries, wherein technological visions are continuously reaffirmed and reshaped through policy discourse, media framing, and public debate. This process, in turn, shapes the priorities and legitimacy boundaries of technological development (Cave & ÓhÉigeartaigh, 2018).
The growing body of cross-cultural research further reveals the underlying cultural logic of sociotechnical imaginaries—AI is not merely a technological product but also a medium for cultural projection. Different societies “imagine” AI in ways that reflect their own understandings of humanity, authority, ethics, and the future (Cave & Dihal, 2023). For example, while Western societies often perceive AI as a “human replacement,” East Asian cultures are more inclined to view it as a “collaborative partner.” This divergence reflects the distinct responses of individualist and collectivist cultures to technological disruption. It also aligns with Hofstede’s cultural dimensions theory, particularly in terms of cross-national variations in power distance and long-term orientation (Hofstede, 2011).
Existing research suggests that sociotechnical imaginaries, as concrete expressions of collective cognition, are always closely linked to specific social contexts, cultural traditions, and regional characteristics (Jasanoff & Kim, 2009). The field of cross-cultural studies has confirmed systematic differences between the East and the West in areas such as technological ethics (Floridi et al., 2018), human–machine relationship definitions (Gao & Feng, 2023), and privacy perceptions. These differences are particularly pronounced on culturally specific platforms like social media.
Overall, while existing research has made significant progress in revealing the symbiotic relationship between sociotechnical imaginaries and cultural contexts, several research gaps remain to be addressed. Firstly, with the rapid growth of AI, the cultural diversity and dynamic evolution of sociotechnical imaginaries around AI remain insufficiently understood and require more up-to-date and representative empirical examination. Secondly, how sociotechnical imaginaries differ across cultural contexts remains underexplored in the case of AI, an emerging and controversial technology. Thirdly, the socio-cultural factors behind sociotechnical imaginaries, and especially behind their cross-cultural differences, have yet to be systematically investigated.
Research questions
To fill the research gaps suggested above, this study addresses the overarching question of whether differences exist in sociotechnical imaginaries about AI across distinct cultural contexts and, if so, why such differences emerge. To observe possible differences, the current study conducts a computational and comparative analysis of public discussions around AI on two culturally distinct social media platforms, Weibo and X. Specifically, the analysis follows a three-tier framework comprising theme, narrative, and sentiment analysis, capturing people’s concerns, opinions, and sentiments toward AI.
In theme analysis, LDA topic modeling is employed to identify the core themes in AI-related discussions on social media, mapping the thematic landscape of technological imaginaries. This corresponds to Entman’s (1993) framing function of “problem definition,” which highlights how media define issues within public discourse. RQ1: What are the core topics in AI-related discussions on Weibo and X, respectively?
In narrative analysis, Gephi-based co-occurrence network analysis is used to examine the thematic co-occurrence patterns on social media—specifically, which topics are mentioned together and how they form a narrative network. According to Fisher’s Narrative Paradigm Theory, individuals make sense of the world through storytelling, and different narrative structures influence public perceptions and attitudes toward technology (Fisher, 1984). RQ2: How are different AI-related topics interconnected through narrative structures on Weibo and X?
In sentiment analysis, UIE-based sentiment analysis is applied to decode public sentiment regarding AI. Affective Mobilization Theory posits that technological perceptions are shaped not only by factual information but also by sentiment-driven engagement (Marcus et al., 2000; Wahl-Jorgensen, 2019). In different socio-cultural contexts, the acceptance, fear, or anticipation of new technologies is often propagated through sentiment-oriented discourse on social media. RQ3: How is public sentiment toward AI reflected in user discourses on Weibo and X?
Research design
Sample selection
Considering representativeness and feasibility, this study selects Sina Weibo and X (formerly Twitter) as lenses through which to observe sociotechnical imaginaries about AI in China and the U.S., respectively, covering the period from 2022 through the first quarter of 2025.
Sina Weibo is one of the most popular social media platforms in China, with nearly 600 million monthly active users across various regions, age groups, and professional backgrounds, capturing a wide spectrum of public engagement with AI. X is widely used in the U.S., covers diverse social groups, and plays a significant role in discussions around social issues, including emerging technologies such as AI.
Although X has a global presence, statistical data indicates a significant geographical concentration pattern (i.e., U.S.-centered) in its user distribution. According to Statista (2024), the U.S. remains the platform’s largest single-country market, with 106.2 million registered users as of April 2024—1.54 times the size of the second-ranked Japanese market and 4.25 times that of the third-ranked Indian market. The differences in the geographical distribution of users make X and Weibo culturally different (Kreps et al., 2022).
To further ensure the cultural distinctiveness of the two samples, this study anchors the X corpus in English-language discourse through a language-based data purification strategy, using LangDetect to filter out non-geotagged posts in non-English languages, such as Japanese and Hindi. After processing, English-language tweets account for 73.2% of the collected data.
Despite their cultural differences, Weibo and X share significant similarities in interface and function. Both allow users to engage in discussions through posts, comments, shares, and hashtags. The generation and delivery of user discourses follow a similar pattern, making discussions about AI on the two platforms comparable. Established studies have frequently used Weibo and X for cross-cultural comparisons of public discourse (Gao et al., 2012; Han et al., 2016), information diffusion (Lin et al., 2016), and social opinion (Deng & Yang, 2021), especially between China and the West, providing methodological support for this study.
Data analysis
Data mining and cleansing
This study follows a systematic approach to constructing the discourse spectrum of AI to ensure comprehensive coverage of core technological concepts, key models, industry applications, and societal discussions. To enhance the completeness and reliability of the keyword selection process, a three-tiered strategy was adopted: literature support, AI-assisted analysis, and manual verification.
In the first step, based on existing research (Brown et al., 2020; Dwivedi et al., 2021; Floridi et al., 2018; Frey & Osborne, 2023; Jobin et al., 2019), this study established a conceptual framework for AI to ensure a systematic and representative selection of keywords. This framework consists of three layers: the core conceptual layer, the technical conceptual layer, and the application conceptual layer. Specifically, concepts referring to ontological epistemology, such as “Artificial Intelligence (AI),” were included in the core conceptual layer; concepts referring to technical principles, such as “Machine Learning” and “Deep Learning,” were incorporated in the technical conceptual layer; concepts referring to technology applications, such as “AIGC,” “ChatGPT,” and “DeepSeek,” were captured in the application conceptual layer.
In the second step, AI-assisted keyword selection was introduced as an innovative approach. Specifically, ChatGPT-4 was queried with the following prompt: “List the most frequently used keywords in the AI field.” The generated keyword list was analyzed and compared with the literature-supported framework to identify any potential omissions. This process allowed for a dynamic adjustment of the keyword set, ensuring the inclusion of emerging AI-related terms.
In the third step, manual verification and cross-validation were conducted. To further ensure the validity of the selected keywords, a manual review of 100 sampled AI-related discussions from Weibo and X was performed. The analysis revealed that while topics such as “AI Ethics,” “AI Safety,” “AI Regulation,” “AI Bias,” and “Explainable AI” represented distinct discussion areas, they were largely encompassed within the broader category of “Artificial Intelligence (AI).” Additionally, manual verification helped eliminate non-representative or high-noise keywords that could distort the analysis.
Keyword List for Data Mining.
Next, data mining was conducted by using a Python-based web scraper. Discussion texts on AI containing the selected keywords above were collected from Weibo and X between January 1, 2022, and February 16, 2025. To ensure comprehensive topic coverage, data collection was conducted at the beginning, middle, and end of each month, ultimately yielding 8,741 Weibo posts and 7,020 X platform posts (with non-English content removed).
Data cleansing was then carried out in the following steps. First, text normalization was performed by standardizing case and removing irrelevant tags (e.g., #topic), usernames, and emoji symbols. Additionally, words shorter than two characters and stop words were removed, except for “not” and “no,” to prevent the loss of sentiment-related information (Qi & Shabrina, 2023). Further filtering was applied to eliminate duplicates, empty content, and conversational expressions lacking substantive information, ensuring data validity and analytical quality.
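A condensed sketch of this cleansing pipeline, using only the Python standard library, is shown below (the stop-word list and regex patterns here are illustrative stand-ins for the fuller resources used in the study):

```python
# Illustrative cleansing pipeline: normalization, tag/username/emoji
# removal, stop-word filtering that preserves negations, deduplication.
import re

STOP_WORDS = {"the", "a", "an", "is", "are", "to", "of", "and", "in"}
NEGATIONS = {"not", "no"}  # retained to preserve sentiment cues

def clean_post(text):
    text = text.lower()                    # standardize case
    text = re.sub(r"#\w+", " ", text)      # topic tags
    text = re.sub(r"@\w+", " ", text)      # usernames
    text = re.sub(r"[^\w\s]", " ", text)   # emoji and punctuation
    tokens = [
        t for t in text.split()
        if (len(t) >= 2 or t in NEGATIONS)          # drop 1-char tokens
        and (t not in STOP_WORDS or t in NEGATIONS)  # drop stop words
    ]
    return " ".join(tokens)

def deduplicate(posts):
    """Drop empty posts and posts that duplicate earlier cleaned text."""
    seen, out = set(), []
    for p in posts:
        cleaned = clean_post(p)
        if cleaned and cleaned not in seen:
            seen.add(cleaned)
            out.append(cleaned)
    return out

print(deduplicate(["AI is NOT a fad! #AI @openai", "ai is not a fad"]))
# → ["ai not fad"]  (the second post is a duplicate after cleaning)
```

Keeping “not” and “no” in the token stream is the one deliberate exception to stop-word removal, since negations flip sentiment polarity downstream.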
Regarding translation, Grammarly’s AI translation tool was employed to generate an initial draft, focusing on grammar structure and technical terminology adaptation (e.g., passive voice usage and domain-specific terms). The translated texts were then independently reviewed by two researchers with C2-level English proficiency, who refined culturally embedded expressions, terminological consistency (e.g., standardizing “sentiment polarity”), and logical coherence in complex sentence structures. To verify translation accuracy, 20% of the texts were randomly selected for back-translation (English-to-Chinese) and evaluated by a third-party linguist. The results indicated a 98.7% accuracy rate for key terms and a coherence rating of 4.2/5.0 (Kappa = 0.81), confirming the reliability and precision of the translated content.
Data analysis methodology
For thematic data analysis, this study employed the Gensim library in Python and utilized the Latent Dirichlet Allocation (LDA) model to conduct topic modeling on AI-related discussions from Weibo and X. As an unsupervised topic modeling method, LDA can automatically extract latent topics from large-scale textual data and assign topic probability distributions to each text (Blei et al., 2003).
To determine the optimal number of topics (K), two evaluation metrics were used: Perplexity and Coherence Score (Stevens et al., 2012). First, perplexity values were computed across different K-values, with the point where perplexity decreases and stabilizes selected as the candidate topic number. Then, coherence scores were examined to ensure a balance between topic distinctiveness and semantic interpretability, leading to the final selection of the most suitable K-value.
For result validation, a dual verification approach was implemented, combining Topic Coherence Evaluation and Human Validation (Chang et al., 2009). First, the C_v topic coherence metric was used to assess the semantic consistency of the topic words, ensuring that the generated topics were logically interpretable. Next, a subset of texts was randomly selected, and researchers manually evaluated the alignment between the assigned topics and the actual content, further verifying the validity of the topic modeling results.
Ultimately, the LDA model extracted multiple topics, each represented by a set of keywords. The extracted topics were then used to categorize each text in a “Topic-Keyword” format, providing insights into the core thematic structures of AI discourses on Chinese and American social media platforms.
To further reveal the structures and narratives of AI-related discussions, this study employed Gephi to construct a topic co-occurrence network. Co-occurrence clustering networks help uncover semantic relationships by examining keyword co-occurrence frequencies, where semantically similar terms are more likely to cluster within the same topic (Shen & Li, 2014).
In this study, Python was used to calculate the co-occurrence strength coefficient, constructing a co-occurrence frequency matrix in which keywords serve as nodes, and co-occurrence strength functions as edges, forming a co-occurrence network (Zhu et al., 2020). Gephi was then utilized for network visualization and community detection (Bastian et al., 2009), allowing for the identification of core thematic structures in AI-related discussions on social media.
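A minimal sketch of the co-occurrence matrix construction described above, using toy posts rather than the study's data. The Source/Target/Weight edge-list layout is a standard format that Gephi can import; the specific co-occurrence strength coefficient used by the study is simplified here to raw within-document pair counts.

```python
# Build keyword co-occurrence counts (nodes) and pairwise weights (edges)
# from tokenized documents, then flatten to a Gephi-style edge list.
from collections import Counter
from itertools import combinations

docs = [
    ["ai", "generation", "education"],
    ["ai", "business", "generation"],
    ["ai", "education", "risk"],
]

node_freq = Counter()   # how many documents each keyword appears in
edge_weight = Counter() # how many documents each keyword pair shares

for doc in docs:
    unique = sorted(set(doc))      # count each keyword once per document
    node_freq.update(unique)
    for a, b in combinations(unique, 2):
        edge_weight[(a, b)] += 1   # sorted pair avoids (b, a) duplicates

# Edge list in (Source, Target, Weight) form for import into Gephi.
edges = [(a, b, w) for (a, b), w in edge_weight.items()]
```

Exporting `edges` as a CSV with a `Source,Target,Weight` header is one common way to hand the network to Gephi for the visualization and community detection steps described in the text.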
In the process of determining the optimal number of topics in the LDA topic model, this study conducted multiple rounds of tuning and comparison across different topic numbers. The results indicate that when the number of topics is set to five, the model achieves the lowest perplexity score, suggesting that this configuration effectively balances topic differentiation and semantic interpretability. Additionally, the LDA model generates topic modeling results by providing a set of keywords for each topic along with their respective probability weights. This study selects the top five keywords with the highest probability for each topic as its core representation and assigns topic names based on the semantic characteristics of these high-probability keywords. This approach ensures that topic categories maintain clear conceptual boundaries and interpretability. To further enhance contextual understanding in topic analysis, this study incorporates portions of the crawled data as fundamental textual units and employs word co-occurrence patterns to analyze key textual components. This methodological approach provides more precise semantic support for determining topic labels (Reinert, 1983).
Within the co-occurrence network, the node size represents the frequency of keyword co-occurrence, while the thickness of edges indicates the strength of semantic association. Larger nodes signify keywords with higher co-occurrence frequencies, and thicker edges indicate stronger semantic relationships. Based on this network structure, the study identified key concepts within AI discourse and mapped their semantic connections across different social media environments.
For sentiment analysis, this study employed the UIE-base pre-trained model to ensure precise sentiment extraction and structured expression. Experimental results indicate that UIE achieved state-of-the-art performance across four information extraction tasks, 13 datasets, and various supervised, low-resource, and few-shot settings, demonstrating its superiority in entity, relation, event, and sentiment extraction tasks (Lu et al., 2022). The model was pre-trained on heterogeneous large-scale datasets, including Wikipedia, Wikidata, and ConceptNet, and was fine-tuned on publicly available SemEval-14/15/16 sentiment analysis datasets to enhance its adaptability across different domains. A key feature of the model is its ability to extract (aspect term–opinion term–sentiment polarity) triplets, allowing for a more structured understanding of sentiment-related expressions. Results on the SemEval-16res dataset show that the model achieved an F1 score of 75.07% under full supervision, outperforming the previous best baseline model (70.26%) by 4.81 percentage points. In a few-shot learning scenario (trained with only 10 labeled samples), the model reached an F1 score of 39.11%, significantly surpassing traditional models such as T5-base (29.92%), demonstrating strong adaptability in low-resource settings. The model achieves over 80% accuracy, demonstrating robust performance in explicit sentiment analysis (e.g., “the screen is clear”), while the 12.5% error rate for implicit expressions (e.g., “battery drains after two hours”) remains acceptable given the inherent challenges of contextual ambiguity and domain-specific nuances.
The model classified each text’s sentiment polarity, with sentiment scores ranging from 0 to 1, where higher values indicate more positive sentiment. The final output includes sentiment scores and their overall distribution, offering insights into the sentiment dynamics of AI-related discourses across social media platforms.
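The mapping from continuous scores to polarity labels can be sketched as below. The cutoff values (0.6/0.4) and the example scores are illustrative assumptions; the study reports the UIE model's own polarity output alongside the [0, 1] scores.

```python
# Bin continuous sentiment scores in [0, 1] into coarse polarity labels
# and summarize the distribution. Cutoffs are illustrative, not the study's.
def polarity(score: float) -> str:
    """Map a [0, 1] sentiment score to a polarity label."""
    if score >= 0.6:
        return "positive"
    if score <= 0.4:
        return "negative"
    return "neutral"

scores = [0.95, 0.88, 0.31, 0.72, 0.55]  # hypothetical per-post scores
labels = [polarity(s) for s in scores]
mean_score = sum(scores) / len(scores)
```

Aggregating `mean_score` per platform is what yields summary figures of the kind reported in the Findings (e.g., platform-level averages), while the label distribution supports the polarization analysis.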
Findings
Focus of topics: Divergence and convergence between technological orientation and ethical scrutiny
The study reveals distinct sociotechnical imaginaries and focal points in AI discussions on Sina Weibo and X. AI topics on Weibo primarily reflect China’s societal vision of technology, with discussions centered on national progress, societal development, and collective benefit. Topics like “Intelligent Innovation and Development” appear frequently on Weibo, highlighting the public’s optimistic expectation for technology to enhance efficiency and propel future progress. Additionally, topics such as “Policy Support and Investment” underscore the Chinese government’s role in spearheading AI development, showcasing a policy-driven model of technological advancement. Other topics, like “AI and Market,” “Tech Giants and Models,” and “Film Production and Entertainment,” further illustrate AI’s application across economic, technological, and cultural sectors, reinforcing the perception that technological and societal advancement are closely intertwined.
LDA Model Output Results.
Overall, AI discussions on Weibo are more nationally oriented, focusing on collective interests, with the public expressing strong anticipation for technology to facilitate societal advancement and align with national strategies. The government’s role is critical in promoting AI’s widespread adoption. Meanwhile, on X, AI discussions emphasize potential ethical risks, with the public showing heightened vigilance toward privacy and employment issues while maintaining a positive outlook on technological innovation’s promise. This thematic divergence highlights the blend of technological optimism and caution across different cultures.
Narrative networks: Coexistence of technological utility and global challenges, with a focus on application context, ethical prudence, and diverse perspectives
First, Weibo’s co-occurrence word network exhibits a strong central clustering pattern (see Figure 1), with core terms like “artificial intelligence” and “generation” situated at the network’s center, forming a dense interconnected structure. This indicates that discussions on Weibo around AI are highly concentrated on the technology itself and its role in specific application scenarios. Surrounding these core terms are other nodes, such as “education,” “business,” and “film,” which reflect AI’s widespread application and tangible impact across various sectors. This structure reveals that Weibo users strongly focus on the practical and context-specific implementations of AI. Given the context of technological development and government policy support in China, AI is viewed as a critical force driving social progress and industry transformation.

Topic Co-occurrence Network.
Additionally, while Weibo discussions primarily focus on the positive applications of technology, there are peripheral terms like “risk” and “error,” indicating a degree of public awareness of potential issues with AI. These terms, however, are on the outskirts of the discussion, suggesting that conversations around risks exist but remain relatively marginal. Overall, Weibo’s discussions reflect a sense of technological optimism, with the public concentrating on AI’s enabling effects and practical applications, and less on profound ethical concerns.
In contrast, AI-related discussions on X are more diversified, encompassing not only the technology itself but also its ethical implications and global perspectives. In X’s co-occurrence word network, while “artificial intelligence” remains a central node, terms such as “privacy,” “regulation,” and “ethics” also frequently appear, reflecting X users’ keen focus on AI’s societal challenges, privacy concerns, and ethical dilemmas. Particularly regarding AI’s social impact and the need for global collaboration, X discussions indicate a profound consideration of the risks associated with technological development.
Additionally, X’s discussions feature a range of global topics, such as “country,” “global,” and “industry,” signifying that users are not only concerned with the local applications of technology but also its international implications. This marks a stark contrast to Weibo’s discussions, which are primarily centered on domestic applications. X users are more focused on how AI may influence global economies, social structures, and international technological cooperation (Micu et al., 2018), adding depth and breadth to the discourse through a globalized lens.
In summary, Weibo’s AI discussions emphasize technological utility and industrial application, reflecting Chinese society’s strong expectations for technological advancement and social development. X’s discussions are more focused on cutting-edge technology, social ethics, and global impact, reflecting a form of technological prudence that highlights AI’s potential ethical risks and global challenges alongside its capacity to drive societal progress. This contrast reflects the varying concerns and expectations for AI in different cultural and social contexts.
Sentiment attitudes: A sociotechnical sentiment landscape characterized by optimism, polarization, and prudence
In the process of technology adoption, the interaction between societal sentiments and technological responses is crucial for understanding the relationship between technology and society. Analyzing the sentiment patterns in AI-related discussions on Weibo and X reveals culturally specific variations in sociotechnical imaginaries.
The average sentiment score for AI topics on Weibo is 0.872 (see Figure 2), indicating an overall positive outlook. This sentiment trend reflects a collective optimism toward technology among Weibo users, underscoring the public’s high interest and emotional investment in AI. Within the cultural context of Weibo, technological advancement is widely viewed as a critical factor for national modernization and global competitiveness (Kitchin, 2021). Government policy support and positive media coverage further strengthen this optimistic attitude and high level of public expectation. To better illustrate the characteristics of sentiment discourse, three representative comments with high sentiment scores (above 0.9) were selected, as shown below:

“AI can drive employment, cultivate talent, and promote technological development. I think we shouldn’t worry about it taking young people’s jobs. Society is becoming more intelligent, and young people will learn more from it, which will increase job opportunities” (Weibo user comment).

“#DeepSeek# accomplished in one or two minutes what took me one or two days. Technological innovation accelerates the pace of life, yet it allows people to slow down, leading to a healthier and happier life” (Weibo user comment).

“#Modernization Through Different Generations# In recent years, artificial intelligence has been developing at an increasingly rapid pace. Soon, we will enter a world we had never imagined before” (Weibo user comment).

Sentiment Analysis Results.
The sentiment trends on Weibo show polarization, with extremely positive sentiments driven by high expectations for the convenience and progress AI promises, while extremely negative sentiments reflect concerns over inadequate privacy protection and potential loss of control over technology. This distribution of sentiments reflects the complex psychological state in Chinese society amid rapid technological change, characterized by high expectations coupled with awareness of potential risks (Qiu, 2019). “Will AI replace humans? It’s pretty scary to think about. If there truly is advanced AI, it will replace us, but it’s uncertain whether it will eliminate us or just take over our work and production” (Weibo user comment).
The average sentiment score on X (Twitter) is 0.731, still generally positive but slightly lower than Weibo’s, with a more stable sentiment distribution and fewer extreme sentiments. This suggests that users on X hold a more cautious and rational stance toward AI, with public attention focused on ethical, privacy, and security concerns, resulting in a more balanced and moderate emotional response.
Compared to Weibo, X users exhibit a more cautious and rational sentiment tendency. While the overall sentiment remains positive, the average sentiment score is lower, and extreme negative sentiments are more prevalent, indicating a higher level of vigilance among the platform’s English-language users toward AI. “AI is becoming increasingly important. We need to find ways to collaborate with AI without causing harm and to ensure it aids society. It’s time to make sure AI resonates with us, preventing any antisocial behavior” (X user comment).
Additionally, X users express greater concerns over AI’s potential risks. Two representative tweets with low sentiment scores (below 0.3) were selected, as shown below:

“#AIApocalypse# In a world where every sci-fi enthusiast envisioned a matrix-esque takeover by our machine overlords, the bleak prophecy of the post-AI professional apocalypse seems to have had a plot twist” (X user comment).

“I’m starting to think the biggest threat we have is from AI as it exponentially increases its abilities to act independent of human interaction” (X user comment).
Comparing the sentiment tendencies in the AI discussion between Weibo and X, Weibo users emphasize the positive role of technological advancement for national and collective benefit, displaying optimistic and sometimes extreme sentiment. In contrast, X users prioritize ethical considerations and individual rights, showing a more cautious and balanced sentiment response. This cross-cultural difference suggests that technology adoption and application are not only influenced by policies and markets but also significantly shaped by cultural and social psychology (Cohen, 2019).
Discussion
Path dependence and strategic divergence of sociotechnical imaginaries across cultural contexts
In social sciences and humanities, imagination is viewed as a core element of social and political life. It is more than just individual fantasy or artistic expression; it serves as a vital cultural resource for interpreting and understanding social reality by creating new ways of life and systems of meaning (Gezerlis, 2001). Imagination provides frameworks through which individuals and groups understand the world and fosters a sense of belonging and identity among political communities (Anderson, 2008). In a cross-cultural context, the sociotechnical imaginaries of different societies often exhibit unique path dependencies, which not only influence the ways in which technologies are formed but also determine how they are accepted and applied within society (Taylor, 2002).
In constructing the historical narrative of technological development, imagination not only shapes how technology is perceived but also determines the priorities and directions of technological advancement in different societies. For instance, imagination influences social cognition by constructing the notion of the “other,” thereby delineating social boundaries and shaping how different nations set their technological development goals and ethical norms (Said, 2023). Furthermore, imagination serves to simplify and standardize complex social phenomena, making technological systems more comprehensible and manageable (Star & Bowker, 1999), playing a crucial role in technology governance, industrial policy, and societal acceptance.
Sociotechnical imaginaries do not develop in isolation as a linear process; rather, they are deeply embedded in the historical cultural traditions, technological governance models, and socio-economic structures. Imagination is not detached from the social structures in which individuals grow but is inherently socialized, forming a fundamental connection between individuals and their external environment (Patalano, 2007). In the development of AI technologies, sociotechnical imaginaries do not emerge out of thin air; rather, they are constructed upon existing trajectories of technological advancement and policy orientations. In other words, AI development is not only shaped by current market demands and policy decisions but also represents the continuation of historical technological development models. Its path dependence determines how technological imaginaries are formed and evolve over time.
Li Ying’s “technological inheritance” theory posits that any new technology must achieve innovation through the recombination of “technological genes,” akin to genetic mutations and natural selection in biological evolution. The continuous evolution of artificial artifacts must always rely on the “technological niche” established by historical developments (Li, 2006). This cumulative characteristic is corroborated by Marx’s analysis of technological history, which, through examples such as the evolution of clockmaking and watermill technology, demonstrates that major technological innovations during the Industrial Revolution were, in essence, modern reconstructions of ancient technological legacies (Pancaldi, 1994).
Within this theoretical framework, the divergence in AI development between China and the West, taking the United States as an example, is not incidental but rather a consequence of their respective technological development trajectories. In the following analysis, we will review the historical evolution of the internet industry in both countries, exploring how this process has shaped distinct AI development models and, consequently, how these models have influenced contemporary AI sociotechnical imaginaries.
The development trajectory of the U.S. internet industry is characterized by market-driven growth and decentralized innovation. In the early stages, the government played a critical role in funding basic research, but as the technology matured, it gradually withdrew, allowing market competition to shape its evolution (Mowery & Nelson, 1999). By the 1990s, the U.S. government implemented “Tech Deregulation” policies, shifting the leadership of internet innovation to private enterprises (Horwitz, 1986). This environment enabled the rise of tech giants such as Google, Amazon, and Facebook, establishing an industrial framework centered on corporate-led innovation, open standards, and free-market competition. The defining characteristics of decentralization, platform openness, and market-driven governance became the hallmarks of the U.S. internet industry, and these features later extended into the country’s AI development model.
In contrast, China’s internet development follows a state-led and industry-coordinated approach, emphasizing national strategic planning and policy guidance to ensure that internet technologies align with broader economic and social development objectives (Hong & Harwit, 2020). Although China connected to the global internet in the 1990s, the government adopted a “gradual opening + localized development” strategy, preventing overreliance on foreign technologies. During the 2000s, policies such as “Broadband China” facilitated large-scale network infrastructure expansion while simultaneously fostering domestic tech enterprises like Baidu, Alibaba, and Tencent (BAT), leading to the formation of an internet ecosystem distinct from that of the U.S. By the 2010s, the “Internet+” policy further propelled the integration of internet technologies with traditional industries, positioning the internet as a key driver of China’s economic transformation and industrial upgrading.
This divergence in internet development paths has profoundly shaped the AI development models and sociotechnical imaginaries in China and the United States. The U.S. AI industry has inherited the Silicon Valley model, emphasizing market-driven growth, independent innovation, and technological openness. Led by companies such as OpenAI and Google DeepMind (Stanford University, 2024), the U.S. AI sector has prioritized breakthroughs in automated decision-making, scientific computing, and foundational AI research.
In contrast, China’s AI industry follows a “government-led + market-adaptive” model, emphasizing AI’s integration into smart cities, intelligent manufacturing, and social governance while ensuring domestic technological self-sufficiency. Reports indicate that China’s DeepSeek V3 model has surpassed GPT-4o in performance, demonstrating the nation’s push for indigenous AI innovation (Special Competitive Studies Project, 2025).
This “fundamental innovation versus applied deployment” contrast reflects a deeper divergence in technological innovation paradigms: the U.S. continues its market-driven “Silicon Valley model,” whereas China leverages its integrated national strategic system to drive AI advancements. This path dependency reinforces the idea that AI is not merely a product of science and engineering but is also embedded within historical, policy, and socio-cultural frameworks, shaping distinct sociotechnical imaginaries in each country.
Sociotechnical imaginaries not only depict possible futures but also outline the types of futures that ought to be pursued. As a form of political capital, these imaginaries play a crucial role in technological design, public expenditure, resource allocation, and public acceptance or rejection of technology.
The sociotechnical imaginaries on X reflect a continuation of the ideals of free market competition and decentralized innovation, emphasizing AI as a future-oriented general intelligence (AGI) with breakthroughs in scientific computing, automated decision-making, and human–computer interaction. This imaginary is rooted in the Silicon Valley model, where technological progress is determined by market demand and corporate competition, with the government’s role limited to funding basic research and regulating ethical concerns. Meanwhile, users on X also exhibit a heightened sense of vigilance toward potential risks associated with AI, such as loss of control, data privacy, and ethical dilemmas.
In contrast, the sociotechnical imaginaries on Weibo are more influenced by government-led initiatives and industry-driven applications. AI is perceived as a key instrument for national modernization and technological ascendancy, with an emphasis on social governance and economic empowerment, rather than merely market competition or exploratory innovation.
Imagination rooted in social culture: Key drivers of differences in sociotechnical imaginaries
Previous research indicates that sociotechnical imaginaries display unique characteristics and functions across different social and cultural contexts. The preceding sentiment analysis indicates that English-speaking users on the X platform exhibit a more cautious attitude toward AI, as reflected by their lower sentiment scores. This aligns with the techno-skepticism commonly observed in Western cultural contexts, where the public is highly focused on issues of ethical regulation, privacy, and employment impacts (Cardoso & Castells, 2006), advocating for stringent regulatory and ethical measures to protect individual rights and mitigate potential risks of technological advancement (Marien, 2014). Jasanoff highlights the crucial role of technological imagination in shaping national science policies and public understanding, emphasizing that technological progress is not solely a product of scientific discovery but also a result of social and cultural interactions (Miller & Jasanoff, 2004). Beck’s (1992) theory of the “risk society” reveals the risks and uncertainties that arise in modern societies due to technological advancements, stressing the need for governance and risk management in technology. On Weibo and X, the sociotechnical imaginaries around AI draw on distinct cultural resources and function as distinct forms of political capital.
In essence, sociotechnical imagination represents a cognitive contest, where various social forces shape the interpretation of technology through discourse (Kearnes, 2008). In China, technology is seen as a core driver of national development, with the idea that “science and technology are the primary productive forces” deeply embedded in the public psyche. Within this context, techno-solutionism and technological nationalism have garnered broad social support. The Chinese public generally embraces new technologies, believing that innovation can enable “leapfrog development” to rapidly enhance the nation’s international competitiveness. As a cutting-edge field, AI has been given a crucial role in promoting China’s economic development and is seen as a symbol of technological nationalism. Consequently, China’s technological imagination is marked by strong optimism, viewing AI as a solution to many social, economic, and political challenges, which fosters widespread acceptance and support from both the government and the public.
This nationalist technological imagination further shapes China’s policies and developmental trajectory in AI. The government promotes extensive application of AI through policy guidance and large-scale investments, positioning it as a critical component of national strategy. AI is not only viewed as a tool to enhance global competitiveness but also as an essential resource for social management and governance. The imagined benefits of technology are often linked to national interests, social welfare, and long-term strategic goals, emphasizing AI’s role in improving social efficiency and national security. AI’s applications in social management, such as intelligent surveillance and data analytics, are seen as vital tools for maintaining social stability, reflecting the deep-seated belief in integrating technological development with national strategy in China. This sociotechnical imagination is continually reinforced in public awareness, with AI viewed as not only a symbol of technological advancement but also of China’s technological autonomy and strength.
In contrast, in Western sociocultural contexts, taking the U.S. as an example, technology is often depicted as a liberating force, particularly in fields like AI, where the sociotechnical imagination is characterized by strong individualism and market-oriented tendencies. American narratives around technology frequently emphasize its role in enhancing productivity, improving individual lives, and driving societal transformation. This vision of a technological future is especially prominent in Silicon Valley, exemplified by entrepreneurs such as Elon Musk, with his Mars colonization plans, and Sam Altman, with his advocacy for friendly AI, who portray technology as a means of human progress and of expanding the boundaries of human existence. Behind this imagination lies American society’s cultural affinity for individual adventure, innovation, and a market-driven approach to technology development (Miller & Jasanoff, 2004).
In this cognitive contest over sociotechnical imagination, discourse serves as the primary tool through which different interest groups interpret technology. Bourdieu’s theory of “cultural arbitrariness” helps explain this phenomenon: sociotechnical imaginaries are not fixed but are continually redefined by social and cultural contexts, imbuing the same technology with divergent meanings. This implies that, in different societal contexts, the signifier and signified of technology have no inherent connection but are reconnected by societal and cultural narratives. Against this background, discourse closure becomes increasingly prevalent: certain interest groups wield control over discourse, limiting alternative voices and shaping a singular technological narrative (Mason, 2000). In the U.S., tech companies and innovation leaders reinforce technological optimism through media and market narratives, linking technological advancement closely to individual freedom and market efficiency. In China, the government, through policy frameworks and media promotion, strengthens the association between technology and national development, fostering a collective and state-directed narrative around technology.
Thus, understanding the interplay between sociotechnical imagination and sociocultural background, as well as the struggle for discourse construction, offers an essential perspective for further understanding the modalities of technological development and application. Recognizing the socially constructed and potentially closed nature of sociotechnical imagination is crucial for engaging in ethical and policy discussions on future technology, supporting fairer and more sustainable technological development.
The impact of cross-cultural differences in sociotechnical imaginaries on technological development
In examining the sociotechnical imaginaries surrounding AI on the social media platforms Weibo and X, it becomes clear that differing technological path dependencies and sociocultural contexts shape distinct public perceptions of AI. This raises the question: Does sociotechnical imagination, in turn, influence the development of technology itself? If so, using China and the United States as examples, how might their divergent sociotechnical imaginaries shape the concrete trajectories of AI development in each country?
Research suggests that sociotechnical imagination indeed influences technological development, and this impact is not straightforward or linear but rather intertwined across multiple dimensions (Liao, 2018). In the short term, sociotechnical imagination can accelerate technological evolution and application. In China, the optimistic public imagination around AI has propelled rapid application across various sectors, including industry, healthcare, finance, education, and media. AI is perceived as a crucial tool for national progress, and thus, there is widespread expectation for significant functional breakthroughs within a short timeframe. For example, the swift development and widespread adoption of virtual anchors and personalized e-commerce illustrate China’s high demand for AI across sectors. This sociotechnical imagination directly shapes functional design, aligning AI technology more closely with specific social needs and economic goals.
In contrast, the American sociotechnical imagination tends to be more cautious and regulatory in nature. Although market-driven AI development is robust in the U.S., public concerns over privacy, ethics, and security influence the pace and direction of technological advancement. In the short term, the American public’s cautious sociotechnical imagination has promoted greater emphasis on risk management and ethical standards, leading AI developers to prioritize transparency and compliance in functional design. For instance, issues like content safety and model fairness compel companies to allocate more resources to risk assessment and mitigation during technology development. While this approach may slow the expansion of certain applications, it ensures ethical standards and social responsibility within the development process.
Long-term Concerns About AI.
In the long term, sociotechnical imagination shapes not only the development trajectory of technology but also its global positioning and strategic direction. In China, against a background of technological nationalism, long-term sociotechnical imagination envisions AI as a tool for enhancing national innovation capacity and economic transformation. This has led to increased investment in autonomous research and development in AI. In October 2023, during the Third Belt and Road Forum for International Cooperation, China introduced the Global AI Governance Initiative.
In contrast, the U.S. sociotechnical imaginary places a strong emphasis on ethics and social responsibility, a long-term vision that will profoundly shape the social acceptance of AI and the establishment of international regulations. On September 5, 2024, the U.S., the European Union, and the United Kingdom signed the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, the first legally binding international treaty on AI.
Technology as a “mediator”: How divergent imaginaries influence society
When examining the relationship between technology development and society, we can draw on the perspective of the Social Construction of Technology (SCOT). SCOT’s central idea is that technological development is shaped not solely by scientific progress or technical innovation but by social factors, cultural context, political power, and the needs and interests of different groups (Pinch, 2012). This theory emphasizes the social construction process of technology, including how different social groups assign varied interpretations based on their needs and backgrounds. To better illustrate this process of social construction, this study selects China and the United States as case examples for analysis.
SCOT divides the social construction process of technology into three primary stages. The first stage is the emergence and development phase, involving Relevant Social Groups and Interpretative Flexibility. At this stage, technology’s meaning and use are diverse and open among different social groups, with each group assigning unique interpretations based on their needs and contexts. In the U.S., interpretations of AI largely revolve around enhancing commercial productivity and user convenience, with consumers viewing AI as a tool for daily life and tech companies seeing it as a profit growth engine. As AI adoption has expanded, it has increasingly integrated with free market logic, facilitating its expansion into various societal fields. Conversely, in China, the sociotechnical imaginaries surrounding AI are closely tied to policy-driven goals and the need for social stability. Government investment and policy support for AI provide substantial backing for its development. AI is perceived not only as critical for advancing national technological autonomy and international competitiveness but also as playing a vital role in social governance. In SCOT’s first stage, AI development in China is seen not merely as market innovation but as a strategic national imperative, with political and social functions embedded in different groups’ needs.
The second stage is problem resolution and closure. As technology spreads within society, conflicts may arise between different groups’ needs and expectations, leading to debates and adjustments in technology development. During this stage, technological design and functionality are continuously refined to address conflicts and satisfy varying demands. Through negotiation and adaptation, certain interpretations and applications of the technology gradually dominate and are widely accepted across society. Closure entails a reduction in technical disputes and the formation of consensus among social groups. In the U.S., AI development has spurred widespread discussions on privacy, security, and employment ethics, especially in consumer tech. Rising consumer concerns over data privacy have prompted companies and regulators to negotiate and revise legal frameworks for data use and protection. Additionally, the impact of AI on the job market has led to societal reflection and policy adjustments in the U.S. In China, however, AI’s promotion has involved balancing social stability with application efficiency. Issues surrounding privacy, security, and employment have gained attention, but China’s approach tends to achieve closure through top-down policy frameworks. AI is deployed to enhance social efficiency and public management, with its development closely aligned with national governance objectives.
The third stage is stabilization. At this point, the design, uses, and meaning of technology become largely fixed, with relevant social groups converging in their understanding of the technology. The technology undergoes minimal changes, and its applications and functions gain widespread acceptance, even though minor innovations may still occur. SCOT illustrates how technology is shaped through interaction and negotiation among social groups, where development is not merely a process of material innovation but also a social process shaped by cultural and economic influences.
Currently, AI development appears to be transitioning from the first stage to the second stage. Generative AI has become widely used in everyday life, approaching a critical scale threshold. Similar to smartphones, these technologies are user-friendly with relatively low barriers to entry for the general public. Notably, generative AI exhibits self-enhancing capabilities that could potentially drive exponential advancements in the technology.
Science, technology, and society (STS) theory further emphasizes that technology development is not merely an advancement of scientific ideas or principles on a natural or objective level. It is also shaped by social factors such as conflict, negotiation, and compromise (Bauchspies, 2006). Technology development is a socially constructed process that involves interactions among multiple stakeholders. The global competition in AI between China and the U.S. exemplifies this. AI development is driven not only by domestic social demands but also by the need to set international standards and foster technological cooperation. For example, the U.S. prefers to establish international standards focused on AI ethics and privacy protection, while China integrates its AI technology into global applications through initiatives like the Belt and Road. This international negotiation and competition significantly shape AI’s developmental trajectory, further influencing domestic sociotechnical imaginaries.
Conclusion
Through a cross-platform comparative analysis, this study reveals structural differences in the sociotechnical imaginaries of AI between Weibo and X, and examines how these differences are rooted in socio-cultural contexts while, in turn, influencing national development and the social governance of AI technology. It offers both theoretical and practical implications.
From a theoretical perspective, this study not only analyzes AI-related discussions on specific platforms but also provides a valuable framework for understanding global technological development and its cultural contexts. The findings demonstrate that sociotechnical imaginaries are not merely shaped by technology itself but deeply embedded in specific cultural traditions, social structures, policy environments, and path-dependent technological practices. AI imaginaries on X emphasize individual freedom, market competition, and technological autonomy, framing technology as a self-directed force of innovation. Conversely, AI imaginaries on Weibo reflect a collective orientation, integrating AI closely into national strategy, economic development, and social governance, with government playing a guiding and coordinating role. This interaction among culture, society, and technological development shapes distinct AI imaginaries on different platforms, influencing public acceptance and sentiment toward AI. The findings of this study further validate the sociotechnical perspective, which posits that technology does not unilaterally shape society but is co-constructed with the social environment. The technological trajectories and policy orientations of different nations profoundly influence public imaginaries and perceptions of technology. These imaginaries, in turn, shape how societies prioritize, interpret, and respond to emerging technologies. This dynamic interaction suggests that imagination, technology, and society are mutually constitutive—each continuously influencing and reshaping the others across cultural and institutional contexts.
At the practical level, this study reveals significant differences in AI governance imaginaries between Chinese and Western platforms: discussions on Weibo reflect a state-driven, pragmatist vision of AI centered on national strategy and industrial application, while discourses on X emphasize privacy, ethics, and risk, highlighting a market-oriented approach grounded in individual rights. These divergent platform discourses illustrate how sociotechnical imaginaries are shaped by cultural contexts and institutional logics, which in turn influence governance trajectories and public expectations of AI.
Such findings pose both challenges and insights for global AI governance. First, achieving coordinated international governance requires reconciling contrasting technological visions across cultural contexts. Without a shared conceptual foundation, governance frameworks risk fragmentation and deadlock. On critical issues like data governance, algorithmic accountability, and ethical oversight, multilateral platforms must facilitate inclusive negotiation processes that aim for baseline consensus while allowing space for normative diversity.
Second, the localization of AI development must consider path-dependent cultural and technological habits. Governance frameworks should remain flexible to accommodate local sensitivities toward risk, varying levels of institutional trust, and culturally embedded technological imaginaries. Global interoperability should not come at the expense of cultural intelligibility or policy legitimacy at the national level.
Furthermore, platforms themselves are not neutral infrastructures but act as transnational carriers of dominant cultural imaginaries. In global platforms like X, discourse shaped by hegemonic sociotechnical imaginaries—particularly those rooted in Anglo-American contexts—may disproportionately influence how other societies conceptualize and respond to AI. To mitigate such asymmetries, it is essential to advance global AI literacy and public education, fostering diverse, inclusive spaces for cross-cultural dialogue. This would empower citizens worldwide to critically engage with AI and participate in shaping equitable, transparent, and accessible governance structures.
From the perspective of sociotechnical imaginaries, global AI governance is not merely a matter of regulatory harmonization—it demands an understanding of how different societies imagine, legitimize, and domesticate technology. Only by integrating these cultural imaginaries into governance design can we move toward a truly inclusive and sustainable global AI governance architecture.
In conclusion, sociotechnical imaginaries are dynamic and evolve through the intertwined influences of platform algorithms, cultural frameworks, and social structures. This study, through data analysis, reveals the sociotechnical imaginaries of AI on Weibo and X. However, to truly understand how technology shapes society, further methodological expansion is needed to penetrate the cognitive fog created by algorithms and uncover the deep interactions between technological imaginaries and social reality.
Research limitations and future directions
Although this study systematically collected data and conducted cross-platform comparisons to reveal the structural differences in the sociotechnical imaginaries of AI on Weibo and X, it is subject to the following four key limitations:
First, since platform ecology acts as a technological mediator that shapes AI imaginaries through distinctive platform architectures and governance logics, divergent mechanisms such as the algorithmic recommendation systems that differ between Weibo and X may introduce visibility bias and amplify discursive differences in the public imagination of AI. Future research could employ mixed-method approaches, such as multi-platform data triangulation and offline surveys, to differentiate platform-mediated imaginaries from actual public perceptions.
Second, representational boundaries inherent in discourse sampling may introduce demographic biases. Despite the use of stratified sampling techniques that consider temporal and topical coverage, certain groups, particularly techno-optimistic users, may be overrepresented, while digitally underprivileged communities remain marginalized. In response, future studies could enhance representational validity by integrating demographic-weighted analyses or by engaging marginalized communities through participatory research methodologies.
Third, semantic loss during cross-cultural translation poses another critical constraint. Excluding non-English tweets, which constitute 26.8% of AI discussions on X, coupled with inherent linguistic incompatibilities between Chinese and English, risks overlooking minority perspectives and culturally specific imaginaries. To address this issue, further research could develop multilingual natural language processing frameworks or collaborate with local researchers to preserve culturally nuanced meanings.
Finally, although English-language user discourses on X can largely reflect Western-oriented AI imaginaries distinct from those presented on Weibo, technological trajectories and public attitudes toward technology can differ significantly within the West. Given that American users hold a dominant presence on X, this study primarily takes the U.S. as an example to investigate the socio-cultural factors behind AI imaginaries on X. Future research could look more closely at the complex and subtle differences within the West to deepen understanding of the dynamics between technology, society, and imagination.
