Introduction
Artificial intelligence (AI) has undergone significant advancements, overcoming initial limitations in data processing and human-like cognition since its inception in the mid-1950s (Y. Wang et al., 2019). The ever-evolving definition of AI tools, combined with the integration of technologies such as the Internet of Things and big data, has resulted in a surge in research, particularly in design and performance (Pan et al., 2019; Romanova et al., 2021; J. Wang et al., 2021). However, the successful adoption and use of AI tools remain challenging for organizations (Brown et al., 2010; Hong et al., 2014; Venkatesh et al., 2016), often due to insufficient infrastructure and inadequate training.
The media and journalism sectors have rapidly adapted to technological changes, with AI gaining increasing attention (Chou et al., 2021; Guzman & Lewis, 2020; Lindén, 2017). AI adoption in these sectors offers numerous benefits, including enhancing journalists’ efficiency and strengthening media organizations’ competitiveness (Broussard et al., 2019; Túñez-López et al., 2021). However, issues such as the evolving role of journalists, personalized content delivery, training requirements, and work organization warrant further investigation (Kothari & Cruikshank, 2022; Parratt-Fernández et al., 2021).
Although newsrooms worldwide explore AI adoption for improving information sourcing, organization, and distribution, a gap exists between resource-rich organizations and those with limited means (Newman et al., 2020). As AI brings economies of scale to newsrooms, understanding the adoption of and response to technological advances in low-income countries is crucial (Kothari & Cruikshank, 2022). While some Southeast Asian countries have started integrating AI technologies, research is scarce on AI adoption and its implications in these regions.
In recent years, AI’s potential to revolutionize the media sector has garnered attention in Asia, particularly in Vietnam (NDO, 2022). AI implementation can benefit Vietnamese journalism by enhancing reporting accuracy and providing cost-effective alternatives for financially constrained media firms (Minh Thu, 2022). The increasing use of AI in Vietnam’s media sector is strengthening its international presence. This study aims to identify the factors influencing AI adoption in Vietnamese press agencies, thereby addressing the research gap on AI use in Asian journalism, specifically in Vietnam. The findings will contribute to the theoretical understanding of AI application in media journalism and provide practical suggestions for AI adoption in the Vietnamese press and countries with similar contexts.
Literature Review
Artificial Intelligence Technology in Journalism
The application of artificial intelligence (AI) technology in journalism has seen a significant increase in recent years, with various terms used to describe this integration, such as “computational journalism” (S. Cohen et al., 2011), “algorithmic journalism” (Anderson, 2013), and “automated journalism” (Nickolas, 2019). Using algorithms to analyze data from various sources, translate text into audio and video, and determine sentiment are examples of how AI is implemented in newsrooms (Calvo Rubio & Ufarte Ruiz, 2021). The New York Times, The Washington Post, and the Associated Press are examples of traditional news organizations that have effectively integrated AI efforts in their newsrooms (Chan-Olmsted, 2019). Chinese newsrooms, like the state-run Xinhua News Agency, also use AI tools to produce news through platforms like Media Brain, which autogenerates news bulletins (Yu & Huang, 2021).
AI has facilitated the development of news proposal systems (Helberger, 2019; Túñez-López et al., 2021) and has become a crucial technology for exploring data, analyzing trends, identifying topics of interest, and verifying data (Latar, 2015; Noain-Sánchez, 2022). Journalists increasingly use social networks, websites, and online forums to search for news on the internet, allowing them to quickly explore news events (Parratt-Fernández et al., 2021). AI applications in data analysis serve to create content and understand readers (Chou et al., 2021). Charlie (2019) found that machine learning, automation, and data processing are the primary AI technologies used in news production, which comprises gathering, creating, and spreading the news. AI has made mass media more competitive in today’s fragmented media landscape (Chou et al., 2021).
However, AI application in news production faces several limitations. AI models are often designed for specific stories, requiring re-creation and training for new projects, which prevents cost amortization (Stray, 2019). Investigative journalism projects using computer vision necessitate significant investments in technology infrastructure and skilled personnel (de-Lima-Santos & Mesquita, 2021). Furthermore, AI models often use outdated and biased datasets, raising ethical concerns (Guzman & Lewis, 2020). AI implementation in newsrooms also requires substantial expenses (Broussard et al., 2019).
While AI is not a panacea for journalism, it is a novel instrument that requires news agency personnel to gain a deeper understanding in order to promote and advance newsroom AI skills. Power dynamics between stakeholders, AI enforcement tools, and adherence to legal and ethical standards must be explicitly considered (Broussard et al., 2019). AI tools and algorithms may reflect the values and biases of their developers, and technology companies have little incentive for transparency regarding algorithmic bias (Bird et al., 2016). Research has revealed societal gender biases (Bolukbasi et al., 2016) and racial biases (Buolamwini & Gebru, 2018; Campolo & Crawford, 2020) in AI systems, as well as systems that foment violence, spread falsehoods, and diminish self-esteem (Haugen, 2023). Such biases can pose challenges when applying AI systems across different countries or cultures, as the encoded values may not automatically adjust to new contexts. Consequently, the study of AI usage in journalism, both in theory and practice, should consider diverse cultural contexts worldwide.
The Unified Theory of Acceptance and Use of Technology (UTAUT) in Media Journalism
UTAUT has been a focal point in media journalism research, with studies examining the influence of factors such as effort expectancy, performance expectancy, and social influence. Additionally, research has explored the evolution track and future research trends based on UTAUT and its application to journalists and technology adaptation (Ahadzadeh et al., 2021; Peng & Miller, 2021; C. T. Pham & Thi Nguyet, 2023). UTAUT has been used to investigate every stage of technology adoption, from pre-adoption through post-adoption use. Performance expectancy, effort expectancy, social influence, and facilitating conditions are among the fundamental constructs of UTAUT (Venkatesh et al., 2003). These constructs capture, respectively, whether individuals believe that using a system will improve their job performance, how easy the system is to use, whether they believe that important others think they should use the new system, and whether they perceive an organizational and technical infrastructure in place to support system use.
The research agenda presents four primary research opportunities: antecedents/determinants of UTAUT constructs (Morris & Venkatesh, 2010; Morris et al., 2005; Venkatesh et al., 2004), interventions, moderators of UTAUT relationships, new predictors, and consequences. It is crucial to note that UTAUT has been a robust general theoretical model, with its constructs proving predictive in various contexts and technologies (Venkatesh et al., 2012, 2016; Xu et al., 2017).
Venkatesh (2022) discusses the utilization of UTAUT as a theoretical basis for examining antecedents/determinants tailored to specific technologies. This examination framework encompasses individual characteristics such as personality (Thong, 1999); technology characteristics like quality (Venkatesh et al., 2007); environmental characteristics, including the culture of innovation (Venkatesh & Bala, 2008); and interventions such as training (Venkatesh et al., 2016). Individual factors like risk-seeking and desire to learn (Venkatesh, 2000) play a significant role in adopting and using AI tools. Technology characteristics, including perceptions of model errors and AI model transparency, can impact UTAUT predictors (Venkatesh, 1999). Environmental factors like organizational climate promoting innovation can also influence AI tool adoption and usage. Interventions, including training, can be studied to evaluate their effects on adoption and use (Venkatesh & Bala, 2008).
The UTAUT model has been validated in many investigations, and the findings have consistently demonstrated that it is a viable and trustworthy model for forecasting technology adoption. For additional empirical research, Bervell and Umar (2017) suggested comparing the proposed full UTAUT model, with non-linear relationships and moderators, to the original UTAUT model. The UTAUT model was a superior predictor of technology acceptance and use compared with competing models. Jacob and Pattusamy (2020) also developed empirical support for the UTAUT model using data from Germany and India; the research found that the UTAUT model was a trustworthy indicator of technology adoption and utilization in both nations.
Additionally, Abdullahi et al. (2021) conducted a study evaluating language teachers’ awareness and integration of e-learning at public institutions in North-Central Nigeria amid the COVID-19 pandemic. In empirical testing and validation, the UTAUT model outperformed eight rival theories/models, confirming it as a solid and accurate model for estimating technology adoption and utilization. Further research is needed to refine the theory and address its limitations.
Based on these discussions, this study uses UTAUT as a theoretical framework to suggest possible research topics in individual characteristics, technical characteristics, environmental characteristics, and interventions. The recommendations in this study could help organizations positively affect the adoption of AI tools, contribute to the adoption literature, and serve as a springboard for additional research (Venkatesh, 2022; Zhang & Venkatesh, 2018).
Research Hypothesis Development and Model
UTAUT highlights the key factors influencing adoption intention and behavior and enables researchers to examine potential moderator effects that can amplify or limit the impact of those factors. Based on UTAUT, Performance Expectancy (PE), Effort Expectancy (EE), and Social Influence (SI) significantly affect individual intention to use, while Facilitating Conditions (FC) and Behavioral Intention (BI) significantly affect individual behavior toward new technologies (Venkatesh et al., 2003; Williams et al., 2015). Accordingly, the present study tested the following hypotheses:
This study seeks to determine which factors significantly encourage journalists to use AI rather than attempting to replicate the UTAUT studies. As a result, the study model (see Figure 1) incorporates the following three additional constructs: trust (TR), regulatory support (RS), and technology affinity (TA).

Figure 1. Conceptual model.
UTAUT is a widely used model for understanding technology adoption. However, some studies do not consider, or do not find, effects of all four moderators of gender, age, experience, and voluntariness when applying UTAUT (e.g., Afrizal & Wallang, 2021; Dutta & Sarma, 2021; Jacob & Pattusamy, 2020). The inclusion or exclusion of moderators in UTAUT studies may depend on the research question and context. In this study, we do not consider the moderating effects of gender, age, experience, and voluntariness because the study is not longitudinal and AI has not yet been popularized in the Vietnamese press, so it cannot capture increasing levels of user experience or assess the context of use (Venkatesh et al., 2003).
Method
Questionnaire Design
To gather data, we employed a questionnaire. The UTAUT constructs, together with Trust, Regulatory support, and Technology affinity, were selected as the primary constructs of the research model, with measurement items drawn from relevant studies (Aldás-Manzano et al., 2009; Kumar Bhardwaj et al., 2021; C. T. Pham & Thi Nguyet, 2023; Venkatesh & Zhang, 2010; Venkatesh et al., 2003; Zhu et al., 2021).
Six journalism and communication specialists were consulted after we had prepared the questionnaire, to enhance its semantics and content validity. After expert revision, a pre-test with 25 journalists was conducted to check the questionnaire’s phrasing, completeness, sequencing, and other potential errors. Following respondent input, the questionnaire was slightly revised to improve its clarity and thoroughness. The final questionnaire contained 32 items covering the nine constructs in our research model (see Appendix 1). The first section’s questions were scored on a 5-point Likert scale from “strongly disagree” to “strongly agree.” Basic information about respondents’ attributes, such as age, gender, and duration of journalism employment, was also collected.
Data Analysis
The Partial Least Squares-Structural Equation Modeling (PLS-SEM) approach was employed to examine the conceptual model, utilizing SmartPLS software version 4 (Ringle et al., 2022). The analysis followed a two-step procedure, beginning with evaluating the measurement model to ensure validity and reliability, then examining the structural model to test the proposed hypotheses (Hair et al., 2017). The measurement model defines the methods for measuring each construct, while the structural model delineates the relationships between these constructs in the overall model.
PLS-SEM has gained widespread acceptance as a multivariate method frequently used in social science research (Khan et al., 2018). The choice of PLS-SEM for this study is based on its ability to simultaneously analyze both the measurement and structural models, resulting in more precise estimations (Al-Saedi et al., 2020). Data collected via the questionnaire was entered into an Excel spreadsheet and subsequently imported into SmartPLS for analysis, allowing for the evaluation of the research model and the hypothesized relationships between constructs.
Data Collection and Participants
The study’s participants were journalists, mostly from Hanoi, Vietnam, chosen through convenience sampling. Data were gathered in person from October 2022 to December 2022. All participants gave consent before data collection and were assured that their answers would be kept private and used exclusively for research. In total, we gathered 238 surveys. Following the PLS-SEM sample size recommendations, 188 observations are needed to achieve 80% statistical power to detect R² values of at least 0.1 at a 1% probability of error when the measurement and structural models include eight independent variables (Hair et al., 2017, p. 48). Consequently, the study’s sample size was sufficient.
The 238 participants’ demographic and employment-related details are shown in Table 1. The sample is primarily male (57.1%), with females accounting for 42.9%. By age, the largest group (34.4%) is between 30 and 39 years old, followed by 28.2% between 22 and 29, 19.3% aged 50 or older, and 18.1% aged 40 to 49. The sample thus represents all age groups well, with a slight bias toward younger ages. Regarding tenure, most of the group (38.2%) have been employed for 1 to 2 years, 35.7% for 3 to 5 years, and 26.1% for six years or more. Most of the workforce therefore has five years of experience or less.
Table 1. Participants’ Information.
The types of AI adopted at work are detailed in Table 1. Text-to-Speech and Speech-to-Text technology is the most used type of AI, employed by 39.5% of the participants, followed by video, picture, and text processing (37.0%). Other kinds of AI adoption lag far behind: 11.3% of participants use AI for content editing, only 7.6% for content production, and 4.6% reported using other AI systems. According to the data, this group mainly uses Text-to-Speech/Speech-to-Text and video, image, and text processing; other AI techniques are employed far less frequently.
Results
Common Method Bias and Multicollinearity
We calculated Variance Inflation Factor (VIF) values to ensure the acquired data did not suffer from common method bias (CMB). If all VIFs in the outer model (see Table 1) and all VIFs in the inner model (see Table 2) are equal to or lower than 3.3 in a full collinearity test, the model is considered free of CMB (Kock, 2015; Kock & Lynn, 2012). All VIFs were lower than 3.3; hence, the data showed no CMB-related problems. The VIF values in Table 2 further show that all coefficients are below 3; accordingly, following Hair et al. (2019), there was no multicollinearity in this research model.
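The collinearity test itself was run in SmartPLS, but the underlying computation is simple to illustrate. The sketch below (a hypothetical `vif` helper, not part of SmartPLS) computes each predictor's VIF as 1/(1 − R²) from an auxiliary regression in plain numpy:

```python
import numpy as np

def vif(X):
    """Variance Inflation Factor for each column of X (n_samples x n_predictors).

    VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing column j
    on all other columns (with an intercept).
    """
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])  # design matrix with intercept
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)
```

Columns with a VIF above the 3.3 cutoff would flag potential common method bias under the Kock (2015) criterion.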
Table 2. Collinearity Statistics (Inner Model).
Reliability and Validity
First, reliability and validity were checked for all constructs. Reliability is usually indicated by internal consistency, measured by Cronbach’s alpha and the composite reliability coefficients rho_c and rho_a. According to the findings (see Table 3), Cronbach’s alpha, composite reliability rho_c, and rho_a were all greater than .7, indicating that all scales fulfilled reliability standards (Hair et al., 2012, 2017).
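For illustration only (the reported values come from SmartPLS), the two reliability coefficients named above can be computed directly from the item responses and the standardized outer loadings; `cronbach_alpha` and `composite_reliability` are hypothetical helper names:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for one construct.

    items: (n_respondents, k_items) matrix of Likert responses.
    alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def composite_reliability(loadings):
    """Composite reliability rho_c from standardized outer loadings."""
    lam = np.asarray(loadings, dtype=float)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())
```

Both coefficients above the .7 threshold, as reported in Table 3, indicate acceptable internal consistency.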
Table 3. Construct Reliability and Validity.
Convergent validity refers to the degree to which items measuring the same construct are conceptually similar. To investigate convergent validity, this study examined factor loadings and average variance extracted (AVE). According to the findings in Table 3, the AVE values ranged from 0.548 to 0.689, exceeding the recommended value of 0.5 (Hair et al., 2017). Only the FC1 and FC2 factor loadings were lower than the recommended value of 0.7, falling between 0.6 and 0.7, which is acceptable in social science research (Hair et al., 2012; Hulland, 1999); thus, most factor loadings satisfied the requirements (see Table 3). These findings establish convergent validity.
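AVE is simply the mean of the squared standardized loadings, as this minimal sketch shows (the `ave` helper name is hypothetical, and the loadings in the test are illustrative, not the study's actual values); an AVE of at least 0.5 means the construct explains at least half of its indicators' variance:

```python
import numpy as np

def ave(loadings):
    """Average Variance Extracted: mean of the squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return float((lam ** 2).mean())
```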
This study uses the Heterotrait-Monotrait ratio (HTMT) criterion to evaluate discriminant validity. The HTMT has been shown to outperform prior criteria for measuring discriminant validity (Henseler et al., 2015). Table 4 indicates that the HTMT criterion is met, as all values were below the suggested threshold of 0.9 (Hair et al., 2017).
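As a sketch of the criterion (the actual values were produced by SmartPLS), HTMT is the mean absolute heterotrait correlation divided by the geometric mean of the average monotrait correlations; the `htmt` helper below is a hypothetical illustration:

```python
import numpy as np

def htmt(items_a, items_b):
    """Heterotrait-Monotrait ratio for two constructs.

    items_a: (n, p) indicator matrix of construct A
    items_b: (n, q) indicator matrix of construct B
    """
    R = np.corrcoef(np.column_stack([items_a, items_b]), rowvar=False)
    p = items_a.shape[1]
    hetero = np.abs(R[:p, p:]).mean()  # between-construct item correlations

    def mono(block):
        # average absolute within-construct correlation (off-diagonal only)
        mask = ~np.eye(block.shape[0], dtype=bool)
        return np.abs(block[mask]).mean()

    return hetero / np.sqrt(mono(R[:p, :p]) * mono(R[p:, p:]))
```

Values below 0.9 support discriminant validity; a ratio near or above 1 would indicate the two constructs are empirically indistinguishable.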
Table 4. Heterotrait-Monotrait Ratio (HTMT) Matrix.
Structural Model Assessment
The structural model is assessed after the measurement model has been validated. A bootstrapping procedure with 5,000 subsamples was used to obtain the path coefficients and the coefficient of determination (Hair et al., 2017). According to the results,
Table 5. PLS Predict and Construct Cross-Validated Redundancy.
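The bootstrapping step works by re-estimating the model on resampled data and using the spread of the estimates as a standard error for significance testing. A simplified stand-in (a single OLS path rather than the full PLS algorithm; `bootstrap_path` is a hypothetical name) looks like this:

```python
import numpy as np

def bootstrap_path(X, y, n_boot=5000, seed=42):
    """Bootstrap standard errors and t-values for OLS path coefficients,
    mimicking the SmartPLS bootstrapping step on one structural equation.

    X: (n, k) predictor matrix, y: (n,) outcome.
    Returns (point estimates, bootstrap SEs, t-values); index 0 is the intercept.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    A = np.column_stack([np.ones(n), X])
    beta_hat = np.linalg.lstsq(A, y, rcond=None)[0]
    boots = np.empty((n_boot, A.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)          # resample rows with replacement
        boots[b] = np.linalg.lstsq(A[idx], y[idx], rcond=None)[0]
    se = boots.std(axis=0, ddof=1)
    return beta_hat, se, beta_hat / se
```

A path is considered significant when its bootstrap t-value exceeds the critical value (e.g., 1.96 at the 5% level).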
As indicated in Table 6 and Figure 2, the assessment of the structural model presents the hypothesis tests. The findings indicated that PE, EE, and SI significantly impact behavioral intention to use AI in journalism. Thus, H1 (β = .245,
Table 6. Hypothesis Testing.
The effect size (

Figure 2. Structural model.
Importance-Performance Map Analysis (IPMA)
As an advanced PLS-SEM method, importance-performance map analysis (IPMA) is used in this study, with BI and AA as the target variables. The IPMA technique, which considers the average values of latent variables and their associated indicators (i.e., the performance measure), can be used to better understand PLS-SEM results (Hair et al., 2017; Ringle & Sarstedt, 2016). The IPMA findings are shown in Figures 3 and 4. The importance and performance of all independent variables were evaluated. The results in Figure 3 indicate that TR has the highest importance for predicting behavioral intention to use, with the highest effect and performance scores. Concerning the importance of measures in AI adoption, Figure 4 shows that BI is the most important factor impacting AI adoption, followed by TA and RS.
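Conceptually, each construct's position on the map pairs its unstandardized total effect on the target variable (importance) with its mean latent score rescaled from the 5-point Likert range to 0–100 (performance). A minimal sketch of that rescaling, with the hypothetical `ipma_point` helper:

```python
import numpy as np

def ipma_point(latent_scores, total_effect, scale_min=1, scale_max=5):
    """One (importance, performance) coordinate for an IPMA map.

    performance: construct's mean score rescaled to 0-100 on the original
    Likert range; importance: its unstandardized total effect on the target.
    """
    perf = (np.mean(latent_scores) - scale_min) / (scale_max - scale_min) * 100
    return total_effect, perf
```

Constructs in the high-importance, low-performance region of the map are the most promising levers for managerial action.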

Figure 3. IPMA for behavioral intention.

Figure 4. IPMA for AI adoption.
Discussion
The research problem addressed in this study was understanding the factors influencing journalists’ adoption of AI in journalism. The study aimed to test the hypotheses related to the impact of Performance expectancy (PE), Effort expectancy (EE), Social influence (SI), Facilitating conditions (FC), Behavioral intention (BI), Trust (TR), Regulatory support (RS), and Technology affinity (TA) on the behavioral intention and adoption of AI in journalism.
First, we found support for the hypothesis that PE, EE, and SI significantly affect the behavioral intention to use AI in journalism, and that FC and BI significantly affect the behavior of AI adoption in journalism. This finding aligns with previous studies utilizing the UTAUT framework (Ahadzadeh et al., 2021; Peng & Miller, 2021; Venkatesh et al., 2003; Williams et al., 2015). It indicates that how journalists view the pros and cons of adopting AI affects their decision to use AI technologies, and that it is essential to consider the impact of social networks, peers, and coworkers on journalists’ adoption choices. Moreover, the capacity of the organizational and technological environment to provide the appropriate tools, infrastructure, and support is vital in enabling journalists to use AI technologies.
In addition to the core determinants proposed by the UTAUT model, this study incorporated three additional constructs: Trust, Regulatory support, and Technology affinity, to comprehensively examine the factors influencing journalists’ adoption of AI in journalism. Regarding Trust, our findings confirmed its significant impact on the behavioral intention to use AI in journalism. This result is consistent with prior studies that have emphasized the role of Trust in technology adoption (Latifa & Zakaria, 2020; C. T. Pham & Thi Nguyet, 2023; Zhu et al., 2021). Trust enables journalists to overcome perceptions of risk and uncertainty associated with AI adoption, thus enhancing their acceptance and intention to adopt AI technologies.
Furthermore, the hypothesis that RS significantly affects the behavior of AI adoption in journalism received empirical support. This finding suggests that the policies and rules set by regulatory agencies influence the diffusion and adoption of AI technologies in journalism. It aligns with previous research highlighting the impact of regulatory support on technology adoption (Koster & Borgman, 2020; Kumar Bhardwaj et al., 2021; Matias & Hernandez, 2021). Regulatory support can reduce compliance costs, address data breaches and fraud concerns, and create an environment conducive to adopting new technologies.
Additionally, we found that TA significantly affects the behavior of AI adoption in journalism. This result underscores the role of individuals’ natural inclination toward technology and their comfort level in shaping their adoption behavior. It is consistent with prior research highlighting the influence of technology affinity on technology adoption (Aldás-Manzano et al., 2009; Trautwein et al., 2021). Journalists with a higher level of technology affinity are more likely to adopt AI technologies, as it modifies the impact of traditional technology acceptance factors on their adoption intentions.
This study fills several research gaps in the existing literature. First, it contributes to the understanding of AI adoption in the specific context of journalism, especially in a developing journalism environment like Vietnam’s. While previous research has examined technology adoption in various domains, the present study focuses on AI adoption within journalism, providing insights into the unique factors influencing journalists’ adoption decisions. Second, by adding new constructs, this study broadens the UTAUT framework, incorporating trust, regulatory support, and technology affinity in a comprehensive model to explain journalists’ use of AI in journalism. These new components emphasize the significance of regulatory elements, individual propensity toward technology, and trust in influencing adoption behavior.
This research contributes to the theoretical understanding of AI adoption in journalism by extending and integrating the insights from UTAUT (Venkatesh et al., 2003). One key theoretical contribution of this study is the identification of TR, RS, and TA as significant factors influencing AI adoption in journalism. While previous research on technology adoption has acknowledged the importance of TR (Latifa & Zakaria, 2020; C. T. Pham & Thi Nguyet, 2023; Stojanović et al., 2023; Zhu et al., 2021), RS (Koster & Borgman, 2020; Kumar Bhardwaj et al., 2021; Matias & Hernandez, 2021), and TA (Hasgall et al., 2018; Trautwein et al., 2021) in various contexts, this study highlights their relevance in journalism, where issues of accuracy, transparency, and ethical use of technology are paramount. This finding calls for further investigation into the boundary conditions of the UTAUT model and the exploration of context-specific factors that may influence technology adoption in journalism.
The practical contributions of this study are manifold, providing valuable insights for AI developers, media organizations, and policymakers. By identifying the factors influencing journalists’ intentions to adopt AI tools and their actual adoption behavior, this study offers guidance for developing and promoting AI tools in journalism. For AI developers, the study highlights the importance of designing AI tools that are easy to use and that demonstrate their performance benefits to journalists. By focusing on these factors, developers can create AI tools more likely to be embraced by journalists. Building trust among journalists by ensuring AI tools’ transparency, accuracy, and reliability is crucial for widespread adoption (de-Lima-Santos & Mesquita, 2021; Guzman & Lewis, 2020; Haugen, 2023).
Media organizations can leverage the findings of this study to foster a supportive environment for AI adoption. By providing training and support to improve journalists’ technology affinity and encouraging the use of AI through social influence, media organizations can promote a culture of innovation and acceptance of AI in journalism. Finally, policymakers can use the insights from this study to inform the development of regulations and guidelines that promote transparency, accountability, and ethical use of AI tools in journalism (Broussard et al., 2019).
Conclusion
This study contributes significantly to the theoretical framework of AI adoption in journalism, particularly within the context of Vietnamese journalism. Integrating core constructs from the UTAUT model with additional constructs of trust, regulatory support, and technology affinity offers a comprehensive understanding of the factors influencing journalists’ adoption of AI. The results underscore the importance of perceived benefits, ease of use, social influence, and organizational support in shaping journalists’ behavioral intentions toward AI adoption. The significant roles of trust, regulatory support, and individual technology affinity were highlighted, emphasizing how these elements can facilitate or hinder the adoption process. These findings enrich the UTAUT framework and provide practical insights for AI developers, media organizations, and policymakers to develop strategies to encourage AI adoption in journalism.
However, this study has several limitations that pave the way for future research. The specific focus on Vietnamese journalism limits the generalizability of the findings, highlighting the need for replication studies in diverse cultural and organizational settings to validate the results. The study’s cross-sectional design constrains the establishment of causality, suggesting that future research could employ longitudinal or experimental methods to obtain more robust evidence. Additionally, reliance on self-reported data raises concerns about potential biases, indicating the need for future studies to incorporate objective measures of AI adoption. The study also overlooks potential influencing factors such as perceived risk, job characteristics, and organizational culture, which could be crucial in understanding AI adoption in journalism. Future research could also collect qualitative interview data to support richer interpretation. Lastly, the study did not examine the moderating effects of demographic factors such as gender, age, experience, and voluntariness. Addressing these aspects in future research could provide a more comprehensive understanding of the adoption process and help tailor AI tools and policies more effectively to suit the diverse needs of journalists.
