Abstract
Introduction
Digital technology integration has driven significant shifts in higher education, enhancing both teaching and learning processes (Alenezi, 2023; Bond et al., 2020; Shrivastava & Shrivastava, 2022). These technologies facilitate personalized learning, flexibility, and improved outcomes, fostering autonomy and engagement that contribute to positive educational impacts (Pinto & Leite, 2020; Rybakova et al., 2021; Wekerle et al., 2020). However, students’ digital competencies and their willingness to adopt new technologies play a significant role in the effective use of these tools (Artacho et al., 2020; Bergdahl et al., 2020; Venkatesh et al., 2012; Vuorikari et al., 2022; Zhao et al., 2021).
Advancements in technology during the digital age have profoundly reshaped learning processes. The emergence of Generative Artificial Intelligence (GenAI) has accelerated this transformation by enabling the creation of high-quality instructional materials and providing personalized support, instant feedback, and adaptive learning experiences tailored to individual learner profiles and habits (Alasadi & Baiz, 2023; Boscardin et al., 2023; Mittal et al., 2024; Ouyang et al., 2023). Nielsen (2023) describes the interaction between humans and GenAI as the third paradigm in computing user interfaces, marking a shift where users specify what they want rather than how to achieve it, effectively reversing the locus of control. In this context, GenAI tools like ChatGPT have the potential to revolutionize education by transforming how individuals interact with information, acquire knowledge, and develop skills (Alasadi & Baiz, 2023).
Recognizing the critical importance of AI literacy, educational policies are increasingly advocating for the integration of AI education into curricula. Legislative initiatives across various regions are mandating AI literacy in schools, emphasizing the necessity for students to understand AI’s principles, applications, limitations, ethical considerations, and real-world impacts (GovTech, 2024; Kean, 2024; Lieberman, 2024). For instance, the California Chamber of Commerce has called for the state to teach students how to use GenAI tools effectively (GovTech, 2024). These developments underscore the growing recognition of AI’s pervasive role in society and the urgency for educational systems to prepare students to navigate and utilize AI technologies effectively.
GenAI represents a cutting-edge advancement that is significantly altering educational methods by simulating human-like creation and ideation (Alasadi & Baiz, 2023; Pavlik, 2023; Xu & Ouyang, 2022). Leveraging natural language processing, ChatGPT provides immersive, customized learning experiences with immediate feedback, thereby enriching engagement and comprehension (Kohnke et al., 2023; Maheshwari, 2024). ChatGPT exemplifies AI’s transformative impact, capable of supporting curriculum development and offering real-time feedback, fostering a more creative and adaptable approach to learning (Bai et al., 2023; Jonsson & Tholander, 2022; OpenAI, 2024; Yan et al., 2024). However, as with other digital tools in education, recent studies suggest that adopting GenAI technologies requires advanced digital skills and a proactive disposition toward AI engagement (Kreinsen & Schulz, 2023; Yilmaz et al., 2023). Although Artificial Intelligence (AI) technologies, particularly GenAI tools like ChatGPT, are reshaping higher education by enabling personalized and interactive learning experiences, their successful adoption requires a range of digital skills and motivational factors, since students must be equipped to use GenAI effectively (Bergdahl et al., 2020; Venkatesh et al., 2012; Vuorikari et al., 2022). Without these skills and motivations, students may struggle to fully leverage GenAI’s potential, risking underutilization or even misuse (Abdelghani et al., 2023; Dickey et al., 2023).
In addition to structured, formal education, informal learning is becoming increasingly crucial for skill development in today’s fast-evolving technological landscape. GenAI tools, in particular, offer immense potential to transform informal learning by enabling personalized, self-directed educational experiences that go beyond traditional classroom boundaries (Touvron et al., 2023). By providing flexible, tailored access to knowledge, GenAI tools empower individuals to learn at their own pace and on their own schedule, adapting to unique learning goals and needs. This accessibility supports lifelong learning, making skill development achievable and sustainable for diverse learners (Laato et al., 2023; Peters & Romero, 2019). As technology rapidly advances, the need for continuous skill updating grows to meet societal and professional expectations—demands that formal education alone may not fully address due to its inherent rigidity (Nygren et al., 2019). GenAI thus opens new avenues for informal learning, offering learners adaptable resources to stay relevant and competent in an increasingly AI-driven world.
This study seeks to address the knowledge gap in understanding how digital competencies and motivational factors together influence the acceptance and use of GenAI in educational contexts, with a specific focus on informal learning through ChatGPT.
Theoretical Framework
The Digital Competence (DigComp) framework provides a comprehensive structure for understanding and developing the digital competencies essential for participation in the digital world (Ferrari, 2013; Vuorikari et al., 2022). DigComp outlines five key areas of digital competence: Information and Data Literacy (IDL), Communication and Collaboration (CC), Digital Content Creation (DCC), Safety (S), and Problem-Solving (PS; Carretero et al., 2017; Vuorikari et al., 2022). These competencies are crucial for effectively engaging with digital technologies (Carretero et al., 2017; Vuorikari et al., 2016, 2022). In higher education, the application of the DigComp framework extends across various facets, from evaluating student capabilities to guiding curriculum development. For instance, it has been used to create self-assessment tools tailored for university students, enabling them to gauge and improve their digital skills in a structured manner (Liu, 2023). The framework has also facilitated an understanding of the interplay between digital competence and socioeconomic factors in higher education settings, and has inspired the integration of information and communication technologies and digital skills into pedagogical models (Evangelinos & Holley, 2016). This integration is key to preparing students for a digital world, ensuring that higher education remains relevant and responsive to the evolving demands of the digital age. While DigComp has been instrumental in higher education for assessing digital competence and guiding curriculum development (Evangelinos & Holley, 2016; Liu, 2023), it does not explicitly address the competencies required for GenAI tools like ChatGPT. This gap underscores the necessity to investigate how these competencies influence the acceptance and utilization of GenAI technologies, particularly in informal learning contexts—a key focus of this study.
In addition to digital competencies, motivational factors are critical to technology acceptance. As GenAI technologies like ChatGPT become more embedded in educational environments, understanding not only digital competencies but also how motivational factors predict students’ acceptance and utilization of these tools in learning processes becomes important. The Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) provides a comprehensive framework for understanding the factors that drive technology adoption by integrating various motivational theories and earlier adoption models. Built upon the Expectancy-Value Theory (Fishbein & Ajzen, 1977) and the Theory of Reasoned Action (Ajzen & Fishbein, 1988), UTAUT2 extends these foundational models by incorporating social and behavioral factors that address users’ motivations to adopt new technologies (Venkatesh et al., 2012). Key constructs in UTAUT2, such as Performance Expectancy (PE) and Effort Expectancy (EE), correspond directly to perceived usefulness and ease of use, core elements in motivational theories, where perceived benefits and manageable effort drive intention to adopt (Davis, 1985; Venkatesh & Davis, 2000). UTAUT2 introduces additional constructs, including Social Influence (SI) and Facilitating Conditions (FC), which recognize the impact of social pressures, normative beliefs, and supportive resources on technology acceptance, responding to critiques that previous models lacked explanatory power for social and environmental contexts (Taylor & Todd, 1995; Venkatesh et al., 2003). By incorporating Hedonic Motivation (HM) and Habit (HT), UTAUT2 also accounts for intrinsic motivations such as enjoyment and familiarity with technology, reflecting elements of Self-Determination Theory and Flow Theory, which highlight the role of pleasure and routine in sustaining user engagement (Ryan & Deci, 2000). The model’s adaptability to diverse settings and its inclusion of intrinsic, extrinsic, and social motivations make it particularly well-suited for analyzing adoption patterns of GenAI tools in educational contexts. By applying UTAUT2 in this study, the aim is to capture a holistic understanding of both the digital competencies and the motivational factors that influence students’ acceptance and use of ChatGPT as an informal learning tool. This integrated approach makes it possible to examine not only the skills students possess but also the motivational drivers that encourage them to adopt and utilize GenAI technologies. Table 1 provides a summary of the literature review aligned with the research gap and study aim.
Summary of Literature Review Aligned with Research Gap and Study Aim.
Model Generation
In formulating the model for this study, the DigComp framework and UTAUT2 were integrated to provide a comprehensive understanding of the factors influencing the acceptance and use of the GenAI technology ChatGPT as an informal learning tool among higher education students. This integration made it possible to examine not only the role of digital competencies, as outlined in DigComp, in facilitating effective engagement with ChatGPT, but also the motivational factors influencing behavioral intention and actual use, as captured by UTAUT2.
The DigComp framework provides a structured approach to understanding digital competencies essential for effective technology use (Carretero et al., 2017; Vuorikari et al., 2022). This study examines specific DigComp competencies—Information and Data Literacy (IDL), Communication and Collaboration (CC), Digital Content Creation (DCC), Safety (S), and Problem-Solving (PS)—as predictors of Actual Use (AU) of ChatGPT. Recognizing the importance of ethical considerations in digital interactions, Ethics (E) is included as an additional factor (Gümüş & Kukul, 2023). By focusing on these competencies, the study assesses how each area, including Ethics, impacts students’ effective use of ChatGPT in informal learning, directly informing the hypotheses.
Information and Data Literacy (IDL)
The ability to collect, manage, analyze, and interpret data (Vuorikari et al., 2022). This could predict the effective use of ChatGPT for data-driven tasks and inquiries. Students proficient in searching, evaluating, and critically analyzing information can leverage ChatGPT more effectively for research and learning. This component of digital competence ensures that students can ask relevant questions, interpret ChatGPT’s responses accurately, and integrate this information into their learning processes.
Communication and Collaboration (CC)
The capability to communicate, collaborate, and participate in digital networks (Vuorikari et al., 2022). ChatGPT can facilitate peer discussions, group projects, and interaction with instructors by providing a platform for instant information exchange, feedback, and support. Students with higher levels of digital competence are likely to use ChatGPT more effectively for collaborative learning activities.
Digital Content Creation (DCC)
The skills involved in creating digital content (Vuorikari et al., 2022). Users adept in this area may be more inclined to use ChatGPT for generating and enhancing educational materials. The ability to create digital content is a vital component of digital competence. Students can use ChatGPT to assist in drafting essays, reports, and presentations, thereby enhancing their digital content creation skills. Familiarity with digital tools and platforms enables students to integrate ChatGPT’s outputs creatively and responsibly into their work.
Problem-Solving (PS)
The competence to identify digital problems and creatively solve them (Vuorikari et al., 2022). This is likely to influence how students and educators leverage ChatGPT to tackle complex learning challenges. Digital competence includes the ability to engage in problem-solving and critical thinking in digital environments. Higher education students can use ChatGPT to explore solutions to complex problems, simulate scenarios, and generate ideas, all while critically evaluating the information and suggestions provided by the AI.
Safety (S)
The confidence and capability to protect oneself and others in digital environments (Vuorikari et al., 2022). It may influence the propensity to engage with ChatGPT if users feel secure in their interactions with AI. Understanding the importance of online safety, data protection, and privacy is part of being digitally competent. Students knowledgeable about these aspects are more likely to use ChatGPT and other digital tools while maintaining ethical standards and safeguarding personal and academic integrity.
Ethics (E)
This study acknowledges the intricate layers within the DigComp framework, especially the nuanced delineation between safety and ethics. Traditionally enveloped within the broader scope of safety, ethical considerations in digital environments encompass a wide array of moral decisions individuals face regarding content use, creation, and communication (Gümüş & Kukul, 2023). These considerations are paramount in fostering a digital culture that is not only secure but also respectful and equitable. Therefore, this study proposes a deliberate separation of the safety and ethics dimensions to scrutinize how higher education students navigate these digital terrains. Safety, as defined within the DigComp framework, focuses on protective measures against digital risks, emphasizing knowledge, and skills for risk management and privacy maintenance (Vuorikari et al., 2022). Conversely, ethics in digital interactions concern adhering to moral principles that govern behavior in digital spaces—emphasizing respect for copyright, privacy, and digital etiquette across diverse cultures and generations (K. Lin, 2016). This distinction is pivotal in our study, allowing for a focused investigation into the specific competencies critical for responsible digital citizenship in higher education. By examining these competencies, this study develops hypotheses on their direct impact on students’ actual use of ChatGPT, aiming to identify which are most critical for effective utilization in informal learning contexts.
UTAUT2 is utilized within this study for its comprehensive approach to understanding the factors influencing the acceptance and utilization of technological innovations (Venkatesh et al., 2012). UTAUT2 synthesizes elements from various established theories, providing a robust framework to assess the interplay of PE, EE, SI, FC, HM, and HT on behavioral intention (BI) and actual use (AU). However, the construct of Price Value (PV) has been excluded from the model. The exclusion of PV is a strategic decision that aligns with the nature of ChatGPT’s deployment in higher education, where the technology offers both free and paid versions. The complexity of assessing PV arises because the two versions cater to different user needs and financial thresholds, potentially skewing perceptions among users (OpenAI, 2024). Moreover, in the context of educational use, the value derived from ChatGPT is not solely based on financial cost but on the qualitative enhancement of the learning experience. Therefore, including PV might not accurately reflect the factors influencing students’ acceptance and use of ChatGPT in an informal learning context.
The UTAUT2 framework is particularly appropriate for academic settings, as it has been empirically validated across contexts and is adaptable to the nuanced dynamics of higher education technology use (Al Farsi, 2023; Strzelecki, 2024).
The constructs of UTAUT2 are (Venkatesh et al., 2012):
Performance Expectancy (PE)
The degree to which individuals believe that using the technology will help them attain gains in performance. In the context of ChatGPT in higher education, this translates to the perception that using ChatGPT can enhance learning outcomes and educational productivity. This study posits that if students perceive ChatGPT as beneficial for their academic performance, they are more likely to intend to use it.
Effort Expectancy (EE)
This refers to the ease of use of the technology. For ChatGPT, it would be the user’s belief that this GenAI is user-friendly and easy to integrate into their learning processes. An intuitive and accessible interface reduces the effort required to use ChatGPT, thereby increasing students’ intention to adopt it.
Social Influence (SI)
The extent to which individuals perceive that important others believe they should use the technology. In higher education, this could be influenced by peers, instructors, or institutional adoption of ChatGPT. If students feel that their social circle supports the use of ChatGPT, they may be more inclined to use it themselves.
Facilitating Conditions (FC)
The belief that an individual has the necessary infrastructure and support to use the technology. For ChatGPT, this includes access to the technology, availability of support materials, and a supportive learning environment. Adequate resources and support facilitate the use of ChatGPT, encouraging students to adopt it in their learning activities.
Hedonic Motivation (HM)
The fun or pleasure derived from using the technology. ChatGPT could enhance the learning experience by making it more interactive and enjoyable for undergraduates. Enjoyment and intrinsic satisfaction from using ChatGPT may motivate students to adopt it for informal learning.
Habit (HT)
The extent to which people tend to perform behaviors automatically because of learning. In the case of ChatGPT, it reflects how integrated the use of the AI becomes in the student’s or educator’s routine. When the use of ChatGPT has become habitual, students are more likely to intend to continue using it in their informal learning.
Behavioral Intention (BI)
In the context of this study, BI captures undergraduate students’ intentions to use ChatGPT for informal learning purposes. The determinants of BI in this study—PE, EE, SI, FC, HM, and HT—highlight the multifaceted nature of technology acceptance, where both practical benefits (like improved learning outcomes) and experiential aspects (such as enjoyment and ease of use) play crucial roles.
Actual Use (AU)
AU measures how frequently and extensively students integrate this AI tool into their informal learning processes. It reflects the transition from intending to use ChatGPT to incorporating it into study habits, coursework, research, and other educational activities.
The relationships among these constructs form a model where PE, EE, SI, FC, HM, and HT directly determine BI to use a technology, which in turn determines actual use behavior (Venkatesh et al., 2012). This study hypothesizes that these motivational factors significantly influence students’ intention to use ChatGPT, and that BI influences AU.
Studies have highlighted ChatGPT’s potential as a supportive educational tool and the challenges associated with its adoption, including ethical concerns and the need for guidance in leveraging AI (Crawford et al., 2023; Gilson et al., 2023; Han, 2024; West, 2023). A combination of UTAUT2 and digital competence constructs can form a comprehensive model for understanding ChatGPT’s acceptance and use in higher education, ensuring ethical and effective use to support learning outcomes (Crawford et al., 2023; Gilson et al., 2023).
The aim of the current study is to empirically investigate how both digital competencies, as defined by the DigComp framework, and motivational factors, as outlined in UTAUT2, influence the acceptance and effective utilization of ChatGPT within higher education settings. By integrating constructs from both frameworks, the study seeks to provide a comprehensive understanding of the factors that affect students’ behavioral intention to use and actual use of ChatGPT as an informal learning tool. The outcomes of this research are anticipated to provide actionable guidance on curricular enhancements that can better align with the emerging requirements of a digitally competent and motivated academic community, capable of navigating and maximizing the potential of GenAI in education. By illuminating the interplay between technology acceptance and digital competence, this research contributes to the broader discourse on integrating GenAI tools into educational practices, ultimately enhancing educational outcomes.
Hypotheses Development
Based on the above constructs and their justifications, this study proposes the following hypotheses (an illustrative sketch of the corresponding model paths follows the list):
UTAUT2 Hypotheses:
Hypothesis 1 (H1): PE positively influences BI to adopt ChatGPT as an informal learning tool within the higher education context. If students believe that ChatGPT will enhance their learning performance, they are more likely to intend to use it.
Hypothesis 2 (H2): EE positively influences BI to adopt ChatGPT as an informal learning tool. An easier-to-use system increases students’ intention to adopt the technology.
Hypothesis 3 (H3): SI positively influences BI to adopt ChatGPT as an informal learning tool. Perceptions of social pressure or encouragement from peers and instructors can motivate students to adopt ChatGPT.
Hypothesis 4 (H4): FC positively influences BI to adopt ChatGPT as an informal learning tool. Access to resources and support increases students’ intention to use ChatGPT.
Hypothesis 5 (H5): HM positively influences BI to adopt ChatGPT as an informal learning tool. Enjoyment and fun derived from using ChatGPT can enhance students’ intention to adopt it.
Hypothesis 6 (H6): HT positively influences BI to adopt ChatGPT as an informal learning tool. When students have formed a habit of using ChatGPT, they are more likely to intend to continue using it in their informal learning.
Hypothesis 7 (H7): BI positively influences AU of ChatGPT as an informal learning tool. Students with a strong intention to use ChatGPT are more likely to actually use it in their learning activities.
DigComp Framework Hypotheses:
Hypothesis 8 (H8): IDL positively influences AU of ChatGPT. Students with higher IDL skills can better utilize ChatGPT for accessing and processing information.
Hypothesis 9 (H9): CC positively influences AU of ChatGPT. Students proficient in CC may use ChatGPT more in collaborative learning activities.
Hypothesis 10 (H10): DCC positively influences AU of ChatGPT. Students skilled in DCC may more frequently use ChatGPT for creating and enhancing digital content.
Hypothesis 11 (H11): PS positively influences AU of ChatGPT. Students with strong PS skills may use ChatGPT to tackle complex problems, increasing their usage.
Hypothesis 12 (H12): S positively influences AU of ChatGPT. Students aware of digital safety may feel more comfortable using ChatGPT, leading to higher usage.
Hypothesis 13 (H13): E positively influences AU of ChatGPT. Students with high ethical awareness may use ChatGPT responsibly, affecting their usage patterns.
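Read together, H1 to H13 translate into two sets of structural paths: the UTAUT2 constructs predicting BI (H1 to H6) and BI together with the DigComp factors predicting AU (H7 to H13). The lines below give a minimal, illustrative lavaan-style sketch of these paths; this is not the authors’ analysis script, the construct names follow the abbreviations used above, and the data object survey_data is hypothetical. In the study itself the constructs were modeled as latent variables within a full SEM rather than as composite scores.

library(lavaan)

# Hypothesized structural paths, assuming one composite score per construct
hypothesized_paths <- '
  BI ~ PE + EE + SI + FC + HM + HT          # H1-H6
  AU ~ BI + IDL + CC + DCC + PS + S + E     # H7-H13
'

# fit <- sem(hypothesized_paths, data = survey_data)   # survey_data: hypothetical
# summary(fit, standardized = TRUE)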
Method
Study Design
This study was approved by the Human Research Education Sciences Ethics Committee at [Erzincan Binali Yıldırım] University (Ethics Code: E-88012460-050.04-349157). The research was conducted ethically in accordance with the World Medical Association Declaration of Helsinki. Participants provided informed consent, were assured of the confidentiality and anonymity of their responses, and were informed of their right to withdraw from the study at any time without penalty.
This study employed a quantitative research design consisting of two main components. The first phase involved conducting a Confirmatory Factor Analysis (CFA) of the adapted UTAUT2 model to validate its applicability in assessing the acceptance and use of ChatGPT as an informal learning tool in higher education. The second phase comprised the main study, which explored how digital competencies and motivational factors influence the acceptance and utilization of ChatGPT as an informal learning tool among higher education students. This was achieved by integrating constructs from both the UTAUT2 model and the DigComp framework.
Participants and Setting
Participants and Setting for Confirmatory Factor Analysis
The Confirmatory Factor Analysis (CFA) included 140 first-year undergraduate students enrolled in an introductory information technologies course during the 2023 to 2024 academic year. Participants were selected through convenience sampling due to their accessibility and relevance to the study, as they were already utilizing ChatGPT within the scope of their course. This sample was appropriate for validating the adapted UTAUT2 model, given the challenges of recruiting a sufficient number of ChatGPT users at the time of data collection. The sample size met the guideline proposed by Hair et al. (2010), which recommends a minimum of five observations per item for factor analysis; with the UTAUT2 containing 28 items, a sample of 140 ensured stable and reliable factor solutions. Data were collected via Google Forms to facilitate efficient and convenient participation.
Participants and Setting for the Main Study
The main study included 404 undergraduate students from diverse disciplines and institutions during the 2023 to 2024 academic year. Using separate samples for the CFA and main study ensured robustness and generalizability, reducing overfitting risks (Fokkema & Greiff, 2017; Kline, 2015). Practical considerations led to a specific course sample for the CFA, while the main study utilized broader representation. Data collection was conducted via Google Forms.
Data Collection Instruments
Instrument for Confirmatory Factor Analysis
The survey instrument for the CFA was the adapted version of the UTAUT2 framework, tailored to assess the adoption of ChatGPT as an informal learning tool in higher education. This adaptation involved revising the UTAUT2 constructs to align with the context of Generative AI (GenAI) in education. The instrument featured items on a 5-point Likert scale ranging from “Strongly Disagree” (1) to “Strongly Agree” (5), allowing participants to express the degree of their agreement with various statements about their behavioral intentions and actual use behavior concerning ChatGPT.
The UTAUT2 model utilized in this study (Venkatesh et al., 2012) assessed six key constructs that influence BI (3 items, ICR = 0.93) and eventually AU (3 items). These are: PE (4 items, ICR = 0.88), EE (4 items, ICR = 0.91), SI (3 items, ICR = 0.82), FC (4 items, ICR = 0.75), HM (3 items, ICR = 0.86), and HT (3 items, ICR = 0.82). Each construct was measured using multiple items, and the original model demonstrated strong psychometric properties, with internal consistency reliability (ICR) values ranging from 0.75 to 0.93, confirming the instrument’s reliability for assessing technology acceptance in diverse settings.
Instruments for the Main Study
The main study applied the validated UTAUT2 model to assess higher education students’ acceptance and usage of ChatGPT, focusing on the diverse factors that influence its adoption in informal learning settings. Concurrently, the study utilized the Teacher’s Digital Competence Scale, developed by Gümüş and Kukul (2023), to assess a broad spectrum of digital competencies essential for navigating the contemporary educational digital landscape. The Teacher’s Digital Competence Scale comprises 46 items distributed across six factors: IDL (9 items, Cronbach’s α = .92), CC (7 items, Cronbach’s α = .95), DCC (6 items, Cronbach’s α = .93), PS (9 items, Cronbach’s α = .95), S (10 items, Cronbach’s α = .96), and E (5 items, Cronbach’s α = .91). Responses were measured on a 5-point Likert scale ranging from “Strongly Disagree” (1) to “Strongly Agree” (5). The instrument explained 71.97% of the total variance in digital competence, with each dimension demonstrating high reliability and strong internal consistency.
The Teacher’s Digital Competence Scale was chosen despite its primary focus on teachers for two reasons: the participants were pedagogy students preparing for teaching, and the scale aligns closely with the DigComp 2.1 framework, ensuring relevance to current digital standards. Its cultural sensitivity and focus on ethics made it well-suited for evaluating digital competencies. To increase relevance, “colleagues” was changed to “friends” to better reflect student interactions, improving its applicability and accuracy in capturing competencies related to peer engagement in higher education.
Data Analysis
Data Analysis for CFA
The validation of the adapted UTAUT2 model used a CFA conducted in R, starting with sample size verification to meet the recommended ratio of five subjects per item (Hair et al., 2010). Outliers were flagged using Mahalanobis distance prior to the analysis.
The CFA assessed model fit by examining various fit indices. The chi-square to degrees of freedom ratio (χ2/df) was examined as an overall indicator of fit.
The Comparative Fit Index (CFI) and Tucker-Lewis Index (TLI) were also assessed, with values above 0.90 indicating acceptable fit and values at or above 0.95 indicating very good fit (Hooper et al., 2008). Factor loadings were evaluated for significance, with a threshold of 0.40 based on Stevens (2012) guidelines, to confirm the strength of each item’s relationship to its construct. This CFA process rigorously validated the adapted UTAUT2 model, confirming its suitability for assessing ChatGPT acceptance and use as an informal learning tool in higher education.
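To make this workflow concrete, the following is a minimal sketch of how such a CFA could be specified and evaluated in R with lavaan. It is illustrative rather than the authors’ actual script: item names (PE1 to PE4, and so on) and the data frame cfa_data are placeholders.

library(lavaan)

# Adapted UTAUT2 measurement model (placeholder item names)
utaut2_cfa <- '
  PE =~ PE1 + PE2 + PE3 + PE4
  EE =~ EE1 + EE2 + EE3 + EE4
  SI =~ SI1 + SI2 + SI3
  FC =~ FC1 + FC2 + FC3 + FC4
  HM =~ HM1 + HM2 + HM3
  HT =~ HT1 + HT2 + HT3
  BI =~ BI1 + BI2 + BI3
  AU =~ AU1 + AU2 + AU3
'

fit_cfa <- cfa(utaut2_cfa, data = cfa_data, estimator = "ML")

# Fit indices discussed above: chi-square/df, CFI, TLI, plus RMSEA and SRMR
fitMeasures(fit_cfa, c("chisq", "df", "cfi", "tli", "rmsea", "srmr"))

# Standardized loadings, screened against the 0.40 threshold (Stevens, 2012)
loadings <- standardizedSolution(fit_cfa)
subset(loadings, op == "=~" & est.std < 0.40)   # items below the threshold, if any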
Data Analysis for the Main Study
For the main study, data analysis was conducted in R, beginning with checks on data structure, sample adequacy, normality, and outliers to confirm suitability for statistical tests (Hair et al., 2010). Structural Equation Modeling (SEM) was used to test hypothesized relationships between digital competence constructs and factors influencing ChatGPT acceptance. This SEM approach simultaneously analyzed both the measurement model and the structural model, examining interrelations between constructs.
Model fit was evaluated using indices like the CFI and TLI, with values above 0.90 indicating an acceptable fit (Hooper et al., 2008). The RMSEA and SRMR, with values below 0.08, were also used to confirm fit (T. A. Brown, 2015; Hu & Bentler, 1999). Path coefficients were evaluated with bootstrap confidence intervals, highlighting the strength and direction of relationships. This thorough analysis provided insights into how digital competencies influence the adoption and integration of GenAI tools like ChatGPT in higher education. Table 2 summarizes the research process.
Summary of the Research Process.
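As a concrete illustration of the pipeline just summarized, the sketch below shows how a combined measurement and structural model could be fitted and evaluated in R with lavaan. It is a hedged example, not the authors’ script: item names and the number of indicators per DigComp factor are placeholders (the actual scales contain more items), and main_data is a hypothetical data frame of item responses.

library(lavaan)

full_model <- '
  # Measurement model (placeholder item names; real scales have more items)
  PE  =~ PE1 + PE2 + PE3 + PE4
  EE  =~ EE1 + EE2 + EE3 + EE4
  SI  =~ SI1 + SI2 + SI3
  FC  =~ FC1 + FC2 + FC3 + FC4
  HM  =~ HM1 + HM2 + HM3
  HT  =~ HT1 + HT2 + HT3
  BI  =~ BI1 + BI2 + BI3
  AU  =~ AU1 + AU2 + AU3
  IDL =~ IDL1 + IDL2 + IDL3
  CC  =~ CC1 + CC2 + CC3
  DCC =~ DCC1 + DCC2 + DCC3
  PS  =~ PS1 + PS2 + PS3
  S   =~ S1 + S2 + S3
  E   =~ E1 + E2 + E3

  # Structural model (H1-H13)
  BI ~ PE + EE + SI + FC + HM + HT
  AU ~ BI + IDL + CC + DCC + PS + S + E
'

# Robust maximum likelihood estimation, as reported in the Results
fit_sem <- sem(full_model, data = main_data, estimator = "MLR")

# Conventional and robust fit indices against the thresholds noted above
fitMeasures(fit_sem, c("cfi", "tli", "rmsea", "srmr",
                       "cfi.robust", "tli.robust", "rmsea.robust"))

# Standardized path coefficients; bootstrap confidence intervals could be
# obtained, for example, with bootstrapLavaan() applied to the fitted model
standardizedSolution(fit_sem)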
Results
Confirmatory Factor Analysis: Validating the Adapted UTAUT2 Model for Instructional Acceptance and Utilization of ChatGPT
Skewness and kurtosis values for all variables fell within the acceptable range of −2 to +2 (Hair et al., 2010), indicating no violation of normality. Outliers were screened using Mahalanobis distance before the analysis proceeded.
An initial CFA revealed that item EE3 loaded significantly on both Performance Expectancy and Effort Expectancy (0.301 and 0.532, respectively), violating discriminant validity. To maintain construct integrity, item EE3 was removed based on modification indices and CFA principles, which emphasize each item’s unique contribution to its construct.
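A hedged illustration of this step (an assumed workflow, not the authors’ exact procedure): in lavaan, cross-loading diagnostics can be read from the modification indices of the fitted measurement model, and the model can then be re-specified without the offending item. The objects fit_cfa, utaut2_cfa, and cfa_data refer to the placeholder sketch above.

# Modification indices involving EE3 loading on factors other than EE
mi <- modindices(fit_cfa)
subset(mi, op == "=~" & rhs == "EE3")

# Re-specify the measurement model without EE3 and re-fit
utaut2_cfa_revised <- sub("EE3 + ", "", utaut2_cfa, fixed = TRUE)
fit_cfa_revised <- cfa(utaut2_cfa_revised, data = cfa_data, estimator = "ML")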
The main CFA converged successfully after 67 iterations using the Maximum Likelihood (ML) estimation method with the NLMINB optimizer. The model estimated 83 parameters with a sample size of 140 observations.
The confirmatory factor analysis, conducted using lavaan 0.6.17, resulted in a significant chi-square statistic (χ2 = 520.331). Because the chi-square test is sensitive to sample size, model adequacy was judged primarily against the fit indices described in the Data Analysis section.
The CFA results demonstrated that all constructs in the adapted UTAUT2 model exhibited strong and significant factor loadings, confirming that the items effectively measure their intended latent variables (see Table 3). The standardized loadings for the indicators ranged from 0.513 to 0.970, all exceeding the acceptable threshold of 0.40 (Stevens, 2012), indicating robust associations between the observed variables and their respective constructs. Notably, HM showed exceptionally high standardized loadings, ranging from 0.867 to 0.970, suggesting a very strong relationship between the indicators and the latent construct. SI and PE also exhibited high loadings. Overall, all constructs had loadings well above the minimum acceptable level, reinforcing the reliability of the measurement model. The model’s fit indices suggest that while the fit is not perfect, it is acceptable and indicates that the hypothesized relationships are present in the data.
Standardized Estimates for the Adapted UTAUT2 for the Instructional Acceptance and Use of ChatGPT.
Measurement Model Validation in the Main Study
Following the initial validation of the adapted UTAUT2 model in the CFA sample, a second CFA was conducted on the larger main study sample to validate the measurement model of all constructs prior to the SEM analysis. This CFA was estimated in R with the lavaan package using Maximum Likelihood estimation with robust standard errors (MLR), which provided robust measures for evaluating model fit and parameter estimates. The results are summarized below.
A confirmatory factor analysis was conducted using the MLR estimator to assess the fit of the specified model to the data. The analysis involved 404 observations and 248 model parameters. The chi-square test of model fit was significant, χ2(2,453) = 3925.584; however, the remaining fit indices indicated an acceptable fit to the observed data.
In terms of parameter estimates, all factor loadings were significant (see Table 4).
Standardized Estimates for the Measurement Model.
Given these results, the specified model can be considered to have a good fit to the observed data, supporting the hypothesized structure of the constructs under investigation.
The evaluation of the measurement model’s reliability and validity was conducted through CFA, employing Cronbach’s alpha, McDonald’s omega (ω), and the Average Variance Extracted (AVE) as key metrics.
Internal consistency reliability, as measured by Cronbach’s alpha, indicated high levels of consistency across all constructs, with values ranging from .866 for EE to .948 for HM. These results suggest that the items within each construct cohesively measure their respective theoretical concepts. Complementary to alpha, McDonald’s omega coefficients were calculated, yielding similar evidence of reliability. Omega values spanned from 0.811 (EE) to 0.948 (HM), reinforcing the constructs’ reliability within our model. Such consistency in the reliability indices supports the internal consistency of our measurement scales, aligning with the recommended threshold of 0.7 for psychological and behavioral research (Nunnally & Bernstein, 1994). The convergent validity of the constructs was assessed through the AVE, with values exceeding the 0.50 benchmark for all constructs. These findings indicate that a significant portion of the variance in the indicators is accounted for by their respective constructs. Notably, HM demonstrated the highest AVE (0.860), suggesting that this construct provides a robust representation of its indicators. The AVE results, complemented by the high reliability coefficients, affirm the constructs’ validity in capturing the essence of the theoretical concepts they represent. Table 5 shows reliability and convergent validity coefficients for measurement constructs.
Reliability and Convergent Validity Coefficients for Measurement Constructs.
Overall, the measurement model displayed excellent reliability and satisfactory convergent validity across all of the constructs. The internal consistency, as evidenced by both Cronbach’s alpha and McDonald’s omega, was exceptionally high, illustrating the cohesiveness of the items within each construct. Moreover, the AVE results further validated the constructs’ efficacy in representing their respective indicators, thereby supporting the measurement model’s integrity and its suitability for subsequent structural analysis.
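For reference, reliability and convergent validity metrics of this kind can be computed directly from a fitted lavaan measurement model with the semTools package. The sketch below is illustrative only: fit_measurement stands for the main-study measurement model (fitted with cfa(), analogous to the earlier sketch), and the available function names depend on the semTools version.

library(semTools)

# Cronbach's alpha, McDonald's omega, and AVE per latent construct
reliability(fit_measurement)     # older semTools API; rows include alpha, omega, avevar

# In newer semTools versions the same information is obtained with:
# compRelSEM(fit_measurement)    # composite reliability (omega)
# AVE(fit_measurement)           # average variance extracted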
Structural Equation Modeling Analysis: Unraveling the Dynamics of Digital Competence and ChatGPT Adoption in Informal Learning Contexts
First, assumption checks were conducted. Skewness and kurtosis values within the range of −2 to +2 suggested an approximation to normal distribution. Influential outliers were identified using Mahalanobis distance with a chi-square distribution threshold. A total of 98 observations were identified as outliers, indicating deviations in the multivariate space that could potentially influence the SEM analysis. These outliers were determined based on a threshold set at a significance level of .05, with the degrees of freedom corresponding to the number of variables in the numeric data. Mardia’s test for multivariate normality revealed significant deviations from multivariate normality in the dataset; the Mardia skewness statistic was 132,177.89 and was statistically significant, supporting the use of robust (MLR) estimation.
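A minimal sketch of these screening steps is given below, assuming a hypothetical data frame main_data of numeric item scores; it uses the psych package and illustrates the checks rather than reproducing the authors’ code.

library(psych)

# Univariate skewness and kurtosis, checked against the -2 to +2 range
describe(main_data)[, c("skew", "kurtosis")]

# Multivariate outliers: Mahalanobis distance against a chi-square cutoff (alpha = .05)
md <- mahalanobis(main_data, colMeans(main_data), cov(main_data))
cutoff <- qchisq(0.95, df = ncol(main_data))
sum(md > cutoff)                 # number of flagged observations

# Mardia's test of multivariate skewness and kurtosis
mardia(main_data, plot = FALSE)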
The SEM analysis was performed using lavaan 0.6.17 with the MLR estimator. The model comprised 236 parameters, converged after 149 iterations, and was tested on a sample of 404 observations. The SEM analysis revealed the following fit indices: the chi-square test statistic was significant, χ2(2,465) = 4837.429.
The CFI and the TLI were 0.910 and 0.904, respectively, indicating a good fit to the data. When considering the robust corrections, the Robust CFI was 0.929, and the Robust TLI was 0.924, both of which further substantiate the model’s adequacy.
The RMSEA was 0.049, with a 90% confidence interval ranging from 0.047 to 0.051, and the probability of RMSEA being less than .05 was .832. The robust RMSEA was 0.043, with its 90% confidence interval from 0.040 to 0.045, suggesting a close fit of the model to the observed data. The SRMR was 0.048, further supporting the model’s good fit. In summary, the SEM analysis demonstrated a good fit of the model to the data, with several key relationships being identified among the constructs of interest. The use of MLR estimation with bootstrap methods ensured the robustness of the SEM analysis, addressing potential issues related to non-normality and outliers in the data.
The structural model revealed significant relationships among the constructs. In the structural equation model examining the determinants of Behavioral Intention to use digital tools in educational settings, several predictors were identified as significant contributors. According to the model’s estimates:
Behavioral Intention as a Dependent Variable
BI was significantly influenced by several predictors. PE positively predicted BI (β = .196), as did FC, HM, and HT, whereas EE had a significant negative effect on BI and SI was not a significant predictor (see Table 6).
Actual Use as a Dependent Variable
AU was significantly predicted by BI (β = .929), which was the strongest predictor, and was positively influenced by PS and E among the digital competence factors, whereas IDL, CC, DCC, and S did not significantly predict AU (see Table 6).
The hypothesized relationships among UTAUT2 constructs, digital competence factors, behavioral intention, and actual use are illustrated in Figure 1.

Path diagram of the structural equation model: influence of UTAUT2 constructs and digital competence factors on behavioral intention and actual use of ChatGPT in higher education. Solid arrows represent significant paths. Dashed arrows represent non-significant paths. Asterisks next to path coefficients indicate the level of statistical significance.
The squared multiple correlations in the SEM revealed substantial variance explanations for the criterion variables based on direct effects. Specifically, 60.9% of the variance in BI was accounted for by direct effects of PE, EE, FC, HM, and HT. This indicates a strong predictive capability of these variables on BI, showcasing the combined influence of expectancy beliefs, ease of use, and habitual usage on the intention to engage in the behavior. Further, 92.1% of the variance in AU was explained by the direct effects of BI, PS, and E, with BI being the most significant predictor. These findings, detailed in Table 6, not only validate the UTAUT2 model’s applicability to the educational adoption of GenAI technologies but also illuminate the critical influence of digital competence on this innovative technology adoption.
Impact of Digital Competence Factors and UTAUT2 Constructs on Behavioral Intention and Actual Use of ChatGPT in Higher Education Settings.
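For readers replicating this kind of analysis, squared multiple correlations can be extracted directly from a fitted lavaan model; the one-line sketch below assumes the hypothetical fit_sem object from the earlier sketch.

# Proportion of variance explained in the endogenous latent variables (BI, AU)
lavInspect(fit_sem, what = "rsquare")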
Discussion
This study addresses a research gap by exploring how digital competencies and motivational factors impact the acceptance and use of ChatGPT in higher education. By integrating constructs from the DigComp framework with the UTAUT2 model, it aims to provide a comprehensive view of factors influencing students’ BI and AU of ChatGPT as an informal learning tool. This approach examines both the digital skills students have and the motivational drivers that encourage them to adopt GenAI technologies.
Previous research confirms UTAUT2’s strong predictive power, effectively accounting for significant variance in BI and AU across different technological contexts (Avcı, 2022; Raman & Thannimalai, 2021; Schretzlmaier et al., 2022; Zacharis & Nikolopoulou, 2022). In this study, the hypothesized relationships between the UTAUT2 constructs and BI were partially supported. Specifically, PE, FC, HM, and HT were found to have significant positive effects on BI, supporting Hypotheses 1, 4, 5, and 6. However, EE surprisingly had a significant negative effect on BI, contrary to Hypothesis 2, and SI did not significantly predict BI, leaving Hypothesis 3 unsupported.
HT’s strong positive effect on BI highlights its critical role in ChatGPT acceptance among students, showing that habitual use encourages seamless integration into academic routines. This finding aligns with Lewis et al. (2013) and Venkatesh et al. (2012) who emphasize habit’s influence on technology use in educational contexts. Strzelecki (2024) also found similar effects in ChatGPT adoption, suggesting that regular engagement strengthens students’ intention to continue using it. This trend is supported by studies on various educational technologies, including mobile learning platforms (Sitar-Taut & Mican, 2021), speech to text lecture systems (Farooq et al., 2017), and Google Classroom (Alotumi, 2022; Kumar & Bervell, 2019). However, some studies found no direct impact of HT on BI in e-learning (Twum et al., 2022) or LMSs (Ain et al., 2016; Raman & Don, 2013), indicating a complex interplay of contextual factors in technology acceptance.
HM emerged as the second strongest predictor of BI, showing that the enjoyment derived from using ChatGPT significantly enhances students’ intent to integrate it into their learning, supporting Hypothesis 5. This finding aligns with S. A. Brown and Venkatesh (2005), who identified enjoyment as a key factor in technology usage intentions, as well as with studies by Raman and Thannimalai (2021) and Strzelecki (2024), which highlighted HM’s role in influencing BI for e-learning and ChatGPT.
The strong impact of both HT and HM on ChatGPT adoption underscores the importance of tracking user engagement. Habitual use suggests that students may continue using ChatGPT regularly due to its convenience and effectiveness, underscoring its academic value. However, the influence of HM should be carefully monitored, as the enjoyment derived from ChatGPT could drive students to engage deeply in learning but may also risk creating dependency or distraction. Monitoring these engagement patterns is essential to ensure that ChatGPT remains an educational asset without leading to overuse.
PE demonstrated a significant positive effect on BI which implies that students’ perception of ChatGPT as beneficial and likely to enhance their academic performance directly influences their willingness to use it. This finding is consistent with previous research affirming the predictive power of PE in technology acceptance (Alrawi et al., 2019; Chun & Yunus, 2023; Ezzaouia & Bulchand-Gidumal, 2021; Hartono & Oktavia, 2022; Havidz et al., 2018; Issaramanoros et al., 2018; Li et al., 2023; Nurcholisha, 2022; Pham et al., 2020; Rosmayanti et al., 2022; Strzelecki, 2024; Zhang et al., 2019). These findings indicate that individuals’ perceptions of the usefulness and performance of a technology strongly influence their intention to use it.
Contrary to conventional UTAUT findings where ease of use promotes technology adoption (Venkatesh et al., 2003), EE in this study negatively impacted BI. The literature shows mixed results: some studies report a negative impact of EE on BI (Yu et al., 2021), others find a positive impact (Gupta & Arora, 2020; Jameel et al., 2023; Strzelecki, 2024; Su & Chao, 2022), and some find no significant effect (Mensah et al., 2023). This suggests the EE-BI relationship is complex and context-dependent.
In the context of higher education students using ChatGPT as an informal learning tool, this negative EE-BI relationship presents a paradox. Despite EE implying ease of use, students may not find ChatGPT inherently easy to use for their studies, yet they still intend to use it. This counterintuitive result may stem from transitioning from keyword-based searches to conversational interactions with GenAI, requiring new skills and approaches to information retrieval (Nielsen, 2023). The learning curve associated with mastering ChatGPT’s interaction mode might contribute to EE’s negative impact on BI. Nevertheless, students’ intention to use ChatGPT persists, with HT emerging as the strongest predictor. This is likely due to ChatGPT’s ability to deliver targeted content and assist in navigating learning material, even if it requires a greater initial investment to learn how to use the tool effectively.
FC positively influenced BI, indicating that the availability of resources and support for using ChatGPT strengthens students’ intentions to use it, consistent with the notion that adequate FC can enhance the likelihood of technology acceptance (Jameel et al., 2023; Raman & Don, 2013; Yu et al., 2021).
SI did not significantly affect BI in this study, suggesting that peer or social pressures may not be instrumental in students’ decisions to use ChatGPT for learning purposes. This lack of influence could be attributed to the novelty of the technology; at the time of study, ChatGPT’s adoption may not have been widespread enough to establish it as a normative behavior within student communities. Consequently, social norms and peer influences related to ChatGPT usage might not have developed significantly. Students’ decisions to use ChatGPT as an informal learning tool are likely driven by individual factors, such as personal evaluations of its utility and effectiveness, rather than by social pressures, or recommendations from peers or instructors. This finding aligns with previous research indicating that the effect of SI on BI can be minimal when a technology is in the early stages of adoption and social norms have not been established (Lampo & Silva, 2022; Yang & Forney, 2013). The absence of significant social influence might suggest that individual attitudes and perceived usefulness are more critical determinants of technology adoption in the early phases of its introduction.
In the context of higher education students using ChatGPT as an informal learning tool, this study provides critical insights into factors influencing AU. Notably, BI emerged as the strongest predictor of AU, supporting Hypothesis 7 and reinforcing the well-established premise that a strong intention to engage with a technology leads to its usage. This robust correlation aligns with recent research on ChatGPT acceptance, indicating that when students express a clear intent to use an educational tool, it reliably predicts subsequent adoption, and utilization in their academic activities (Maheshwari, 2024; Strzelecki, 2024).
Among the DigComp constructs, only PS and E positively influenced AU, supporting Hypotheses 11 and 13. This relationship underscores the integral role of digital competence, particularly problem-solving skills, in contemporary education. In STEM education, problem-solving is fundamental for effectively navigating and leveraging technological tools (Buckley et al., 2018). Students with strong problem-solving abilities are more likely to adopt and use digital tools, as these skills help them adeptly handle the complexities of digital platforms (Bhat, 2019). Additionally, Alasadi and Baiz (2023) suggest that GenAI tools like ChatGPT act not only as information repositories but also as active participants in the learning process, aiding in the development of students’ problem-solving and critical thinking skills. This indicates a potential bidirectional relationship between problem-solving and the use of GenAI tools in learning.
E was another digital competence construct that positively influenced AU of ChatGPT as an informal learning tool. The positive influence of ethical considerations on AU might underscore a nuanced understanding of digital citizenship among higher education students, encompassing respect for content ownership, adherence to ethical norms in technology use, mindful interaction within digital communities, and sensitivity to the digital etiquette of diverse audiences (Carretero et al., 2017; Gümüş & Kukul, 2023; Vuorikari et al., 2016). Students’ preference for ChatGPT may be influenced by the platform’s transparent approach to its limitations, reflecting a commitment to ethical standards of honesty and integrity and fostering a sense of trust and reliability.
Conversely, IDL, CC, DCC, and S did not significantly predict AU, leading to the rejection of Hypotheses 8, 9, 10, and 12. This may reflect limitations within the DigComp framework, as it was developed before GenAI’s emergence and may not fully capture competencies specific to GenAI use. Levy-Nadav et al. (2024) highlight the need to refine DigComp to include skills such as prompt-writing, managing AI dialogs, and critically assessing AI-generated content. For instance, IDL, which traditionally focuses on searching and evaluating information, may lack GenAI-relevant skills like generating effective prompts. Similarly, CC in a GenAI context involves managing human-AI rather than human-human interactions. GenAI-assisted DCC requires guiding AI in content creation and responsibly integrating outputs, while S with GenAI requires understanding biases, privacy concerns, and responsible usage beyond traditional safety measures.
Thus, while students may meet current DigComp competencies, the framework may not fully address GenAI skills essential for tools like ChatGPT. As Levy-Nadav et al. (2024) suggest, expanding DigComp to include GenAI-specific competencies would better reflect the skills necessary for effective GenAI use in education. This study contributes to the discourse on digital competencies in the GenAI era, underscoring the need for updated frameworks that address skills required by modern educational technologies.
In summary, the non-significant impact of certain DigComp competencies on ChatGPT use may stem from both the study’s timing and the limitations of the current framework. As GenAI technologies become more embedded in education and competency frameworks evolve to incorporate GenAI-specific skills, future research may reveal different relationships between these competencies and technology use. This underscores the importance of ongoing research and framework updates to facilitate effective GenAI adoption in education.
Limitations
First, a significant limitation is the relatively low sample sizes in relation to the number of estimated parameters in both the initial CFA sample (n = 140) and the main study sample (n = 404). Although these samples met common minimum guidelines, larger samples would improve the stability of the parameter estimates and the generalizability of the findings.
Second, ChatGPT’s novelty at the time of the data collection may have impacted familiarity and comfort levels, potentially influencing participants’ responses. As both the platform and its users mature with GenAI tools, shifting perceptions and competencies may affect the relevance of these findings in the future.
Third, the rapid advancement of AI technologies introduces limitations. GenAI tools like ChatGPT are constantly evolving, potentially changing user interactions and experiences over time. This ongoing development may impact the applicability of findings as new features emerge or educational integration deepens.
Fourth, the cross-sectional design restricts causal inferences between constructs. Longitudinal studies are needed to track how digital competencies, motivational factors, and technology use develop over time as users gain experience and the technology evolves.
Lastly, the reliance on self-reported measures may introduce biases, such as social desirability or inaccurate self-assessment, potentially impacting findings. Future research could include objective data sources, like usage logs or behavioral observations, to complement self-reported information, and improve accuracy.
Implications
The findings of this study have several implications for theory, practice, and future research.
Theoretical Implications
This study contributes to understanding technology acceptance and digital competence in the context of GenAI technologies in higher education. The integration of UTAUT2 and DigComp provides a comprehensive model capturing both the motivational drivers and the digital skills necessary for GenAI adoption. The findings reveal that HT is the strongest positive predictor of BI to use ChatGPT, highlighting the pivotal role of habitual engagement in adopting advanced AI technologies. HM and PE also significantly influence BI, emphasizing that enjoyment and perceived usefulness drive students’ intentions. These results extend UTAUT2 by demonstrating that in the GenAI context, habitual use and enjoyment may outweigh traditional factors like social influence. The positive effects of PS and E on actual use (AU) highlight that advanced skills are essential for effectively utilizing GenAI technologies. The current DigComp framework does not fully address these competencies, revealing a theoretical gap. Therefore, this study implies that the DigComp framework should be updated to include GenAI-specific competencies such as prompt engineering, managing ongoing dialogs with AI tools, analyzing the pros and cons of AI technologies, and critically evaluating AI-generated content.
Practical Implications
Since HT is the strongest predictor of BI, educators might integrate GenAI tools into regular learning activities to foster habitual engagement. This could involve incorporating ChatGPT into assignments, discussions, and research projects to normalize its use.
The positive effects of PS and E on AU indicate that curricula could emphasize these areas. Incorporating ethical discussions about AI use and problem-solving exercises involving GenAI can enhance students’ competencies and responsible usage. Furthermore, educational institutions might revise their digital competence programs to include GenAI-specific skills such as prompt engineering, managing ongoing dialogs with AI tools, analyzing AI technologies’ pros and cons, and critically evaluating AI-generated content. This ensures students are equipped with the necessary skills to effectively engage with GenAI tools like ChatGPT.
Policymakers and educational authorities should consider updating frameworks like DigComp to incorporate GenAI competencies. This alignment ensures that educational standards reflect the skills required in an AI-driven landscape.
Implications for Future Research
Future research should explore the complex relationship between EE and BI in GenAI-enhanced learning tools. The unexpected negative relationship observed here suggests a need to understand how perceived effort impacts GenAI adoption in education.
As ChatGPT’s usage becomes more common among students, the influence of SI on BI may grow. Longitudinal studies could provide insights into how social norms and peer recommendations shape technology acceptance over time as GenAI tools move from novelty to mainstream. Additionally, examining the impact of instructor endorsements and institutional support on SI could enhance understanding of various sources of social influence on technology adoption.
Future studies should investigate the bidirectional relationship between PS skills and GenAI use. Longitudinal research could capture how sustained ChatGPT interaction influences PS skill development, while experimental studies might assess changes in students’ PS abilities before and after extended use. Qualitative studies, such as interviews or focus groups, could offer insights into how students perceive ChatGPT’s impact on their problem-solving processes, revealing its value beyond immediate informational support.
Longitudinal research employing objective measures along with self-reported data is essential to understand the evolving role of ChatGPT in higher education. As the technology matures, larger and more diverse samples will enable more robust, generalizable findings. Additionally, research should examine how updating digital competence frameworks to include GenAI-specific skills influences GenAI adoption and usage.
Conclusion
This study examined how digital competencies and motivational factors affect higher education students’ acceptance and use of ChatGPT as a learning tool. Integrating UTAUT2 and DigComp constructs, the research found that HT was the strongest positive predictor of BI to use ChatGPT, underscoring the critical role of habit in GenAI technology adoption. HM and PE were the second and third most effective constructs influencing BI, indicating that enjoyment and perceived usefulness significantly drive students’ intentions. Conversely, EE negatively impacted BI, and SI was not significant, possibly due to ChatGPT’s novelty. BI strongly predicted AU, while specific digital competencies—PS and E—positively affected AU. These findings suggest that updating digital competence frameworks to include GenAI-specific skills is essential, helping educators and policymakers support students in effectively using emerging technologies like ChatGPT.
