Abstract
Introduction
In a digitally driven world, computer programming is rapidly transitioning from a specialized skill to a vital necessity, and it has been recognized as a core competency for university students (Nouri et al., 2020). Computer programming involves designing and building computer programs to perform specific tasks. Through programming learning, learners can develop systematic thinking and acquire the ability to solve complex problems, enabling them to better adapt to an information society in which technology shapes daily life and future possibilities (Groothuijsen et al., 2024).
Despite the significant emphasis on programming education in university curricula, students studying programming languages such as Python face various challenges. Survey data indicate that dropout rates in introductory programming courses are generally high, exceeding 30% in some contexts (G. Fan et al., 2025). Moreover, studies across different countries and regions consistently report that students experience considerable difficulties, particularly related to heavy cognitive load, the abstract nature of the content, and insufficient instructional support (Demir, 2022). For many beginners, a steep learning curve, high cognitive load, and highly abstract content can undermine sustained learning motivation (Utreras & Pontelli, 2021). In resource-constrained contexts, such as developing countries, insufficient access to qualified instructors and digital infrastructure compounds these difficulties (Bull & Kharrufa, 2024; G. Fan et al., 2025). In addition, traditional programming education relies primarily on direct instruction from teachers and provides insufficient support for learners' independent exploration and understanding of complex problems, which limits the accessibility and depth of programming learning (X. Y. Hou et al., 2023).
With the rapid advancement of AI, AI-assisted learning has attracted growing interest from scholars and educators and has emerged as a significant driver of innovation in education. Generative AI tools (such as ChatGPT and Google's Bard) can generate code from natural language and explain complex programming concepts, effectively lowering the threshold for learning programming (Ezeamuzie, 2023). Compared to traditional teaching methods, generative AI tools for programming learning have been highlighted for their capacity to provide personalized and adaptive learning experiences across different cultural contexts. Tlili et al. (2023) pointed out that generative AI offers learners real-time and customized feedback and guidance, thereby improving the efficiency and experience of programming learning. Many studies have validated the potential of generative AI in programming education. On the one hand, such tools can generate code examples in real-time and provide step-by-step explanations to help learners understand complex programming concepts, thereby enhancing the intuitiveness of the learning process (Ryan et al., 2021). On the other hand, they effectively reduce frustration for beginners by offering instant feedback (Tlili et al., 2023) and support personalized learning paths, promoting active learner engagement (Zawacki-Richter et al., 2019).
Despite the promising potential of generative AI tools in programming education, they also introduce new challenges. First, learners may face information overload when using AI-assisted tools, as the generated code and feedback can be too complex to understand and apply (Steyvers & Kumar, 2024). Moreover, while generative AI can provide instant feedback and guidance, whether such feedback truly addresses the individualized needs of learners, especially those with varying programming skill levels, remains a significant concern (Ryan et al., 2021). Second, while AI automation can enhance the efficiency of programming learning, it may also foster dependency among some learners, weakening their active learning and problem-solving abilities and thereby degrading their overall learning experience (Miller, 2023). These concerns underscore the need for more inclusive evidence from regionally diverse educational contexts to better understand the opportunities and risks of AI integration in programming education. Thus, this study aimed to present a systematic analysis of the complex configurations through which high levels of students' programming learning experience with generative AI tools are achieved.
Based on the objective, two specific research questions were posed: (1) What functional and psychological factors shape university students’ programming learning experiences with generative AI tools? (2) How do different configurations of these factors jointly influence students’ learning outcomes and satisfaction? The findings will enrich the existing literature on AI-assisted learning in programming education, which has traditionally focused on the net effects of instructional tools without considering their complex configurations.
Literature Review
Generative AI for Education
Generative AI refers to a category of artificial intelligence technology that utilizes generative models to create new content, such as text, images, and audio (Al-Emran et al., 2024; Liu et al., 2024). These tools are characterized by their adaptability, scalability, and capacity to offer instant responses, which significantly extends the role of traditional educational technologies that mainly function as static repositories or assessment systems (Becker et al., 2022). In the context of programming education, AI tools demonstrate several key features. They can serve as intelligent tutors by delivering personalized feedback, as code-generation assistants by producing working examples or alternative solutions, and as learning companions by supporting iterative practice and exploration (Sajja et al., 2024). Such affordances expose learners to diverse problem-solving strategies and help cultivate better coding practices. Recent research has further explored AI-assisted programming environments. X. Hou et al. (2024) proposed the CodeTailor system, which uses large language models to provide personalized Parsons puzzles for students in need, aiming to improve engagement and cognitive development in programming learning. Kazemitabaar et al. (2023) developed CodeAid, an LLM-based programming assistant that provides personalized learning feedback through timely human-computer interaction; it balances educators' needs for pedagogical innovation with students' need for support and significantly enhances engagement with complex concepts. In addition, Kwak et al. (2023) examined how artificial intelligence tutoring systems offer personalized feedback and support tailored to students' learning progress and understanding; such systems provide timely, dynamic feedback that helps students choose more efficient learning paths to master difficult programming concepts. G. Fan et al. (2025) provided evidence from China showing that AI-assisted pair programming enhanced students' intrinsic motivation, reduced programming anxiety, and improved programming performance to levels comparable to those of human-human pair programming. Beyond these developments, emerging evidence from non-Western contexts also shows that generative AI tools are increasingly integrated into programming-related learning. Archana et al. (2025) reported that Indian university students actively use AI tools to support code understanding and debugging. Al-Dokhny et al. (2024) further demonstrated that multimodal AI systems can enhance task performance among Asian students by improving code interpretation and problem-solving efficiency.
Factors Influencing AI-Assisted Learning Experiences
With the emergence of generative AI tools such as ChatGPT, AI tools are gradually transforming traditional approaches to programming learning, serving as novel support tools. Existing research indicates that AI tools significantly reduce learners' cognitive load through effective interaction and code suggestions (Evans et al., 2024), thereby enhancing the efficiency of completing programming tasks (Hartley et al., 2024). In addition, the code generation capabilities of AI tools can help beginners overcome frustration caused by programming syntax and logic issues, thereby enhancing their self-efficacy (Bandura, 1997). Beyond cognitive support, AI tools also significantly influence learners' emotional experiences through personalized support and conversational interaction (Vistorte et al., 2024). For instance, generative AI uses natural language understanding to provide explanations that align more closely with learners' needs, thereby reducing anxiety during the learning process (Yan et al., 2024). Although some researchers have identified factors such as cognitive load and interactivity that significantly influence the learning experience with generative AI tools for programming, other scholars offer differing perspectives on these influencing factors. For example, Wang et al. (2022b) argued that under certain conditions, the effect of interactivity between generative AI tools and learners on the programming learning experience is moderated by students' individual learning styles and other personal characteristics. Similar findings have been reported in non-Western contexts, where learner dispositions and contextual norms further shape how students benefit from AI-assisted programming. For example, Archana et al. (2025) found that Indian learners' self-efficacy and perceived support strongly influenced their engagement with AI tools, while Ali and Shaban (2025) showed that individual traits significantly condition the emotional and cognitive gains students derive from AI-driven learning systems.
This further illustrates that the programming learning experience with generative AI tools results from the combined effect of multiple factors, with complex causal relationships inherent in its driving mechanisms. It underscores the limitations and partiality of single-factor models, thereby necessitating an analysis of the driving mechanisms of generative AI tools in programming learning experiences from a configurational perspective.
Means-End Chain Theory and Fuzzy-Set Qualitative Comparative Analysis in Education Research
The means-end chain (MEC) theory and the qualitative comparative analysis method provide new perspectives and methodological support for educational research by integrating theoretical models and analytical approaches. MEC theory, proposed by Gutman (1982), is a theoretical model that analyzes the connection between consumer behavior and personal values through the "attribute-consequence-value" framework (Zhou et al., 2020). While this theory has been widely applied in fields such as tourism management (Yin et al., 2024) and consumer behavior analysis (Kumar et al., 2024), it has also undergone specific development and practical application in educational research. Studies have shown that the MEC framework offers valuable insights for designing educational tools by analyzing both the perceived value and the functional needs of students regarding educational technologies (N. Wang et al., 2022a). However, applying generative AI tools based on the MEC framework in programming education remains a relatively unexplored research direction.
The fuzzy-set qualitative comparative analysis (fsQCA) method possesses distinct advantages. It demonstrates strengths in revealing causal relationships within complex educational contexts based on multi-factor configurational analysis (Greckhamer et al., 2018). It has been widely applied in fields such as healthcare management (Y. Wang et al., 2022b) and hotel management (Rey-Martí et al., 2017). In recent years, scholars have introduced the fsQCA method into educational research. For example, Sánchez-Mena et al. (2019), based on survey data, used the fsQCA method to analyze the combinations of factors influencing teachers' use of educational video games. Wu and Wang (2022) applied the fsQCA method to analyze configurational paths influencing the efficiency of MOOC development in Chinese universities. Fu et al. (2023), based on evaluation data from 27,316 students, used the fsQCA method to explore the combinations of factors influencing MOOC learners' satisfaction. These studies, which apply fsQCA on its own to analyze complex causal relationships, demonstrate that the method can identify multiple factor combinations and reveal the nonlinear relationships that influence educational outcomes. This provides methodological support for exploring the complex interactions between educational tools and learner behaviors.
Although previous studies have typically applied MEC or fsQCA in isolation, recent research suggests combining the two can generate more comprehensive insights. MEC provides a structured framework to conceptualize the hierarchical linkages between attributes, consequences, and values, while fsQCA is particularly effective in capturing configurational causality across multiple conditions. For instance, Wang et al. (2022a) integrated MEC with fsQCA to examine how individual characteristics, social capital, and perceived value jointly shape users’ continuance intention toward innovative wearable products. Their findings highlight that MEC can define the attribute–consequence–value chain, whereas fsQCA can empirically identify multiple alternative configurations leading to the same outcome. This illustrates the complementary nature of the two approaches: MEC offers the theoretical lens for structuring conditions, and fsQCA reveals the complex, non-linear causal pathways among them. Building on this rationale, the present study employs an integrated MEC–fsQCA approach to investigate the multifaceted pathways through which functional and psychological factors impact learners’ experiences with AI-assisted programming.
Summary of Literature Review
In the context of learning with generative AI tools, two gaps exist in the investigation of learners’ experiences. First, the existing literature on Generative AI and introductory programming reveals a limited focus on student-centered perspectives (Amoozadeh et al., 2023). Second, research on generative AI tools often overlooks learning-related aspects, particularly students’ thought processes, motivations, and perceptions of these tools. This gap in student-centered research highlights a lack of understanding of “what students actually do” (T. Wang et al., 2023). Many studies primarily target the performance of AI tools (Zawacki-Richter et al., 2019). While there is a positive trend in the literature toward moving beyond merely evaluating the capabilities of large language models, a strong call remains for research that delves deeper into the complexities of students’ experiences, perceptions, and interactions in this field (Vistorte et al., 2024).
Another gap is that previous studies have predominantly employed quantitative analysis methods, such as regression analysis and structural equation modeling, to investigate the factors influencing the learning experience (Zawacki-Richter et al., 2019). However, these research methods primarily control for single-variable factors, overlooking the complex relationships and interactions among multiple dimensions that influence the learning experience. As a result, they fail to comprehensively reveal how generative AI tools optimize students’ overall learning experience through the combined effects of various factors across different dimensions (Yan et al., 2024). Because distinguishing different kinds of complex causality can better reflect authentic learning conditions, there is a need to investigate programming learning experiences with generative AI tools from a configurational perspective.
Therefore, this study is expected to fill these gaps through an integrated means-end chain and fsQCA approach. The means-end chain model is employed to identify the attribute-level and outcome-level factors that influence learning experiences with generative AI tools for programming. Then, the fsQCA method is applied to investigate the relations between these elements. This research contributes to the following areas: (1) expanding the learner-driven research perspective and establishing a research model that integrates functional and psychological consequences of the learning experience; (2) providing new insights into how the determinants of AI-assisted learning experiences jointly shape learning outcomes; (3) providing strategies to improve programming learning experiences in the AI-assisted learning environments.
Theoretical Framework
The means-end chain model is a theoretical framework used to analyze the impact of product attributes on consumers' usage behavior and value orientation. The model analyzes consumers' product usage behavior through the consequences generated by product attributes (the attribute-consequence chain). Essentially, AI-assisted programming learning is a process in which AI tools are applied to provide services to programming learners. The learning experience can be viewed as the result of the combined effects of the tool attributes and the individual attributes of the learners. Therefore, the attribute-consequence chain analysis framework, derived from the means-end chain model, is used to analyze the formation mechanism of the AI-assisted programming learning experience.
For the attribute-level factors, the tool attributes inherent to the application of AI are the fundamental reasons that attract learners to use AI tools in the programming learning process. These are primarily reflected in three variables: tool ease of use, interactivity, and quality of the results. Tool ease of use refers to the degree to which the tool is convenient for learners, including aspects such as learnability, comprehensibility, and ease of operation (Davis, 1989). Interactivity refers to the tool’s ability to engage in efficient communication and feedback with learners, including real-time responsiveness, personalized interaction, and dynamic regulation (Means, 1994). The quality of the results refers to the correctness and reliability of the content generated by the tool, including aspects such as accuracy in meeting task objectives, comprehensiveness of information, and logical coherence (Gulwani et al., 2017). Since AI-assisted programming learning essentially falls within the conceptual category of service, the service experience is influenced not only by the attributes of the service-oriented product but also by the characteristics of the users themselves. Therefore, in addition to the tool attributes inherent to the application of AI, the attribute-level factors should also include the individual characteristics of the learners. These personal attributes primarily consist of two variables: self-efficacy (Ithriah et al., 2020) and knowledge level (Zviel-Girshin, 2024).
According to the means-end chain theory, learners’ experience in AI-assisted programming learning results from the combined effects of the attributes of the generative AI tools and the individual characteristics of the learners. The learning experience is primarily reflected in two aspects: functional consequences and psychological consequences. Among these, functional consequences mainly reflect the knowledge acquisition brought to learners by AI-assisted programming learning (Sun et al., 2024). Therefore, functional consequences can be measured through the knowledge acquisition variable. Psychological consequences mainly reflect the learners’ satisfaction with the assistance provided by AI-assisted programming learning (Shahzad et al., 2024), and can be measured through the satisfaction variable.
Based on the means-end chain model, this research constructs an “attributes-consequences” analytical framework for the learning experience of AI-assisted programming, as shown in Figure 1.

Figure 1. Research framework.
Methods
Procedure
The methodological research model is shown in Figure 2. This research begins by identifying the factors that influence learning experiences with generative AI tools for programming, based on the means-end chain theory. Then, an online survey was conducted, targeting university students who have used generative AI tools to assist with programming learning, and the survey data were cleaned. Finally, the fsQCA method was employed to identify configurations leading to functional and psychological experience outcomes, thereby determining the formation mechanism of learning experiences with generative AI tools for programming.

Figure 2. Methodological framework for the research.
Fuzzy-Set Qualitative Comparative Analysis Method
To further investigate the complex combinations of factors influencing learning experiences with generative AI tools for programming, the fuzzy-set qualitative comparative analysis method is employed to explore the causal relationships between the learning experience, learners' personal characteristics, and the learning environment. Unlike traditional empirical methods, which typically focus on binary relationships between independent and dependent variables, fsQCA allows for the analysis of causal relationships that vary across multiple levels. Furthermore, the "fuzzy" technique enables the analysis of conditions that vary between high and low levels, rather than being limited to crisp boundaries.
This study employs fsQCA for the following reasons. First, learning experiences with generative AI tools for programming result from the interaction of multiple factors and cannot be attributed to a single variable; the fsQCA method analyzes complex causal relationships from a set-theoretic perspective (Misangyi et al., 2017), thereby uncovering the pathways through which combinations of multidimensional factors influence the learning experience. Second, fsQCA accommodates the asymmetrical causal relationships often seen in educational contexts, where factors may contribute differently to positive and negative outcomes (J. Fan & Tian, 2024). Given its ability to handle complex interdependencies and asymmetry, fsQCA allows for a more nuanced understanding of the factors that influence learning experiences. Third, the fsQCA approach has low sample size requirements and is applicable even when the number of cases is fewer than 50 (Chenxi & Haijie, 2023; Shen et al., 2025); by comparison, conventional empirical research typically recommends a minimum sample size of more than 10 times the number of measurement items (Chin, 1998).
Participants and Data Collection
To validate the proposed conceptual research model, a self-constructed survey questionnaire was developed for data collection and analysis. The questionnaire consisted of two sections: the first part recorded the basic information of the respondents, and the second part covered the variables involved in the research. The design of the questionnaire is based on a review of the literature; however, the content has been adjusted to suit the research context. The questionnaire items were adapted from the following prior studies: Tool ease of use (Davis, 1989), interactivity (Means, 1994), accuracy of results (Gulwani et al., 2017), self-efficacy (Bandura, 1997; Ithriah et al., 2020), knowledge level (Zviel-Girshin, 2024), knowledge acquisition (Sun et al., 2024), and learning satisfaction (Shahzad et al., 2024). Each dimension contains three unique questions, totaling 21 questions, which are measured using a 5-point Likert scale (1 =
After comprehensive validation of the questionnaire content, the survey was distributed for 20 days (from August 15 to September 4, 2024) through the professional online survey platform Wenjuanxing (https://www.wjx.cn), which is widely used by over 30,000 enterprises and 90% of universities in China. To facilitate access for respondents, the survey link was distributed through social media channels, including WeChat and QQ. Before completing the survey, participants were provided with clear instructions on how to fill out the questionnaire, emphasizing the principles of anonymity and confidentiality. Respondents were required to select an “agree” option to proceed with the survey. The questionnaire was designed with a mandatory completion mechanism, ensuring that respondents had to complete all sections before submitting their responses. To ensure data quality, we screened responses based on completion time. The mean completion time was 3.42 min (
Results
Descriptive Analysis
According to the descriptive statistics of the survey sample, Table 1 shows a nearly balanced gender distribution, with males accounting for 54.30% and females for 45.70%, consistent with the trend toward greater gender balance in modern programming education (Groothuijsen et al., 2024). Among AI-assisted programming tools, ChatGPT is the most widely used, with 53.39% of students utilizing it, followed by Copilot (including Copilot Chat) at 30.32%. This indicates that generative AI tools are becoming increasingly prevalent in undergraduate programming learning, with ChatGPT as the first-choice tool.
Demographic Characteristics of the Respondents.
Table 1 presents the detailed demographic characteristics of the respondents. Regarding age distribution, most respondents are between 18 and 23 years old, with 35.75% of students aged 18 to 20 and 51.58% aged 21 to 23. Students aged 24 and above are relatively fewer, with 11.76% aged 24 to 26 and only 2% aged 27 and above. In terms of educational background, most respondents are bachelor’s students (46.61%), followed by master’s students (36.65%) and PhD students (16.74%).
Reliability and Validity Evaluation
To ensure the reliability and validity of the questionnaire data, tests were conducted using IBM SPSS Statistics, Version 27.0 (IBM Corp., Armonk, NY, USA). The evaluation results for each variable are shown in Table 2. The Cronbach's α coefficients for all variables range between 0.7 and 0.9, and the composite reliability (CR) values exceed 0.7, indicating high reliability for all variables in the questionnaire. It is generally considered that when the factor loading and average variance extracted (AVE) of each variable are greater than 0.5, the variables demonstrate good convergent validity (Fornell & Larcker, 1981). In this study, the factor loadings of all variables exceeded 0.7, and the AVE values ranged from 0.690 to 0.720, indicating that the measurement items in the questionnaire demonstrate good convergent validity.
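The reliability indices above can be reproduced from raw item scores. As a minimal sketch (using made-up illustrative responses, not the study's data), Cronbach's α for one three-item dimension can be computed as follows:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of respondents' totals
    return k / (k - 1) * (1 - item_var / total_var)

# Illustrative 5-point Likert responses (5 respondents x 3 items).
scores = np.array([[4, 5, 4],
                   [3, 3, 4],
                   [5, 5, 5],
                   [2, 3, 2],
                   [4, 4, 5]])
alpha = cronbach_alpha(scores)
```

SPSS reports the same statistic via its reliability analysis; the sketch only makes the formula explicit.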
Reliability and Validity Test.
To further verify the model's discriminant validity, we compared the square root of the AVE for each variable with the correlations between variables. As shown in Table 3, the square root of the AVE for each variable exceeds its correlation with any other variable, and all latent constructs satisfy this criterion. Therefore, the measurement model demonstrates good reliability and validity.
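The Fornell-Larcker comparison in Table 3 amounts to a simple check: the square root of each construct's AVE must exceed that construct's correlation with every other construct. The sketch below uses hypothetical labels and values for illustration only:

```python
import numpy as np

def fornell_larcker(ave, corr, labels):
    """Return (row, column) construct pairs that violate the Fornell-Larcker
    criterion, i.e., where an inter-construct correlation reaches or exceeds
    the row construct's sqrt(AVE). An empty list means the criterion holds."""
    sqrt_ave = np.sqrt(np.asarray(ave, dtype=float))
    corr = np.asarray(corr, dtype=float)
    violations = []
    for i in range(len(labels)):
        for j in range(len(labels)):
            if i != j and abs(corr[i, j]) >= sqrt_ave[i]:
                violations.append((labels[i], labels[j]))
    return violations

# Illustrative values: two constructs with AVE = 0.70 and 0.69,
# inter-construct correlation 0.55 (sqrt(AVE) ~ 0.84 and 0.83).
labels = ["TEOU", "IN"]
ave = [0.70, 0.69]
corr = np.array([[1.0, 0.55],
                 [0.55, 1.0]])
print(fornell_larcker(ave, corr, labels))  # [] -> criterion satisfied
```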
Discriminant Validity.
Calibration Process
Calibration is the process of assigning set membership scores to cases (Schneider & Rohlfing, 2016). Before conducting fsQCA, it is necessary to calibrate each conditional variable based on the actual survey data collected from the selected cases. Referring to the studies of Fiss (2011) and Greckhamer (2016), the percentile method was used to calibrate the questionnaire data. Specifically, the survey data for the five attribute variables (tool ease of use, interactivity, quality of the result, knowledge level, and self-efficacy) and the two outcome variables (knowledge acquisition and learning satisfaction) were calibrated using three percentile-based anchors: the 5th percentile as the threshold for full non-membership, the 50th percentile (median) as the crossover point, and the 95th percentile as the threshold for full membership. This approach follows the standard practice outlined by Fiss (2011) and Greckhamer (2016), ensuring a balance between sensitivity and robustness in the fsQCA analysis. The calibration points and descriptive statistics for each variable are shown in Tables 4 and 5.
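To illustrate how such anchors turn raw Likert scores into fuzzy membership, the sketch below follows Ragin's direct calibration method, in which deviations from the crossover are rescaled to log-odds of +3 and -3 at the two outer anchors. The anchor values and scores shown are hypothetical stand-ins for the sample's percentiles, not the study's data:

```python
import numpy as np

def calibrate(x, full_non, crossover, full_mem):
    """Direct calibration: map raw scores to fuzzy membership in [0, 1].
    Scores at the crossover get membership 0.5; the full-membership anchor
    maps to log-odds +3 (membership ~0.95) and the full-non-membership
    anchor to log-odds -3 (membership ~0.05)."""
    x = np.asarray(x, dtype=float)
    log_odds = np.where(
        x >= crossover,
        3.0 * (x - crossover) / (full_mem - crossover),
        -3.0 * (crossover - x) / (crossover - full_non),
    )
    return 1.0 / (1.0 + np.exp(-log_odds))

# Illustrative 5-point Likert scores for one condition; anchors stand in
# for the 5th, 50th, and 95th percentiles of the sample.
scores = np.array([1.5, 2.8, 3.4, 4.6])
membership = calibrate(scores, full_non=1.7, crossover=3.0, full_mem=4.6)
```

fsQCA 3.0's built-in calibrate routine performs an equivalent transformation; the sketch only shows the underlying log-odds rescaling.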
Descriptive Statistical Analysis Results of Research Variables.
Summary of Calibration.
Necessary Condition Test
Conditional necessity analysis is conducted to determine whether specific conditions must exist for the outcome to occur. The necessity analysis test is primarily based on the consistency index, with a consistency value greater than 0.9 being recognized as a necessary condition (Mattke et al., 2021). The necessity analysis was conducted using fsQCA 3.0, and the results are presented in Table 6. The consistency values for individual attribute conditions with respect to high or non-high learning experience were all below 0.9, indicating that none of the conditions constitute a necessary condition for determining the level of learners’ programming learning experience.
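The consistency index used in this test has a simple set-theoretic form: for a condition X to be necessary for an outcome Y, consistency is the sum of min(x, y) across cases divided by the sum of y. A minimal sketch with illustrative membership scores (not the study's data):

```python
import numpy as np

def necessity_consistency(x, y):
    """Consistency of condition X as necessary for outcome Y:
    sum(min(x_i, y_i)) / sum(y_i). Values above 0.9 are conventionally
    read as evidence of a necessary condition."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.minimum(x, y).sum() / y.sum()

# Illustrative calibrated membership scores for four cases.
x = np.array([0.8, 0.6, 0.9, 0.3])
y = np.array([0.7, 0.5, 0.8, 0.4])
print(round(necessity_consistency(x, y), 3))  # 0.958
```

fsQCA 3.0 reports this index in its "Necessary Conditions" output; the sketch only makes the formula explicit.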
Necessary Conditions for Learning Experience.
Configurations
Configuration analysis, also known as the sufficiency analysis of antecedent condition configurations, reveals the sufficiency of different attribute combinations in influencing learning experience outcomes. In fsQCA configuration analysis, the calibrated case data are first subjected to logical judgment using a truth table. The fsQCA software then generates three types of solutions based on the logical remainders in the truth table: the complex, intermediate, and parsimonious solutions. Following the recommendations of prior scholars (Fiss, 2011; Zheng et al., 2021), the frequency threshold in the truth table was set to 1, the consistency threshold to 0.80, and the PRI (proportional reduction in inconsistency) threshold to 0.80; truth-table rows that did not meet these thresholds were coded as 0 (Zheng et al., 2021). The results in Tables 7 and 8 show that the consistency values of all configuration paths exceed the 0.8 threshold, indicating that the results of the configuration analysis are reliable.
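The consistency and coverage values reported for each configuration follow the standard fuzzy-set formulas, with a case's membership in a configuration taken as the fuzzy AND (minimum) of its calibrated conditions. A minimal sketch with illustrative scores (the condition names and values are assumptions, not the study's data):

```python
import numpy as np

def sufficiency_metrics(x, y):
    """Consistency and coverage of configuration X as sufficient for outcome Y:
    consistency = sum(min(x_i, y_i)) / sum(x_i)
    coverage    = sum(min(x_i, y_i)) / sum(y_i)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    overlap = np.minimum(x, y).sum()
    return overlap / x.sum(), overlap / y.sum()

# Illustrative calibrated memberships for four cases.
teou = np.array([0.9, 0.7, 0.8, 0.2])   # tool ease of use
kl   = np.array([0.8, 0.6, 0.9, 0.3])   # knowledge level
config = np.minimum(teou, kl)           # fuzzy AND: TEOU * KL
outcome = np.array([0.85, 0.7, 0.9, 0.3])
consistency, coverage = sufficiency_metrics(config, outcome)
```

The truth-table algorithm in fsQCA 3.0 computes these same quantities for every candidate configuration before Boolean minimization; the sketch shows only the two summary formulas.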
Configurations for Achieving High Levels of Programming Learning Experiences.
Configurations for Achieving Low Levels of Programming Learning Experiences.
Core conditions exhibit strong causal contributions to the outcome, as they appear consistently in both parsimonious (simplified) and intermediate solutions in fsQCA analysis. In contrast, peripheral conditions play auxiliary roles, appearing only in the intermediate solution and indicating weaker or context-dependent influences.
From the means-end chain perspective, AI-assisted programming enhances learners' experiences by linking the use of AI tools to both functional and psychological consequences. Functionally, AI-assisted programming primarily enhances knowledge acquisition, whereas psychologically, it enhances learning satisfaction. Therefore, this study focuses on these two dimensions and, based on Table 7, identifies the two pathways that generate high programming learning experiences through high knowledge acquisition and high learning satisfaction.
Dual-driven-ease + knowledge (H1a and H2b): the raw coverage of H1a is 0.435, making it the third-ranked path for improving knowledge acquisition. In driving learners toward high KA, the TEOU of AI tools acts as a core condition, while KL plays a supplementary role as a peripheral condition. The raw coverage of H2b is 0.663, the highest among the four configuration paths, indicating that this configuration is a principal path to high LS; here, both TEOU and KL act as core conditions. In both H1a and H2b, SE is absent, suggesting that even for learners with low SE, the combination of high TEOU and high KL can jointly produce both high KA and high LS. Learners in this configuration therefore attain a relatively high learning experience when using AI tools to assist programming learning. Because it is driven by both functional and psychological consequences, this configuration represents a dual-driven type.
Dual-driven-interactivity + accuracy + knowledge (H1b and H2d): the raw coverage of H1b is 0.663, the highest among the four configuration paths, indicating that this configuration is a primary pathway to high KA. The raw coverage of H2d is 0.329, the lowest among the four paths, making it a secondary path to high LS. In both paths, interactivity (IN) plays a central role as a core condition driving high knowledge acquisition and high learning satisfaction, while AOR and the relevant knowledge level act as auxiliary, peripheral conditions. In H2d, SE is absent, indicating that for learners with low SE, a combination of high IN, high AOR, and high relevant KL can still yield a high programming learning experience. This configuration pathway can therefore also be considered a dual-driven type.
Dual-driven-ease + interactivity + accuracy + self-efficacy (H1c and H2c): in configuration H1c, the interactivity of AI tools is the core condition, while TEOU, AOR, and SE serve as peripheral conditions. For achieving high LS (H2c), both TEOU and IN act as core conditions, while AOR and KL act as peripheral conditions. In terms of coverage, H1c (0.628) ranks second among the configurations generating high knowledge acquisition, and H2c (0.652) ranks second among those generating high LS. This pathway is thus a significant route to enhancing programming learning experiences: when AI tools are highly interactive, easy to operate, and provide relatively accurate feedback, learners can achieve both high knowledge acquisition (KA) and high learning satisfaction (LS). Consequently, this configuration also represents a dual-driven model, propelled by both functional and psychological outcomes.
Functional-only-ease + self-efficacy (H1d): the raw coverage of this configuration is 0.381, the lowest among the configuration paths for knowledge acquisition, indicating a secondary pathway to high KA. In this path, TEOU and SE both act as core conditions: when learners possess high SE and the AI tools are highly usable, high KA can be generated even if learners' knowledge level is relatively low and the tools' interactivity is limited. However, as Table 8 shows, configuration N2a reveals that the absence of IN together with the absence of AOR produces low LS. In other words, although H1d yields high KA as a functional consequence, it simultaneously produces low LS as a psychological consequence. This configuration therefore cannot drive learners toward a high overall learning experience.
Psychological-dominant-ease + interactivity + knowledge (H2a): the raw coverage of this configuration is 0.381, ranking third among the paths for improving learning satisfaction. This path indicates that high TEOU, IN, and KL together drive high LS. Comparing this configuration with the pathways generating low KA in Table 8 shows that it does not produce low KA; that is, TEOU, IN, and KL generate high LS without depressing KA. This configuration can therefore enhance the experience of AI-assisted programming learning, with the high learning experience driven primarily by high learning satisfaction. It thus represents a psychological consequence-dominant type.
In summary, the enhancement model of learning experiences with generative AI tools for programming can be classified into two types: the dual-driven type, characterized by functional consequences and psychological consequences, and the psychological consequence-dominant type. The dual-driven pattern, encompassing both functional and psychological consequences, consists of three primary configuration pathways: (1) Dual-driven-ease + knowledge (H1a, H2b); (2) dual-driven-interactivity + accuracy + knowledge (H1b, H2d); (3) dual-driven-ease + interactivity + accuracy + self-efficacy (H1c, H2c). The psychological consequence-dominant configuration is primarily represented by Psychological-dominant-ease + interactivity + knowledge (H2a). Notably, Functional-only-ease + self-efficacy (H1d) supports knowledge acquisition but not overall high experiences due to low satisfaction.
Robustness Test
Robustness testing is a critical step in fsQCA research. Common approaches include raising the consistency threshold, adding or removing cases, and introducing additional conditions (Melendez-Torres et al., 2019). Following Rihoux and Ragin (2009), this study raised the case consistency threshold from 0.80 to 0.85, with case frequencies unchanged, and re-examined all solutions in Tables 7 and 8. The configurations of the new model are consistent with those of the original model, and a clear subset relationship exists between the configurations of the two models (Schneider & Wagemann, 2012). The results obtained in this study are therefore robust.
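The logic of this robustness check can be sketched as follows. This is an illustrative example only: the truth-table rows, frequencies, and consistency values below are hypothetical, not the study's actual data; the point is that tightening the threshold can only remove rows, so the stricter solution must be a subset of the original one.

```python
# Robustness check sketch: raise the raw-consistency threshold from 0.80 to
# 0.85 (frequency threshold unchanged) and verify the subset relationship.
# Truth-table rows are hypothetical ("~" denotes the negation of a condition).

def select_rows(truth_table, consistency_threshold, frequency_threshold=1):
    """Return the set of truth-table rows coded 1 (sufficient for the outcome)."""
    return {
        row
        for row, (n_cases, raw_consistency) in truth_table.items()
        if n_cases >= frequency_threshold and raw_consistency >= consistency_threshold
    }

# configuration -> (case frequency, raw consistency); values are invented
truth_table = {
    "TEOU*KL*~SE":     (12, 0.93),
    "IN*AOR*KL":       (9,  0.88),
    "TEOU*IN*AOR*SE":  (7,  0.83),
    "TEOU*SE*~IN*~KL": (5,  0.81),
}

original = select_rows(truth_table, 0.80)
stricter = select_rows(truth_table, 0.85)

# Raising the threshold can only drop rows, never add them.
assert stricter <= original
print(sorted(stricter))
```

If the minimized solutions derived from the stricter truth table remain substantively the same and stand in a subset relation to the original solutions, the results are judged robust, as in the study above.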
Discussion
The research finds that the mechanism shaping learning experiences with generative AI tools is a complex configurational process involving combinations of factors from both the tool and learner dimensions (as shown in Table 7). Relying on tool attributes or learner characteristics alone makes it difficult to enhance the programming learning experience, which aligns with the findings of Evans et al. (2024). The fsQCA results reveal two overarching patterns. One is a dual-driven pattern, in which functional and psychological consequences work together to enhance both knowledge acquisition and learning satisfaction. The other is a psychological-dominant pattern, which boosts learning satisfaction without compromising knowledge acquisition. These findings confirm that effective AI-assisted programming learning requires synergy between tool features and learner readiness, while also revealing multiple pathways to a high learning experience.
By integrating MEC with fsQCA, this study extends MEC beyond its typical linear “attribute–consequence–value” structure. It demonstrates that learners’ AI-assisted programming experiences are shaped by multiple parallel chains rather than a single dominant path. The identification of several sufficient but non-necessary configurations shows that MEC components interact configurationally (Wu & Wang, 2022). For example, the ease of use and interactivity of a tool may function as core attributes in some chains, while learner knowledge level or self-efficacy serve as critical links in others. Furthermore, the discovery of a psychological-dominant pathway indicates that psychological consequences can sometimes arise independently from strong functional outcomes, enriching MEC theory by highlighting asymmetry and non-linearity in how consequences form (Yan et al., 2024). This contributes to a more nuanced, context-sensitive understanding of MEC in educational technology environments, particularly when learners interact with adaptive, generative AI systems.
The findings of this study provide insights to guide educational practice. To ensure the sustainable development of AI-assisted programming instruction, educators should consider learners’ individual characteristics. Beyond the functional performance of AI tools, it is also essential to enhance the psychological value they provide. Therefore, for educators and developers, these results offer guidance on improving AI-assisted learning tools and services to meet the diverse needs of learners. Moreover, learners’ self-efficacy and knowledge levels strongly influence their learning experience (Ezeamuzie, 2023; Sweller et al., 2011). Educators should consider these characteristics when designing instructional activities and selecting AI tools to enhance their effectiveness. For instance, tools that emphasize interactivity and high-quality feedback are more suitable for learners with stronger prior knowledge. In contrast, tools with greater ease of use can help learners with moderate knowledge levels build confidence and improve their perceived self-efficacy. Personalized support and adaptive learning pathways can further help students overcome programming difficulties and enhance both learning outcomes and engagement (Demir, 2022).
Conclusion
Based on the means-end chain theoretical framework, this research employed both functional and psychological consequences as measures of learning experiences with generative AI tools and extended the configurational understanding of the factors that shape these experiences. Data were collected from 221 university students through a questionnaire, and fsQCA was used to analyze the configurations of factors that generate high and low levels of knowledge acquisition and learning satisfaction.
The following key conclusions were drawn. First, no single factor constitutes a necessary condition for a high-level programming learning experience: ease of use, interactivity, feedback accuracy, relevant knowledge level, and self-efficacy, acting independently, do not exert a decisive influence on a high-level learning experience. Instead, the fuzzy-set qualitative comparative analysis identified four sufficient configuration paths that can generate high learning experiences. Second, the enhancement patterns for AI-assisted programming learning experiences fall into two types: a pattern dual-driven by functional and psychological consequences, and a psychological consequence-dominant pattern. The dual-driven pattern comprises three configuration paths: (1) high tool ease of use and knowledge level; (2) high interactivity, feedback accuracy, and knowledge level; (3) high tool ease of use, interactivity, feedback accuracy, and self-efficacy. The psychological consequence-dominant pattern is characterized by a configuration of high tool ease of use, high interactivity, and a high relevant knowledge level.
Limitations and Future Research
Although this research systematically explores the impact of generative AI tools on undergraduates' programming learning experiences by integrating the means-end chain model with the fuzzy-set qualitative comparative analysis method, it has several limitations that warrant future investigation. First, the sample has limited representativeness: although 221 valid questionnaires were collected, the data came primarily from learners in a single region with a shared cultural and linguistic background, which may limit the generalizability of the findings to learners from other cultural and educational contexts. Future research could explore the applicability of generative AI tools in cross-cultural and multilingual programming environments and compare how AI-assisted learning tools perform across diverse educational settings. Second, although this research explores the synergistic effects among multiple influencing factors, it does not analyze how these factors change dynamically over time; future studies could address this process in greater depth.
Footnotes
Appendix A: Research Items
Ethical Considerations
This study adhered to ethical research principles and the APA Ethical Principles of Psychologists and Code of Conduct (section 8.05). As the research involved an anonymous online survey of adults and posed minimal risk, formal institutional ethics approval was not required. Before participating, respondents were informed of the study's purpose, the voluntary nature of their involvement, and the confidentiality of their responses. Informed consent was obtained electronically, and only those who agreed were able to proceed with the survey.
Author Contributions
Rui Wang: Methodology, Formal analysis, Writing—Original draft.
Jie Chen: Data curation, Data analysis, Writing—Original draft.
Jiahe Zhao: Validation, Resources, Writing—Review & editing.
Huijuan Fu: Conceptualization, Validation, Writing—Review & editing.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was supported by the National Natural Science Foundation of China [grant number 72164015], the Humanities and Social Science Research Fund Project of Jiangxi Provincial Department of Education [grant number JY24110], and the Research Project on Undergraduate Education and Teaching Reform in Jiangxi Province [Grant No. BKJG-2026-7-38].
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Data Availability Statement
Data sharing does not apply to this article, as no datasets were generated or analyzed during the current study.

