Abstract
To optimize the use of ChatGPT in education before the outcomes are catastrophic, it is not enough to formulate general expectations or to reconstruct the meaning of “education”: the former is too general, and the latter requires time. Statements such as calling for more “critical thinking” and “creativity” in an era of AI-assisted education may indicate the overall direction, but they do not specify a process (the “how”) for achieving that goal.
Recent discourse in business education has increasingly emphasized the importance of critical thinking skills (Calma & Davies, 2021; Dahl et al., 2018; Larson et al., 2024). These skills, which include processes such as questioning, analyzing, synthesizing, and evaluating (Bloom et al., 1956), form the core objectives of marketing and management education (Crittenden, 2024). Critical thinking is fundamental for both academic success and future employability. For instance, a Universities UK (2023) report noted that 51% of FTSE350 (the 350 largest companies listed on the London Stock Exchange) senior leaders prioritize graduates with strong critical thinking skills when recruiting, while the World Economic Forum (2020) ranks it among the most sought-after skills. Similarly, a Skynova (2022) survey revealed that 36% of business owners highly valued critical thinking as a soft skill that makes employees more likely to be retained in the evolving digital workforce.
The integration of generative artificial intelligence (AI) tools like ChatGPT challenges traditional methods of developing critical thinking skills (Gulati et al., 2024; Lim et al., 2023; Mogavi et al., 2024). While AI tools may offer personalized, engaging learning experiences that aid in understanding complex concepts (Fuchs, 2023; Hamid et al., 2023), concerns are rising that easy access and reliance on AI-generated solutions may lead to superficial learning and hinder the development of independent analytical skills (McAlister et al., 2023; Van Slyke et al., 2023). The black-box nature of AI-generated content can also obscure critical thinking processes, such as evaluating bias and validating sources (Bearman & Ajjawi, 2023). Moreover, Chiu (2024) stresses the importance of enhancing students’ critical thinking skills to harness AI effectively, emphasizing the need for strategies that promote deeper learning in AI-assisted education. Although universities are developing AI usage guidelines, most extant documents lack concrete strategies to leverage these tools for enhancing critical thinking (Singh & Ngai, 2024). Despite increasing research on generative AI and critical thinking, scholars highlight a gap in research offering practical approaches, urging for pedagogical redesigns that prioritize critical thinking to balance AI’s influence and develop essential soft skills (Dwivedi et al., 2023; Mandai et al., 2024; McAlister et al., 2023).
Critical thinking refers to the ability to engage in reflective and independent thinking, questioning assumptions, analyzing information, and making reasoned judgments (Scriven & Paul, 1987). Business education research on critical thinking has two main focuses. First, it examines pedagogical methods such as case studies (Kennedy et al., 2001; Klebba & Hamilton, 2007), debates (Roy & Macchiette, 2005), simulations (Deitz et al., 2022; Devitt et al., 2015), and projects (Ye et al., 2017) that effectively promote critical thinking. Second, it explores critical thinking conceptually, analyzing definitions, dimensions, and challenges in the field (Calma & Cotronei-Baird, 2021; Calma & Davies, 2021; Dahl et al., 2018; Larson et al., 2024). Together, these research paths provide valuable insights into fostering critical thinking, especially as educators adapt to AI-driven learning environments.
Bloom’s Taxonomy of Educational Objectives is widely used to design curricula, learning outcomes, and goals in business education. Its hierarchical stages—knowledge, comprehension, application, analysis, synthesis, and evaluation—provide a structured pathway for developing cognitive skills (Anderson & Krathwohl, 2001; Bloom et al., 1956; Calma & Cotronei-Baird, 2021). Although originally created for broader teaching and assessment purposes, Shaw and Holmes (2014) argue that Bloom’s greater strength lies in its capacity to facilitate critical thought rather than merely define educational objectives. Researchers have linked the upper levels—analysis, synthesis, and evaluation—to higher-order thinking skills, such as critical thinking (Krathwohl, 2002), asserting, for example, that evaluation involves judgment based on evidence, while synthesis encompasses creativity through integration and reorganization of information (Huitt, 1998). Despite its utility, some scholars argue that Bloom’s Taxonomy does not fully capture the complexity of critical thinking and may even limit the development of curricula aimed at nurturing it (Hussey & Smith, 2002; Paul, 1985). However, it remains a valuable tool for fostering critical thinking skills (Calma & Cotronei-Baird, 2021). For example, Blijlevens (2023) aligns design thinking phases with Bloom’s levels to deepen students’ understanding, while Watson et al. (2022) apply Bloom’s Taxonomy to structure learning outcomes in an introductory marketing course and guide students from basic recall to higher-order thinking.
Bloom’s Taxonomy encompasses three domains: cognitive (intellectual skills), affective (attitudes, values, and interests), and psychomotor (motor skills; Berezan et al., 2023; Bloom et al., 1956). This study focuses on the cognitive and affective domains, as they are most impacted by AI in learning contexts. Recognition of technology’s influence has sparked revisions to Bloom’s framework, adapting it to fit modern digital and AI-enhanced environments, as summarized in Table 1. Passig (2003, 2007) introduced the concept of “melioration,” which involves selecting, integrating, and applying the appropriate amalgam of information and tools to solve problems and address complex tasks. In a similar digital update, Churches (2010) expanded Bloom’s framework by adding digital action verbs like “finding,” “bookmarking,” and “using,” while retaining the original taxonomy’s structure, to capture technology-driven learning objectives. Yusuf et al. (2024) further advanced this by suggesting an AI-specific model with five phases—familiarizing, conceptualizing, inquiring, evaluating, and synthesizing—designed to improve critical thinking when synthesizing AI-generated texts. Despite these adaptations, there are calls to further integrate essential skills, such as ethical reasoning, critical evaluation, communication, and collaboration, to ensure Bloom’s framework remains fully relevant and responsive to the evolving demands of AI-driven learning environments (Mogavi et al., 2024; Ng et al., 2021).
Table 1. Revisions and Extensions of Bloom’s Taxonomy for the Digital Age.
Research demonstrates that while AI tools like ChatGPT can enhance cognitive skills, their impact varies (e.g., Essien et al., 2024; Qawqzeh, 2024). Qawqzeh (2024) found that ChatGPT promotes critical thinking and creativity, though its effectiveness depends on individual engagement. Similarly, Essien et al. (2024) showed that AI text generators improved critical thinking across Bloom’s levels, with greater success in lower-order skills (remembering, understanding) than higher-order skills (evaluating, creating). However, these studies rely on cross-sectional designs that offer only surface-level insights, failing to capture the iterative, dynamic nature of learning with AI. Such dynamics may involve fluid transitions across Bloom’s stages and challenge the taxonomy’s established hierarchical structure (Irvine, 2021). Furthermore, while the cognitive domain has been explored extensively (Essien et al., 2024; Qawqzeh, 2024; Yusuf et al., 2024), affective and metacognitive dimensions remain underexamined in both theoretical frameworks and practical applications, limiting the understanding of holistic learning in AI-assisted contexts (Irvine, 2021; Larson et al., 2024).
This study addresses these gaps by employing a longitudinal methodology to reveal how iterative interactions with AI foster strategic thinking, adaptive learning, and metacognitive growth. By examining MSc Marketing students’ use of generative AI tools over 4 weeks, this research provides evidence of recursive, non-linear learning cycles. Students moved fluidly between cognitive, affective, and metacognitive domains, refining problem-solving approaches and integrating competencies such as melioration and ethical reasoning into their workflows (Mogavi et al., 2024; Ng et al., 2021). Unlike static, snapshot-based studies, this approach highlights the evolving interplay between AI and critical thinking, offering a richer understanding of how AI both challenges and enhances educational outcomes (Essien et al., 2024; Mandai et al., 2024; Van Slyke et al., 2023). The findings inform actionable pedagogies by reimagining Bloom’s Taxonomy to integrate AI-specific competencies. This revised framework captures the recursive engagement necessary for modern, AI-driven education and provides strategies for bridging theoretical and practical applications. The following research questions guide this investigation:
Research Question 1: How does the integration of generative AI in marketing education assessments impact students’ critical thinking skills?
Research Question 2: How can Bloom’s Taxonomy be revised to incorporate AI-specific competencies based on empirical evidence from generative AI use in education?
Method
Design and Rationale
This study employed a naturalistic inquiry approach to examine how MSc Marketing students used generative AI, particularly ChatGPT, during their coursework in the Marketing Theory and Practice module. The study explored the real-time impact of AI tools on students’ critical thinking skills. Approval was obtained from the institution’s Research Ethics Committee (MRA-22/23-38289). Data collection occurred between May 19 and June 20, 2023, aligning with the students’ access to the assessment brief and the free version of ChatGPT (GPT-3.5). This approach allowed for authentic insights into students’ AI interactions as they completed their summative assessments, providing an in-depth view of their cognitive engagement with AI tools.
Assessment Brief and AI Tools
Students were required to develop and critically evaluate a new product launch plan for a hypothetical brand in a chosen market. The assessment brief explicitly encouraged the use of generative AI tools, including ChatGPT and image generators, for market research, idea generation, analysis, and report writing. This allowed students to experiment with various AI tools, providing a comprehensive perspective on how generative AI influenced their learning processes.
Interval-Contingent Diary Method
The solicited interval-contingent diary method was selected to capture real-time data on students’ interactions with AI tools (Bolger et al., 2003; Spencer et al., 2021). This method minimized recall bias and facilitated the collection of detailed, reflective insights over the 4-week assessment period (DeLongis et al., 1992). Previous studies have effectively used diary methodologies to explore student behaviors. For example, Nonis et al. (2006) used diary-like questionnaires with 264 business students to study time use, while Berezan et al. (2023) employed handwritten reflective journals with 17 students to capture learning outcomes in marketing education via Bloom’s Taxonomy. Compared with these and post-coursework interviews, which often suffer from recall inaccuracies and generalizations, audio diaries, as used in this study, offer a more dynamic, real-time view of cognitive processes (Crozier & Cassell, 2016).
Participants and Recruitment
Nine MSc Marketing students initially volunteered, with eight completing all phases of the study; each received a £30 voucher upon completion. Recruitment occurred via an announcement on the module’s Virtual Learning Environment page, without specific inclusion or exclusion criteria. The sample included a mix of full-time and part-time students, with four participants working in marketing-related roles alongside their studies, providing insights into both professional and academic applications of AI tools. Participants varied in their prior experience with generative AI, ranging from regular users to those new to the technology, and most used AI tools like ChatGPT for tasks such as idea generation, content creation, and concept clarification (see Table 2). Despite the small sample size, the idiographic approach enabled a comprehensive examination of individual trajectories (Crozier & Cassell, 2016), and past research supports that small samples in qualitative studies, despite the trade-off among sample size, study duration, and frequency (Siemieniako, 2017), yield rich, detailed data on complex phenomena like AI-enhanced critical thinking (Spencer et al., 2021). However, the limited sample inherently constrains the variability of perspectives, so caution is needed when extrapolating the findings to more diverse educational or professional contexts, and subsequent research with larger, more heterogeneous cohorts is needed to validate and extend these insights. Nevertheless, the findings, while rooted in a specific context, offer insights applicable to broader marketing education settings.
Table 2. Participants’ Experiences With Generative AI Tools.
Procedures
Phase 1: Entry Focus Group
A 60-min online focus group was conducted to understand participants’ initial familiarity with generative AI tools, particularly ChatGPT. The author, who was not involved in teaching or assessing the module, facilitated the discussion to ensure objectivity. Students shared their experiences with AI in both academic and professional contexts. This phase provided a foundational understanding of students’ baseline knowledge and attitudes toward AI, although the data were not analyzed further.
Phase 2: Audio Diaries
Participants maintained audio diaries using a voice recording app, documenting reflections each time they worked on their assessments. They were guided by specific prompts, including the context of their coursework, their decision to use or not use generative AI, the decision-making process, facilitators, effort estimation, challenges faced, useful and useless practices, and the impact on their performance. Recordings were uploaded to anonymized folders on a secure cloud storage facility after marks were released for the assessment.
Audio recordings were transcribed using AI and manually verified for accuracy. The study generated 65 audio diary recordings, with an average of eight entries per participant (ranging from 4 to 21 recordings), totaling 5 hr, 37 min, and 36 s of data. Entry durations varied from 23 s to 46 min and 32 s. The diary method proved effective in capturing the dynamic, iterative process of critical thinking as students interacted with AI tools.
MAXQDA was used for data analysis, employing directed content analysis guided by Bloom’s Taxonomy (Hickey & Kipping, 1996). Initial coding categories corresponded to Bloom’s levels, with the verbs associated with each level guiding the coding process. An inductive approach was also adopted to identify emergent cognitive skills that extend beyond Bloom’s model, such as melioration. Codes were primarily semantic, reflecting participants’ expressions (Braun & Clarke, 2006, 2023). Thematic analysis involved iterative discussions among the three research team members, comprising the author and two doctoral-level research assistants specializing in Education, to resolve coding discrepancies, ensuring a rigorous and comprehensive examination of the data. Open codes were grouped into axial codes through team discussions, establishing relationships such as “interrogation” and “articulation.” To capture the complexity of students’ cognitive engagement with AI tools, excerpts were coded under multiple codes where they were interpreted as relevant to more than one category. It is recognized that these interpretations were shaped significantly by the research team’s perspectives rather than emerging solely from the data itself (Varpio et al., 2017).
Phase 3: Exit Focus Group
The 60-min exit focus group debriefed participants on their experiences using generative AI tools. Students discussed AI’s practical applications for tasks such as ideation and analysis, sharing both positive experiences and frustrations. None had paid subscriptions to ChatGPT or other AI tools during the study. Their motivations for participating included curiosity about generative AI, its relevance to their professional work, and a desire to save time on tasks like idea generation. They also saw practical benefits for coursework and valued the opportunity to stay ahead as AI becomes more integral to marketing. No further analysis was conducted on this data.
Results
The analysis reveals how MSc Marketing students engaged with generative AI tools, particularly ChatGPT, and how the use of these tools affected their critical thinking skills. Themes are structured according to the cognitive and affective domains of Bloom’s Taxonomy, with the addition of a metacognitive domain. The cognitive domain covers themes like discovering information, understanding complex concepts, applying theoretical knowledge, analyzing AI content, and creating. The affective domain includes collaborating with AI and ethical reasoning, uniquely emphasizing emotional engagement, motivation, and relational dynamics in learning, contrasting with the cognitive domain’s logical structuring and the metacognitive domain’s reflective self-regulation. The metacognitive domain encompasses interrogating and refining AI responses, articulating precise prompts, iterative learning, meliorating information and tools, and reflective thinking.
The affective domain also serves as a bridge between the cognitive and metacognitive domains, as emotional responses—such as trust in AI or frustration with its limitations—influence analytical depth and motivate adaptive collaboration, which in turn enhances reflective practices (Irvine, 2021; Larson et al., 2024). This interrelation fosters a holistic understanding of AI-supported education, highlighting the importance of emotional and relational dynamics in driving deeper cognitive and metacognitive engagement. These themes underpin a revised taxonomy, detailed in Table 3, showing how AI supports and challenges traditional learning. Each section presents propositions linking AI interactions with cognitive, affective, and metacognitive skills, offering insights into how generative AI influences critical thinking across Bloom’s Taxonomy.
Table 3. Revised Expanded Bloom’s Taxonomy for AI-Enhanced Learning and Critical Thinking.
Cognitive Domain
Discovering Information
In line with Bloom’s “remembering” stage, ChatGPT proved pivotal for students in gathering, organizing, and making sense of foundational information. This process went beyond simple retrieval, as participants engaged in iterative exploration and synthesis. For example, Participant 1 stated, “I’m going to use ChatGPT now to help me with finding things I need to do or read to create a powerful brand,” illustrating how the tool provided structured content while sparking creative exploration. By offering a scaffold of ideas—such as brand identity elements and storytelling techniques—ChatGPT facilitated brainstorming and served as a foundation for deeper, customized work.
Participant 3 similarly noted, “I don’t want to directly reference ChatGPT . . . so I had to cross-check with sources like Google Scholar and reputable websites.” This approach combined the speed and breadth of AI-generated outputs with the reliability and rigor of traditional research practices. Participant 2 further illustrated this dynamic, using ChatGPT for initial brainstorming and then expanding on its suggestions through manual research. This iterative cycle of guidance, validation, and refinement demonstrates how AI can complement traditional learning by transitioning students from broad exploration to critical inquiry.
Despite these benefits, participants also identified limitations in the credibility and specificity of ChatGPT’s outputs, often requiring supplementary research. For instance, Participant 6 expressed frustration at ChatGPT’s inability to provide reliable statistics or source citations, saying, “I’m getting the feeling that ChatGPT is not able to provide this sort of statistics,” and instead turned to external databases to confirm findings. This limitation led Participant 6 to refine their research approach, blending AI-generated insights with traditional sources to ensure accuracy and rigor.
Overall, this iterative process of integrating AI with traditional research reflects a pattern of deeper analytical engagement. Students critically evaluated AI outputs, validated them with trusted sources, and refined their strategies, transforming the discovery process into a reflective and critical inquiry.
Understanding Complex Concepts
In line with the “understanding” stage of Bloom’s Taxonomy, students found ChatGPT effective for breaking down complex ideas into clear, digestible explanations, creating accessible starting points for deeper learning. Participant 6 shared, “ChatGPT explained ‘Halo of Capital’ in simple terms, which I could then apply in the context of a tech unicorn,” demonstrating its ability to demystify jargon and contextualize abstract terms within practical frameworks. Similarly, Participant 1 used ChatGPT to understand concepts like brand identity, leveraging AI-generated insights to establish a foundation for more advanced and creative exploration.
These examples highlight ChatGPT’s role in bridging critical knowledge gaps while fostering intellectual progression and critical thinking. By combining clear definitions with contextual relevance, ChatGPT provides a springboard for critical inquiry, encouraging students to question, interpret, and apply insights in nuanced ways. This iterative engagement mirrors active learning processes, as students move from basic comprehension to interrogating underlying principles and implications (Blijlevens, 2023). In doing so, ChatGPT transforms static knowledge into dynamic, applied understanding, helping students develop the analytical skills needed to refine and expand their grasp of complex ideas.
Applying Theoretical Knowledge
Aligning with Bloom’s “applying” stage, AI tools such as ChatGPT acted as a conduit for transforming theoretical knowledge into actionable solutions. Participant 4 explained, “It suggested approaches that helped me shape the positioning and pricing decisions for my project,” emphasizing the tool’s ability to translate academic frameworks into specific, practical outcomes. This integration demonstrates how AI-supported insights can bridge the gap between abstract theory and the complexities of real-world application, enabling students to navigate and adapt strategies in contextually relevant ways. Participant 4, for example, described how they initially envisioned a premium pricing strategy for their project but, after engaging with ChatGPT, considered additional approaches such as value-based pricing and bundled offers. By prompting them to critically evaluate and adapt their initial strategy, ChatGPT fostered a deeper engagement with theoretical constructs.
Analyzing AI-Generated Content
Corresponding to Bloom’s “evaluating” and “analyzing” stages, participants critically evaluated AI-generated content for accuracy and bias. Participant 2 remarked, “I didn’t just take ChatGPT’s answer. I looked for validation in Google Scholar.” This behavior highlights their growing capacity to cross-check AI outputs against authoritative sources. Participant 3 expressed similar concerns, noting, “The data couldn’t be validated on it. It could never tell me the source of where it got the information from.”
This evaluative process required trust, skepticism, and judgment (Larson et al., 2024), pushing students beyond passive validation to examine the logic, assumptions, and biases within AI outputs. Participant 3’s frustration over the lack of source transparency underscores the cognitive effort needed to navigate ambiguity and assess reliability. These critical inquiries helped students refine their understanding and engage deeply with alternative resources.
Creating
Reflecting Bloom’s “creating” stage, AI acted as a transformative catalyst for innovation, empowering students to synthesize diverse inputs into original concepts. Participant 1 used ChatGPT to develop promotional strategies tailored to specific market contexts, emphasizing inclusivity and empowerment. This capacity to adapt AI outputs highlights how generative tools extend creative boundaries by offering a foundation for personalization and refinement. Similarly, Participant 2 utilized ChatGPT for brand name generation, producing options that were both compelling and market-aligned: “It helped me come up with names that actually made sense for the product.” Participant 3 echoed this utility, streamlining the brainstorming process to enhance the relevance and appeal of final choices.
Beyond generating ideas, ChatGPT encouraged iterative innovation, merging AI suggestions with personal expertise and creative intuition. Participant 4 refined pricing strategies by integrating AI recommendations with human insights, discovering novel approaches like value-based pricing aligned with their project’s broader goals. These examples highlight how AI enables students to navigate the intersection of generative technology and human creativity, unlocking solutions not revealed through traditional methods. By fostering experimentation and expanding ideation, AI appears to support a reimagining of students’ creative processes, helping them articulate and execute cohesive, innovative strategies with greater precision and sophistication.
Affective Domain
Collaborating With AI
Extending Bloom’s affective domain, AI served as a collaborator. Participant 5 described, “I asked the equivalent of Selfridges in the US, and they gave me quite a few names . . . this information I actually used.” Here, ChatGPT functioned as a research assistant, helping with market analysis. Similarly, Participant 8 used AI to distinguish between technical terms: “I asked it to help me distinguish the difference between aeroacoustics and aerodynamics . . . it was able to give me four paragraphs of explanation.” These examples demonstrate how students treated AI not merely as a tool but as a responsive partner, assistant, and tutor in their academic process, co-creating knowledge and ideas with it.
This collaboration transcended the cognitive and metacognitive domains by prioritizing the relational and adaptive aspects of learning. Students did not merely consume AI-generated information; they actively shaped its role—whether as tutor, assistant, or intellectual sparring partner—integrating its capabilities into their workflows. This required nuanced trust and adaptability, turning AI into a co-creator of knowledge. Crucially, this relational engagement activated the affective domain, where emotional and creative agency underpinned iterative dialogue. The result was a profound integration of intellectual rigor with emotional flexibility, enabling enriched learning experiences that fostered critical reflection and co-constructed understanding.
Ethical Reasoning and AI Usage
Beyond Bloom’s Taxonomy, students exhibited sophisticated ethical reasoning when engaging with generative AI, balancing its utility with concerns about academic integrity and originality. Participant 3 shared, “I was careful not to copy the text directly. Instead, I used the ideas to guide my own writing,” reflecting deliberate efforts to preserve intellectual ownership while using AI as a source of inspiration. This intentional delineation of AI’s role highlights how students leveraged its outputs as starting points for ideation, fostering both critical and ethical engagement.
Participants also addressed the challenges of reliability and transparency in AI-generated content. As noted earlier, Participant 6 observed, “I’m getting the feeling that ChatGPT is not able to provide this sort of statistics,” revealing concerns about the opacity of AI sources. Such limitations prompted students to validate AI outputs through traditional research methods, sharpening their ability to critically assess credibility. This interrogation of AI content deepened their awareness of ethical concerns surrounding unverifiable or incomplete information, driving more rigorous reasoning processes.
Ethical reasoning extended to contextual applications, as students tailored AI outputs to specific goals. Participant 2 rejected ChatGPT’s brand name suggestions, explaining, “None of them was a good idea, in my opinion,” before independently refining results. This accountability demonstrates students’ ability to evaluate and adapt AI contributions thoughtfully. These examples illustrate that ethical reasoning in AI usage encompasses originality, reliability, and contextual alignment, fostering critical thinking and a nuanced integration of AI insights into academic and professional contexts.
Metacognitive Domain
Interrogating and Refining AI Responses
Extending Bloom’s framework, metacognition emerged prominently as students engaged in iterative refinement of AI responses. This process involved critically assessing initial outputs, refining prompts, and aligning responses with specific research goals. Participant 1, for instance, described asking ChatGPT about product launches but finding the answers too broad, prompting them to refine their queries until they received actionable insights. Similarly, Participant 5 recounted using ChatGPT to explore potential markets in Indonesia, refining their prompts when initial results did not align with their objectives.
Students’ interactions often began with generalized outputs, requiring successive iterations to yield clarity or precision. For example, when clarifying complex concepts or seeking granular details, participants adjusted their inquiries to elicit more targeted and practical responses. These refinements reveal their growing ability to interrogate AI-generated outputs critically, challenging their assumptions and assessing the relevance of information against their academic and project-specific needs. By recognizing and navigating the limitations of AI—such as gaps in specificity or reliability—and integrating its suggestions with traditional research, students developed the capacity to synthesize, evaluate, and apply diverse information sources. This process highlights how engaging with AI fosters metacognitive skills, empowering students to refine their learning strategies, deepen their understanding, and tackle complex problems with greater precision and discernment.
Articulating Precise Prompts
The ability to articulate precise prompts emerged as a distinct metacognitive skill, emphasizing clarity of thought and strategic problem-solving. Unlike iterative refinement in interrogating AI responses, this theme highlights a deliberate effort to preemptively align queries with specific objectives, ensuring outputs were relevant and actionable. Participant 3 remarked, “I started asking ChatGPT for this kind of information . . . but I wasn’t satisfied. So, I rephrased my question to be more specific, and then I got a much more relevant answer.” Similarly, Participant 6 adjusted their inquiry on market trends in autonomous driving in China, requesting insights validated by academic publications and industry reports. These examples illustrate how participants adapted their communication to optimize AI outputs by aligning abstract goals with precise language.
This theme reveals a process in which students internalized principles of question design, translating complex objectives into structured queries. By doing so, they engaged in a cognitive process of deconstructing problems into manageable components while leveraging AI’s potential to generate targeted insights. This suggests that articulating precise prompts may cultivate critical thinking by fostering foresight, precision, and adaptive reasoning—skills essential for navigating complex academic and professional tasks. Through deliberate orchestration of AI interactions, students may also enhance their ability to communicate nuanced ideas effectively.
Iterative Learning
Students demonstrated a clear process of iterative learning while engaging with generative AI tools, characterized by continuous refinement and adaptation based on feedback. Participant 1 highlighted this, explaining, “I kept rephrasing my question until it gave me something useful,” showcasing an ongoing cycle of trial, evaluation, and adjustment. Participant 7 echoed this by describing how they narrowed broad initial queries to obtain relevant, targeted answers, reflecting a deliberate process of honing their approach. Similarly, Participant 9 described revising prompts multiple times to uncover nuanced customer segmentation strategies tailored to their specific objectives.
This process reflects students’ ability to experiment and recalibrate, balancing exploration with precision to align AI outputs with their goals. Unlike simply refining responses, iterative learning encapsulates a broader cognitive strategy: students actively experiment, evaluate results, and recalibrate their methods in response to emerging insights. This approach not only enhances problem-solving but also suggests a cognitive flexibility and adaptability in addressing complex tasks.
By engaging in reflective cycles of questioning and feedback, students appeared to develop a heightened awareness of their learning processes, potentially recognizing patterns and refining their approaches over time. This iterative practice may encourage adaptability, critical thinking, and an improved capacity to navigate ambiguity, suggesting that students could be leveraging AI tools to generate increasingly actionable insights. Iterative learning might support a transformative approach to problem-solving, blending exploration, evaluation, and refinement in ways that foster deeper engagement with complex tasks.
Meliorating Information and Tools
Students exhibited melioration—a process of skillfully integrating diverse knowledge sources and tools to solve complex problems (Passig, 2003)—in two interrelated ways: the melioration of information and the melioration of tools and technologies. This practice highlights students’ strategic thinking and adaptability in balancing AI’s efficiency with the depth of traditional methods.
The melioration of information emerged as students synthesized AI-generated insights with authoritative sources, ensuring credibility and depth. Participant 3, for example, initially relied on ChatGPT for general demographic data but later validated and expanded these findings through Google Scholar, stating, “ChatGPT gave me a general idea, but I had to use Google Scholar for more detailed stats on market size.” This approach exemplifies how students positioned AI as a starting point for quick knowledge generation while leveraging traditional research to refine accuracy. Similarly, Participant 2 utilized AI for brainstorming brand development but critically evaluated its outputs, incorporating more reliable data to produce actionable results. These practices demonstrate a nuanced ability to align AI’s rapid information delivery with the rigorous demands of traditional research, fostering more comprehensive and credible outcomes.
In parallel, students demonstrated melioration of tools by integrating AI technologies into their workflows while tailoring them to meet specific academic and creative needs. Participant 8 used ChatGPT to draft survey questions but refined them through conventional methods to ensure they adhered to scholarly standards. Participant 4 extended this approach by incorporating AI tools like Midjourney for visual outputs, combining them with traditional research to enhance creative elements in their projects. This proactive integration of AI with non-AI tools underscores students’ capacity to optimize workflows by leveraging the unique advantages of each, blending innovation with critical oversight.
This dual practice of melioration reflects a sophisticated metacognitive skill set, enabling students to synthesize diverse resources while navigating ambiguity. By validating AI insights with authoritative sources and combining generative tools with specialized technologies, students constructed adaptive learning frameworks that balanced analytical precision with creative exploration. These strategies highlight their ability to reconcile varied perspectives, address complex challenges, and develop solutions that are both context-sensitive and innovative. Ultimately, meliorating information and tools not only underscores students’ resourcefulness but also catalyzes deeper critical thinking. This practice suggests evolving adaptability where students leverage AI to enhance flexibility and rigor, enabling them to generate more nuanced, dynamic, and impactful problem-solving strategies in both academic and professional contexts.
Reflective Thinking on AI’s Impact
Reflective thinking was evident as participants critically assessed AI’s utility and adapted their approaches for deeper learning. Participant 5 remarked, “When it comes to creativity, [ChatGPT] is not the right place to do it,” reflecting on the limitations of AI and adjusting their strategy accordingly. Similarly, Participant 8 noted, “I took some of [the suggestions] into consideration,” when using AI-generated feedback to enhance their text, demonstrating a thoughtful evaluation of AI’s contributions. Participant 2 further highlighted their reflective process by evaluating AI-generated product ideas, stating, “None of them was a good idea, in my opinion,” and then conducting additional research to find more suitable options. Participant 6 expressed doubts about the accuracy of AI-generated data, which led them to validate the information using traditional sources. These examples illustrate how participants engaged in critical evaluation of AI’s outputs, balancing its strengths and limitations, and integrating AI insights with human judgment.
This reflective engagement suggests a deeper awareness of the iterative nature of learning, as students refined strategies in response to AI’s gaps. It reveals the role of reflective thinking in fostering autonomy, adaptability, and the ability to critique technologies critically, preparing students to manage evolving digital tools and complex academic tasks with discernment.
Discussion
This exploratory study examined how integrating generative AI, particularly ChatGPT, into marketing education assessments affects critical thinking and how Bloom’s Taxonomy can be revised to incorporate AI-specific competencies. By integrating elements like melioration, ethical reasoning, and iterative learning, this revised framework offers a more nuanced approach for educators and policymakers to enhance critical thinking in an AI-driven context. The findings align with Essien et al. (2024), demonstrating that generative AI tools positively impact critical thinking, facilitating tasks such as research and idea generation, thereby enhancing engagement and personalized learning (Gulati et al., 2024; Mogavi et al., 2024). However, the complexities of how AI reshapes cognitive processes challenge traditional educational frameworks, suggesting the need for theoretical adaptation. This study provides actionable strategies and a research agenda to assist teaching academics in designing curricula and assessments that foster critical engagement, adaptive learning, and ethical AI integration—essential for preparing students to navigate AI-enhanced education and professional contexts.
Revisiting Bloom’s Taxonomy: Adapting for the Digital Age
The revised taxonomy, as detailed in Table 3, integrates themes that emerged directly from students’ critical thinking practices with AI, reflecting their dynamic engagement across cognitive, affective, and metacognitive domains. For instance, “discovering” encapsulates how students utilized AI for iterative exploration and synthesis, representing a new way of interacting with AI and search engines (Churches, 2010; Passig, 2007). This addition challenges the assumption of a passive process of information acquisition (Larson et al., 2024) by emphasizing rapid synthesis and validation.
Other additions—such as “collaborating,” “meliorating,” and “ethical reasoning”—address students’ demonstrated capacities for relational, integrative, and ethical approaches to learning (Joksimovic et al., 2023). Furthermore, “interrogating AI outputs” and “articulating precise prompts” capture critical skills for navigating AI’s limitations and maximizing its potential (Chiu, 2024; Ng et al., 2021). These skills highlight the importance of nuanced questioning and intentional communication, fostering strategic problem-solving and adaptive thinking—essential in AI-assisted assessment contexts, where ambiguity and precision are increasingly intertwined (Bearman & Ajjawi, 2023). These revisions mark a shift from linear, hierarchical stages to interconnected, adaptive processes, redefining Bloom’s Taxonomy as a responsive framework for AI-enhanced education (see Table 3). This enables educators to recognize critical thinking as a recursive interplay of exploration, validation, and integration, preparing students for hybridized learning environments.
Bloom’s rigid hierarchical structure—remembering, understanding, applying, analyzing, evaluating, and creating—has long guided educational design, but does not fully capture AI’s influence on cognitive processes (Anderson & Krathwohl, 2001). The study reveals that students moved fluidly between cognitive stages, often blending analysis (e.g., deconstructing AI-generated insights) and evaluation (simultaneously assessing their validity) in real time, which disrupts the linearity of Bloom’s model. This aligns with Das et al.’s (2013) suggestion that these cognitive skills are not always sequential and can co-occur. This also supports Irvine’s (2021) argument that Bloom’s levels may comprise sublevels and thus taxonomies with strictly non-overlapping levels might not be suitable for capturing complex, AI-driven learning processes. AI-enabled learning appears to involve iterative questioning and refinement, emphasizing melioration, where students merge comprehension with higher-order thinking. These findings imply that Bloom’s framework must be more adaptable to accommodate multilayered, AI-enhanced cognitive engagement, indicating a shift toward a more dynamic, iterative model of education that reflects AI’s impact on critical thinking.
The findings suggest that generative AI tools act as co-creators, transforming how students engage with cognitive tasks. This dual-agency dynamic, where both humans and AI shape learning processes (Yu et al., 2021) through the selection, negotiation, and contribution of distinct roles (Watt et al., 1995), shifts away from traditional student-centered approaches. Rather than passively receiving information, students collaboratively refined, challenged, and integrated AI outputs, embodying a deeper interaction. This supports Qawqzeh’s (2024) view of AI as fostering a symbiotic relationship, enhancing overall learning abilities. This dynamic necessitates rethinking how educators structure learning objectives, recognizing AI’s role in fostering holistic cognitive development.
Integrating AI-Specific Competencies Into Bloom’s Taxonomy
The study’s findings underscore the importance of incorporating AI-specific competencies—collaboration, melioration, and ethical reasoning—into Bloom’s Taxonomy to maintain relevance in modern education (Churches, 2010; Passig, 2007). As students engaged with AI, they not only demonstrated awareness of the need to evaluate the output’s accuracy and validity but also encountered ethical challenges, such as plagiarism and bias, affirming the need for embedding ethical reasoning within critical thinking education (Bearman & Ajjawi, 2023; Ng et al., 2021). The former is somewhat promising given existing concerns about students’ lack of engagement with critical thinking and ethical practices when using generative AI (Dwivedi et al., 2023; Mogavi et al., 2024). The latter highlights the necessity for curricula to integrate AI literacy and ethical awareness, preparing students to navigate AI responsibly (Bearman & Ajjawi, 2023; Chiu, 2024).
Students frequently engaged in metacognitive or second-order thinking, particularly when refining AI-generated content, reflecting an iterative process that aligns with Qawqzeh’s (2024) assertion that AI can facilitate self-paced, reflective learning. For instance, Participant 1’s repeated rephrasing of prompts to ChatGPT demonstrated an awareness of their thought process to improve outputs. This metacognitive engagement not only indicates that students evaluated and adapted their strategies but also reveals a gap in Bloom’s Taxonomy, which overlooks this iterative refinement crucial in AI-enhanced learning. Integrating such metacognitive elements into Bloom’s framework would foster a more profound understanding of AI, encouraging more effective and ethical use.
Critical thinking in business education is often understood in applied contexts, such as strategic thinking, leadership, and decision-making (Calma & Davies, 2021). In this study, students effectively used AI to develop practical skills by critically evaluating and applying AI-generated insights to tasks like creating marketing strategies and brand names, highlighting AI’s role in enhancing practical critical thinking abilities. However, the absence of a consistent definition of critical thinking within business education (Calma & Davies, 2021) suggests a need for a more comprehensive conceptualization in AI-enhanced educational frameworks, where theoretical clarity is crucial for guiding effective pedagogical practices.
Enhancing and Hindering Critical Thinking
While this study confirms AI’s potential to enhance cognitive skills (Essien et al., 2024), it also reveals risks of dependency and cognitive offloading (Lodge et al., 2023; Ratten & Jones, 2023). Participants’ tendency to rely on AI for foundational tasks, such as information retrieval, and higher-order tasks, such as idea generation, underscores concerns that AI may facilitate superficial learning if not critically engaged (Crittenden, 2024; McAlister et al., 2023). These findings illustrate a paradox: although AI can promote critical thinking, it can also undermine autonomous cognitive development if students bypass deeper engagement, highlighting the importance of fostering critical engagement strategies in AI-driven education.
These findings reveal two intertwined but distinct forms of critical thinking in AI-enhanced learning. The first form, “critical thinking toward the AI,” involves critically engaging with AI by refining prompts, evaluating biases, and interrogating outputs, requiring curiosity, skepticism, and ethical reasoning. This approach fosters metacognitive skills like reflective thinking and melioration, extending beyond the traditional stages of Bloom’s Taxonomy. The second form, “critical thinking for the assignment,” focuses on synthesizing and applying AI-generated insights to real-world tasks, emphasizing creative problem-solving and practical application, as described by Yusuf et al. (2024). This distinction reinforces the idea that, while all critical thinking relates to problems, not all problem-solving necessarily involves critical thinking (Calma & Davies, 2021). These insights carry important pedagogical implications, suggesting the need for educational strategies that nurture both metacognitive engagement with AI and the practical application of AI-generated insights.
Practical Implications
Students are already independently using generative AI for tasks such as idea generation, simplifying complex concepts, and refining content, as seen in their diary entries. This reality highlights the need for educators to guide students in critically engaging with AI tools, rather than simply relying on them for answers. Collaborative projects that use AI for data collection—such as gathering market data or customer feedback—followed by student-led evaluation and interpretation, can deepen understanding of complex concepts. To support this, training programs for educators should emphasize understanding the revised cognitive engagement levels and providing strategies for effectively developing and assessing these skills, ensuring that AI acts as a facilitator, rather than a replacement, for critical thinking (Chan, 2023). This approach can help address concerns like AI ghostwriting by reinforcing AI’s role as a learning tool rather than a shortcut. Policymakers should establish guidelines that promote this balanced integration, ensuring that AI use supports independent thought, creativity, and academic integrity.
The revised taxonomy introduces innovative use cases that transform traditional learning objectives by fostering recursive exploration and contextual adaptability. New learning outcomes could emphasize dynamic engagement, such as equipping students to critically interrogate and refine AI-generated content in real time. For example, assignments requiring students to use AI tools to explore market trends, validate findings through external research, and synthesize actionable strategies can cultivate iterative thinking and evidence-based reasoning. These outcomes advance critical analysis and promote cross-validation skills, preparing students for the complexities of AI-enhanced contexts.
Educators should also guide students in negotiating AI’s role in collaborative knowledge creation, alternating between treating AI as a consultant, competitor, or reflective foil in strategic decision-making. Marketing students, for instance, can use AI to generate campaign ideas, evaluate these critically, and refine them by integrating human insights. This dual-agency approach fosters the synthesis of contributions from both human and AI actors, aligning learning outcomes with adaptive problem-solving and collaborative innovation.
Practically, the findings suggest that ethical reasoning extends beyond assessing AI-generated content for plagiarism or bias to encompass meta-reasoning. This critical dimension encourages students to analyze the systemic implications of AI-driven decisions. Assignments could challenge students to explore the ethical dimensions of algorithmic marketing, such as biases in targeting or inequities in access, fostering abstract thinking about power dynamics and systemic impacts. Such activities equip students to navigate the complex ethical landscape of AI-integrated professional contexts.
Melioration, both an information- and tool-oriented competency, supports the synthesis of AI outputs with traditional research methods across disciplinary boundaries. Students might use AI for branding concepts while applying statistical models to assess feasibility, bridging abstract theories with applied practices. To achieve these enhanced outcomes, educators must establish foundational activities and scaffolding that ensure a level playing field. This includes equitable access to diverse AI tools, technical training, and preparatory exercises that build baseline competencies. Providing students with the necessary resources and foundational skills enables inclusive participation and ensures all learners can fully engage with these activities.
Limitations and Future Research
This study offers valuable insights, but some limitations must be acknowledged. The small, non-representative sample size limits generalizability; however, the idiographic approach allowed for an in-depth exploration of AI-assisted critical thinking (Spencer et al., 2021). Participant performance varied: one scored moderately on the assignment (63) with full participation (100%), six were high performers with scores between 67 and 72 and full engagement, while Participant 4 had a lower score (53) and 83% participation. As the sample was mainly high-achieving, fully engaged students, these findings may not fully represent the experiences of lower-performing or less-engaged learners, warranting caution in generalizing results to a wider population.
The proposed revision to Bloom’s Taxonomy could face criticism regarding its adaptability across diverse educational contexts and potential overshadowing of traditional critical thinking skills. A comprehensive classification and hierarchy of cognitive processes in AI-assisted education has yet to be developed (Ng et al., 2021), highlighting the need for further research to refine frameworks that address AI-specific competencies. Future research should involve larger, more diverse samples to validate these findings across varied settings, and longitudinal studies to assess AI’s long-term impact on critical thinking (Essien et al., 2024). Expanding the focus to include alternative frameworks, such as those emphasizing intellectual values (e.g., relevance, accuracy, and rigorous reasoning; Carlson, 2013) or decision-making skills (Baldwin et al., 2011), would offer a more holistic understanding of AI’s impact on critical thinking. Table 4 outlines a comprehensive future research agenda for AI-enhanced learning and critical thinking.
Table 4. A Research Agenda for AI-Enhanced Learning and Critical Thinking.
Finally, while audio diaries provided rich data, they may have influenced participants’ behavior due to self-monitoring effects (Dommeyer, 2007) and the reflective nature of journaling, which can prompt deeper thought (Berezan et al., 2023). To gain further insights, future studies could employ reflective essays, cognitive interviews, or other qualitative methods to better understand how critical thinking develops in AI-assisted settings. Incorporating additional metacognitive elements would aid in comprehending how students’ awareness, regulation, adaptation, and integration of their learning processes evolve (Parwata et al., 2023).
In conclusion, this study highlights the need to rethink Bloom’s Taxonomy in AI-driven education, supporting Mandai et al.’s (2024) call for actionable strategies beyond “general expectations.” As Shaw and Holmes (2014) argue, to effectively foster critical thinking, educators must look beyond rigid objectives and deeply consider the cognitive processes involved at each level. The true challenge is to leverage AI as a tool for enriching critical thinking, rather than allowing it to become a superficial aid.
Acknowledgements
The author would like to thank Dr Yusuf Oc, Zhonghan Lin and the participants for their collaboration on this study.
Data Availability Statement
The datasets generated during and/or analyzed during the current study are available from the corresponding author upon reasonable request.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Ethical Approval and Informed Consent Statements
Approval to conduct this study was obtained from the King’s College London Research Ethics Committee (MRA-22/23-38289) on 06/09/2023. Respondents reviewed and signed a written consent form before participating in the entry focus group.
