AI Tools for Literature Review, Qualitative Research, and the Evolving Social Construction of Knowledge
Artificially intelligent (AI) tools designed for research are increasingly able to handle complex tasks within knowledge development (Pettersen, 2019). The possibilities of AI to process large quantities of information mean they will undoubtedly be used for knowledge innovation (Haefner et al., 2021). Unlike the weeks or months a researcher might spend searching and retrieving literature manually, VosViewer can export 2500 records, including citations, in a zipped CSV file within approximately 3 minutes (Williams, 2020). Other tools can synthesise relevant literature into a table with one-sentence abstract summaries (Elicit, n.d.), outline supportive and contradictory evidence for a particular hypothesis within papers (SciteAI, n.d.) or act like a ‘Spotify’ for research collections by utilising algorithms to suggest new content (Research Rabbit, n.d.). While the processes of literature searching and retrieval are sometimes written off as merely a ‘rite of passage’ that prefigures research – and therefore
Much is written about the ways in which AI might change society and its potential application to qualitative analysis and interpretation (particularly ChatGPT i.e., see Morgan, 2023). Yet, little theoretical critique considers how AI may nuance processes and outcomes of literature reviews as findings driven by their use become imbricated in the evolving social construction(s) of knowledge over time. If we take the logic that literature mapping is research (Fox, 2023; Kunisch et al., 2023; Thorne, 2019) which affords researchers opportunities for novel interpretation, insight and research development (Greenhalgh et al., 2018); then the potential ethical implications of using AI to map and synthesise literature must be critically theorised. Our manuscript uses four forms of reasoning (inductive, deductive, abductive and retroductive) as thinking tools to critically imagine how tools that apply AI might nuance outcomes of literature mapping; and in turn, opportunities for curiosity and creativity in the futures of qualitative research/ers. We engage with the evolving social construction of knowledge in which qualitative research takes place and wish to illuminate tensions and meeting points between the (often positivist) logic of artificially intelligent tools and the interpretivist nature of qualitative thinking (Saldaña, 2015).
As such, we move beyond the technical realm of how AI tools perform in qualitative analysis (Wachinger et al., 2024), reported challenges in effectiveness of AI (Auer & Griffiths, 2023); and also seek to avoid the looming dystopian futures in which these tools ‘outsmart’ qualitative researchers (Pettersen, 2019). Rather, we explore the way/s the shifting role of AI tools in processes of the literature review might nuance emergent spaces of knowledge production co-constitutive of qualitative research, by contouring the building blocks for critical interpretivist investigation/s. Qualitative researchers generate new insights through iteratively navigating interplays between social dynamics and interpretive frameworks (Foley et al., 2023, 2024; McLean et al., 2023) as well as creativity and social context (Città et al., 2019); always both subject and object in research configurations (Denzin, 2010) and the ‘instruments’ through which qualitative thinking can happen (Saldaña, 2015). The possibilities for thinking exist within and reflect established and emerging patterns of knowledge and meaning (Grondin, 2015) as well as political settings (Denzin, 2008) – both of which AI technologies have scope to influence as they rapidly process and produce syntheses of existing literature that are leveraged as evidence for decision-making in diverse contexts.
Our purpose is to build reflexive awareness of how AI tools are used during the literature review, so that these tools can best enliven the aims of qualitative research rather than constrain them. Our broad perspective considers the non-standard (Konecki, 2019; Thorne, 2019), emergent (LaMarre & Chamberlain, 2022) and contextually-driven (Hunter et al., 2002; Pawson, 2006) nature of qualitative research. We consider how positivist logics of objectivity that underpin AI tools can interact with qualitative thinking and theorising, without stripping the complexity and nuance (MacLure, 2007; Pawson, 2006; Thorne, 2017) that co-constitute patterns of meaning and understanding. Accordingly, we consider the literature as a complex amalgam of meanings drawn from diverse contexts (Smythe & Spence, 2012); each of which can be interpreted in different ways. We use four forms of logical reasoning to examine: How might using AI tools during qualitative literature review nuance analytical, interpretive and decision-making processes and possibilities that prefigure and co-constitute qualitative research?
We turn now to (1) outline the most popular AI tools and typologies used for exploring established research and detail the logic through which they work. We then describe and deploy (2) multi-level inference (induction, deduction, abduction and retroduction) to examine the possibilities of AI tools in relation to openings and closings for qualitative interpretation and knowledge development; which in turn unfolds (3) a critical imagination of how they interact with the knowledge systems entwined with qualitative research. Our manuscript concludes by (4) sketching some ethical implications and reflexive responsibilities for the use of AI tools to respond to their potential impacts on processes of qualitative literature review and appraisal, as knowledge systems in which qualitative research unfolds.
The authorship team who developed this critical imagination is broad and diverse; reflecting different locations of origin across the Global North and South as well as a range of professional (health-related) backgrounds and various experiences with qualitative research. All of these aspects help to provide a broad landscape from which our reflexive critique of the use of AI tools in methodologies of literature review takes place. All authors, being doctoral students, have recently undertaken literature reviews for which some have used AI tools, so are also ‘sensitised’ to the valuable roles these can play (Giddens, 1984). It was during these lived and shared experiences we became interested in how to use these tools in ways which would ethically and responsibly benefit, augment and extend our qualitative exploration of prior research. We raise this diversity here because we recognise that social and intellectual positions matter to thinking creatively and abductively (Timmermans & Tavory, 2012), while texturing possibilities for reflexivity in research generally (Sweet, 2020).
Outlining Key AI Types and Tools for Searching and Appraising Literature
Describing Tools That Map and Explore the Literature Environment.
How AI Tools can Support Reasoning Processes, and Potential Limit/ations to Qualitative Thinking and Theorising.
a. It must be emphasised that the AI tools in themselves do not have inherent properties or capacities for reasoning. The way they are used works in conjunction with their design and capabilities to support different types of reasoning, some of which are outlined within the table, but depends on the creative application by the researcher. This is why ‘researcher strategies’ are suggested in the following column.
b. In all AI tools, missing values are a limitation to returns from searches. This will be more problematic the older an article is and especially notable for scientific papers published prior to 2000, when digitalisation of scientific research became commonplace (in the affluent west).
We have focused on selecting a relevant sample of tools with distinct features designed for literature review to explore the logical interplays between the inputs, outputs and mechanisms of AI tools amenable to processes of literature review and appraisal (rather than a comprehensive selection). Purposively sampling in this way enables us to achieve depth and coherence (Ames et al., 2019) in exploring how AI tools ‘work’ in different contexts, to give our critical imagination the best chance of being challenged and energised in differing directions. Working at this level of ‘logic’ to develop a critical imagination minimises the impact of these tools’ rapid evolution on our implications; as rather than attending to their inner workings and discrete challenges (Auer & Griffiths, 2023) we seek to elucidate the materials and processes they work with and through, as well as the shifting opportunities for knowledge development they may entail over time. In this way, we recognise that each feature makes ‘cuts’ in terms of what is available for qualitative exploration and these cuts are a critical domain for understanding and reflexivity; as much so as the fine-grained understandings of exactly how they work in/on any specific task.
Table 1 includes a description of each tool. It is structured via a heuristic of increasing complexity of literature navigation/review tasks the tools identify they can assist with throughout the literature review. We start with tools that identify their use-value in helping to discover literature and understand how it connects via the citation environment (i.e., Semantic Scholar), through to tools that search and summarise content of included articles (i.e., Elicit) as well as those that classify literature or answer specific questions based on the positive, negative or neutral evaluation of specific evidence using natural language processing (i.e., SciteAI). While we include the year of development in the table to showcase the pace of change, we further recognise that this pace is impossible to ‘get ahead of’ – yet, these timeframes are useful to enhance reflexive awareness of the speed at which these tools are evolving.
Table 1 demonstrates the opportunities for saving time in processes of literature searching (Williams, 2020); and further, in automating processes of record-keeping, which will reduce researchers’ cognitive and emotional load so they can engage with the taxing work of interpretation and meaning making (Saldaña, 2015). The visual and dynamic outputs of these tools – particularly those that use interactive graphics – may widen the scope for diverse brains to engage with large quantities of information simply because formats amenable to neurodivergence exist (Garrison et al., 2023). The capacity to manipulate sizeable bodies of information in multiple ways (i.e., by citation, keyword) will be helpful, and likely encourage researchers to critically engage with heterogenous elements of literature for creative thinking (Greenhalgh et al., 2018) that may have previously been too time-intensive for retrieval or ongoing critique. Broadly, these features will help researchers to understand what current understandings are (Schwandt, 1999) – or at least are represented to be (Fox, 2023). Ostensibly, these are excellent assets for qualitative researchers.
Less desirable influences on processes of qualitative literature review are also evident from considering Table 1. Tools that operate only in the citation environment to trace impact are always limited (Dardas et al., 2023); and likely to struggle with areas of sparse and novel literature, simply because they have less material and fewer discriminatory features to work with. New amalgams to handle complexity in granularities and similarity matrices will continue to unfold (Shu et al., 2023), but some knowledge work can only be understood in contextual, social and relational terms which are more complex to represent within AI systems (as well as ‘oversee’ or ‘direct’ when using them), because the mechanisms by which technologies process information to make decisions are not always transparent (Sallam, 2023).
The politics of citation is another issue that must be noted: most AI tools rank the ‘importance’ of papers shown to researchers using a narrow assessment of value, one that reflects only citation counts (Dardas et al., 2023). Older papers will continue to collect a higher number of citations given the cumulative nature of research and general archaeology of knowledge that excavates back to its origin (Foucault, 1974). This is likely why AI is very good at re-creating common sense knowledge (Auer & Griffiths, 2023) and less capable at innovating new ideas (Haefner et al., 2021): while these tools map what is present following positivist logic, in some instances it is negative, silent or absent value that holds meaning (Skeggs & Yuill, 2019). Along this vein, some citations may be negative (i.e., citing weaknesses rather than excellence) or cursory (i.e., citing the first thinker in a given field; although ‘better’, thicker or more refined thoughts have since been forthcoming, these are unlikely to overtake the citation counts of the first thinker). While this nuance (of ‘negative’ citations) may be captured by the contextual content analysed by some tools (i.e., SciteAI), it is not clear if/how citations for ‘seminal’ papers will decelerate as they lose relevance or trustworthiness over time.
In the broader environment of knowledge production, the commercial nature of scientific publishing is problematic (and is distinct from the issue of the commercial nature of AI literature review tools, most of which are subscription-based). Many of the tools utilise Semantic Scholar’s corpus, which sources its content from web indexing, partnerships with publishers, and content providers (Semantic Scholar, n.d.-a). Their website states that they index papers based largely on web crawling, but that some papers may be filtered out because they were not parsable, not discovered, or not formatted correctly; were behind a subscription or log in wall; used JavaScript which is difficult to ‘navigate’; or were not published in English (Semantic Scholar, n.d.-b). This likely introduces significant gaps to the retrieved results. Journals that work through predatory mechanisms are more likely to have open access pipelines for publication; yet concerns about scientific quality are extensive and currently minimised via formalised research databases, a gate which these AI tools would (tacitly) bypass (Oviedo-García, 2021).
Inequities in knowledge production may become more pronounced as AI tools become more prominent: as research from high-resource environments of the Global North, anglophone world could be elevated (exponentially) in circles and cycles of knowledge production. At the same time, the knowledge of those from the Global South may be/come even less visible than it is now (Collyer, 2018; Pratt & De Vries, 2023), deepening the multifaceted barriers that hamper research development (Mweemba et al., 2019) and dissemination (Naidu et al., 2024). This might occur simply because AI is largely learning in/on/through the English language, which constrains the capacity of systems to think and learn in/on/with different languages – given that deep learning neural networks (i.e., generative AI which seeks to replicate human thinking) rely on the formation of thousands of intricately-devised rules developed through iterative interactions with millions or billions of training data items – conveyed linguistically (Castelvecchi, 2016; Holzinger, 2016; Wagner et al., 2022). Consider ResearchRabbit, which uses generative AI to function as a ‘Spotify’ for papers; extracting data from a personalised set to suggest and introduce new papers. Papers in other languages will be excluded from the suggestions, and therefore from potential citations. Further, because the Global North holds more resources for scientific funding and publishing (linked to anglophone dominance), scientific papers which exist behind non-standard paywalls (i.e., journal subscriptions) may not be included. The extent to which papers are published open or subscription depends on a host of factors, including discipline and funding allocation – exacerbating the exclusion of knowledge from resource-poor environments which correlates with the Global South – which will subsequently have cumulative impacts on what is available for AI tools to search, track, and ‘extract’ for researcher review.
Specific features of AI tools may further compromise the contribution of the Global South to global knowledge production, such as the ‘suggested authors’ feature in Research Rabbit, which relies on citations (Cole & Boutet, 2023) and thus, prioritises prominent authors in the field. The list of recommendations might be hundreds of authors long and include relevant authors from the North and South; however, the mind map-style visualisation provided alongside the scrollable list is oriented to displaying authors with connections to one another. The visibility of researchers in the North is, therefore, likely boosted due to greater opportunity to collaborate with other local authors that have similar high impacts – because of geographic cross-over of institutions and priority populations for research. To take this further, institutions and countries that have had greater funding for a longer period are, via this logic of prominence ranking, able to perpetuate and spread their high research visibility through association. Increased citations and visibility for more prestigious institutions and high impact authors is not a novel concept; but tools such as Research Rabbit’s suggested authors are likely to compound the problem. Literature that can diffuse further away from the source (i.e., the author and the author’s close networks) in the citation environment is a marker of prestige, likely enabled by multidimensional forms of real and symbolic capital (Bourdieu, 1987). It further needs acknowledging that the worldviews of the Global North – being mechanistic and predicated on isolation – through which AI learning models are generated, sit awkwardly with the relational onto-epistemologies of the Global South, where it is not logical to find or diffuse meaning in an object out of place from its context and history of origin (Moreton-Robinson, 2013).
Reflexively Imagining How AI Tools can Support Literature Exploration Using Inductive, Deductive, Abductive and Retroductive Reasoning
We now seek to shape our critical imagination by applying four forms of logical reasoning – inductive, deductive, abductive and retroductive – to think about how AI tools might support or hinder qualitative explorations of evidence when they are recruited for various tasks during the literature review. These four modes of reasoning, sometimes called inference, are ways of analytically working between data and theory (Kennedy, 2017). As a shorthand, these multi-layered reasoning forms can be positioned in relation to data and theory.
Our aim is to use these forms of reasoning to situate and critique the use of AI tools within processes of qualitative literature review.
Inductive Reasoning
Induction involves exploring data to develop understandings about meaning (Morse, 1992). Units of analysis can be small, and through bringing these units together from the bottom up (Gilgun, 2005; 2019) to build frames and patterns, meanings in data are identified and refined during open-ended and cyclical processes (Mukumbang et al., 2018). Inductive coding is often described as the first step of qualitative analysis, an exploration of data for their ‘essence’ – which of course are dependent on the contexts they are developed in, from and through (St Pierre, 2019). The conditions that structure essence, however, are not used as a reference point for contextualising data within induction, because this form of reasoning seeks to elucidate the meaning structures within a specified dataset alone (Gilgun, 2019). It is for this lack of situatedness within concurrent or plural structures of meaning that inductive analysis is thought insufficient for novel theory generation (Timmermans & Tavory, 2012). Yet it is an important step in building rich understandings of qualitative data as starting points for developing and synthesising theory.
We propose that AI tools will be useful for inductive processes within literature reviews, in seeking to find and develop ideas from cohesive literature blocks about specific topics. Semantic Scholar and Inciteful, for example, are capable of retrieving articles via topic keyword searches that are researcher-driven. However, these tools are optimised when using bibliometric data to identify patterns, rather than assessing the nuance of article content, evidence or ideas as frames of meaning. Open Knowledge Map’s focus on mapping the top 100 papers returned from a keyword search will provide a compressed rendering of how key ideas are structured within a particular block of literature, ranking articles identified as ‘most relevant’ (although the metrics used to rank relevance may not be clear to the researcher). SciteAI describes its capacity to evaluate assenting or dissenting evidence within an article (using large language models to ‘think analytically’ about the content), meaning it may be able to perform inductive analysis that a researcher would normally do on full read of an article.
The materials these tools use to retrieve and synthesise literature to capture its ‘essence’ are therefore a mixture of semantic and metadata; and it will be critical that researchers understand when the content of an article is being scanned for meaning (to the extent it is decipherable by AI), as opposed to only citation or bibliometric data. While bibliometric data might include keywords, even these can exclude relevant content from searches because of slightly different keywords, like ‘health inequality’ as opposed to ‘health disparity’. Directing AI tools to do these searches and curate literature (as data) will be useful in saving researcher time, however the capabilities of AI tools for developing pictures of meaning will be constrained by the previous frames of meaning they have access to. AI-assisted searching may make some novel patterns identifiable that were otherwise beyond researcher scope if searched manually (for example, all articles coming from a particular region) – opening these up for inductive analysis by the researcher (Ngwenyama & Rowe, 2024).
Deductive Reasoning
Deduction involves thinking with pre-defined categories in mind. It plays an important role in theory development by examining data in relation to an existing and pre-selected theoretical framework (Meyer & Lunnay, 2014). Any data falling out of scope of this framework are discarded from deductive inferential processes and used for other forms of inference. When combined with induction, deductive analysis can be useful for theory application, testing and refinement as well as critiquing prior knowledge (Fife & Gossner, 2024) by providing a systematic way to empirically examine existing and emergent theories together and in relation to each other (Gilgun, 2014, 2019). Deductive analysis can operate from an interpretivist standpoint, in that thinking deductively aims to elucidate tension points where theories don’t fit – like negative cases (Fife & Gossner, 2024), particularly useful in testing out novel theory (Kennedy, 2017) during abduction (Timmermans & Tavory, 2012). Deduction is typically seen as a positivist method because it relies on applying pre-defined concepts to explore data.
AI tools will be valuable for deductive reasoning within the literature review for researchers exploring particular combinations of concepts traceable within and across the semantic and citation planes. VosViewer and Connected Papers, for example, will produce publication, discipline, or location clusters based on details that the researcher enters. Citation Tree and Litmaps will produce reports of how similar or different research papers are (based on their ‘distance’ within the citation environment) which can be tweaked by the researcher; including by time period. Researchers will be able to use these platforms to search out and explore connections that they are interested in elucidating. This might include a particular research design (which is possible for AI tools to index and retrieve), two content areas that co-occur, or research about a particular topic area from a particular discipline or author.
A limitation to deductively exploring the literature with AI tools is that some variables or information may only be gleaned by manually reading the full text (so analysis may be incomplete), and ‘missing values’ in metadata particularly will mean that many articles would not be retrieved for inclusion. This issue will more severely affect and exclude ‘older’ research (pre-2000s, before metadata were typically expected for publication). AI tools will, however, immensely help researchers work deductively by enabling automatic and easy “refreshing” of literature searches, with tools that can automate content alerts – thereby automatically appraising the literature environment for the entry of new data that conforms to the search preferences outlined by the researcher (such as through Research Rabbit and Litmaps, which send new articles based on existing collections).
Abductive Reasoning
Data which fall out of scope during processes and stages of deductive reasoning can be explored using abduction (Danermark et al., 1997; Meyer & Lunnay, 2014; Timmermans & Tavory, 2012). This type of reasoning involves the introduction of new ideas (Meyer & Lunnay, 2014) and connections between data and context to show how different objects of inquiry may relate to one another in ways that are initially non-evident (Danermark et al., 1997). As such, abduction is concerned with how something might be and why; and relies on interpretation and recontextualization to consider the ‘most likely’ explanation(s) for why something is the way it is (Downward & Mearman, 2007). Abductive inference takes place at the level of meaning (Mukumbang et al., 2018) by bringing together inductive and deductive inference and following the element of ‘surprise’ to explore unexpected ideas in more detail (Timmermans & Tavory, 2012).
In terms of exploring ‘unexpected’ or ‘surprising’ ideas within the literature environment, AI tools will be useful in curating searches of new areas quickly for researchers to support these processes. Bespoke and layered searches that connect different domains will be a significant asset for researchers who wish to think around new ideas and develop theories creatively. Inciteful, for example, can ‘bridge’ different articles together by mapping them through the citation environment – to examine how different ideas, topics and disciplines might intersect and relate – but the researcher needs two different seed papers to start the search. Connected Papers can start with only one seed paper, then use metadata as well as topic content to explore similar papers, which might be useful for canvassing how one research topic has been approached (methodologically, for instance) in ways that might elucidate new reflections on, and directions for, further research.
Both SciteAI and Elicit use generative AI to respond to questions posed by a researcher, even when questions have multiple layers. They ‘read’ and ‘analyse’ content within specific papers to support hypothesis generation, unlike other tools, which complete their searches using pre-defined metadata. These could be useful in helping researchers spark or start new processes of abduction, as well as retrieve existing evidence on obscure linkages more easily. Enabling AI tools to support abductive reasoning, however, will require the researcher to have enough of a sense of the novel connections that they want to explore so that they can build a search which is interpretable by the tools to get started. The typical logic of AI-driven literature searching (based on similarity and cohesion) means that opposing literature (i.e., negative or juxtaposed cases) which might be amenable to introducing new ideas will be largely excluded, which will constrain spontaneous processes of abduction.
Using AI tools to map literature environments using induction and deduction will likely lead to greater iteration and tightening of search concepts, which may help to crystallise researcher thinking earlier in the theory development process – and preserve researcher judgement for more complex forms of reasoning that can be cognitively draining (Saldaña, 2015). This may unearth research that would not have been otherwise found, so could increase the reach of exploration of specific concepts from across disciplines or geographies that might not have been possible via traditional searching methods. Plausibly, this could facilitate better connections between Global North and Global South literatures, although because similar linguistic terms must be used and most AI is trained using English, this poses a hugely limiting factor that will reconstitute the gap between Global North and Global South knowledge environments. It is further plausible that these novel searching capacities will retrieve
Surprise, or ‘unexpectedness,’ is an important feature of abductive inference (Timmermans & Tavory, 2012). AI tools will not experience these feelings nor be able to link them to a human corpus of knowledge about how the world works (driven by lived experience or prior knowledge as such). It follows that thinking with philosophy or
Retroductive Reasoning
Retroduction involves a close attention to contextuality, where circumstances that prefigure research phenomena and concepts are explored – including the contextual factors (Meyer & Lunnay, 2014) or conditions of reality (Jagosh, 2020) in which phenomena cannot or do not exist, or exist in configurations not yet intelligible to AI tools. Retroductive reasoning enables dualisms between induction and deduction to be overcome through synthesis (Saether, 1998), and when brought together with abduction, coheres all three other inferential processes, possibilities and products (Jagosh, 2020; Mukumbang et al., 2018, 2021) to examine why things are (not) the way they are (not). As with thinking abductively, AI tools will support researchers to undertake retroductive inference by examining established knowledge for reasons why a certain research phenomenon might be the way it is – getting right at the heart of qualitative inquiry (Denzin, 2008).
Literature on the existence of theory in a particular area or correlations between objects (which might be meanings or interpretations) can be more expansively searched with the assistance of AI tools, as with abductive processes. Test runs of ChatGPT against human qualitative researchers suggest that AI tools
SciteAI and Elicit, claiming to be the most ‘intelligent’ of these AI tools, will be able to respond to researcher questions in ways which will get ‘smarter’ and ‘smarter’ over time. Already, their generative intelligence supports rudimentary responses to layered questions, which may help to unearth new pockets of literatures or explanations for phenomena that can then be combed over and thoughtfully considered in more detail by researchers. Being able to slice literature on a particular topic by features of research design (provided these are listed in metadata or searchable within-content) may be helpful in understanding the nature of research evidence, to the extent that it predicates certain explanations of research outcomes and the social world. As with the other tools, AI will be able to support processes of retroduction; but only when reflexively applied by the researcher during literature review.
Critically Imagining How AI Might Inflect the Knowledge Systems Entwined with Qualitative Research as a Practice
AI tools can be broadly considered as ‘computational agents that act
(Critical) Human reflection about how AI may lean towards particular data and patterns while occluding others (Morgan, 2023) is essential if AI is to be used productively towards desired objectives (Jungwirth & Haluza, 2023). Of course, human reflection and intelligence are also constructed objects, having been developed over long histories (Gadamer, 1977). The constructed nature of both is therefore critical for understanding (Schwandt, 1999). (Good) Qualitative research appreciates and articulates with these histories via cyclical refinements and immersive encounters that enable deep contemplative juggling of ideas (Chung-Lee & Lapum, 2024), including a push and pull dialectic between theory and methods (Collins & Stockton, 2018). The path for AI to support this requires skilful researcher oversight and engagement.
The propensity to use AI tools during the literature review risks returning ideas that are stagnant and stale (Eakin & Gladstone, 2020); bare-boned (Mykhalovskiy et al., 2018); and sedative of the rich storytelling potentials that unfold novel findings for understanding (and changing) the world (Connelly & Peltzer, 2016). Poor application of AI tools during the literature review will likely have outcomes similar to those of poor qualitative research: being superficial rather than deep (Chung-Lee & Lapum, 2024); concrete and descriptive as opposed to interpretive (Morgan, 2023) or creative (Sandelowski, 2011); and limiting departure from pre-set concepts (Thorne et al., 2004). How can we safeguard and optimise the creative capacity of researchers when they use AI – applying human experiences, like turning a painting upside down or on its side (Chung-Lee & Lapum, 2024), to enable a different perspective?
Plausibly, AI can learn to build its own reflexivity, which might prevent it from engaging too thinly with data and analytical possibilities (Braun & Clarke, 2019). AI is potentially more effective when it is designed to learn as an entity and to program itself autonomously, rather than when simply created through programming (Yunkaporta, 2019). Future potential suggests that AI can stimulate creative thinking as well as serve solely ‘rational’ purposes as part of a creative-possibility perspective – so long as these tools are recruited to ‘think with’ rather than ‘think for’ (Eriksson et al., 2020) and do not become bureaucratic tools that threaten research quality and academic expertise/identity (Chubb et al., 2022). A central concern here is how to navigate the lack of transparency about how AI tools generate their search returns during the literature review (Sallam, 2023), so that the contextualities which underpin knowledge development can be weighed for their impact. The way some of these platforms manipulate language – such as ChatGPT – will alter the possibilities for conceptual understanding, both in and over time (Esmaeilzadeh, 2023).
Advocacy about the commercial nature of scientific findings
Ethical and Reflexive Implications for Integrating AI Tools within Qualitative Literature Reviews
Extending from our critical imagination, we delineate some reflexive responsibilities for qualitative research/ers using AI to explore existing literature:

1. Clearly
2. Rather than stopping at a ‘technical disclosure’ of the use of AI in the process of the literature review, researchers should demonstrate an
3. For interpretivist forms of research, this could include a particular
4. Keep in view during all stages of qualitative research that all
5. Uphold the
6. Embed a situated view of the
The possibilities of any AI must be considered contextual, partial, and fallible, akin to the public and policy perceptions which accompany and enfold it (Cave & Dihal, 2019). These dispositions reinforce the importance of epistemic orientations of qualitative researchers towards openness, understanding and plurality (Schwandt, 1999) rather than singular and objective notions of truth (Crotty, 1998). Proactively developing partnerships and thought networks for how AI can enable qualitative research, particularly during the literature review, will help to develop more shared understandings of the mechanisms of both in our unfolding world. Changes in the roles and responsibilities for reflexivity at collective and systems levels – rather than only in the minds of individual qualitative researchers – will be important for enabling AI tools and qualitative research to co-evolve, as systems of knowledge and research do. We do not presume that researchers are currently using these tools uncritically. Our work showcases several insightful pieces on how AI tools can be used in distinct domains of qualitative thinking and theorisation, and seeks to build on these by cultivating a critical imagination for using AI tools during the literature review: within the terrain(s) of knowledge, power, commerce, and culture that will undoubtedly be shaped by their use.
