Background
Qualitative data analysis (QDA) is usually seen as laborious because it is not fundamentally mechanical but rather a dynamic, intuitive, and creative process of inductive reasoning. Unlike some quantitative researchers, who can call on statisticians for assistance, most qualitative researchers analyse their own data (Basit, 2003). Because most qualitative researchers must analyse their findings on their own, or at least possess basic analytical knowledge to draw meaningful interpretations and conclusions, novice qualitative researchers are occasionally stranded in this pursuit. Consequently, QDA is arguably the most important but most difficult phase of any qualitative research process, requiring evidence-based synthesis to guide researchers.
Although labour-intensive and time-consuming, partly due to the large amounts of contextually overloaded, subjective, and richly detailed data involved (Ngulube, 2015), the importance of the process cannot be overemphasised. In most cases, qualitative data collection and analysis are interdependent and inseparably intertwined in practice (Grbich, 2012). These phases are, however, considered separately in this paper for the sake of clarity.
Qualitative data are diverse and complex and may comprise transcripts of face-to-face interviews, focus group discussions, and documents (Kuckartz, 2019). Analysing data qualitatively is referred to by many as classifying and interpreting linguistic material to make statements about the implicit and explicit dimensions of meaning-making (Flick, 2013; Jowsey et al., 2021). Bengtsson (2016) argues that decontextualisation, recontextualisation, categorisation, and compilation should be the four main stages of qualitative data analysis and that each stage must be performed many times to maintain the quality and trustworthiness of the analysis. According to Miles and Huberman (1994), three concurrent activities are necessary: (1) data reduction, to simplify and transform raw data; (2) data display, where data are organised and assembled into matrices, graphs, and charts; and (3) conclusion drawing and verification, where provisional conclusions are drawn, interpreted, and tested for credibility.
In addition, Hsieh and Shannon (2005) and Zhang and Wildemuth (2009) hold the opinion that an eight-step method is appropriate: (1) preparing the data, (2) defining the unit of analysis, (3) developing categories and the coding scheme, (4) testing the coding scheme on a text sample, (5) coding the whole text, (6) assessing the consistency of the codes, (7) drawing conclusions from the coded data, and (8) reporting the methods and findings (Zhang & Wildemuth, 2009). Mayring (2014), however, differentiates between inductive and deductive qualitative data analysis and proposes a seven-step method: (1) determining the research question and theoretical background, (2) defining the category and subcategory systems based on previous literature, (3) establishing a coding guideline, (4) reading the whole text to determine preliminary codes, (5) revising the categories and coding guideline where necessary, (6) reworking the data if needed, and (7) analysing and interpreting (Mayring, 2014). For Ritchie et al. (2003), by contrast, qualitative data analysis can be accurately performed in three stages: charting, mapping, and interpreting the data. Braun and Clarke (2006) prefer to name their strategies phases: in analysing qualitative data, they argue, the researcher should familiarise oneself with the data, generate initial codes, search for themes, review themes, define and name themes, and finally produce the report. Braun and Clarke's scheme is similar to those of many authors, including Bengtsson (2016), Mayring (2014), and Zhang and Wildemuth (2009). What remains unclear in these authors' descriptions is their silence on, or limited treatment of, the inductive and deductive logic in analysing qualitative data, with the exception of Mayring (2014). However, the inductive-deductive logic arguably features in every qualitative data analysis and should be considered in contemporary academic discourse.
Apart from considering QDA in stages and strategies, some authors (Bengtsson, 2016; Erlingsson & Brysiewicz, 2017) place a premium on manifest, literal, and latent analysis, which encapsulates these stages mentioned above.
Bengtsson (2016) further advocates inductive and deductive reasoning in qualitative data analysis, where inductive reasoning is considered a procedure for developing inferences from collected data, interlacing new information into concepts with an open mind to identify meaningful phrases that answer the research questions. With deductive reasoning, the researcher searches for predetermined, existing phrases by testing hypotheses or principles (Bengtsson, 2016). However, Armat et al. (2018) argue that framing qualitative data analysis in terms of inductive versus deductive reasoning may be misleading and ambiguous for some qualitative researchers. They note that deductive (directed/framework) qualitative analysis occurs when some perspectives, prior research findings, theories, or conceptual frameworks pertaining to the phenomenon of interest already exist, while inductive (conventional) qualitative analysis is employed when there are few or no prior theories or research findings (Armat et al., 2018). Their argument is that inductive and deductive reasoning in qualitative data analysis are inseparable: all qualitative data are analysed using both, differing only in which one the researcher begins with. The argument for the inseparability of induction and deduction holds some merit and deserves further clarity as to when one should cease induction and begin deduction during analysis. This review therefore aims to explore and challenge the existing literature to spark scholarly debate, which may improve and clarify how qualitative data are analysed using both inductive and deductive reasoning.
Methods
This integrative literature review synthesises past empirical literature on qualitative data analysis to provide a comprehensive understanding for prospective qualitative researchers. Integrative literature reviews have the potential to advance evidence-based science by informing future research and policy initiatives and by allowing the inclusion of diverse methodologies with direct applicability to practice and policy. The methods employed are based on the five stages of Whittemore and Knafl's (2005) framework: problem identification, literature search, data evaluation, data analysis, and presentation. This framework provides a systematic and rigorous approach to conducting integrative literature reviews in health sciences research.
Problem Identification
Graduate students' need to analyse qualitative data and to generate findings for qualitative research studies gave rise to the problem. Students and novice qualitative researchers often employ the terms 'inductive' and 'deductive analysis' as if one could do inductive analysis devoid of prior knowledge or assistance from the literature. The usage of deductive analysis is similarly unclear, leaving questions about emerging themes. These questions and the unmet need for clarity raised the following enquiries: What underlies the epistemological assumptions of inductive and deductive logic in qualitative data analysis? And how might inductive and deductive analysis be articulated for clarity in qualitative data analysis?
Literature Search
The search was conducted using CINAHL, Scopus, and Web of Science, with qualitative data analysis, thematic analysis, qualitative content analysis, inductive qualitative analysis, deductive qualitative analysis, and coding as key search terms; the researcher also performed citation chaining with Google Scholar. The review used primary methodological qualitative research, including books and book chapters, dated from 2000 to 2024 to capture enough data from the databases. Only material published in English that provided an in-depth discussion of qualitative data analysis was included. Some of the included articles were not peer-reviewed; however, peer-reviewed studies were mainly targeted to ensure the integrity of the findings, because such articles have already undergone a good level of scrutiny. The search was extended through consultation with experts to assist in identifying relevant sources essential for this review.
This is shown in the PRISMA flowchart in Figure 1.

Figure 1: PRISMA Flowchart
Summary of Records Reviewed
Data Evaluation
The author evaluated the records for their authenticity, methodological quality, and informational significance. Structured data extraction and a quality appraisal checklist, based on the Critical Appraisal Skills Programme checklist (Long et al., 2020), were applied to each record using Google Forms. We (the author and research assistants) initially selected records based on their titles and then analysed the abstracts of the selected titles to determine their relevance to the study question. Only abstracts relevant to qualitative data analysis were considered for full-text review. Full-text records that did not pass the appraisal process were excluded from the review. Citation chaining, both forward and backward, was then applied to the identified full texts.
Data Analysis
Thematic analysis by Braun and Clarke (2006), using inductive dominant analysis, was adopted. Themes were developed from the data as the author separately analysed the articles in accordance with the six steps of thematic analysis. According to Braun and Clarke (2006), familiarisation, coding, generating themes, reviewing themes, defining and labelling themes, and writing up are the six steps required for a good thematic analysis. First, the author read all included publications. Second, the author highlighted passages from the texts that described content corresponding to the aim of the current paper. Third, the patterns of phrases detected in the literature were labelled and utilised to develop themes. Fourth, the author examined the themes to confirm their precision with respect to the aim of the study and their internal similarity. Fifth, each theme was labelled, and the sixth step, the write-up, concluded the stages. Six themes emerged inductively from the papers included in this study. An iterative process of examining the displayed data was used to distinguish the patterns, themes, and relationships that existed within them using a comparative method, and conclusions were drawn from the findings for discussion.
Results
The themes that emerged after the thematic analysis were (1) planning the research, (2) the qualitative coding fundamentals, (3) inductive and deductive analysis logic, (4) qualitative content analysis, (5) manifest and latent analysis, and (6) thematic analysis.
Planning the Research
The review found that most authors argue that a good qualitative data analysis should start with planning the study (Armat et al., 2018; Bengtsson, 2016; Erlingsson & Brysiewicz, 2017; Harding, 2018; Humble & Radina, 2019). Exploratory studies that do not use models and theories as a guiding framework should plan to use inductive analysis (Armat et al., 2018; Bengtsson, 2016). Where models and theories are used as a framework to thread through the stages of the research chapters, as most institutions require of graduate students, deductive logic as an analysis plan will be helpful (Assarroudi et al., 2018; Mayring, 2014). This means that an inductive analysis plan cannot and should not replace a deductive analysis plan, because one (deductive) is guided by previous literature while the other (inductive) is based on current findings. To facilitate a clear progression from the study's aims and objectives to its conclusions, it is best to have a predetermined pathway to follow (Neale, 2016), because analysis and write-up should feed back into the original study aim or question. Planning is therefore necessary, as it may appear illogical to disregard issues that seemed important prior to data analysis (Neale, 2016); likewise, recalling the research question or narrative while coding will help keep the qualitative researcher focused on important codes (Stuckey, 2015).
The Qualitative Coding Fundamentals
Several authors have reported the use of coding in qualitative data analysis, including Baralt (2011), Basit (2003), Corbin and Strauss (2014), Creswell and Báez (2020), and St. Pierre and Jackson (2014). Coding is a conceptual thinking process in which word descriptors are identified and mapped out to indicate phrases necessary to answer a research question. Because qualitative data are textual, non-numerical, and unstructured, coding plays a crucial role in organising and making sense of such data, as it facilitates reducing, condensing, distilling, grouping, and classifying (Basit, 2003). Coding is universal in the qualitative research process; it is a fundamental aspect of the analytical process and of the ways in which researchers break down their data to make something new. Yet coding is often under-considered in research methods training and literature (Creswell & Báez, 2020; Elliott, 2018). Coding is the process of evaluating qualitative text data by dissecting it to determine its insights and then reassembling it in a meaningful manner (Creswell & Báez, 2020; Elliott, 2018). Elliott (2018) conceptualises coding as a decision-making process that can raise problems, because most qualitative researchers learn coding through trial and error, with guidance from one or two supervisors during their thesis research, under the impression that it is a natural process they discovered. When coding is performed poorly, it undermines the analysis process, leading to inaccurate interpretation and, subsequently, poor conclusions.
According to St. Pierre and Jackson (2014), coding is decontextualising and fragmenting collected data into codable elements. It is the analytical process of organising raw data into themes that assist in interpreting the data, the activity in which the researcher engages, while codes are the names, labels, and symbols used to designate groups of similar items, ideas, or phenomena that the researcher has noticed in his or her data set (Baralt, 2011). Furthermore, according to Stuckey (2015), coding is a laborious and imaginative process that consists of three steps: (1) going over the data and developing a plot, (2) classifying the data into codes, and (3) employing memos for interpretation and clarification. Stuckey avers that using memos to clarify how the researcher is building the codes and interpretations makes the analysis easier to write in the end and more reliable (Stuckey, 2015).
According to Corbin and Strauss (2014), coding involves reviewing all data line by line, identifying key issues or themes (codes), and then attaching segments of text (either original text or summarised notes) to those codes, leading to a hierarchical tree of codes. Corbin and Strauss (2014) recommend initially coding into multiple exploratory open codes, collapsing these into fewer focused codes, and then merging the focused codes into a small number of broader conceptual codes. Meanwhile, Rhodes and Coomber (2010) suggest beginning with broader descriptive codes and then breaking these down into smaller coding units to make comparisons across the data more meaningful. Coding is thus one of the significant steps taken during analysis to organise and make sense of textual data (Basit, 2003), and it is usually considered descriptively or interpretively.
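The open-to-focused-to-conceptual progression described above can be sketched as a small data structure. This is a hypothetical illustration only: the codes and transcript segments below are invented, and real coding is an interpretive, iterative act rather than a lookup.

```python
# Hypothetical sketch of hierarchical (open -> focused -> conceptual) coding.
# All codes and text segments are invented for illustration only.

# Exploratory open codes, each attached to segments of raw text.
open_codes = {
    "worried about cost":   ["I kept thinking about the bill"],
    "fear of diagnosis":    ["I was scared of what they would find"],
    "long waiting times":   ["We sat there for five hours"],
    "rushed consultations": ["The doctor barely looked at me"],
}

# Open codes collapsed into fewer focused codes.
focused_codes = {
    "emotional burden":  ["worried about cost", "fear of diagnosis"],
    "service pressures": ["long waiting times", "rushed consultations"],
}

# Focused codes merged into a small number of broader conceptual codes,
# yielding a hierarchical tree of codes.
conceptual_codes = {"barriers to care": ["emotional burden", "service pressures"]}

def segments_under(concept):
    """Collect every raw-text segment sitting beneath a conceptual code."""
    return [seg
            for focused in conceptual_codes[concept]
            for open_code in focused_codes[focused]
            for seg in open_codes[open_code]]

print(len(segments_under("barriers to care")))  # 4 segments under the tree
```

The tree structure makes the data-reduction effect visible: many segments roll up into a handful of concepts while every segment remains traceable back to the original text.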
Inductive and Deductive Analysis Logic
Whether to have predetermined themes or to allow themes to emerge from the data is a concern reported by many authors (Armat et al., 2018; Bengtsson, 2016; Graneheim et al., 2017; Mayring, 2014; Neale, 2016). Where themes are derived from a preconceived guiding frame, the analysis is usually referred to as deductive (Neale, 2016), etic, or concept-driven, because the analysis has to feed back into the original aim or question. Deductive analysis, according to Armat et al. (2018), is used when some views, previous research findings, theories, or conceptual frameworks regarding the phenomenon of interest exist (Mayring, 2014). The researcher begins the analysis using pre-existing categories, such as an analysis matrix imposed by the theory or by previous research findings (Bengtsson, 2016).
Inductive analysis logic is described as the reasoning process of developing conclusions from collected data by weaving new information into theories, where the researcher examines the data with an open mind in order to ascertain meaningful subjects that will answer a research question (Bengtsson, 2016). Mayring (2014) argues that inductive, emic, data-driven, or conventional analysis is used when previous theories or research findings are lacking or limited as a guide.
These authors (Armat et al., 2018; Mayring, 2014) furthermore hold the opinion that the labels 'inductive' and 'deductive' imply that the researcher selects one, and only one, of the two reasoning modes during the analysis of the text (Armat et al., 2018). In truth, even in inductive analysis the researcher's mind is not entirely a tabula rasa regarding the phenomenon at the beginning of the study. The researcher's research question(s), study aim(s), and/or some pertinent assumptions will practically direct the analysis, which can be considered deductive logic. As the analysis progresses, new categories will emerge inductively, and deductive logic becomes dormant to allow utilisation of inductive logic (Armat et al., 2018). Therefore, assigning labels like 'inductive' or 'deductive logic' may appear misleading, illogical, and ambiguous, because researchers constantly switch between induction and deduction to make meaning in QDA (Harding, 2018). In support of this notion, Graneheim et al. (2017) argue that the best term for this situation is 'abductive logic', because inductive and deductive logic complement each other, with a movement back and forth between them.
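The back-and-forth described above can be caricatured in a short sketch. The codebook, keywords, and transcript segments are entirely hypothetical, and the keyword matching stands in for what is, in practice, interpretive human judgement; the point is only to show deduction going dormant where the frame fails to fit and induction taking over.

```python
# Hypothetical sketch of deductive-dominant coding with an inductive turn.
# The codebook and transcript segments are invented for illustration only.

# Deductive starting point: a codebook derived from a prior framework.
codebook = {
    "access": ["appointment", "distance", "transport"],
    "cost":   ["bill", "afford", "insurance"],
}

segments = [
    "I could not afford the insurance payments",
    "The transport to the clinic takes two hours",
    "Nobody explained anything to me in my own language",
]

coded, uncoded = {}, []
for seg in segments:
    matched = [code for code, keywords in codebook.items()
               if any(k in seg.lower() for k in keywords)]
    if matched:
        coded[seg] = matched   # deductive logic dominant: the frame fits
    else:
        uncoded.append(seg)    # deduction goes dormant for this segment

# Inductive turn: unmatched segments prompt emergent candidate codes, which
# would then be folded back into the codebook (the abductive movement).
emergent = {seg: "candidate code (to be named from the data)" for seg in uncoded}
```

Here two segments are captured deductively while the third survives as an emergent candidate, mirroring the dominant-dormant alternation the reviewed authors describe.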
Qualitative Content Analysis
Previously used in quantitative analysis, content analysis has since become a well-proven method in qualitative health sciences research (Kleinheksel et al., 2020; Kuckartz, 2019). Content analysis is reported by many authors (Cho & Lee, 2014; Devi Prasad, 2019; Elo & Kyngäs, 2008; Erlingsson & Brysiewicz, 2017; Graneheim et al., 2017; Renz et al., 2018). The literature presents several descriptions of qualitative content analysis, including the fact that it was first used in analysing textual materials from hymns, newspaper and magazine articles, political speeches, advertisements, folktales, and riddles (Elo & Kyngäs, 2008).
Although it was employed in academic works, it mostly remained in an exploratory, impressionistic, and less practical role. It is a technique for the subjective interpretation of the content of text data through the systematic classification process of coding and identifying themes or patterns (Devi Prasad, 2019). According to Cho and Lee (2014), it can be a useful tool for analysing almost any type of communication material, such as narrative answers, open-ended survey questions, focus groups, interviews, observations, and printed media.
It methodically condenses an extensive amount of text into a clear, well-organised synthesis of the main findings (Erlingsson & Brysiewicz, 2017). Epistemologically, Graneheim et al. (2017) argue that it is applicable whether knowledge is believed to be innate, acquired, or socially constructed. In its attempt to match the coding frame to the content, it is methodical and adaptable, aids data reduction, and can combine both inductive and deductive logic during analysis (Graneheim et al., 2017).
Qualitative content analysis is a reflective process with no linear progression, as identifying and condensing meaning units, coding, and categorising are not one-time events but an ongoing cycle of coding and classifying followed by a return to the raw data to reflect on the initial analysis (Erlingsson & Brysiewicz, 2017). According to Graneheim et al. (2017), it has four phases: (1) decontextualisation, breaking wholes into parts; (2) recontextualisation, bringing parts together to form wholes; (3) categorisation, isolating and grouping key phrases based on similarities and differences; and (4) compilation, putting it all together to make meaning. In analysing meanings, themes, and patterns that may be apparent or hidden in a given text, the method goes beyond merely counting words or extracting objective information. Instead of cataloguing the text's physical attributes, the emphasis is on meanings and patterns, which enables academics to comprehend social reality in a methodical but subjective way (Devi Prasad, 2019).
Manifest, Literal and Latent Analysis
Manifest analysis addresses what has been said at the surface level, whereas latent analysis considers the deep structure of what was intended (Bengtsson, 2016). For every qualitative content analysis, the researcher must strategically decide between manifest and latent analysis, as reported by several authors (Bengtsson, 2016; Cho & Lee, 2014; Erlingsson & Brysiewicz, 2017; Graneheim et al., 2017; Graneheim & Lundman, 2004; Kleinheksel et al., 2020). Coding in qualitative content analysis can address either the manifest or the latent content meaning of communications. Whereas manifest content allows the researcher to code the visible and surface content of the text, latent content requires the researcher to code the underlying meaning of the text (Graneheim & Lundman, 2004). Often the researcher's desire is to go beyond the manifest content of the text and analyse latent content (Cho & Lee, 2014).
According to Kleinheksel et al. (2020), the researcher should approach the collected data from a neutral, objective perspective but must choose between the manifest and the latent level, a choice that depends on how the data were collected. While a manifest analysis allows the researcher to use the informants' own words and to progress step by step through each identified category and theme, the researcher remains mindful of the need to refer to the original text, which allows a tighter adherence to the original contexts and meanings (Kleinheksel et al., 2020). Manifest analysis is defined as describing what is occurring on the surface, what is literally present, and ensuring that the researcher stays close to the text (Graneheim & Lundman, 2004).
Without the requirement to ascertain purpose or uncover hidden meaning, manifest analysis focuses on data that are readily apparent, allowing researchers and coders to support their analyses; it requires little training to perform, with the goal of finding targets in the text that are simple to observe. Manifest content, and interpretations of it, are described as staying close to the text. In contrast, a latent analysis requires the researcher to be immersed in the data to identify hidden meanings in the text, choosing appropriate meaning units in each category or theme. It is described as distant from the text but still close to the participants' experiences: latent content consists of interpretations of the underlying meaning between the lines of the text (Graneheim et al., 2017).
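The manifest-latent contrast can be made concrete with a toy example. The utterance and the latent reading below are invented for illustration: the manifest level is shown as something literally countable on the surface of the text, while the latent level is deliberately left as a researcher-supplied annotation, since it cannot be computed from the words alone.

```python
# Hypothetical contrast between manifest and latent coding.
# The utterance and the latent interpretation are invented for illustration.
from collections import Counter

text = "I am fine. Really, I am fine. Everything is fine."

# Manifest level: visible surface content, simple to observe and count.
manifest = Counter(word.strip(".,").lower() for word in text.split())

# Latent level: the researcher's reading of meaning 'between the lines';
# it is interpretive and cannot be derived mechanically from word counts.
latent = "the repetition of 'fine' may signal reassurance-seeking or denial"

print(manifest["fine"])  # prints 3: the word appears three times on the surface
```

The asymmetry is the point: the manifest count is reproducible by any coder with little training, whereas the latent annotation requires immersion in the data and remains one defensible interpretation among several.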
Thematic Analysis
Most widely reported under this theme was the work of Braun and Clarke (2006, 2019). Others included Christou (2022), Clarke and Braun (2017), Guest et al. (2012), Harding (2018), Harding and Whitehead (2013), and Riger and Sigurvinsdottir (2016). Thematic analysis is an extremely valuable analytic tool for qualitative studies that, if done properly, does not fail to provide insights into the phenomenon under investigation (Christou, 2022). According to Clarke and Braun (2017), thematic analysis is a method for identifying, analysing, and interpreting patterns of meaning within qualitative data and is among the standard qualitative analytic approaches. It can be applied across a range of theoretical frameworks and research paradigms but is not normally a theory-bound tool (Clarke & Braun, 2017).
For Guest et al. (2012), thematic analyses, as in grounded theory, require more involvement and interpretation from the researcher, who must move beyond counting explicit words and phrases to identifying and describing both implicit and explicit ideas within the data. Codes are typically developed to represent the identified phrases and are applied or linked to raw data as summary markers for later analysis. Such analyses may or may not include comparing code frequencies, identifying code co-occurrence, and graphically displaying relationships between codes within the data set (Guest et al., 2012). Christou (2022), on the other hand, argues that thematic analysis is poorly demarcated yet widely used. The confusion in thematic analysis includes the mistaking of summaries of data domains or topics for fully realised themes (Braun & Clarke, 2019), which may explain the perceived poor demarcation. Researchers should note that, whichever approach to qualitative data analysis is adopted, the analysis procedure must be aligned with the data that have been gathered and with the assumptions of the research approach. Although all forms of qualitative data analysis involve interpretation, the researcher should acknowledge the possibility of alternative interpretations (Harding, 2018; Harding & Whitehead, 2013).
Braun and Clarke (2006, 2019) and Clarke and Braun (2017) proposed six phases in thematic analysis that provide a clear view of the steps to consider during thematic data analysis. According to Braun and Clarke (2006), the researcher should become familiar with the data, generate initial codes, search for themes within the codes, review the themes to build consistency, define and name the themes, and finally produce the report.
Concluding Discussion
The objective of this review was to provide a harmonised literature review on qualitative data analysis for qualitative researchers in the health sciences, clarifying the use of inductive and deductive analysis logic. Several authors espouse scientifically sound ideas that are worth noting and considering during qualitative data analysis, notably Braun and Clarke (2006), Corbin and Strauss (2014), Graneheim and Lundman (2004), Guest et al. (2012), and Miles and Huberman (1994). Most authors agreed that all qualitative data analysis runs from planning through to interpretation, arguing that poorly planned and inadequately framed objectives will affect the research questions and, in essence, influence data collection and analysis (Armat et al., 2018; Bengtsson, 2016; Erlingsson & Brysiewicz, 2017; Harding, 2018; Humble & Radina, 2019).
Some studies proposed coding as the starting point of qualitative data analysis (Baralt, 2011; Basit, 2003; Elliott, 2018). This confirms the notion that some qualitative researchers equate qualitative data analysis with qualitative data coding and teach analysis as coding because it is teachable (St. Pierre & Jackson, 2014). Although coding is intertwined with analysis and considered inseparable from it, one can clearly argue that different lenses and expertise are required to complete each successfully. The author agrees that coding, whether performed in flat or hierarchical form, is a committed step that facilitates qualitative data analysis, without which the analysis might not proceed. However, the author argues that coding must not be equated with analysis, as it is just one step in the analytical process.
Data analysis can be considered a priori, etic (outsider, researcher-driven), concept-driven, or deductive, where a preconceived idea serves as a framework for the analysis (Armat et al., 2018; Bengtsson, 2016; Graneheim et al., 2017; Mayring, 2014; Neale, 2016). This indicates that when theories and conceptual models serve as frameworks for a study, the preferred analysis logic should be deductive dominant analysis (DDA): the researcher possesses a predetermined notion of the themes to look for in the transcript before commencing the analysis.
It is also possible to perform qualitative analysis as data-driven, emic (insider, participant-driven), or inductive, where the researcher has no preconceived framework as a guide (Armat et al., 2018; Bengtsson, 2016; Graneheim et al., 2017; Mayring, 2014; Neale, 2016; Stuckey, 2015). Inductive logic analysis is proposed in research contexts where conceptual models and frameworks are typically discouraged. The researcher in this context would lack explicit predetermined notions before the analysis and would therefore benefit from inductive dominant analysis (IDA). The generation of themes will be emic, participant-based, or data-driven, with key examples being grounded theory designs and exploratory qualitative investigations, as these often proceed without frameworks.
However, careful consideration indicates that no qualitative researcher can successfully perform either inductive or deductive analysis without moving back and forth between the two. The argument that 'abduction' would be a good term, as presented by Graneheim et al. (2017), holds some merit, and the current review supports it. Similarly, Armat et al. (2018), Harding (2018), and Mayring (2014) argue that labelling data analysis as inductive or deductive may appear misleading, illogical, and ambiguous to novice researchers.
Proposed Inductive-Deductive Dominant-Dormant Analysis Logic Terminologies
Experience suggests that qualitative data analysis is a daunting and time-consuming endeavour that requires diligence and creativity to succeed. Meticulous planning throughout the process is imperative, as an unsubstantiated premise will lead to a false conclusion. The conclusion, therefore, is to adopt the terms IDA or DdA (inductive dominant, deductive dormant analysis) and DDA or IdA (deductive dominant, inductive dormant analysis) for inductive-led and deductive-led logic, respectively. Qualitative researchers are not tabula rasa when they employ inductive logic, and they also allow emerging themes when they employ deductive logic, making it impossible to choose one logic conclusively during qualitative data analysis. Considering one logic as dominant and the other as dormant may therefore provide clarity during qualitative data analysis, as the researcher is allowed to move back and forth between the two.
Limitation and Reflexivity
I acknowledge that my academic background and practical experience in qualitative research influenced how I approached, selected, and interpreted the literature. My quest to understand the methodological logic behind qualitative data analysis, particularly the integration of inductive and deductive logics, guided the formulation of the review’s aims and analysis focus. I approached the review with a constructivist lens, recognising that knowledge in qualitative research is contextually shaped and interpretive. While striving for objectivity in the selection and appraisal of studies, I am aware that my interpretive choices, such as identifying themes, categorising evidence, and emphasising certain perspectives, were shaped by my professional familiarity with thematic analysis and nursing research. To enhance transparency and minimise bias, I used a structured and documented process for article inclusion, critical appraisal, and thematic synthesis. Throughout the process, reflexivity was maintained as a continuous, iterative practice to ensure the integrity and trustworthiness of the review findings.
