Abstract
Keywords
Introduction
The nuanced design and methods involved in empirical qualitative research are key concerns of qualitative researchers across many fields. Yet, the problems that capture our interest and feel most critical to unravel are often sticky, muddy, “wicked” problems that persist despite significant empirical research. Such complex and multifaceted problems lack clear solutions because the problem and potential solutions are deeply interconnected and shaped by evolving contexts. In these contexts, empirical research can often reach a point where further empirical studies move the field forward only incrementally and new findings rehash familiar topics without offering new insights. In these moments, researchers must confront the boundaries of their thinking and explore new ways to approach their topic. This is the primary goal in critical reviews, which form the foundation of the Mixed Critical Bibliometric Review.
Critical reviews are qualitative knowledge synthesis approaches that are well suited to advancing research where a field has become “stuck.” These reviews invite researchers to critically examine a body of literature with the aim of reorienting and identifying new theoretical, conceptual, or methodological approaches (Grant & Booth, 2009; Kahlke et al., 2023a, 2023b). Conducting a critical review typically involves assembling a research team with unique and/or diverse backgrounds, engaging in consultation with others, and applying these varied perspectives to illuminate and address gaps and challenge fixed patterns of thinking about the phenomenon under study. The culmination of this process is the proposal of fresh frameworks, theories, methodologies, or methods to help the field move forward in a new direction (Grant & Booth, 2009). Yet, researchers frequently encounter challenges in knowing how to critically reassess what is already known, and there remains a striking lack of guidance on how to conduct critical reviews in ways that ensure their findings are credible and provide a meaningful path forward.
The immense potential of critical reviews to help a field get ‘unstuck’, as well as their methodological challenges, warrant deliberate attention, as critical reviews tend to exert substantial influence on research uptake relative to other forms of scholarship (Maggio et al., 2020; Norman et al., 2022; Rotgans, 2012), and their prevalence as a research output only appears to be increasing (Grant & Booth, 2009; Maggio et al., 2020). Unlike other knowledge synthesis types that focus on aggregating or interpreting empirical findings from other studies, critical reviews concentrate on synthesizing the underlying assumptions, theories, or methodologies that have become entrenched in research in a field or on a particular topic while shining a light on generative approaches that have been ignored. Thus, critical reviews require bespoke, iterative methods and often necessitate engaging with broad and heterogeneous literatures from other fields that may provide key insights. In our experience, the dazzling array of possibilities can be daunting to parse, and even the most diverse and reflective research teams can be challenged to ‘unthink’ the ways in which a topic has been studied and discussed, often for many decades. Paradoxically, critical reviewers hold inherent perspectives (or biases) from their field and must “strive to generate and consider other possible interpretations” (Eva, 2008) of the literature with little guidance on how to do so.
In response to these challenges, we propose the Mixed Critical Bibliometric Review in this State of the Methods article as an approach designed to extend the potential of critical reviews. We believe that researchers can benefit from integrating bibliometric strategies (e.g., publication/citation analysis, author networks, keyword mapping) within critical reviews, using concepts from mixed methods research (MMR), to check their assumptions and extend their data sets and interpretations. This integration can clarify how people, ideas, and research methods have interacted to shape a field, providing a springboard for new ideas and approaches. While qualitative critical reviews can interpret the ideas circulating within a field, bibliometric data can support a robust analysis of the field’s broader structure. The Mixed Critical Bibliometric Review approach is an embedded, qualitatively oriented MMR design that centres qualitative analyses while leveraging bibliometric analyses and visualizations alongside other qualitative critical review methods.
Our Approach to Developing the Mixed Critical Bibliometric Review
As a novel approach, we believe that it is important to clarify the perspectives underpinning the Mixed Critical Bibliometric Review. Our team approaches this work from a ‘Big Q’ Qualitative research perspective. Alongside Braun and Clarke (2024), we see Big Q Qualitative approaches as focused on understanding meaning, experience, and context, and view knowledge as socially constructed. Researchers using Big Q approaches embrace subjectivity and reflexivity, acknowledging their role as researchers in shaping the research process and its outcomes (Olmos-Vega et al., 2023). This contrasts with ‘small q’ approaches, which employ qualitative methods within a primarily post-positivist framework. In such approaches, researchers treat words as data to be categorized, coded, and often quantified, aiming for objectivity, reliability, and generalizability (Braun & Clarke, 2024). While both may use similar tools, they are grounded in very different assumptions about what counts as knowledge and how knowledge should be produced. Big Q approaches tend to underpin qualitatively oriented MMR designs, which prioritize qualitative ways of thinking, qualitative data, and/or qualitative analyses (Hesse-Biber, 2010; Mason, 2006; Morse & Cheek, 2014; Poth & Shannon-Baker, 2022), while positioning quantitative data and methods in a supporting role (Morse & Cheek, 2014).
Our collective backgrounds inform this perspective and our proposal of the Mixed Critical Bibliometric Review as a valuable knowledge synthesis approach. RK’s work is grounded in sociocultural theory and informed by ‘Big Q’ perspectives. She is interested in methodological innovation, dexterity, and creativity (Kahlke, 2014; Kahlke et al., 2024) and has advocated for the view that critical reviews serve as a Qualitative approach to knowledge synthesis that can help researchers address the most enduring problems in a field (Kahlke et al., 2023a; 2023b). CP is a leading expert in advancing MMR for the purpose of tackling complex research problems that remain opaque despite significant qualitative and quantitative research efforts (Poth, 2018, 2022), particularly through qualitatively oriented MMR (Poth & Shannon-Baker, 2022). JY is an information scientist and academic librarian focused on leveraging bibliometric visualization tools to develop nuanced perspectives on research impact that often challenge simplistic approaches to citation counting. LM trained as an Information Scientist with expertise in both qualitative and quantitative empirical and knowledge synthesis approaches. Her work promotes bibliometrics as a key tool for mapping the contribution and influence of research in health professions education (Maggio et al., 2020, 2022). We are reviewers (all) and editors (CP and LM) who have handled many critical reviews (all) and bibliometric analyses (LM). Our perspective is informed by the challenges we see researchers face in the review processes, as well as our own experiences as researchers.
Together, we strongly believe in the value of Qualitative and Qualitatively oriented (or qualitatively driven) MMR. Such approaches allow researchers to move between the deep, contextually grounded analyses enabled by qualitative methods and the “big picture” available through quantitative methods. We see the move toward qualitatively oriented mixed methods knowledge syntheses as a key innovation. This approach will allow researchers to capitalize on the nuance available when embracing researcher subjectivity through Big Q approaches, while simultaneously leveraging the power of bibliometric data to map literature on a broader scale. In doing so, it opens the possibility of using visualizations to challenge assumptions about how ideas are taken up and circulated.
What are Mixed Critical Bibliometric Review’s Defining Characteristics?
We distinguish the Mixed Critical Bibliometric Review within its embedded MMR design by two defining characteristics: the prioritization of qualitative assumptions and analyses, and the intentional integration of qualitative and bibliometric methods.
In qualitatively oriented approaches to MMR, qualitative-prioritized integration shapes research questions, data generation processes, and interpretations (Poth & Shannon-Baker, 2022). In a Mixed Critical Bibliometric Review, this qualitative prioritization is grounded in Big Q assumptions and inductive, iterative procedures. Bibliometric patterns are continually mapped and visualized, revisited, and reinterpreted in concert with qualitative findings to generate novel insights. These insights might confirm, challenge, or expand researchers’ developing interpretations; when interpretations are challenged or expanded, bibliometrics provoke new avenues for qualitative analysis. Thus, Mixed Critical Bibliometric Reviews can offer researchers new tools to achieve a richer and more reflexive synthesis than either method could achieve alone.
The field of MMR offers valuable guidance for implementing a Mixed Critical Bibliometric Review, particularly with respect to the challenge of integration. Integration lies at the heart of MMR, yet for many, it remains challenging to achieve in practice (Fetters & Molina-Azorin, 2017; Vogl, 2019). By intentionally focusing on integration throughout the research process, MMR designs generate novel insights unattainable through qualitative or quantitative approaches alone. This characteristic sets them apart from multi-methods designs that draw on more than one data source, without seeking to integrate data, analyses, and findings to achieve novel insights (Battista et al., 2025). Drawing on the MMR scholarship provided a foundation for the Mixed Critical Bibliometric Review, making integration intentional and generative of meaningful outcomes. Specifically, MMR embedded designs inform our prioritization of qualitative assumptions and approaches while strategically incorporating bibliometric techniques to extend and deepen insights.
In particular, qualitatively oriented Mixed Critical Bibliometric Reviews offer a unique opportunity to explore patterns and tensions in a field’s way of thinking about a problem. Inductive, qualitative analytic approaches commonly used in critical reviews remain essential for characterizing and appreciating the nuances of a scholarly conversation. However, such approaches face a familiar challenge: researchers can get stuck in their own assumptions. Bibliometrics can help overcome this by enabling researchers to zoom out to appreciate the big picture, integrating and visually displaying data across a broader dataset. Grounded in qualitatively oriented MMR practices, the Mixed Critical Bibliometric Review is positioned to generate novel insights that maintain the flexibility inherent to critical reviews while capitalizing on bibliometric analyses to enhance reflexivity and rigour.
In our view, iterative and concurrent engagement with qualitative and bibliometric analyses is central to a Mixed Critical Bibliometric Review. Researchers move back and forth between these modes of analysis, asking questions about the nature of a literature through qualitative analyses, then using quantitative bibliometric analyses to check their assumptions and explore new directions. Through this iterative process, Mixed Critical Bibliometric Reviews centre integration of qualitative and quantitative methods, since researchers rely on both approaches working in concert to develop their interpretations. In the MMR methodological literature, this approach aligns with qualitatively oriented, inductive approaches such as “follow the thread” (Moran-Ellis et al., 2006) and other iterative approaches (e.g. Teddlie et al., 2008), where researchers continually adjust their questions, data, and analyses in response to new insights.
Throughout this paper, we draw both on examples of critical reviews conducted by members of this authorship team and our colleagues in health professions education (HPE) to characterize critical reviews and identify points at which bibliometric analyses could have provided fresh insights. We also draw on a Mixed Critical Bibliometric Review that we (RK and JY) are currently conducting. This work began as a critical review exploring how neurodiversity has been taken up in HPE research. In this context, neurodivergence refers to people who have neurological differences (e.g., autism, attention deficit/hyperactivity disorder, dyslexia) compared to learners who are seen as “neurotypical.” Neurodiversity refers to neurological variation within populations (Shaw et al., 2024). In this review, we have aimed to uncover the perspectives and assumptions informing research on neurodivergence, mapping how articles and authors frame neurodiversity, and how that framing might contribute to supportive or pathologizing educational cultures. We are strongly invested in the neurodiversity paradigm, which frames neurodivergence as a natural and important variation in human brains (Shaw et al., 2024). Supporting neurodiversity rather than pathologizing neurodivergence creates a path toward socially just and robust systems that can appreciate and support different ways of thinking. However, during the initial stages of our critical review, we struggled to characterize the complexity of the conversation on neurodivergence in HPE, given the diversity of topics and evolving perspectives. Thus, we began engaging with bibliometrics to uncover patterns and challenge our interpretations, which contributed to the development of the Mixed Critical Bibliometric Review approach.
We continue our description of the Mixed Critical Bibliometric Review by outlining critical review and bibliometric approaches, followed by three approaches to integrating bibliometrics within a Mixed Critical Bibliometric Review. Our integration discussion focuses on two MMR tools that can support researchers in integrating qualitative and quantitative data, and in representing the resulting meta-inferences, a term used to describe novel insights gleaned from the synthesis of bibliometric and critical review findings, beyond what either approach could yield alone.
What are the Key Characteristics, Contributions, and Challenges of a Critical Review?
Knowledge synthesis methodology has historically been dominated by quantitative approaches designed to determine the effectiveness of interventions, identify associations, or determine prevalence (e.g. systematic review). However, many types of qualitative knowledge synthesis have been developed and advocated (Flemming & Noyes, 2021; Tong et al., 2012). Many of these approaches still prioritize systematicity and claim to provide a comprehensive overview of the literature in some way (Barnett-Page & Thomas, 2009; Flemming & Noyes, 2021; Greenhalgh et al., 2018), often aggregating or interpreting the empirical findings of primary studies.
Though the purpose of critical reviews has long been clear, their methods rarely are. Like many interpretative synthesis approaches, they are caught in a methodological space between the quest for systematicity and comprehensiveness in systematic reviews (Eva, 2008; Greenhalgh et al., 2018), and the flexibility of narrative or editorial scholarship, which typically does not require explicit methodological reporting and makes no claims to speak beyond the individual writer’s opinion. Thus, many published critical reviews include little methodological detail and when they do, reporting tends to focus on the number of databases or articles reviewed or excluded, while offering little insight into the interpretive reasoning that shaped the analysis. The methods detailed below are constructed based on our experience as researchers, reviewers, and editors, the scant methodological guidance on critical reviews, guidance drawn from closely related knowledge synthesis types, and example critical reviews from our field of HPE.
Searching, Appraisal, and Sampling
Critical reviews do not aim for predefined and comprehensive searching; rather, they use iterative search strategies guided by researchers’ expertise and informed through consultation with others to expand and challenge researchers’ thinking. Articles are appraised for their relevance to the research question and potential to shift the field’s thinking (Eva, 2008; Kahlke et al., 2023b), rather than applying pre-determined quality criteria (Dixon-Woods et al., 2006). Purposeful sampling strategies are used to identify articles most likely to meet the researchers’ aims (Kahlke et al., 2023b), while theoretical sampling is applied to probe new avenues as the iterative search, appraisal, and analysis cycle progresses (Barnett-Page & Thomas, 2009; Dixon-Woods et al., 2006).
Analysis
Clarity in analytic process is likely the least developed component of critical review methodology. To align with the Big Q orientation and aims of critical reviews, analytic approaches are generally inductive and interpretive (Wong et al., 2013). Researchers might therefore benefit from drawing on existing inductive analysis frameworks that align with their aims, such as reflexive thematic analysis (Braun & Clarke, 2022) or Big Q approaches to qualitative content analysis (Hsieh & Shannon, 2005). Drawing on relevant qualitative reporting guidelines (e.g. Braun & Clarke, 2024; O’Brien et al., 2014; Wong et al., 2013) could greatly enhance the reflexive openness (or transparency) and credibility of critical reviews. However, we caution that when “borrowing” methodological guidance from other spaces, alignment is key (Kahlke, 2014; Varpio et al., 2022). In our own experience, it is easy to fall into the trap of borrowing practices and language that clash with the Big Q assumptions and aims of critical reviews, undermining their credibility (Braun & Clarke, 2024; Greenhalgh et al., 2018; Varpio et al., 2017). To facilitate thoughtful use of specific qualitative methods within any qualitative review, it is critical to involve those with expertise in qualitative data analysis (Tricco, Antony, et al., 2016), as well as information scientists familiar with qualitative and iterative knowledge synthesis approaches (Parker & Sikora, 2022). Regular research team meetings enhance methodological reflexivity, providing a mechanism through which to glean insight from diverse team perspectives (Olmos-Vega et al., 2023).
In the context of the Mixed Critical Bibliometric Review, we argue that reflexivity takes on a particular importance because the goal of a critical review is to challenge a field’s assumptions. Therefore, researchers must engage in reflexive practices throughout their sampling and analytic processes (Olmos-Vega et al., 2023). One strategy is to engage diverse research teams and consult with experts outside the team to challenge and expand the team’s thinking (Kahlke et al., 2023b). However, even expert collaborators have their own assumptions and lacunae. We argue that bibliometric approaches and the visualizations they produce can offer an interpretation-check, spurring new lines of thinking when bibliometric analyses contradict or expand the team’s qualitative interpretations.
What are the Key Characteristics, Contributions, and Limitations of Bibliometrics?
Bibliometrics is the “study of academic publishing that uses statistics to describe trends and to highlight relationships between published works” (Ninkov et al., 2021). By analyzing metrics such as citation and publication counts, article downloads, publication timelines, co-authorship networks, and journal impact factors, scholars can attune to patterns of knowledge production and dissemination within a field, as well as the broader structure and evolution of the field itself. In the context of critical reviews, bibliometrics can provide insights into the structure, growth, and influence of research on a particular topic or field.
Originally developed and used by information scientists, bibliometric methods have been adopted across disciplines to evaluate research performance, monitor emerging trends, identify gaps in the literature, and inform strategic decision-making (Donthu et al., 2021). For example, one research team mapped trends in diet and cancer research by analyzing publication metadata, such as keywords and publication dates (Giles et al., 2023). In another study, researchers used bibliometric analysis of author names to estimate the representation of Black women in scholarly publishing, highlighting disparities in authorship (Seide et al., 2025).
Bibliometric methods are generally categorized as evaluative or relational. Evaluative bibliometrics describe and measure the characteristics and impact of published work using metadata such as publication counts, citation rates, and related citation-based metrics, such as the journal impact factor and H-index. These evaluative methods can be used to identify highly cited papers on a topic, track the growth of research output, or compare productivity across institutions or countries. For example, economics researchers have used evaluative bibliometrics to assess the impact of universities, journals, and researchers (Rousseau & Rousseau, 2021).
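For readers unfamiliar with how such metrics are derived, the h-index mentioned above can be computed directly from a list of per-publication citation counts. The following sketch, written in Python, is purely illustrative: the citation counts are invented, and a real analysis would export counts from a bibliographic database such as Web of Science or Scopus.

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least h
    publications have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # the rank-th paper still has >= rank citations
        else:
            break
    return h

# Invented citation counts for one hypothetical author's publications
print(h_index([10, 8, 5, 4, 3]))  # → 4 (four papers with at least 4 citations)
print(h_index([25, 8, 5, 3, 3]))  # → 3
```

The same ranking logic underlies variants such as the g-index; only the threshold rule changes.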
Relational bibliometrics aim to identify connections among scholarly entities, such as authors, articles, or institutions, based on shared metadata, including keywords, co-authorships, and citations. In this way, relational bibliometrics can help identify collaborative networks, thematic clusters of topics, and communities of scholars. Researchers may wish to explore relationships between publications based on intersections (and disconnections) between topics, disciplines, or methodologies. For example, studies have used author-supplied keywords to explore the evolution of publishing trends within a specific scholarly journal (Pesta et al., 2018). Others have relied on proprietary journal-level classification schemas, like Web of Science research categories, to explore topics like interdisciplinarity in the field of Medical Education (Maggio et al., 2023). While this metadata-driven approach works well for many use cases, some research questions or topics are too complex to address using standardized metadata fields. This can be the case in qualitatively driven knowledge syntheses like critical reviews.
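To illustrate the mechanics of a simple relational analysis, keyword co-occurrence can be tallied from article metadata in a few lines of code. The sketch below assumes hypothetical records with an invented `keywords` field; in practice, keyword metadata would be exported from a bibliographic database, and the resulting pair counts could be fed to a mapping tool for visualization.

```python
from collections import Counter
from itertools import combinations

def keyword_cooccurrence(records):
    """Count how often pairs of author-supplied keywords appear
    together on the same article (a basic relational measure)."""
    pairs = Counter()
    for rec in records:
        # Normalize case and deduplicate keywords within each article
        kws = sorted({k.strip().lower() for k in rec["keywords"]})
        pairs.update(combinations(kws, 2))
    return pairs

# Invented article metadata for illustration
records = [
    {"keywords": ["Neurodiversity", "medical education", "ADHD"]},
    {"keywords": ["neurodiversity", "medical education"]},
    {"keywords": ["ADHD", "assessment"]},
]
print(keyword_cooccurrence(records).most_common(1))
# → [(('medical education', 'neurodiversity'), 2)]
```

Co-authorship networks can be built the same way by substituting author bylines for keywords.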
In these cases, emerging machine-learning approaches can further enhance bibliometric analysis by enabling deeper, more granular classification of publication and citation metadata, rather than relying solely on readily available metadata that may not easily align with the study’s aims. Such techniques have been integrated into bibliometric analyses with varying levels of complexity. Most analyses start with a publication set that covers a topic or research area of interest. For some analyses, traditional database search strategies that rely on the presence or absence of keywords and phrases alone may be insufficient. In these cases, machine learning techniques have been used to further filter publications for inclusion (Bittermann et al., 2025) or to identify discrete topics and concepts addressed within the literature that fall outside of existing metadata (Park et al., 2024). Citation function classification, in turn, can extend our understanding of citation usage beyond simple citation counts by contextualizing the function of citations based on the location and sentiment of the sentences/phrases introducing them (Zhang et al., 2025). Qualitative analyses, including quantification of qualitative findings (turning qualitative data into numerical data), can be used to guide machine learning. Machine learning techniques such as these can introduce an additional level of nuance to bibliometric analyses, capturing the complexity of content and conversations within a field of study–a function well aligned with the goals of critical reviews.
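As a deliberately simplified illustration of the publication-filtering use case, the sketch below trains a minimal naive Bayes text classifier on a handful of invented, hand-labeled titles. This is not the specific approach used in the studies cited above; real projects would rely on mature machine-learning libraries and far larger labeled sets, but the underlying logic of learning inclusion decisions from labeled examples is the same.

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

class NaiveBayes:
    """Minimal multinomial naive Bayes for filtering publications by
    title/abstract text (illustrative only)."""
    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)
        self.label_counts = Counter(labels)
        for text, label in zip(texts, labels):
            self.word_counts[label].update(tokenize(text))
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        total_docs = sum(self.label_counts.values())
        scores = {}
        for label, n_docs in self.label_counts.items():
            score = math.log(n_docs / total_docs)  # class prior
            total_words = sum(self.word_counts[label].values())
            for w in tokenize(text):
                # Laplace smoothing avoids zero probabilities for unseen words
                count = self.word_counts[label][w] + 1
                score += math.log(count / (total_words + len(self.vocab)))
            scores[label] = score
        return max(scores, key=scores.get)

# Tiny, invented training set: titles hand-labeled for inclusion
titles = ["supporting autistic learners in clinical training",
          "neurodiversity and inclusion in medical school",
          "surgical outcomes after hip replacement",
          "drug dosing in renal failure"]
labels = ["include", "include", "exclude", "exclude"]
clf = NaiveBayes().fit(titles, labels)
print(clf.predict("inclusion of autistic medical students"))  # → include
```

With a few hundred labeled examples, the same pattern can screen thousands of candidate records that a keyword search alone would over- or under-select.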
While bibliometric methods can offer a valuable window into scholarly activity, they have limitations (Thelwall et al., 2023). For example, as noted above, citation counts are often used as a proxy for a publication’s quality or impact. However, these citation metrics can be influenced by factors unrelated to scholarly merit, such as self-citation or the popularity of a research topic. Bibliometrics also rests on the belief that a field’s publications represent that field. However, this can be a flawed assumption when scholars share their work in venues that we are less likely to count (e.g., regional journals) or in formats that are not as easily aggregated (e.g., books, policy papers, social media). For researchers interested in redirecting a field, these assumptions about what constitutes impact and the limitations of pre-defined metadata are likely to be significant. Within a Mixed Critical Bibliometric Review, bibliometric analyses are conducted in conjunction with qualitative analyses to develop nuanced bibliometric data and analyses, going beyond simple metrics associated with impact or volume of publications, as illustrated below.
How Can Bibliometrics Support Researchers in Achieving the Aims of Their Critical Review?
Bibliometrics can support critical reviews in various ways, including identification of key research questions, selection of sampling strategies, and open reporting of the rationale behind study decisions. Here, we focus on the important role that bibliometrics can play in helping researchers examine their assumptions and interpretations–a pivotal component of all critical reviews. Researchers must appreciate the nuances of the field and topic they seek to critique if they hope to significantly shift the field’s thinking. And yet, we all have lacunae. Bibliometric approaches can produce visuals to represent how a field is constructed, allowing for a fresh perspective on the scope of the problem. Capable of analyzing much larger publication sets than the highly selective sampling approach typically used in critical reviews, bibliometric visualizations can provide a “bird’s-eye” or alternative view of a body of literature, offering a different way of looking at the overarching topics, research networks, and citation patterns driving knowledge forward. In this way, bibliometric analyses can offer empirical data that confirms, calls into question, or extends interpretations gleaned from qualitative analyses in a critical review.
In our experience, critical reviews often seek to characterize how a topic has been constructed in terms of: (1) temporality (e.g. how researchers’ thinking about the topic or field has changed over time), (2) relationships between ideas or people (e.g. researchers within one area may or may not take up the ideas of others), or (3) impact (e.g. some ideas have been taken up more than others). Bibliometric approaches are uniquely positioned to support researchers in checking their qualitatively derived interpretations and assumptions in each of these areas. In this section, we discuss strategies and provide examples from HPE, including our (RK and JY) neurodiversity critical review, to elucidate three approaches to integrating bibliometric analyses within a Mixed Critical Bibliometric Review.
Temporality
Critical reviews commonly include statements about change in a field, often involving claims that a field’s research on a topic was once robust, but has stagnated (e.g. Ilgen et al., 2019; Monteiro et al., 2023), or that researchers have not discarded assumptions that are disproven or no longer serve (Eva & Regehr, 2005; Monteiro et al., 2023). The idea of a field being “stuck” in its approach to a certain topic is arguably central to critical reviews. A classic example from our field comes from Eva and Regehr (2005), who observed that researchers in HPE had long drawn on approaches from cognitive psychology to explore physicians’ capacity for global self-assessment, arguing the importance of self-assessment for maintenance of professional competence. These research efforts continued despite a preponderance of evidence pointing to one conclusion–people are generally poor at producing accurate global self-assessments of their knowledge and skills. Eva and Regehr proposed that the field had been too hung up on global (am I good at this?) and summative (did I do a good job?) self-assessment, which rarely reflects practice. In their critical review, they urged the field to consider a variety of approaches drawn from other fields, and their own research evolved toward appreciating self-monitoring, or the ability to make in-the-moment assessments of one’s own performance (e.g. do I need help here?) (Eva & Regehr, 2007, 2011; Moulton et al., 2007).
However, the qualitative methods used in critical reviews rarely allow for a robust approach to characterizing this change over time. Here, bibliometrics can be invaluable in both checking this assumption and demonstrating these changes. For example, applied to Eva and Regehr’s review, bibliometric mapping could chart research output on self-assessment over time, revealing growth or stagnation, while their qualitative analyses detailed how different conceptual framings were deployed–or not–across the literature.
In our own critical review on neurodiversity in HPE, we encountered a similar challenge. Our review data included a scoping review claiming that the conversation around neurodivergence had shifted toward a neurodiversity paradigm (Shaw et al., 2024), rather than a medical or deficit-oriented approach (Gray et al., 2025), in recent years. This claim did not align with our team’s interpretations, prompting us to test both our assumptions and those of Gray et al. We conducted an interpretive (conventional) qualitative content analysis (Hsieh & Shannon, 2005), coding each article for its perspective on neurodivergence. For example, deficit model articles were often more concerned about the negative impacts of learners’ neurodivergence on educational or other outcomes, while articles taking up the neurodiversity paradigm focused on neurodivergent learners’ experiences, or how systems impact their participation in HPE. We then mapped the uptake of the neurodiversity paradigm over time, finding that articles coming from a neurodiversity paradigm did, in fact, increase–as did deficit-oriented articles–but that much of this literature had been produced within a limited number of research teams, with very limited conversation about neurodivergence in general prior to 2017. This prompted us to return to our qualitative content analysis to bring nuance to how researchers characterize and deploy the “neurodiversity paradigm,” while simultaneously engaging with bibliometric analyses to expand our understanding of the relationships between researchers and ideas about neurodivergence in HPE research.
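Mechanically, this mapping step amounts to tallying qualitatively coded articles by year and paradigm. The sketch below uses invented years and codes purely to show the structure of the analysis; in a real review, the paradigm labels would come from the team’s qualitative content analysis.

```python
from collections import Counter

def paradigm_trend(coded_articles):
    """Tally coded articles by (year, paradigm) so that the uptake
    of each paradigm can be charted over time."""
    return Counter((a["year"], a["paradigm"]) for a in coded_articles)

# Invented output of a qualitative content analysis: each article
# carries a publication year and the paradigm code assigned to it
coded = [
    {"year": 2016, "paradigm": "deficit"},
    {"year": 2019, "paradigm": "neurodiversity"},
    {"year": 2021, "paradigm": "deficit"},
    {"year": 2021, "paradigm": "neurodiversity"},
    {"year": 2021, "paradigm": "neurodiversity"},
]
trend = paradigm_trend(coded)
print(trend[(2021, "neurodiversity")])  # → 2
```

The resulting counts are trivially plotted as per-year lines, making growth, stagnation, or divergence between paradigms visible at a glance.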
Relationships
A launching point for many critical reviews is the assertion that particular research perspectives have not been considered. To make these claims, researchers often have significant expertise in their field and are deeply embedded in these conversations. While these expertise-informed perspectives allow for a broad view, they can also lead to significant lacunae, particularly because research conversations can be very insular.
Here, bibliometric mapping can support researchers in identifying and challenging their lacunae. Building on the example of Eva and Regehr (2005), their argument that medical educators have historically leaned on cognitive psychology to understand self-assessment could be examined through bibliometric analysis. By mapping the disciplinary origins of cited sources (e.g., journal subject categories), researchers can specify which literatures informed the work, to what degree, or at what timepoints. This process of interpretation-checking can offer simple confirmation or disconfirmation, or can be used to nuance arguments about a research conversation.
Beyond exploring relationships among topics, bibliometrics can also illuminate author relationships. For example, co-authorship analysis, which maps and quantifies who publishes with whom, can provide insight into whether and to what extent authors across fields are engaging with one another or working in silos. Researchers have used bibliometrics to explore the well-documented and disproportionate dominance of authors based in the Global North (Castro Torres & Alburez-Gutierrez, 2022; Collyer, 2018; Naidu et al., 2024), deploying bibliometric analysis of article metadata (e.g., author affiliations, article bylines) to characterize the presence of Global South authors, including co-author relationships, to understand whose voices are included and whose are missing (Maggio et al., 2023; Wondimagegn et al., 2023). If paired with qualitative analyses, these techniques can powerfully capture both the structure of authorship networks and the perspectives those networks amplify or exclude.
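At its core, co-authorship analysis reduces to counting how often unordered author pairs appear together on article bylines; network tools then visualize those counts. The sketch below shows that counting step under stated assumptions: the bylines and author names are entirely hypothetical, and real analyses would harvest this metadata from a bibliographic database before handing the pair counts to a mapping tool.

```python
from collections import Counter
from itertools import combinations

# Hypothetical article bylines: each entry is the author list of one paper.
# Names are invented placeholders for illustration.
bylines = [
    ["Alia", "Ben", "Chen"],
    ["Alia", "Ben"],
    ["Dev", "Chen"],
]

def coauthor_pairs(articles):
    """Count how often each unordered author pair publishes together."""
    pairs = Counter()
    for authors in articles:
        # sorted(set(...)) deduplicates names and fixes a canonical pair order
        for a, b in combinations(sorted(set(authors)), 2):
            pairs[(a, b)] += 1
    return pairs

pairs = coauthor_pairs(bylines)
# Recurring pairs suggest a stable team; pairs counted once are one-off ties,
# and author pairs that never co-occur may indicate siloed conversations.
```

The resulting pair counts are the edge weights of a co-authorship network, which is what reveals whether authors are collaborating across groups or publishing within closed teams.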
In our review on neurodiversity in HPE, we extended our initial mixed bibliometric analysis, which found a significant uptake of the neurodiversity paradigm in recent years. However, we also suspected that these conversations were relatively insular and noted that specific authors frequently appeared on teams taking up a neurodiversity paradigm (Shaw et al., 2023). To explore this line of thinking, we analyzed the citations within the neurodiversity paradigm and confirmed that citation activity was largely driven by a small group of authors. In addition, while neurodiversity perspectives often cited researchers working within the medical and deficit paradigms, the reverse was rarely true. In this way, the bibliometric analysis of citation patterns allowed us to identify a siloing of neurodiversity paradigm-oriented research in HPE. Building on this insight, we conducted a deeper mixed qualitative and bibliometric analysis of the citation patterns present in recent deficit and medical model articles. This ongoing project aims to better understand the literature and to identify opportunities for dialogue across paradigms. Ultimately, our goal is to support a more inclusive and conceptually integrated discourse, one that shifts the field toward a more neurodiversity-affirming conversation that can positively impact educational policy and practice.
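The citation asymmetry described above (neurodiversity-paradigm articles citing deficit and medical work, but rarely the reverse) can be checked by tallying citation flows between qualitatively coded paradigm groups. The following is a minimal sketch under assumed inputs: the corpus, paradigm labels, and reference lists are hypothetical stand-ins, not our review data, and the paradigm codes themselves would come from the qualitative content analysis.

```python
from collections import Counter

# Hypothetical corpus: each article carries its qualitatively assigned
# paradigm code, and its reference list is reduced to the paradigm codes
# of the works it cites. All entries are illustrative.
articles = [
    {"paradigm": "neurodiversity", "cites": ["neurodiversity", "deficit"]},
    {"paradigm": "neurodiversity", "cites": ["neurodiversity", "medical"]},
    {"paradigm": "deficit", "cites": ["deficit", "deficit"]},
    {"paradigm": "medical", "cites": ["medical"]},
]

def cross_paradigm_citations(corpus):
    """Count citations flowing from each citing paradigm to each cited one."""
    flows = Counter()
    for art in corpus:
        for cited in art["cites"]:
            flows[(art["paradigm"], cited)] += 1
    return flows

flows = cross_paradigm_citations(articles)
# In this toy corpus, neurodiversity articles cite deficit and medical work,
# while deficit and medical articles cite no neurodiversity-paradigm work:
# the one-directional pattern that signals siloing.
```

Comparing `flows[(a, b)]` against `flows[(b, a)]` for each paradigm pair makes the directionality of the conversation explicit, which is the bibliometric complement to the qualitative judgment that a paradigm is siloed.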
Impact
As we note above, critical reviews often claim that certain perspectives have had an outsized impact, or that other perspectives have been neglected. For example, Colliver et al. (2012) argue that some approaches to validity in learner assessment have been overemphasized and poorly operationalized, while others have been neglected, damaging assessment credibility. The boundaries of such assumptions can be empirically examined by assessing the frequency of different assessment approaches in the assessment literature or by analyzing the citation rates of primary texts related to each approach, in order to gauge their overall impact.
In our neurodiversity review, we similarly sought to capture the overall impact of different paradigms, based on our temporal analysis and overall citation counts from work taking up different perspectives. We then turned to explore how researchers were operationalizing the neurodiversity paradigm in their work through content analysis emphasizing latent (implicit) meanings in articles we coded as aligning with the neurodiversity paradigm. We aim to characterize how different authors signal a neurodiversity-affirming approach, and where contradictions or tensions may lie, using bibliometric analysis of their citation patterns. Simultaneously, we will conduct a bibliometric analysis to examine whether and which primary texts related to the neurodiversity paradigm are being cited in these papers. Together, these analyses aim to clarify how influence circulates within this research area and to highlight how conceptual impact can be empirically traced and critically interpreted. While our relational analysis aims to reshape the deficit/medical model conversation, this qualitative exploration and impact analysis aims to further nuance the conversation within neurodiversity paradigm-oriented work.
Strategies for Achieving Iterative Integration in a Mixed Critical Bibliometric Review
In describing the Mixed Critical Bibliometric Review, we draw upon conceptual and practical guidance from the field of MMR to support qualitatively prioritized and iterative integration. Conceptually, integration is recognized as the defining feature of MMR, challenging to achieve yet aided by visualization practices (Guetterman & Fetters, 2022). Over the past decade, two visualization practices have emerged that we see as particularly useful for Mixed Critical Bibliometric Reviews: design diagrams and joint displays.
Design Diagrams
Design diagrams are useful tools for visually representing how qualitative and quantitative components are sequenced, prioritized, and integrated within a study. They make explicit the flow of procedures and highlight points of integration across qualitative and quantitative research strands, helping researchers plan, conduct, and report the study. For Mixed Critical Bibliometric Reviews, design diagrams can shift reporting away from the PRISMA flow diagrams (Eva, 2008; Noyes et al., 2024) that dominate knowledge synthesis reporting (Page et al., 2021; Tong et al., 2012; Tricco et al., 2018). PRISMA flow visuals tend to prioritize comprehensiveness of search and screening protocols as a measure of quality (Eva, 2008; Noyes et al., 2024), rather than openly reporting the analytic processes central to critical reviews and Mixed Critical Bibliometric Reviews. Instead, design diagrams can make explicit how qualitative and bibliometric analyses were iteratively generated and integrated, influencing each other to shape integrated findings. Such diagrams guide research teams in planning for and documenting iterative cycles of integration, creating an audit trail (Tracy, 2010) that allows researchers to revisit the rationale for decisions they made along the way. Downstream, design diagrams also enhance transparency for readers by demonstrating how the qualitatively prioritized design was conducted to produce credible findings. Without a clear design diagram to form the backbone of an audit trail, researchers can lose the thread of how and why they made sampling decisions, generated bibliometric and qualitative data, and produced integrated insights through these decisions.
However, many MMR design diagrams are relatively linear, depicting data as generated in distinct and preplanned sequential phases or data generated in parallel and integrated at specific pre-planned points (Fetters, 2020). Researchers using the Mixed Critical Bibliometric Review will need to be creative in mapping their designs, drawing on examples from emergent and iterative MMR designs (Fetters, 2020). As in Lucero et al. (2018), design diagrams may morph significantly from the original intention, as researchers adjust their methods to make sense of findings that challenge or expand their initial interpretations. In Big Q designs, this is a strength rather than a liability. For our neurodiversity synthesis, we had anticipated a straightforward critical review, depicted in the design diagram in Figure 2; however, the process has evolved to incorporate new contributions from other research teams and our own mixed findings. Our working design diagram, depicted in Figure 3, begins to tell a story of the research decisions we made as we responded to and integrated our analyses, rather than adhering to a preset plan.

Figure 2. The initial design diagram for our neurodiversity in health professions education critical review. Research question: What assumptions about neurodiversity underpin research about neurodivergent health professional learners?

Figure 3. Working design diagram for our mixed critical bibliometric review on neurodivergence in health professions education. Research questions: How has the neurodiversity paradigm been understood and deployed in research about neurodivergent learners in health professions education? What is the character of the conversation among researchers holding different assumptions about neurodivergence?

Joint Displays
Joint displays offer a useful integration strategy, visually representing qualitative and quantitative findings together within a single, integrated visual format. By aligning data side by side or in layered structures, they make points of convergence, divergence, and complementarity more transparent and interpretable. Joint displays are useful not only for representing evidence of integration outcomes but can also be used in planning and to support intentional integration processes (Creamer, 2023; Fetters & Tajima, 2022). Joint displays are especially valuable for Mixed Critical Bibliometric Reviews because they can facilitate deliberate, transparent integration of qualitative and bibliometric findings.
In qualitatively oriented empirical MMR, researchers are challenged to capture rich descriptions of participant perspectives in their joint displays. As a result, joint displays can default to an overrepresentation of quantitative results, which often take up less space, thereby misrepresenting qualitatively oriented MMR: the quotes required to represent qualitative data take significant space and often require additional peritext to contextualize (Shannon-Baker et al., 2024). Mixed Critical Bibliometric Review researchers may need to move away from the table- or matrix-based joint displays that are most common in MMR (Fetters & Tajima, 2022) and find ways to richly convey qualitative findings. Fisher et al. (2022) offer an example of a quadrant-based joint display that includes participant quotes; Shannon-Baker and Hunt-Anderson (2025) portray multiple iterations of this display as the team grappled with the need to emphasize rich description of qualitatively oriented findings, while ensuring that the display of metainferences was still concise and reader-friendly. For a Mixed Critical Bibliometric Review, researchers may not be challenged to represent participant voices in the same way; however, they should consider the “weight” of and relationship between their qualitative versus bibliometric data when configuring a joint display. We feel that doing so is an important aspect of richly representing integrated Mixed Critical Bibliometric Review findings, translating both the deep qualitative insights gleaned and the interpretation-checking contributions of bibliometric analyses.
Conclusions and Future Considerations for Mixed Critical Bibliometric Review
In this State of the Methods article, we introduced the Mixed Critical Bibliometric Review as an innovative approach to knowledge synthesis, combining the interpretive depth of a critical review with the structural insights of bibliometric analysis. We have argued that the power of a critical review lies in researcher reflexivity, particularly in iterative sampling and analytic processes through which researchers continually question their assumptions, interpretations, and methodological choices. This novel knowledge synthesis type offers a new tool to enhance reflexivity by incorporating bibliometric visualization strategies that can help researchers iteratively examine, refine, and expand their interpretations, leading to qualitatively oriented, integrated findings with a richness and reflexivity not possible in either a qualitative critical review or a bibliometric analysis alone. As we illustrated through our neurodiversity review, these practices open new opportunities for dialogue within research teams and support richer, more open accounts of the analytic process.
Quality within a Mixed Critical Bibliometric Review, as in qualitative and qualitatively oriented mixed methods research more broadly, depends not on comprehensiveness, but on the reflexivity, transparency, and credibility of the researchers’ interpretive and analytic processes.
Consistent with the ‘Big Q’ orientation of qualitative inquiry, the Mixed Critical Bibliometric Review embraces methodological ‘fuzziness’ as a generative feature rather than a flaw. By integrating qualitative and bibliometric insights, the Mixed Critical Bibliometric Review enables researchers to move fluidly between in-depth, contextualized analysis and broad, structural mapping, yielding insights that neither approach could achieve in isolation. In doing so, it equips scholars to both challenge entrenched assumptions within their fields and confront their own interpretive lacunae.
