Abstract
Introduction
The rapid growth of mixed methods research has reinvigorated discussions surrounding why (and how) mixing quantitative and qualitative approaches should be done. Debates started in the mid-20th century and focused on the tensions between stand-alone quantitative and qualitative approaches (see Becker & Geer, 1957; Trow, 1957). Today, mixed methods research has created a “booming field of methodological and theoretical discussions” (Flick, 2017, p. 46) surrounding the qualitative/quantitative dichotomy (Glassner & Moreno, 2013), difficulties in publishing (Mertens, 2011), and method integration (Mason, 2006; Mertens, 2014; Sligo, Nairn, & McGee, 2017). Despite this increased popularity, there is a relative lack of research examining how core design decisions, such as method sequence and dominance, relate to one another.
Through an examination of how mixed methods are typically understood, Creswell and Plano Clark (2018) write about approaches to classifying mixed methods research design (see also the “five major questions” from Bryman, 2006). They suggest there are four major features that help us understand the decisions and characteristics of mixed methods: purpose (or intent) for mixing, sequencing of qualitative and quantitative strands, priority (dominance) of each method, and level of interaction between each strand. This article concentrates on the “two main factors” of sequence and dominance (Molina-Azorín & Lopez-Gamero, 2016; Morgan, 1998; Morse, 1991). Sequence relates to questions of method order, the most basic being whether methods are implemented simultaneously or one after the other (Morgan, 2013). Dominance relates to emphasis, or which method is more central to the paper (Creswell & Plano Clark, 2011). While both of these ideas have received significant consideration in the literature, we examine how the two may be interrelated—an idea that has received little attention. We do so through the analysis of a single literature—the social acceptance of wind energy. In doing so, we use the literature as a case study—creating a moment to ponder the broader implications of sequence. More specifically, we investigate whether qualitative dominance is more likely when qualitative methods are deployed at the beginning of a study, at the end, or in close integration with quantitative methods (Bryman, 2006; Denzin, 2010).
Dominance in Mixed Methods Research
Dominance has been a central point of conversation in the study of mixed methods research. The term is somewhat synonymous with other terms, including priority, weighting, emphasis, and status, and will be treated as such for the purposes of the analysis here. Others have preferred to think of this idea in terms of qualitative-driven and quantitative-driven research (S. Hesse-Biber, 2010b; S. N. Hesse-Biber, Rodriguez, & Frost, 2015; Mason, 2006). Although methodological balance is not a requirement of mixed methods per se, there is value in what Creamer (2018) calls meaningful interaction and representation—especially when researchers use the term “mixed methods.” At the very least, studying dominance may provide a useful point of reflection, since various processes (e.g., team research culture, editorial procedures) may lead to unanticipated emphasis of one method or the other.
Much of the discussion on method dominance has originated from the concern that qualitative methods are typically marginalized in mixed methods research (Bryman, 2007; Giddings, 2006; J. C. Greene, Caracelli, & Graham, 1989; Niglas, 2004). S. Hesse-Biber (2010b) has highlighted the
Beyond these “qualitative communities,” there is fairly strong support for the idea that qualitative methods generally get subjugated in mixed methods research. Reviews of different literatures in the social sciences (Creswell, Fetters, & Ivankova, 2004; Harrison & Reilly, 2011; McManamny, Sheen, Boyd, & Jennings, 2015; O’Cathain, Murphy, & Nicholl, 2007; Plano Clark, Huddleston-Casas, Churchill, Green, & Garrett, 2008) have found that qualitative methods and/or findings are usually not given priority. Some have even defined mixed methods as research that contains one complete method alongside “one or more…supplementary components” (Morse & Niehaus, 2009, p. 9). Indeed, through a recent analysis of empirical mixed methods research, only 22% were assessed to have equal dominance (Creamer, 2018).
Of course, this trend of quantitative dominance in mixed methods is not universal. Indeed, others in this area, including Creswell and Plano Clark (2011), point out that qualitative methods may well dominate in mixed methods studies (see also Creswell, Shope, Plano Clark, & Green, 2006; Mason, 2006). There is also a more recent paper from Archibald, Radil, Zhang, and Hanson (2015) which shows qualitative results were prioritized in 86% of mixed methods papers, although that sample was drawn exclusively from qualitative journals.
While method dominance has long been an important object of study (Creswell & Plano Clark, 2011; Hanson, Creswell, Clark, Petska, & Creswell, 2005; Leech & Onwuegbuzie, 2009), the latest influential text by Creswell and Plano Clark (2018) suggests it is an idea that some are shifting their focus away from in favor of a study’s methodological intent.
Combining these ideas, we study dominance here to spur what may be a waning discussion, increase our understanding of the ways to look meaningfully at the concept, and make it less confusing for others going forward. Emphasizing one method over the other has mostly been thought to be a conscious choice of the researcher or the result of pragmatic conditions such as researcher expertise, the scope of literature grounding the study, publication timelines, or the audience for the study (Arnon & Reichel, 2009; Creswell, Plano Clark, Gutmann, & Hanson, 2003; Shannon-Baker, 2016). Here, we argue that there may be other processes at work. In this case, method sequence may also be playing a role.
Sequence in Mixed Methods Research
The question of method sequence is detailed by Morse (1991) who asks the simple question: Are the data collected simultaneously or sequentially? Although some researchers have since examined more complex issues surrounding timing in mixed methods research through systematic reviews (Cameron, 2009; Palinkas, Horwitz, Chamberlain, Hurlburt, & Landsverk, 2011) or methodological thought pieces (Guest, 2013; Leech & Onwuegbuzie, 2009), the “why” of sequence is more often the focus.
Driven by the variety of classification schemes (Hanson et al., 2005), some have suggested that there are up to 40 mixed methods designs (Ivankova, Creswell, & Stick, 2006; Plano Clark & Ivankova, 2015). Yet, for the purpose of this study, including its focus on method sequence, we narrow the scope considerably. Using a combined set of criteria outlined by Creswell and Plano Clark (2018) and shaped by Holstein (2014) and Creamer (2018), we use four families of mixed methods design. Although the basic tenets and most of the terminology are taken from Creswell and Plano Clark (2018), we draw upon Holstein’s (2014) categorization largely because she groups research designs based on method sequence. Below, we describe the four designs (exploratory sequential, explanatory sequential, convergent, and fully integrated), which provide important reference points in our analysis.
Sequential designs
The most important feature of sequential designs is the use of quantitative and qualitative methods one after the other. Most often, findings from the first method feed into the design of the second (Teddlie & Tashakkori, 2006). In some but not all cases of qualitative followed by quantitative methods, the qualitative will act mainly as a “prestudy” to the quantitative research (Glaser & Holton, 2007). Creswell and Plano Clark (2018) label this the exploratory sequential design.
The rationale for the mixed method
Nonsequential designs
In contrast to the designs described above, the
The second nonsequential design, the
The Interactive Effects of Sequence and Dominance
Sequence and dominance have been discussed in the literature in isolation; however, less attention—especially through empirical research—has been paid to the potential interactions between them. Recently Archibald et al. (2015) suggested that
Notwithstanding some commentary (e.g., Creswell and Plano Clark, 2011), and given the history of qualitative methods in mixed methods research, we suspected there may be subjugation (lack of dominance) of qualitative methods in particular (Bryman, 2007; S. Hesse-Biber, 2010a). This working hypothesis was also inspired by personal experiences with manuscript publication wherein we felt pressures from reviewers and/or editors to emphasize quantitative findings (see also Bryman, 2007).
The small but growing empirical literature which does address both sequence and dominance (see Ivankova et al., 2006; McManamny et al., 2015; Plano Clark et al., 2008; Stentz et al., 2012; van der Roest, Spaaij, & van Bottenburg, 2015; Žydžiumaite, 2007) generally analyzes the two dimensions separately, further suggesting that method dominance is a choice that researchers make separate from other methodological considerations. In their research studying mixed methods research published in
Through their recent work outlining and assessing mixed methods phenomenological research (MMPR), Mayoh and Onwuegbuzie (2014, 2015) may present the best pieces of recent literature that describe the relationship between research design and dominance. Although their primary purpose was not to reveal relationships between sequence and dominance, the analysis from Mayoh and Onwuegbuzie (2015) reveals patterns—some of which align with the literature above suggesting that beginning with qualitative (phenomenological) methods tends to lead to qualitative-dominant research. However, they also challenge the conventional expectation that, in explanatory designs, beginning with quantitative methods allows those methods to dominate. Given these two findings, they suggest that the lack of quantitatively driven MMPR is because of the time-consuming nature of qualitative inquiry. This is likely the case, yet by limiting their analysis to an inherently qualitative (phenomenological) domain, the broader applicability of these patterns remains unclear.
Research Context and Method
In this article, questions of method sequence and dominance are posed through an empirical analysis of the mixed methods literature on the social acceptance of wind energy. We chose this literature because it is one in which we are practicing authors and because it is of an appropriate size to allow for such an in-depth investigation.
Due to the rise of public opposition to large wind turbines in rural communities, social scientific research into wind energy development has grown immensely over the past decade. In some cases, local stakeholder opposition has slowed the development of these renewable energy projects (see McRobert, Tennent-Riddell, & Walker, 2016), causing problems for developed countries that wish to reach their renewable energy and/or climate change targets (Batel & Devine-Wright, 2015). As in many domains of social scientific inquiry, early studies mainly used quantitative-based methods and found some evidence that individual-level traits—including selfish, not-in-my-backyard attitudes—were driving wind energy opposition (Krohn & Damborg, 1999). Years later, in recognition of the need for growth in this literature, Devine-Wright (2005) suggested that such explanations were simplistic and future work should consider a broader set of ideas and methodological approaches (see also Aitken, 2010). Ellis, Barry, and Robinson (2007) contextualized this point by stating that the reliance on quantitative methods alone “has contributed to the impasse in understanding of public perception of wind farms” (p. 540). Indeed, mixing methods may help us to incorporate multiple truths and “produce [both] the generalizable and the particular” (Warshawsky, 2014, p. 161). In response to such calls, we have seen a surge in qualitative and mixed methods work in what is now deemed the social acceptance of wind energy literature. This relatively sudden shift thus allows for a critical examination of a sizable and relatively recent collection of mixed methods research in one domain.
To gather a sample of mixed methods research in the area, we conducted two database searches—one in Google Scholar and the other in Web of Science—using the Boolean search terms: “wind energy” OR “wind turbines” AND “mixed method” OR “mixed methodology” OR “qualitative quantitative” OR “q method.” In Google Scholar, this produced 1,640 journal articles and books published between 2005 and 2017. The sample was reduced to 18 after selection criteria were applied. An article was included in the final sample if it (i) was published in a peer-reviewed academic journal, (ii) was relevant to wind energy, (iii) was within the social sciences, and (iv) employed both qualitative and quantitative methods. Google Scholar has been criticized for gaps in coverage (Giustini & Boulos, 2013; Jacsó, 2005), so the journal database Web of Science was also used. With the same vetting as above, this search produced 16 new articles. Using this combined data set (N = 34), we addressed the three research questions outlined below.
For Research Question 1, the characterization of method sequence was based upon Holstein’s (2014) classification. We independently read through each paper to determine the research method order. In five instances, the method sequence was unclear so we contacted the author(s) and were able to confirm order in all of these cases.
To address Research Question 2 regarding dominance, we developed three analytic strategies. Together, we call these steps the Dominance in Mixed Methods Assessment (DIMMA) model (see Figure 1). We do this in the context that there is no generally accepted measure of dominance (Creamer, 2018; Creswell & Plano Clark, 2018).

The three-step Dominance in Mixed Methods Assessment model used to measure method dominance in a mixed methods data set.
The first step was a three-stage interpretive reading of how the authors represented quantitative and qualitative data throughout each paper. This involved the first author reading through each paper in its entirety to qualitatively assess which method was prioritized. This subjective assessment considered the following: (i) how the author(s) wrote about each method, including the reasoning behind the use of each method (i.e., what Creswell & Plano Clark, 2011, call the “primary aim”); (ii) the amount of detail given about each method (through data collection and analysis); and (iii) the apparent quality and rigor of each strand (Baxter & Eyles, 1997). Following the examples of Bryman, Becker, and Sempik (2008) and Baxter and Eyles (1997), we used separate criteria for assessing the apparent rigor of the quantitative (e.g., validity, reliability, generalizability) and qualitative (e.g., credibility, transferability, dependability) strands. In six cases of doubt regarding the three assessments above, the second author also read the paper to increase intercoder agreement—a process by which multiple researchers come together to discuss coding discrepancies regarding the same text (Campbell, Quincy, Osserman, & Pedersen, 2013).
Second, a quantitative assessment of the amount of text devoted to each method in the Results section (using word counts) was performed. The technique is similar to the one Creswell and Plano Clark (2011) have outlined, yet it is difficult to find examples of its usage in practice. Although qualitative research is generally “richer” (Creswell, 2013)—requiring more space (i.e., higher word counts)—this more objective step is highly replicable to the extent that people agree (a) where a passage starts and ends and (b) that the content is either qualitative or quantitative. An example and full explanation of how this was performed can be found in Figure 2.

An example of the results sections, word count process that makes up the second stage of the Dominance in Mixed Methods Assessment model (from Walker & Baxter, 2017b). Text highlighted in yellow indicates qualitative data, while text in blue indicates quantitative data. Text that could not be associated with either method would not be captured. After word counts were complete for both methods, a percentage calculation would be performed to determine the relative amount of qualitative findings in each paper’s results section. For example, using only the text above, we find 193 words devoted to qualitative results and 69 devoted to quantitative results. Thus, of the total word count 262, qualitative findings make up 73.7%.
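The percentage calculation in this second step is simple enough to automate once word counts are in hand. As a minimal sketch (the function name is ours; the counts are the worked example from the Figure 2 caption):

```python
def qualitative_share(qual_words: int, quant_words: int) -> float:
    """Percentage of method-attributable results text devoted to
    qualitative findings (step 2 of the DIMMA model)."""
    total = qual_words + quant_words
    if total == 0:
        raise ValueError("no method-attributable text counted")
    return round(100 * qual_words / total, 1)

# Worked example from Figure 2: 193 qualitative words, 69 quantitative.
print(qualitative_share(193, 69))  # → 73.7
```

Text that cannot be attributed to either method simply stays out of both counts, so the denominator reflects only method-attributable text.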
Lastly, quantitative-to-qualitative sample size ratios were calculated for each paper to understand the relative size of the qualitative undertaking. While self-administered surveys can scale to much larger samples with minimal extra cost, each new interview tends to require roughly the same effort and cost (Creswell & Plano Clark, 2011). Qualitative data sets most often included interviews (
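The sample-size ratio in this third step can be expressed in lowest terms with a short helper; a sketch under the assumption that both sample sizes were reported (the function name and example counts are ours):

```python
from fractions import Fraction

def sample_ratio(quant_n: int, qual_n: int) -> str:
    """Quantitative-to-qualitative sample size ratio in lowest terms
    (step 3 of the DIMMA model)."""
    r = Fraction(quant_n, qual_n)  # Fraction normalizes to lowest terms
    return f"{r.numerator}:{r.denominator}"

# Hypothetical example: a 760-respondent survey paired with 20 interviews.
print(sample_ratio(760, 20))  # → 38:1
```

Where either sample size went unreported in a paper, no ratio could be calculated (the N/A cases in Tables 1 and 2).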
After each step of the DIMMA process, we met as coauthors to review each paper and determine which method was dominant. Each step served an important, if not equal, role in these discussions. In cases where no dominant method could be determined, we classified the paper as “neither” qualitative nor quantitative. Together, these three approaches to deconstructing design and practice give a reasonable sense of method dominance—perhaps even beyond the conscious intent of the authors themselves.
Results
The findings are organized according to the three research questions and can be found in Table 1 (sequential designs) and Table 2 (nonsequential designs). The third question in particular is built upon the work of the previous two and culminates in the ultimate question of this research: What is the relationship between method sequence and method dominance?
Sequential Mixed Methods Articlesa (2005–2017) and Measures of Method Dominance.
aThe full citations of each paper can be found within the references.
bN/A refers to any sample (number) that was not stated in the paper. For example, some papers gave vague descriptions of the number of interviews they completed. If either a qualitative or quantitative sample size was unavailable, no ratio was calculated.
Nonsequential Mixed Methods Articlesa (2005–2017) and Measures of Method Dominance.
aThe full citations of each paper can be found within the references. (Note: While the full citation for Mey and Diesendorf makes it clear it was fully published in 2018, it was available online in late 2017 and was gathered through our 2005–2017 search. Thus, after some discussion, it was chosen for inclusion in this study.)
bN/A refers to any sample (number) that was not stated in the paper. For example, some papers gave vague descriptions of the number of interviews they completed. If either a qualitative or quantitative sample size was unavailable, no ratio was calculated.
cIn addition to the interviews, historical event analyses were completed for this research. The number of documents analyzed (or otherwise
Research Question 1—Method Sequence
Using our “four families” categorization of research designs, we find there are more sequential (n = 22) than nonsequential (n = 12) designs in the sample.
Research Question 2—Method Dominance
Through subjective and objective measures to determine the overall method dominance in each paper (see DIMMA, Figure 1), there appears to be an overall theme of the subjugation of qualitative design and findings in the social acceptance of wind energy literature, but perhaps not on the same level as we might expect from the mixed methods literature. Of the 34 papers analyzed, nearly half (n = 16) were quantitative dominant, 10 were qualitative dominant, and 8 showed no clearly dominant method.
Research Question 3—Relationship Between Sequence and Dominance
Here, we expand upon the findings of measured dominance (see DIMMA model, Figure 1) and discuss whether method dominance is linked to method sequence. Overall, the explanatory sequential and fully integrated designs showed the highest tendencies to prioritize quantitative and qualitative methods, respectively.
Exploratory sequential
Researchers who used the exploratory sequential approach (i.e., qualitative first) showed a preference for quantitative methods in their writing (Table 1). That is, of the 14 exploratory studies, 7 studies were dominated by quantitative reporting, 4 emphasized qualitative reporting, and the remaining 3 were balanced. Across the subsample of 14, most authors explain that the qualitative methods were used to “set up” or help design survey methods. Interviews were said to create measurement instruments (Brownlee et al., 2015; Devine-Wright & Howes, 2010) or to better inform “the more rigorous (quantitative) investigation” (D’Souza & Yiridoe, 2014, p. 264). Another reason for using mixed methodologies was to help overcome the complexity of the issues at hand. Zoellner, Schweizer-Ries, and Wemheuer (2008) cite their inclusion of qualitative interviews in particular as being vital because of “…wide range of social parameters that determine renewable energy processes in communities” (p. 4137).
The results sections of the exploratory papers are particularly indicative of method dominance. As shown in Table 1, there is no consistent pattern, but if there is any bias, it is toward the quantitative findings. The amount of space devoted to each method varies widely but equates to an average of 33% qualitative text—a fairly strong preference for quantitative reporting—although this value is highly influenced by the studies of Nichifor (2016) and Brownlee et al. (2015), who devote none of their results sections to qualitative findings.
In looking at sample sizes used for each method, we see similarities across the exploratory sequential research design. In most cases, the quantitative sample is much larger than the qualitative sample. Of the papers for which data are available, the ratio of quantitative to qualitative ranges from 38:1 (D’Souza & Yiridoe, 2014) to approximately 1:2 (Beckham Hooff, Botetzagias, & Kizos, 2017). Together with all of the subjective and objective measures used, we find there is a fairly strong preference for quantitative (i.e., survey) findings among exploratory sequential papers.
Explanatory sequential
Explanatory sequential (i.e., quantitative first) articles’ stated purposes for using qualitative methods avoided any mention of using one method to inform or design the other. Instead, there was an indication that the qualitative methods were included to expand and delve deeper into research questions. That is, qualitative methods were used to allow for richer understandings (Janhunen, Hujala, & Pätäri, 2014), triangulation (Lombard & Ferreira, 2014), or to explore “residents’ [actual] points of view” (Frantál & Kunc, 2011, p. 507). In these cases, interviews were most often used to further investigate findings that arose within the initial survey. There was also the paper by Varho and Tapio (2005), which suggested that conducting interviews after quantitative methods uncovered “interesting factors…which might not emerge through more formal methods” (p. 1945).
The findings sections within this subset of the literature reveal that the authors who used an explanatory approach devoted less space to the qualitative findings (32.7%). There was only one article which contained a majority (67.6%) of qualitative findings in its results section (Schaefer, Lloyd, & Stephenson, 2012). This trend is somewhat surprising considering how the qualitative methods were described above. The quantitative-to-qualitative sample ratios within the set of explanatory articles were also similar to those found in the exploratory sequential papers—although there were two with comparable ratios of 1:1 (Varho & Tapio, 2005) and 2:1 (Frantál & Kunc, 2011). Despite these indications to the contrary, overall, it is clear that the explanatory sequential papers found in this study tend to prioritize the quantitative methods. That is, using both subjective and objective measures, five papers were found to prioritize quantitative methods and three presented the two methods in a somewhat balanced way.
Fully integrated
In all but one case of papers that used the fully integrated design, authors’ stated purposes for including qualitative methods centered on theoretical development or expansion. For example, J. S. Greene and Geisken (2013) used interviews to “present a more complete picture” (p. 4) and Frantál, Bevk, Van Veelen, Hărmănescu, and Benediktsson (2017) chose mental-mapping and open-ended questions to “provide an option for how to better capture and understand the subjective perceptions and preferences of people” (p. 235). The only exception to this rule was from a Q-Method paper in which Brannstrom, Jepson, and Persons (2011) used interview data to “create concourse of statements” to be used in a quantitative, sorting exercise.
In looking through the results sections of all integrated papers found, we calculate that qualitative findings make up a majority of the text (61.7%). This turned out to be the highest value found among all research designs studied. There is also a fair degree of consistency: five of the nine articles contained between 43% and 61% qualitative findings. Interestingly, in most of the “outliers,” qualitative methods dominated the results sections (i.e., 85–97%; Edsand, 2017; Jepson, Brannstrom, & Persons, 2012; Mey & Diesendorf, 2018).
The sample ratios found within fully integrated papers also reveal more preference given to qualitative methods. These ratios are approximately the same in six studies, one of which actually has a slightly larger qualitative sample (Fisher & Brown, 2009). Another unique feature found in this subset is the use of three or more separate methods of data collection. In the paper by J. S. Greene and Geisken (2013), economic modeling began the data collection, followed by in-depth interviews, and finally surveys were sent to randomly chosen residents. Five more papers (Brannstrom et al., 2011; Edsand, 2017; Frantál et al., 2017; Jepson et al., 2012; Mey & Diesendorf, 2018) also employed at least three stages of data collection.
In all, the papers that employed fully integrated designs showed some tendency to prioritize the qualitative data and findings. Five of the nine papers in this subsample were found to emphasize qualitative research. The other four were from Brannstrom et al. (2011), Frantál et al. (2017), and J. S. Greene and Geisken (2013)—who emphasized the quantitative—and Haggett and Toke (2006), who presented each method equally.
Convergent
There were only three papers that used a mixed methods convergent design. In two of these papers, the purpose for using qualitative methods was to deepen understanding (Maillé & St-Charles, 2012; Mulvaney, Woodson, & Prokopy, 2013b). In the third, the use of interview and document analyses was more specific—to “[assess] benefits and costs…historical data…general concerns…and community involvement” (p. 325).
Studies that used convergent designs varied widely in terms of how much space was devoted to qualitative findings. The average of 43.4% is indicative of the fact that qualitative results served a somewhat complementary role; however, this value was influenced heavily by one paper in particular (Mulvaney et al., 2013b), whose quantitative findings encompassed 75% of the results section. Only a single paper (Maillé & St-Charles, 2012) within this subset contained full details regarding sample sizes. Given the small numbers, there is no clear pattern or tendency for authors to prioritize one method or the other within convergent mixed methods designs. Our analysis concludes that one paper emphasized the qualitative, another the quantitative, and the third balanced the two.
Discussion
In light of the fact that some have “moved on” and now focus on the idea of methodological intent, this article has attempted to reinvigorate important discussions around method dominance in mixed methods research. It has introduced the DIMMA model—an analytical methodology for determining dominance—and has explored connections to sequence. With the social acceptance of wind energy literature as a case example, there is a relative balance in terms of research designs. Although most (22 of the 34) of the papers found were sequential—14 exploratory, 8 explanatory—more than one third were nonsequential (fully integrated [n = 9] and convergent [n = 3]).
One of the main messages from our analysis is that mixing methods does not necessarily lead to a paper that presents a balanced mix of findings. Of the 34 papers in our sample, we detected a lack of a dominant method in only 8. Although mixed methods research does not require methodological balance per se, the prevalence of one-sided reporting in our sample is notable.
Somewhat in line with recent concerns that qualitative methods are only playing complementary roles (Harrison & Reilly, 2011; S. Hesse-Biber, 2016; Morgan, 2013; Teddlie & Tashakkori, 2012), we detected qualitative dominance in only 29% of the 34-paper sample. Meanwhile, nearly half (47%) were quantitative dominant. However, given the history of the social sciences, we expected an even larger difference in quantitative- to qualitative-dominant papers (Creswell et al., 2004; Harrison & Reilly, 2011; McManamny et al., 2015; O’Cathain et al., 2007; Plano Clark et al., 2008). In this sense, we conclude that qualitative methods exceeded our (low) expectations, which is encouraging for both the mixed methods and qualitative research communities. A reason for this may be that those inclined toward conducting mixed methods research come to it with somewhat stronger understandings of the value of qualitative research. Indeed, the “qualitative community” of mixed methods researchers is aware of the “qual-light” (Teddlie & Tashakkori, 2012) use of qualitative research and perhaps emphasizes those methods more consciously. The collection of qualitative journals that accept mixed methods research may also be a factor (see Archibald, Radil, Zhang, & Hanson, 2015).
Another key contribution of this article is the development of a model for studying method dominance in mixed methods research. To date, the literature is inconsistent in terms of not only which research designs lead to qualitative versus quantitative dominance but also which methods tend to dominate in mixed methods research more generally (Creswell et al., 2006; S. Hesse-Biber, 2016; Mason, 2006; Maxwell et al., 2015). This lack of clarity is no doubt partly due to the absence of systematic procedures for assessing method dominance. Inspired by the writings of others, including Creswell and Plano Clark (2007, 2011), we developed the three-stage DIMMA model. We suggest each subjective and objective component of the model has value, and when applied to other literatures, it may increase our understanding of important methodological questions.
Regarding the interaction of method sequence and dominance in our case literature, there are two trends: (i) explanatory and, to a lesser degree, exploratory studies are associated with dominance of quantitative methods and (ii) research that uses the fully integrated approach tends to emphasize the qualitative portion. Given that our sample is only 34 papers, we might best consider these findings tentative—working hypotheses to be explored further in other literatures. In looking at the effect method sequence may have on revealed dominance in a specific domain, this article adds to a very limited number of studies that have examined the relationship. Traditionally, researchers have treated the two factors as independent (Hall & Howard, 2008) and thus may have ignored the possible interactive effects of sequence and dominance. The question of “why” remains: Is it because quantitative researchers are drawn to these approaches, or because of something inherent in the design (e.g., excitement over the relatively newer, “generalizable” findings in a project)? More fodder for future research may come from the need to study the prevalence and impact of the
Mayoh and Onwuegbuzie (2015) present the best and most recent investigation into the relationship between sequence and dominance, though it is done within a domain that is inherently qualitative in nature. Similarly, recent research from Archibald et al. (2015) found strong prioritization of qualitative research in recent published research, but the sample was taken only from qualitative journals. In part to address this, our review was open to all mixed methods research in the broadly defined realm of the social acceptance of wind energy literature. Of course, there may be methodological preferences in this area as well, though perhaps less than was seen in the two aforementioned studies.
Despite our focus on method sequence, there are no doubt many other factors shaping methodological dominance in mixed methods research. Indeed, Bryman (2007) makes clear that researchers may intend, or be “forced,” to prioritize one method because of the many “predispositions and preferences” of researchers and funding agencies (p. 20). Based on the present findings, we suggest that research design (i.e., method sequence) should also be considered a potential influence, though assessing the degree of that influence may require more in-depth research with authors themselves.
Four papers stated that they used qualitative methods but present only quantitative findings. These papers might have been omitted from analysis, but we included them to highlight that identifying mixed methods research can itself be challenging (Hurmerinta-Peltomäki & Nummela, 2006; Maxwell et al., 2015). Phrases like “based on interviews” or “insights from interviews” are perhaps meant to signal companion work published elsewhere rather than in the current paper. Less optimistically, these authors may be using “tokenistic” or “qual-light” methods simply to help their work stand out as a form of mixed methods inquiry (Ivankova & Plano Clark, 2018; Teddlie & Tashakkori, 2012). Whatever the case may be, the absence of significant qualitative work within them underscores two issues: (i) the need to be more explicit regarding broader study design and context and (ii) the possibility of exploring such scenarios in future analyses similar to the one here.
As one might expect, the fully integrated approach tended to allow a relative methodological balance. Although the authors do not highlight how they achieved that balance in any direct way, it may be that the design itself facilitates balance, or simply that authors who gravitate to this design tend to be more balanced. Future research could explore whether other aspects of the design and execution of the work play an important role (Creamer, 2018).
Especially within studies that showed more quantitative dominance, there was often an implication that qualitative work holds secondary status regarding rigor or robustness (S. Hesse-Biber, 2010b). In those papers, authors tended to stress how the interviews were completed in order to feed into the design of quantitative measurement instruments (e.g., questionnaires, items for Q-sorts; Brownlee et al., 2015; Devine-Wright & Howes, 2010; D’Souza & Yiridoe, 2014). When researchers in our case sample used qualitative research for theoretical advancement, the methods tended to be much more balanced rather than simply qualitative-focused.
Although we cover an entire literature, our
Conclusion
This article has provided a set of procedures for determining if, and perhaps how, qualitative methods are subjugated in mixed methods design, with specific reference to the relationship between method sequence and dominance. This allows a moment for researchers who use sequential mixed methods in particular to ponder whether allowing quantitative data to dominate fits with overall research goals and, perhaps, whether the design predisposes researchers toward publishing as such. While there is no specific problem with quantitative-dominant publishing, we should reflect on what is lost by leaving qualitative findings on the “cutting room floor.”
Through an examination of a set of literature in detail, this article has also reminded the reader of the true value of conducting mixed methods research. As the use of mixed methods approaches becomes increasingly common, it is important for academics to use mixed methods only when the research problem or question calls for it. As Fielding (2012) writes, “Rather than mixing because there is something intrinsic or distinctive about quantitative data or qualitative data, we mix so as to integrate the two fundamental ways of thinking about social phenomena” (pp. 125–126). That is to say, despite all of the criticisms and complexities of mixed methods research presented here, there is still the potential to increase our understanding of social scientific problems by using qualitative and quantitative methods together. Especially when both approaches are employed with greater consideration, researchers may be able to more fully and appropriately investigate social phenomena.
