Introduction
The central purpose of political science research is to generate knowledge about how politics works. This is done through the exploration of more specific questions such as why (some) citizens vote, how citizens formulate preferences regarding candidates and policies, and how environmental stimuli such as elite appeals and social interactions influence opinions. The preceding several decades of research on political behavior have seen innovative work by many scholars that has advanced our understanding of these broad questions, as well as many others (for but a sampling of relevant research, see Chong & Druckman, 2007; Duch & Stevenson, 2008; Hillygus & Jackman, 2003; Huckfeldt & Sprague, 1995; Iyengar & Kinder, 1987; Lau & Redlawsk, 2006; Leighley & Nagler, 2013; Lodge, Steenbergen, & Brau, 1995; Mutz, 2006; Vavreck, 2009; Zaller, 1992).
Such gains, however, are not solely attributable to the creativity and passion of the scholars who produced them. The discipline as a whole has also made considerable investments in data infrastructure and collections to support this important work. For example, since 1980, the American National Election Studies (ANES) have used many millions of dollars of federal funding to provide the data used in thousands of empirical analyses, many of which have been published in the discipline’s leading journals. Likewise, the discipline’s investment in the ANES has been mirrored by huge investments by private foundations and others in largely cross-sectional survey projects (e.g., Pew’s Research Center on United States Politics & Policy, the Annenberg National Election Survey, the Cooperative Congressional Election Study, and the Cooperative Campaign Analysis Project).
As with any portfolio of investments, however, it is important to occasionally conduct an audit and ask if such allocations are justified and are producing the kinds of advances that we seek. Such an audit is important for two reasons. First, it enables the field to ascertain whether the published empirical research record captures the core theoretical concepts that scholars think are critical. Of course, what
Second, an audit is important now given the development of new measurement strategies that may challenge the continued value of large-scale resources such as the ANES. Grant-making activities from government, foundations, and universities have increasingly supported various data collection strategies such as laboratory experiments, survey experimentation (e.g., the Time-Sharing Experiments for the Social Sciences program), and, most recently, the mining of social media data (see, for example, https://wp.nyu.edu/smapp/). In addition, the development of crowd-sourced data collection tools such as Amazon’s Mechanical Turk may provide researchers with a cost-effective method for collecting data (Berinsky, Huber, & Lenz, 2012; Mullinix, Leeper, Druckman, & Freese, 2015). These new resources are attractive as they enable some flexibility in data collection, although they clearly lack one of the key calling cards of resources such as the ANES—the ability to investigate mass politics over a long time period. Without occasionally stepping back and reviewing what we are funding and how it is translating into scholarship, we risk making funding decisions based on inaccurate or outdated ideas about how the discipline is or is not changing and what the drivers of our intellectual progress really are.
In this article, we seek to provide an audit for the field of political behavior—with a focus on quantitative investigations of American voting behavior, public opinion, and communication. We first use content analysis data from more than 1,100 articles about American political behavior, published in 11 leading journals from 1980 to 2009, to explore, over time, the concepts most frequently studied and the methods typically employed. We then supplement these data with a sample of 41 published research articles from the 2010-2018 period. 1
With such data, we can ask a variety of specific questions: given available data collections, what questions and topics have dominated political behavior research since 1980? Has a growing emphasis on experimental methodology (Druckman, Green, Kuklinski, & Lupia, 2006, 2011) led to a diminution of survey methods as the tool kit of choice for political behavior scholars? What role has the ANES, the largest investment by the National Science Foundation in political science, played in driving research on these concepts? Have the core concepts measured in the ANES time series continued to be relevant to most scholars of American political behavior? Do the data on what is being measured justify a different allocation of resources or a rethinking of the value of the ANES time series?
In the remainder of this article, we first describe the data we collected and then present our analyses and conclusions. To preview, we find that the published research in American political behavior has (since 1980) been heavily skewed toward a small number of important concepts central to understanding voting. Furthermore, over the entire period, these central concepts have been measured most often using survey methods. Although the use of experimental data has trended upward and surged in the 2010-2018 period, surveys remained the dominant data source for behavior research. Perhaps surprisingly given the plethora of alternative survey data sources in recent years and the availability of inexpensive survey alternatives (e.g., Santoso, Stein, & Stevenson, 2016), we find that researchers continue to use the ANES as a primary source of data. In addition, despite some important exceptions we discuss, there is a notable
Data: 1980-2009
In auditing the political behavior literature, we had to first decide on a time frame and set of journals from which to sample. We opted to focus on the years 1980 to 2009 as this not only encapsulates a fairly long period of time but also includes the purported rise (or return) within political behavior research of work centered on political persuasion (e.g., Mutz, Sniderman, & Brody, 1996) and experimental methods (e.g., Iyengar & Kinder, 1987). We then downloaded all (i.e., approximately 10,000) articles from a set of 11 journals that publish much of the central work in the field. 2 From these, we selected the 1,163 articles that employed some quantitative approach to study a question in the field of American mass political behavior. 3 This meant we had roughly 39 articles coded per year. 4 We then had a team of coders closely read and content analyze these articles. 5 All data collection and content analyses of these articles took place from late 2011 into 2012. Of course, we recognize that the timing/nature of our sample structures the implications of our analyses for the current trajectory of the field. For this reason, as we later explain, we supplemented these data with a small sample of articles from 2010 to 2018.
Information about the
Concepts and Policy Issues.
The second type of item coders recorded concerned whether the article in question incorporated data pertaining to 11 distinct policy domains (Table 1). The coders could indicate whether the article included measures pertaining to individual attitudes, perceptions of party positioning, and/or perceptions of candidate positioning on the issues. In creating this list of issues, we relied most directly on the ANES because, each year, the ANES makes an effort to include long-standing critical issues as well as emerging ones, as reflected in policy making and news coverage. Our reliance on the ANES, however, may mean we miss issues that are salient for brief periods of time: because the ANES has a commitment to maintaining some time-series continuity, it is somewhat constrained from adding many new issues in each data collection. Our results regarding issues should be read with this limitation in mind.
Overall, then, each article could be coded for the presence of up to 81 (48 concepts + 33 issue indicators) different content elements. Coders also indicated whether a concept or policy position, when present in the article, was “central to the main themes of the paper.” This enables us to speak not just to the frequency of a wide array of topics in research on American mass political behavior but also to their relative importance in the field.
Some of the concepts and issue dimensions listed in Table 1 could potentially be collapsed into broader superordinate categories. However, in our analyses we maintain a focus on the individual concepts/issues rather than collapsing them, as it is not immediately clear how to make nonarbitrary decisions when combining the categories. We did explore the potential interrelationship between these concepts and issues via factor analysis; see Figure OA7 and Table OA3 in the Online Appendix for the results. Notable here is the relative lack of clear superordinate structure. Rather, many factors explaining small degrees of variance emerged, suggesting that collapsing across categories would gain us relatively little in additional clarity when analyzing the data.
Coders recorded the data source(s) used in each manuscript in addition to its substantive content. Coders indicated whether the ANES, other survey(s), experiments, or archival sources provided the data for each of the concepts/policy issues coded as present in the article. When necessary, coders could indicate that more than one data source had been employed. If the coder indicated that the ANES had been used in the manuscript, they were further queried as to whether one, two, or three or more ANES surveys had been used. These measures enable us to track the methodological progression of political behavior research as well as the frequency of use of the time-series component of the ANES. Put another way, they allow us to audit the worth of the ANES by documenting the extent of its usage and, in particular, whether the time-series aspect of the ANES drives its application.
Analyses
We begin by considering the agenda of the American political behavior literature between 1980 and 2009. Although this does not directly speak to the question of “auditing” the worth of investments in different data collection approaches, it provides indirect evidence on whether central concepts cohere with the missions of those data collections and specifically the ANES. Then, we turn to an explicit investigation of methodological orientation. In so doing, we will also consider potential differences in substantive focus by method.
The Agenda of American Political Behavior Research
One place to begin is a consideration of the “complexity” of political behavior research via a focus on the number of concepts and policy issues coded as present in the articles. On average, articles contained 4.96 (
Figure 1 provides more context concerning the core contents of the American mass behavior literature.
9
First, the left-hand subgraph in Figure 1 plots the number of times each concept and policy issue code was indicated as present in an article. A small set of factors dominate the scene; while the mean number of appearances per concept/issue is 71.63 (

Concept and policy issue use.
Figure 1 focuses on the most used concepts in American political behavior research. However, this may give a mistaken impression of the factors dominating this agenda insofar as some concepts may appear very frequently as components ancillary to the main purpose of the article (e.g., as control variables). To get a better sense of which concepts have been most
There are two ways to use this information to inform our understanding of the most important elements of American mass behavior research, both of which are displayed in Figure 2. First, we can consider centrality

Concept and policy centrality.
Figures 1 and 2 indicate that there is a clear focus to the American political behavior literature in the aggregate, but this does not tell us about any potential dynamics or evolution in these patterns. Figure 3 enables such an investigation by plotting the rate of appearance for the 15 most central elements identified in Figure 2 over time (see Figures OA3-OA6 in the Online Appendix for the remainder of the concepts/policies). Because the number of articles coded per year varies, Figure 3 focuses on the proportion of articles coded in a given year wherein the concept in question was present. On one hand, Figure 3 shows a fair degree of stability for many of these items, including vote choice, PID, voter turnout, and attitudes regarding services and spending. On the other hand, there does appear to be a noticeable increase in the use of racial identity and political knowledge over time and a decreasing emphasis on attitudes on jobs and income support and aid to Blacks. On the whole, though, Figure 3 suggests a research agenda that, despite some fluctuations, appears to be fairly consistent over time. 11

The evolution of the political behavior agenda, 1980 to 2007.
As noted, Figures OA3-OA6 in the Online Appendix provide an overview of the remainder of the coded items over time. We pause to note three interesting patterns that emerge. First, there is a marked increase in the use of two values items—“Equalitarianism” and “Moral Traditionalism”—perhaps reflecting the increased salience of cultural issues in American politics and concomitant efforts at understanding the nature and origins of political values among the mass public (e.g., Carmines, Ensley, & Wagner, 2012; Goren, Federico, & Kittilson, 2009; Jost, Federico, & Napier, 2009). Second, there is a slight increase in attention to “Campaign Contact” beginning in the early 1990s, signaling a renewed interest in the topic following Rosenstone and Hansen’s (1993) landmark book and the resulting field experimental literature on the effectiveness of various mobilization strategies (e.g., Gerber & Green, 2000; Gerber, Green, & Larimer, 2008; Sinclair, 2012). Finally, there is a decrease in attention to many of the individual issue attitude measures, albeit with one notable exception: a positive trend in attention to respondent attitudes on gay and lesbian issues. We take this last result as only suggestive—recall that we relied on the ANES for the issues we coded, and our sense is that the ANES is constrained in adding novel issues. Thus, the downward trend may be due to us missing (i.e., not coding for) new issues. Regardless, overall, Figure 3 and Figures OA3-OA6 suggest a political behavior agenda with a solid anchor (voting behavior, consistent with the mission of the ANES) and insurgent interest in values and cultural issues.
Political Behavior Methodology Over Time
In the foregoing, we focused on the
Survey methodology dominates American political behavior research and the ANES dominates within this category and, hence, within this literature during the period investigated. Although 50.90% of all articles were coded as using “Other Surveys,” a sizable proportion of all articles featured the ANES (33.71%). Given this distribution, the ANES is likely the single most important data source for political behavior research on American mass politics. 12
Has the dominance of the ANES changed over time? We address this with Figure 4, which provides a temporal perspective on the methodological choices made in American political behavior research. The top row of graphs provides the proportion of articles in a given year where a particular data source was coded as present. The bottom row of graphs provides data for the two survey options combined as well as an examination of the potential trends in mixed data use (i.e., the proportion of articles using both survey and experimental data or survey and archival data). A few notable points emerge from Figure 4. First, the dominance of survey data sources over experimental and archival sources discussed above can clearly be seen at play in Figure 4. Second, while there has been a recent uptick in the use of experimental methods (Druckman et al., 2006, 2011), this growth is rather modest and experiments remain a clear minority data source compared with surveys overall and with the ANES in particular. During the last 5 years of the 2000s (2005-2009), approximately 13% of coded articles featured experimental methods, nearly triple the figure from the first 5 years of the time series (1980-1984; 4.6%). However, the former number is still well below the average proportion of articles using survey methods during this time frame (73.8%) and just over one third of the figure for the ANES (33.7%). Finally, there is some evidence of an increased tendency to mix data sources, but surprisingly this mixing occurs between survey and archival data sources rather than between survey and experimental methods, despite the potential benefits of pairing the latter for a study’s internal and external validity. Ultimately, Figure 4 shows a slowly changing data landscape, one dominated by survey methods, and particularly the ANES, but with a slow-growing emphasis on experimental data sources.

Measurement use over time.
Contributing to the predominance of the ANES is surely the ability of researchers to explore important questions over a long time frame, something which most alternative data sources cannot equal. This fact is captured in Figure 5, which plots the proportion of ANES-coded articles, both overall and over time, using one, two, three or more, and two or more ANES surveys. Nearly 70% (264/392) of the articles coded as containing ANES data use the time-series component of the survey (i.e., at least two surveys were used). Notably, researchers appear to have made increasing use of the time series, as the remainder of Figure 5 attests. For instance, during the time span of 1980 to 1984, approximately 29% of ANES-coded articles per year used three or more ANES surveys. This figure more than doubled by the end of the coded time frame, to approximately 68% per year during 2003 to 2007. 13 Clearly, researchers are making use of the over-time continuity available in the ANES. Although the emergence of online survey data collection resources, such as YouGov, GfK, and Mechanical Turk, may afford researchers greater flexibility in designing studies to capture important elements of political behavior during single time periods, the ANES seems poised to remain the key resource for American political behavior researchers interested in over-time analyses.

Use of the ANES time series.
Finally, we can come full circle here and return to our discussion of the contents of political behavior research and, specifically, how concept use varies across these different data sources. In Figure 6, we provide a series of box plots showing the relative use of concepts, issues, and both by data source of the article. Note that all cases of mixed data use are indexed under “Mixed” for this purpose, that is, “ANES” indicates that the ANES was the

Article complexity by methodology.
Data: 2010-2018
Our coding included articles from 1980 to 2009. While that provides a lengthy evolutionary time period, it also ends at a point when the discipline underwent some of the notable changes previously mentioned. This includes the continued rise of experimental methods, as further exemplified by the founding of the American Political Science Association’s section on experimental methods in 2010 and the publishing of the
To answer this question, we drew a sample of articles from 2010 to March 2018. We did this by identifying all political behavior articles in the same set of journals analyzed above (based on readings of their abstracts). We then drew a random sample of 41 articles from this set, stratified such that it included at least five articles from each year from 2010 to 2017, three articles from 2018, and at least two articles from each journal (over the entire time period). 15 We then had a team of four coders, after training and practice, code each article using the same scheme as above. This coding took place in the winter/spring of 2018.
Figure 7 provides an overview of the 2010-2018 sample of articles. We begin our discussion with the top two subgraphs, which plot the proportion of times each concept was coded as either present or central in the 2010-2018 sample versus the 1980-2009 sample. The key takeaway from these subgraphs is the

Concept and data use in the 2010-2018 sample.
The bottom half of Figure 7 focuses on the type of data used in the new articles. We again see important similarities alongside a key deviation. On one hand, survey sources continue to dominate the behavior landscape, with approximately 85% of articles in the 2010-2018 period coded as using a survey of some sort. The ANES largely maintains its prime position as well. On the other hand, the growing use of experimental data we documented in Figure 4 has continued apace, with 24% of articles using an experiment and 15% of them using an experiment in combination with a survey. 16 However, while experimental research now occupies a much more sizable fraction of the behavior universe, survey use still predominates.
These results raise the question of why the shift toward experiments became most evident after 2009. As explained, there certainly were some notable institutional developments concerning experiments post-2009; however, experiments were ostensibly “mainstream” by the 1990s and early 2000s (e.g., Gerber & Green, 2000; Kinder & Palfrey, 1993). We suspect the “slowness” reflects a path dependency in the published literature: The publication process likely favors extant approaches that are familiar to editors and reviewers. If one were to explore the “gray literature” of unpublished works (e.g., Fanelli, Costas, & Ioannidis, 2017), the picture—time-wise—may be quite distinct. Given that path dependency, the shift we document is notable. At the same time, the ANES’s stability is also impressive: even in the midst of major disciplinary changes, the ANES still accounted for nearly one quarter of all political behavior work. The ANES continually balances over-time (time-series) continuity with innovative science, and there is no indication that it has failed to strike this balance so far. On a related note, the continued dominance of the central concepts shows that the methodological changes in the discipline had little effect on the field’s substantive focus.
Concluding Discussion
In this article, we have discussed the results of a novel content analysis of over 1,100 published articles concerning American mass political behavior. These analyses suggest at least four key takeaways. First, the agenda of this literature during the time frame investigated is heavily skewed toward voting, which may not be all that surprising given that two of the landmark books in this broader literature are titled
Survey methods, and the ANES in particular, constitute the lion’s share of data for political behavior researchers during the time frame explored here. And, while experimental methods constitute a growing share of the behavior literature, this growth appears to be relatively slow, thereby suggesting a continued role for the ANES and other surveys in guiding research on American political behavior. To be clear, while we used the ANES for guidance in constructing our coding scheme, it was not the only source of the scheme and by no means did it intersect with our selection of articles to code. In other words, our results were not bound to find a central place for the ANES, either in its presence in behavior research or in its underlying prominence in concept determination. Even with the rise of alternative methods and data collection opportunities since the mid- to late 1990s, the ANES still dominates. It is a sound investment: It is the most used source of data, focuses on the concepts central to the field, and provides unparalleled access to over-time dynamics.
Moreover, we believe our analyses suggest that the ANES will continue to play a central role in guiding research on American political behavior even with the growing movement to use what Groves (2011) refers to as “organic data”—behavioral measures such as Google search patterns, Twitter feeds, and other digital residues of politically relevant activities. As Groves (2011) notes, “data streams have no meaning until they are used” and, instead, “the user finds meaning by bringing questions to the data and finding answers in the data” (p. 868). The stability of concepts in this literature suggests that the ANES will continue to be the guiding intellectual standard in the questions that are asked. In addition, we believe the ANES will remain central to political behavior research because it has no real competitor in terms of the availability of over-time repeated
Although our content analysis was quite fine-grained in its focus and incorporated a large number of articles, it still possesses some important limitations that future work could address. One limitation concerns the time frame explored. Although we have a sampling of articles from 2010 to 2018, this sample is obviously much smaller than what we possess for the 1980-2009 time frame, which necessarily prevents us from making overly confident assertions about whether the trends we observed here also characterize the development of the behavior literature post-2009. We observed a growing focus on experimental data sources in our data, albeit one that did not shake the continued dominance of survey data in the behavior literature. It is likely that the gap between survey and experimental methods will continue to narrow in the coming decade due to an increased focus on field experimental work (e.g., Broockman & Butler, 2017; Gerber, Huber, & Washington, 2010; Panagopoulos, 2010) and the adoption of crowd-sourced data platforms such as Mechanical Turk as a cost-effective method for fielding survey experiments (e.g., Arceneaux, 2012; Dowling & Wichowsky, 2015; Robison, 2017). However, even as this gap narrows, survey use is likely to remain more popular in the field given the very high base rate for the use of survey methods.
A second potential limitation concerns our coding list for concepts and the journals from which we sampled. In constructing our coding scheme, we attempted to construct as broad a list of concepts and issues as possible. However, no coding list is ever complete or perfect. In addition, we strove to capture the behavior literature as revealed from a wide array of journal sources, but in doing so, we necessarily had to omit attention to important sources of behavior research, such as
A third limitation is our focus on
Finally, our analyses are clearly limited by their geographic focus. As articles focusing on political behavior outside of the United States were excluded from analysis, we are unable to speak to any potential differences in content or methodological focus across geographic contexts. One obvious likely difference is in the use of the ANES, although it is possible that a similar exercise would reveal the World Values Survey or some analogous survey serving a similar role. Ultimately, we view this study as one that could readily be applied to non-U.S. data and, thanks to recent advances in crowd-sourced text analysis (Benoit, Conway, Lauderdale, Laver, & Mikhaylov, 2016), one that is quite feasible to undertake. Overall, though, our results reveal a stable methodological and conceptual field that relies on surveys and focuses on voting. We leave it to others to assess the substantive advances made within the confines of these topics and methods, and the desirability of such stability. But we do conclude that, by all accounts, the investment in the ANES has paid off handsomely: it not only provides the central data source, over time, but is also foundational in terms of the concepts studied by political behavior researchers.
Supplemental Material
Supplemental material, SGO794769_Online_Appendix_CLN, for “An Audit of Political Behavior Research” by Joshua Robison, Randy T. Stevenson, James N. Druckman, Simon Jackman, Jonathan N. Katz, and Lynn Vavreck in SAGE Open.