Abstract
Recent studies have shown that political campaigns, particularly in Latin America, distribute a wide variety of electoral gifts (Gonzalez-Ocantos et al., 2012; Kiewiet de Jonge, 2015; Nichter and Palmer-Rubin, 2015). This literature has been particularly attentive to survey methodologies for measuring vote buying, since these are subject to significant levels of measurement error. Following recent contributions (Gonzalez-Ocantos et al., 2012; Kiewiet de Jonge, 2015), this research conducted list experiments to estimate the percentage of voters who received gifts during the 2015 legislative campaign and the 2015 and 2017 subnational campaigns in Mexico. The findings indicate that vote-buying studies should be cautious when their conclusions rely on this technique. Consistent with past studies on sensitive survey techniques (Böckenholt and van der Heijden, 2007), our results show that more politically sophisticated respondents with higher levels of education are more likely to follow the rationale of the list experiment. This suggests that previous studies relying on list experiments tend to underestimate the percentage of voters who receive electoral gifts, since this technique tends to work better among the most educated respondents, who are, in fact, least likely to be targeted by clientelistic strategies (Calvo and Murillo, 2004; Weitz-Shapiro, 2012). This study also analyses the advisability of including the phrase “in exchange for your vote” in the item aiming to measure vote buying. The results show that including that phrase does not substantially affect voters’ responses.
The Mexican party system is an ideal case for analysing sensitive survey techniques that estimate vote buying because Mexican parties have strong organisations for distributing gifts during campaigns (Langston, 2017; Magaloni, 2006). Despite the expectation that programmatic linkages between parties and voters would strengthen after Mexico’s transition to democracy in 2000 (De la O, 2015), clientelism has persisted as a campaign strategy. The once-hegemonic party, the Institutional Revolutionary Party (PRI), continues to rely on the machine politics built during its decades in power (Greene, 2007; Magaloni, 2006), and other parties like the National Action Party (PAN) and the Party of the Democratic Revolution (PRD) increasingly engage in clientelistic practices (Nichter and Palmer-Rubin, 2015). However, different studies suggest considerable variation in the extent of clientelism. While some studies find that one-fifth of voters receive electoral gifts during campaigns (Kiewiet de Jonge, 2015; Lawson et al., 2013), other studies suggest that up to half of the electorate receives gifts from parties and candidates (2015 National Electoral Study, CSES; Beltran and Castro Cornejo, 2019).
Sensitive Behaviour and List Experiments
Sensitive survey techniques (SSTs) are frequently used in the social sciences to study sensitive behaviour such as corruption, vote buying, tax evasion, belief in God, the prevalence of xenophobia and anti-Semitism, sexual violence, and other topics that can be socially sensitive. The rationale is straightforward: since direct questioning is likely to underestimate sensitive behaviour, SSTs try to protect survey respondents’ privacy and anonymity in order to reduce social desirability bias – the tendency of respondents to present themselves in a favourable way to interviewers by underreporting undesirable attitudes or behaviours (DeMaio, 1984; Nadeau and Niemi, 1995). Sensitive survey techniques include indirect ways of measuring sensitive behaviours such as the randomised response (RR) technique – which introduces a randomising device like a spinner or a die (Gingerich, 2010; Krumpal, 2012; Warner, 1965); the crosswise model – which relies on an indicator of membership in a non-sensitive group (Gingerich et al., 2016; Tan et al., 2009; Yu et al., 2008); and the item count technique, better known as the list experiment (Blair and Imai, 2012; Blair et al., 2014; Gilens et al., 1998; Gonzalez-Ocantos et al., 2012; Kuklinski et al., 1997; Miller, 1984; among others).
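To make the RR logic concrete, the following is a minimal sketch (our illustration, not code from any of the cited studies) of the Warner (1965) estimator: if the randomising device poses the sensitive question with probability p and its complement otherwise, the trait’s prevalence can be recovered from the observed rate of “yes” answers.

```python
def warner_estimate(yes_rate: float, p: float) -> float:
    """Recover the prevalence of a sensitive trait from the observed
    'yes' rate under Warner's randomised response design, where the
    device poses the sensitive question with probability p."""
    if p == 0.5:
        raise ValueError("p = 0.5 makes the design uninformative")
    return (yes_rate - (1 - p)) / (2 * p - 1)

# With p = 0.7 and 44 percent observed 'yes' answers, the implied
# prevalence is (0.44 - 0.30) / 0.40 = 0.35.
print(warner_estimate(0.44, 0.7))
```

The privacy protection comes at a price: dividing by (2p − 1) inflates the estimator’s variance as p approaches 0.5, one reason RR designs can be demanding to administer.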
While SSTs are widely used in the social sciences, they also have several drawbacks that recent studies have noted. The first is methodological (Gingerich et al., 2016): list experiments require splitting the sample into two groups (treatment and control), which increases the survey’s burden and significantly reduces the sample size available for subsequent analysis. A second drawback relates to the unexpected results that list experiments produce in many cases. 1 List experiments can yield estimates that contradict direct questioning or even imply a negative prevalence of the sensitive behaviour (e.g. when measuring voter turnout; Holbrook and Krosnick, 2010; Coutts and Jann, 2011). These results are particularly problematic if we assume that the estimates provided by sensitive survey techniques are closer to the true “value” of the studied sensitive behaviour than those produced by direct questioning.
And third, recent studies suggest that the complexity of the method (and the taxing cognitive process required) can make RR difficult to use with populations with lower levels of education (Böckenholt and van der Heijden, 2007). The results of these studies are particularly relevant for this research. As we proceed to explain, we argue that list experiments face a similar problem: our results suggest that this SST seems to perform better among respondents with higher levels of education. This is problematic for the focus of our study – estimating the percentage of voters who receive electoral gifts – since voters with higher levels of education are typically not the target of clientelistic strategies (Calvo and Murillo, 2004; Weitz-Shapiro, 2012).
List Experiments and Clientelism in Latin America
The relationship between citizens and politicians entails a wide range of exchanges of goods and services, including programmatic and non-programmatic distributive policies (Stokes, 2005). This study focuses on the transaction of political favours in which politicians offer material incentives to citizens in exchange for the latter’s vote — a specific form of clientelism (Brusco et al., 2004; Gans-Morse et al., 2014; Schedler, 2004; Stokes, 2007). Recent studies have focused on the puzzling observation that although qualitative studies find vote buying to be widespread in many Latin American countries (Auyero, 2000; Stokes et al., 2013; Szwarcberg, 2015; Zarazaga, 2014, among others), quantitative studies relying on surveys have found little evidence of this practice. To understand such a discrepancy, the literature has highlighted that since vote buying constitutes a sensitive behaviour and the interviewer might find receiving an electoral gift to be reprehensible, respondents may choose to hide their behaviour. This is why recent vote-buying studies have relied on indirect, experimental strategies, such as list experiments, which seek to reduce social desirability bias (Gonzalez-Ocantos et al., 2012; Kiewiet de Jonge, 2015, among others).
The list experiment follows this logic. The survey sample is split into a treatment and a control group. The interviewer shows the respondent a card that lists several campaign activities and reads a statement like the following: “I’m going to hand you a card that mentions various activities, and I would like for you to tell me if candidates or political parties carried them out during the last electoral campaign.” The card differs between groups in the number of activities: while the treatment group’s card includes all of the activities, the control group’s card excludes the activity related to vote buying (e.g. “they gave you a gift”; Table 1). The respondents are expected to read each activity carefully. The interviewer then asks how many activities candidates carried out during the last campaign. To reduce social desirability bias, the interviewer asks how many activities, instead of which activities, were carried out by candidates: “Please, do not tell me which ones, only how many.”
List Experiment (Example: Mexico 2015).
According to past studies, respondents intuitively understand that by providing only the number (and not the activities), social desirability pressures decrease, giving them fewer incentives to underreport receiving electoral gifts. Since respondents are randomly assigned to the treatment and control groups, the two groups are, in expectation, identical in terms of both observable and unobservable characteristics. Table 2 reports the proportion of respondents who received electoral gifts as estimated by the list experiments in the most recent vote-buying studies in Latin America. The table also reports the difference between the percentages of voters receiving electoral gifts estimated by the list experiment and by the direct question; this difference is expected to be positive since list experiments are designed to reduce social desirability bias (so respondents have more incentives to report such behaviour than with direct questioning).
Direct Questions and List Experiments in Recent Studies.
aIn 2017, we conducted a survey to estimate vote buying in the State of Coahuila.
bThe study measured vote buying in the states that celebrated gubernatorial elections: Baja California Sur, Sonora, Campeche, Colima, Michoacán, Nuevo León, Querétaro, San Luis Potosí, and Guerrero.
cIn the 2012 presidential election, we included three list experiments (one list experiment for each major candidate/party).
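The estimation logic behind the “difference” column in Table 2 can be sketched as follows; the item counts below are hypothetical illustrations, not survey data.

```python
from math import sqrt
from statistics import mean, variance

def list_experiment_estimate(treated_counts, control_counts):
    """Standard list-experiment estimator: the difference in mean item
    counts between treatment (J + 1 items, including the sensitive one)
    and control (J items) groups estimates the prevalence of the
    sensitive behaviour; a conventional standard error accompanies it."""
    diff = mean(treated_counts) - mean(control_counts)
    se = sqrt(variance(treated_counts) / len(treated_counts)
              + variance(control_counts) / len(control_counts))
    return diff, se

treated = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2]  # card includes the gift item
control = [2, 2, 1, 3, 2, 2, 1, 1, 2, 2]  # card omits the gift item
est, se = list_experiment_estimate(treated, control)
print(f"estimated prevalence: {est:.2f} (SE {se:.2f})")
```

Because the estimate is a difference of noisy means, its standard error is large relative to a direct question with the same sample size, which is why list experiments demand large samples.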
The vote-buying literature tends to assume that, in addition to protecting privacy and anonymity, list experiments yield more reliable estimates because they often report a higher prevalence of vote buying than does direct questioning. This is the case with Gonzalez-Ocantos et al. (2012), a widely influential study in Latin American politics that pioneered list experiments to estimate vote buying, as well as with studies like the 2012 Mexico Panel Survey (Lawson et al., 2013), in which the list experiment estimates a higher prevalence of vote buying than direct questioning. However, as the recent literature discusses, this assumption is likely to lead to a file-drawer problem (Simpser, 2020), since list experiments that perform “well” are more likely to be published. As Gelman (2014) reported in a popular blog post, several scholars have shared experiences about the “strange results” that their list experiments produced, ultimately preventing them from writing up the results.
Table 2 also reports results from Kiewiet de Jonge (2015), who conducted list experiments in nine Latin American countries. While most of the list experiments perform as expected, in three of the nine countries (Bolivia, Chile, and Uruguay), the direct questions estimate a higher percentage of voters receiving electoral gifts than do the list experiments (Kiewiet de Jonge, 2015). 2 This is also the case in each of the face-to-face surveys that we conducted in Mexico: the difference between the two estimates is negative and substantial. These results are robust even when the analysis excludes campaign merchandise (e.g. glasses, t-shirts, pencils, etc.), which is less likely to constitute vote buying. 3 Even when we consider this subset of electoral gifts (in parentheses in Table 2), the direct question estimates a larger percentage than the list experiment. While these results are not uncommon (Coutts and Jann, 2011; Holbrook and Krosnick, 2010), they run contrary to the expectation that list experiments are better designed to elicit truthful answers.
This study argues that—similar to RR drawbacks reported by past studies (Böckenholt and van der Heijden, 2007)—the taxing process entailed by the list experiment makes the survey technique difficult to use with populations with lower levels of education. As we mentioned earlier, in the list experiment, respondents are expected to read a list of activities from a card and are then asked how many of these activities they have engaged in. During this process, as Simpser (2020) suggests, enumerators or respondents can become confused, leading to poor administration or nonsensical results (e.g. negative estimates of the prevalence of vote buying). This would be particularly relevant in contexts in which levels of education and literacy are low, as is the case in Latin America, where list experiments can be particularly challenging to conduct.
Moreover, these difficulties can be further exacerbated by the survey instrument, since most vote-buying studies tend to be embedded in long omnibus surveys—typically used in academic research—that contain comprehensive modules beyond electoral behaviour, including broader topics that survey research firms incorporate for several clients (multiple clients share the cost of conducting the survey). Vote-buying studies also rely on national electoral surveys that are not confined to the study of clientelism, but inquire about broader topics such as democracy, political parties, ideology, etcetera. These high cognitive demands are particularly relevant in this literature because they may cause greater measurement problems among poor and less educated respondents, who typically are the target of machine politics (Calvo and Murillo, 2004; Weitz-Shapiro, 2012). This sophistication bias would lead researchers to underestimate the proportion of voters who receive electoral gifts during campaigns.
In the next section of this article, we analyse the results of the 2015 National Electoral Study (Mexico), which is part of the Comparative Study of Electoral Systems (CSES). This postelectoral survey was conducted after the midterm election in Mexico and provides a unique opportunity to estimate the percentage of voters who receive electoral gifts across socioeconomic groups, since it has a very large sample size.
It is important to mention that the difference between the list experiment and direct questioning (the “difference” column in Table 2) also relates to the high level of self-reported vote buying we obtain, which relies on question wording 4 different from that used by Lawson et al. (2013), Kiewiet de Jonge (2015), and Gonzalez-Ocantos et al. (2012), among others. This group of studies relies on a single-filter question strategy, which, as previous survey methods studies have found, can decrease the proportion of respondents who are eligible for follow-up questions 5 (e.g. “Did you receive a gift or favour from a party or candidate?” YES/NO). In this type of question, if the respondent answers “yes,” the interviewer asks a follow-up question inquiring what electoral gift (or gifts) the respondent received and from which political party. If the respondent replies “no” to the filter question, the interview moves on to another topic. Instead of relying on a single filter question, we include independent questions asking whether respondents received electoral gifts from each political party competing in the election (complete question wordings in Table A2 in the Online Appendix). To make our results comparable with other survey projects, we count respondents as having received an electoral gift if they answer at least one of the three questions affirmatively.
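The coding rule just described can be sketched as follows; the party keys and variable names are illustrative, not taken from the survey codebook.

```python
PARTIES = ("pri", "pan", "prd")  # illustrative party keys

def received_gift(answers: dict) -> bool:
    """True if the respondent reports receiving a gift from at least one
    party, mirroring the independent-questions coding described above
    (as opposed to a single filter question)."""
    return any(answers.get(party) == "yes" for party in PARTIES)

print(received_gift({"pri": "no", "pan": "yes", "prd": "no"}))  # True
print(received_gift({"pri": "no", "pan": "no", "prd": "no"}))   # False
```

Asking one question per party gives each respondent several chances to report a gift, which is why this coding tends to yield higher self-reported vote buying than a single filter question.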
While other studies systematically analyse question-wording variations (Castro Cornejo and Beltrán, 2020), in the following paragraphs we focus on two important characteristics of list experiments: (1) whether, methodologically, they work as expected (e.g. providing a larger vote-buying estimate than direct questioning), particularly among less sophisticated respondents; and (2) the advisability of including the phrase “in exchange for your vote” in the item aiming to measure vote buying, which can make voters less likely to report receiving electoral gifts.
List Experiments and Levels of Education
Mexico’s 2015 Midterm Election (CSES)
While we cannot directly evaluate respondents’ survey-taking behaviour in order to determine whether respondents are paying careful attention to the enumerator, 6 Table 3 presents the percentage of voters who received electoral gifts across socio-demographic groups. We include this to analyse whether the list experiment performs better across some subgroups than others. As mentioned before, the 2015 National Electoral Study (CSES) represents a unique opportunity to conduct an analysis across socioeconomic groups due to its large sample size. It is important to mention that, following other survey designs (e.g. Gonzalez-Ocantos et al., 2012; Lawson et al., 2013), the list experiment was included before the direct questions aimed at measuring the proportion of voters who receive electoral gifts, 7 in order to avoid potential order effects driving our results.
List Experiment Across Socioeconomic Variables (Mexico 2015: CSES National Electoral Study).
As Table 3 shows, the list experiment does not seem to provide an estimate larger than that of the direct questions (Table 1 reports the items/activities in the list experiment). Second, it is noticeable that, according to the list experiment, voters with a college education were the most likely to receive electoral gifts during the 2015 midterm election in Mexico. We find the same pattern among voters with higher incomes (although non-response is very high: 41 percent did not answer the income question). It is important to highlight that the direct questions do not establish such a relationship (see Figure 1 and Table A4 in the Online Appendix for complete logistic models), suggesting that the sophistication bias found in the list experiment is driven by the technique. As Figure 1 shows (left side), 8 direct questioning finds that less educated respondents are more likely to receive electoral gifts than the college educated.

Electoral Gifts Across Levels of Education.
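The subgroup comparison underlying Table 3 amounts to computing the list-experiment difference-in-means separately within each education category. A minimal sketch with hypothetical data:

```python
from collections import defaultdict
from statistics import mean

def subgroup_estimates(rows):
    """rows: iterable of (education, in_treatment, item_count) tuples.
    Returns the list-experiment difference-in-means within each
    education group."""
    groups = defaultdict(lambda: {True: [], False: []})
    for education, in_treatment, item_count in rows:
        groups[education][in_treatment].append(item_count)
    return {edu: mean(g[True]) - mean(g[False]) for edu, g in groups.items()}

rows = [  # hypothetical respondents
    ("primary", True, 2), ("primary", True, 2),
    ("primary", False, 2), ("primary", False, 2),
    ("college", True, 3), ("college", True, 2),
    ("college", False, 2), ("college", False, 1),
]
print(subgroup_estimates(rows))
```

A pattern like the hypothetical one here, where the estimate is near zero for less educated respondents but positive for the college educated, is the signature of the sophistication bias discussed in the text.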
2017 Gubernatorial Election in Coahuila
We replicated the analysis to make sure that our findings did not rely on a single election year. We conducted an original survey experiment during the 2017 gubernatorial election in the state of Coahuila, Mexico.
List Experiment (Mexico 2017 Gubernatorial Election).
As we show in Table 5, we reached the same results as in the 2015 CSES postelectoral survey: the vote-buying estimate provided by the list experiment was higher among respondents with higher levels of education.
List Experiment Across Socioeconomic Variables (Mexico 2017). Treatments 1 and 2 Are Merged into a Single Category.
aDue to the small sample, it was impossible to report separate results for respondents with a college degree, since only 8 percent of the sample have a college education (elementary school: 27 percent; middle school: 31 percent; high school: 34 percent; college: 8 percent). For that reason, college-educated respondents were included in the “High School and more” category.
In fact, these results are not uncommon. As mentioned before, some studies have found that list experiments contradict direct questioning or estimate a negative prevalence of the sensitive behaviour (Coutts and Jann, 2011; Holbrook and Krosnick, 2010) or that SSTs — the RR in particular — are difficult to use with populations with lower levels of education (Böckenholt and van der Heijden, 2007). Moreover, other vote-buying studies have found similar results. While their list experiment produced a larger vote-buying estimate than direct questioning, Gonzalez-Ocantos et al. (2012) report that the vote-buying estimate in their list experiment is higher for more educated voters in Nicaragua (no education = 16.1 percent; primary = 20.9 percent; secondary = 26.7 percent; and university = 37.4 percent).
Given the evidence reported in this article and the fact that clientelistic strategies tend to target the poor (Calvo and Murillo, 2004; Weitz-Shapiro, 2012), one implication of our results is that list experiments may underestimate vote buying since they work better among those voters who are least likely to be targeted by vote-buying campaign strategies. This means that in contexts in which levels of education and literacy are low, relying on list experiments to estimate vote buying might be subject to alternative sources of measurement error in addition to social desirability bias: respondents’ levels of sophistication and survey-taking behaviour.
List Experiments and the Phrase “In Exchange for Your Vote”
2017 Gubernatorial Election in Coahuila
A second point that varies across vote-buying studies relates to the inclusion of the phrase “in exchange for your vote” in the item seeking to measure vote buying in the list experiment (most vote-buying projects include such a phrase in direct questions; see Castro Cornejo and Beltrán, 2020). 12 This survey strategy — followed by Lawson et al. (2013) — seeks to get respondents to distinguish between gifts aimed at buying votes and those less likely to constitute a clientelistic exchange. However, an important question is whether respondents actually differentiate between clientelistic and non-clientelistic electoral gifts (e.g. campaign merchandise).
Table 6 shows that including the phrase “in exchange for your vote” does not substantially affect voters’ responses (the six percentage point difference is not statistically significant).
Question-Wording Effect (List Experiment).
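The comparison reported in Table 6 can be approximated with a simple z-test on the difference between two independent list-experiment estimates; the estimates and standard errors below are hypothetical placeholders, not the article’s figures.

```python
from math import erf, sqrt

def wording_effect_test(est_a, se_a, est_b, se_b):
    """z statistic and two-sided p-value (normal approximation) for the
    difference between two independent list-experiment estimates, e.g.
    with and without the phrase 'in exchange for your vote'."""
    z = (est_a - est_b) / sqrt(se_a ** 2 + se_b ** 2)
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))  # standard normal CDF
    return z, 2 * (1 - phi)

# Hypothetical: 20% with the phrase vs 14% without, both with SE = 5 points.
z, p = wording_effect_test(0.20, 0.05, 0.14, 0.05)
print(f"z = {z:.2f}, p = {p:.2f}")
```

With standard errors of this magnitude, a six-point gap sits well within sampling noise, which is consistent with the null result reported above.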
Discussion
This contribution seeks to advance the study of SSTs. Overall, the findings in this article suggest that while list experiments aim to reduce social desirability bias, they might be subject to significant levels of measurement error driven by additional factors: voters’ levels of education and survey-taking behaviour. Based on data from surveys conducted in Mexico across different elections, our findings suggest that list experiments work better among voters with higher levels of education.
How this result generalises to the rest of the region remains an open question; this study encourages further replication in additional Latin American countries. However, our results are consistent with recent literature highlighting that SSTs are difficult to use with populations with lower levels of education. Given the general levels of education in the region, the results of this article suggest that vote-buying studies that rely solely on list experiments should be cautious about their findings. We hope that future studies replicate this analysis, particularly with large sample sizes, to provide additional evidence about how well list experiments work across different contexts and socioeconomic groups. Still, the logic of our findings is sufficiently compelling that it would be surprising if voters’ levels of education played no role when list experiments are conducted elsewhere.
Supplemental Material
Supplemental material, Supplementary Material 1, for List Experiments, Political Sophistication, and Vote Buying: Experimental Evidence from Mexico by Rodrigo Castro Cornejo and Ulises Beltrán in Journal of Politics in Latin America.