Abstract
Keywords
1. Introduction
Recently, Wingen et al. (2020) argued that “in the current political climate [. . .] the credibility of scientific evidence is questioned and science is threatened by defunding” (p. 461). Such developments have been fueled, among other things, by unsuccessful attempts to replicate key findings (e.g. in psychology; Open Science Collaboration, 2015) and by scientists adopting questionable research practices (QRPs; Anvari and Lakens, 2018). To counterbalance these issues, many scientific fields have seen a shift toward open science (Chambers and Etchells, 2018; Lewandowsky and Bishop, 2016; Vazire, 2018; Wallach et al., 2018), generally defined as “the practice of science in such a way that others can collaborate and contribute, where research data, lab notes and other research processes are freely available, under terms that enable reuse, redistribution and reproduction of the research and its underlying data and methods” (Bezjak et al., 2018: 9).
Despite increasing adherence to open science practices (OSPs) in the scientific community, little is known about the public’s expectations of such practices and their effects on the perceived trustworthiness of science. This is striking because knowledge on whether OSPs lead to increased trust has considerable implications, both for the research process itself and with regard to the communication of scientific findings. For example, recent research on COVID-19 has shown that individuals who trust in science as an authority for justifying knowledge claims are less likely to believe in COVID-19-related conspiracy theories (Beck et al., 2020), are more likely to engage in protective behaviors (Soveri et al., 2021), and exhibit a higher vaccination willingness (Rosman et al., 2021). Considering such positive effects, it becomes extremely important to examine the predictors of epistemic trust—and OSPs are an especially promising candidate in this regard since they directly relate to the (quality of the) research process itself. For these reasons, this article addresses the following research questions: To what extent does the public value OSPs? Is it possible to increase the public’s trust in science through such practices? Do OSPs buffer reductions in trust evoked by commercially funded research? Addressing these questions, the present investigation attempts to map out the general public’s assumptions and values concerning open and transparent scientific practices, with a special focus on whether such practices contribute to increasing trust in science. We thereby focus on the German general population, but expect that our analyses are generalizable to other populations as well because our study materials are largely independent of any national context.
This work is divided into two parts. In the first study, we ask respondents about their opinions toward science and the scientific process using an online survey format, thus mapping out the public’s general expectations toward OSPs and their potential trust-enhancing effects. In the second study, we take a more bias-controlled approach to replicate and extend these research questions: Drawing on a vignette-based experimental design, we investigate whether certain scientific practices (i.e. OSPs and public vs private funding) affect individual trust in and opinions toward science.
2. Background
OSPs and trust in science
We suggest two central mechanisms by which OSPs may influence trust in science. First, on the level of science itself, research quality may increase if scientists adopt OSPs on a larger scale: OSPs restrict scientists’ degrees of freedom, thereby effectively reducing malpractices such as HARKing (hypothesizing after the results are known; Kerr, 1998) or
As a second (and more direct) mechanism by which OSPs influence public trust in science, we suggest that visible OSPs may increase trust because recipients may perceive them as an indicator of quality. For example, transparency suggests that the person (or organization) in question has nothing to hide (Bachmann et al., 2015), acts responsibly (Medina and Rufín, 2015), and allows an independent verification of its claims (Lupia, 2017; Nosek and Lakens, 2014). Moreover, the public may simply
However, while such survey studies offer robust evidence due to their large and heterogeneous samples, they usually rely on self-reports and may thus be subject to biases such as social desirability. Hence, taking an additional look at experimental studies (which offer better bias control) is well advised. Unfortunately, as outlined in the next paragraph, the few experimental studies on the relationship between OSPs and trust in science have yielded rather inconclusive results.
In fact, of the four experimental studies that have (to our knowledge) been conducted up to now, only one has found evidence for beneficial effects of OSPs on trust. By confronting their participants with journal article title pages that did or did not include open science badges, Schneider et al. (2022) found that badges indicating adherence to OSPs positively influenced trust in scientists. However, two other studies have yielded inconclusive results, and one has even found evidence against OSPs: First, Field et al. (2018) had 209 academics from the field of psychology read descriptions of empirical studies in which they had experimentally manipulated whether the studies were preregistered or not. Subsequently, they assessed this manipulation’s effect on epistemic trust. However, due to a failed manipulation check, 86% of their data had to be excluded from their (Bayesian) analyses, resulting in inconclusive evidence for almost all research questions. In addition, it should be noted that this study examined an expert sample (i.e. academics) and not the general population. Second, a study by Wingen et al. (2020) assessed whether decreased trust in science elicited by informing participants about replication failures may be buffered by additionally informing them about proposed reforms (i.e. open science and increased transparency). However, their analyses yielded no significant results—hence informing participants about proposed reforms did not seem effective in repairing trust (Wingen et al., 2020). Third, Anvari and Lakens (2018) tested whether trust in psychological science would be impacted by informing participants about replication failures, QRPs, and proposed reforms. However, their results suggested “that learning about all three aspects of the replicability crisis (replication failures, criticisms of QRPs, and suggested reforms)
The present research
It is noteworthy that these four experimental studies all relate to the fields of psychology and/or educational research, whereas the survey studies presented earlier focus on science as a whole. Nevertheless, we are intrigued by the discrepancies between the results of the two types of studies. Hence, considering that empirical research on the effects of open science is still in its infancy and that many more corresponding studies are needed, the first part of this research has two objectives: First, we aim to replicate the survey results presented above (i.e. on the beneficial effects of OSPs on trust). To allow for a more fine-grained interpretation of our results, we will ask participants not only about their trust in science as a whole, but additionally focus on two specific academic fields—psychology and medicine. Second, to shed light on the inconclusive results in existing experimental research (see above), we combine this survey-based approach with a scenario-based experimental approach to determine whether OSPs indeed have causal effects on trust in science.
Furthermore, OSPs such as increasing research transparency are far from the only factor influencing perceptions of trust (Bachmann et al., 2015). For example, several studies have consistently found that the public’s trust in research funded by private organizations (e.g. commercial enterprises) is significantly lower compared to research funded by public institutions such as universities (Critchley, 2008; Krause et al., 2019; National Science Board, 2018; Pew Research Center, 2019). Interestingly, Critchley (2008) found, using a mediator analysis, that the increased trust in publicly funded research is partially due to the fact that public scientists are “more likely to deliver any benefits from the research to the public” (Critchley, 2008: 320). With the latter assumption receiving empirical support in a 2016 scientific author survey (Boselli and Galindo-Rueda, 2016), this offers an interesting connection to OSPs because it suggests that the decreases in trust elicited by privately funded research might be buffered by delivering more research benefits to the public—a central goal of the open science movement (Lyon, 2016; Munafò et al., 2017). In the second part of the present research, we will therefore, apart from trying to replicate the aforementioned effects regarding public versus private funding, investigate interactions between type of funding (public vs private) and the adoption of OSPs (yes vs no) on trust in science.
Preregistration and hypotheses
As stated above, the objective of the present research is twofold. First, we examine, using survey questions and an experimental approach, whether OSPs have a positive effect on public trust in science. Second, we experimentally investigate whether the trust-damaging effects of research being privately funded may be buffered by OSPs. In all hypotheses, we focus on the OSP of making materials, data, and analysis code openly accessible (as has been done before, e.g. by Schneider et al., 2022). In line with our deliberations and the literature presented above, we suggest the following hypotheses, of which the first three are based on survey questions (see Table 1) and the last three make use of a vignette-based experimental 3 (OSP vs no OSP vs OSP not mentioned) × 2 (public vs private) between-person design. All hypotheses and the corresponding study procedures, target sample sizes, manipulation checks, inference criteria, and analysis procedures were preregistered at PsychArchives (Rosman et al., 2020):
Survey questions (Study 1).
3. Study 1
Design and procedure
Study 1 aimed at testing Hypotheses 1–3 and was conducted as an online survey. It used a cross-sectional correlational design with no experimental factors. All study procedures were carried out in one session. After screening and assessing demographic questions, general covariates (e.g. trust in science) were measured. Subsequently, the survey questions were administered. All study materials, including the vignettes, measures, covariates, and manipulation checks, can be found in Rosman et al. (2022c). Inversely formulated items were recoded and scale means were calculated where appropriate.
Participants
Since Hypotheses 1–3 require no inferential testing, we preregistered, on pragmatic grounds, a target sample size of
Total sample size was
Measures
The survey questions relevant for testing Hypotheses 1–3 and for our exploratory analyses (see below) can be found in Table 1. Questions were administered in identical order to all participants.
Results
No major protocol deviations occurred, which is why all calculations were performed using the full dataset (
Confirmatory analyses
Hypotheses 1–3 were supported. Specifically, 87.2% of participants indicated that they view it as “rather important,” “important,” or “very important” that researchers make their findings openly accessible to the public, thus providing strong support for Hypothesis 1. Furthermore, 64.3% of participants indicated that it is “rather important,” “important,” or “very important” that researchers make their materials, data, and analysis code openly accessible, thus supporting Hypothesis 2. Regarding Hypothesis 3a, 74.0% of participants “rather agreed,” “agreed,” or “fully agreed” with the statement that their trust in a scientific study increased if they saw that researchers made their materials, data, and analysis code openly accessible. When asked the same question about a psychological study (Hypothesis 3b), the respective proportion was 68.7%, and when asked about a study from the medical domain (Hypothesis 3c), the proportion was 76.6%. Detailed analyses can be found in the online supplement in PsychArchives (Rosman et al., 2022a).
Exploratory analyses
To test whether our descriptive analyses on Hypotheses 1–3 hold up to inferential testing, we dichotomized the corresponding variables by assigning the value 1 to response options indicating a positive response (e.g. “rather agree,” “agree,” or “fully agree” for Hypothesis 1) and by coding all other substantive (i.e. excluding missing data) response options as 0. Subsequently, we carried out one-sample
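The name of the preregistered test statistic is cut off above. Purely as an illustration, a dichotomization of this kind followed by a one-sample t-test against a 50% baseline could be run as follows in Python; the 6-point coding, the 0.5 criterion, and the simulated responses are our assumptions for the sketch, not the authors' data or analysis code.

```python
import numpy as np
from scipy import stats

# Simulated 6-point responses (1 = fully disagree ... 6 = fully agree);
# illustrative only, not the study's data.
rng = np.random.default_rng(42)
responses = rng.integers(1, 7, size=500)

# Dichotomize: 1 for "rather agree" or higher, 0 for all other
# substantive (non-missing) response options
positive = (responses >= 4).astype(int)

# One-sample t-test of the dichotomized variable against a 0.5 baseline
t_stat, p_value = stats.ttest_1samp(positive, popmean=0.5)
print(f"share positive = {positive.mean():.3f}, p = {p_value:.4f}")
```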
As additional exploratory analyses, we took a closer look at the SQ4 and SQ5 variables (see Table 1) and found that a higher percentage of participants indicated that OSPs would increase trust in medicine compared to psychology (76.6% vs 68.7%). To test this for statistical significance, we conducted a paired-samples
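Again only as a sketch, a paired-samples t-test on the two dichotomized items could look like the following; the simulated 0/1 indicators are illustrative, whereas in the real analysis both values would come from the same respondent.

```python
import numpy as np
from scipy import stats

# Simulated 0/1 agreement indicators for the psychology (SQ4) and
# medicine (SQ5) items; probabilities mirror the reported percentages,
# but the data themselves are invented for illustration.
rng = np.random.default_rng(7)
sq4_psychology = rng.binomial(1, 0.687, size=500)
sq5_medicine = rng.binomial(1, 0.766, size=500)

# Paired-samples t-test across the two items (same respondents)
t_stat, p_value = stats.ttest_rel(sq5_medicine, sq4_psychology)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```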
Finally, we conducted a number of exploratory one-factorial analyses of variance (ANOVAs) to test whether participants’ responses to the survey questions (SQ1–SQ8, see Table 1) differed with regard to the seven age groups (see participants section), gender, and educational level. The analyses revealed no significant differences regarding age, but significant gender differences on the SQ3, SQ4, and SQ5 questions. More specifically, men were more likely to agree to the statement that their trust in a scientific study would increase if they saw scientists publicly sharing their study materials, datasets, and analysis code (SQ3;
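A one-factorial ANOVA of the kind described above can be sketched as follows; the group means and spreads are invented for illustration and do not reproduce the study's results.

```python
import numpy as np
from scipy import stats

# Simulated SQ3 agreement ratings by gender; means are assumptions
# chosen only to illustrate the analysis, not the observed values.
rng = np.random.default_rng(3)
men = rng.normal(4.4, 1.0, size=250)
women = rng.normal(4.1, 1.0, size=250)

# One-factorial ANOVA (with two groups, equivalent to an independent t-test)
f_stat, p_value = stats.f_oneway(men, women)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```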
4. Study 2
Design, procedure, and materials
Study 2 aimed to test Hypotheses 4–6 using a mixed experimental design. After participant screening and assessing demographic information, general covariates were measured (e.g. trust in science; see preregistration for details), directly followed by a reading task with four vignettes describing empirical studies from medicine and psychology. The experimental manipulation involved systematically varying specific aspects of these vignettes, resulting in a 2 × 3 × 4 mixed design with the between-factors “open science practices” and “type of funding” (see Table 2) and the within-factor “texts” (four texts covering the following topics: online training for gifted students; therapeutic method to treat fear of heights; medication to treat high blood pressure; method for early detection of muscle atrophy). With regard to the “open science practices” factor, the texts varied in that (1) the scientists (allegedly) employed OSPs by making their study materials, data, and analysis code publicly available (OSP condition), (2) the scientists did not employ such OSPs (no OSP condition), or (3) the texts did not mention whether OSPs were employed or not (OSP not mentioned condition). Regarding the funding factor, the texts varied in that the studies were portrayed as being funded (1) privately (e.g. by a commercial enterprise) or (2) publicly (e.g. by a university). A sample text including all experimental manipulations can be found in Figure 1. Furthermore, all study materials, including the vignettes, measures, covariates, and manipulation checks (in German), can be found in Rosman et al. (2022c).
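Schematically, the six between-person cells crossed with the four within-person texts can be enumerated as follows; the topic labels are English paraphrases from the design description, not the original German vignettes.

```python
from itertools import product

# Between-person factors and within-person texts, per the design above
osp_levels = ["OSP", "no OSP", "OSP not mentioned"]
funding_levels = ["public", "private"]
texts = [
    "online training for gifted students",
    "therapeutic method to treat fear of heights",
    "medication to treat high blood pressure",
    "method for early detection of muscle atrophy",
]

# Each participant is assigned to one of the six between-person cells
# and reads all four texts in that condition
cells = list(product(osp_levels, funding_levels))
design = {cell: texts for cell in cells}
print(f"{len(cells)} between-person cells x {len(texts)} within-person texts")
```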
Sample sizes and mean trust rating across experimental groups (Study 2).

Sample vignette from the reading task (Study 2).
Participants
Prior to data collection, sample size calculation was performed using G*Power 3.1 (Faul et al., 2009). Specifying a small-to-medium expected effect size of
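The exact effect size specification is truncated above. As an illustration of the same kind of a priori calculation in Python, the sketch below assumes Cohen's f = 0.15, α = .05, power = .80, and six between-person groups; these settings are our assumptions, not necessarily the preregistered values.

```python
from statsmodels.stats.power import FTestAnovaPower

# A priori power analysis for a between-person ANOVA with six cells
# (3 OSP levels x 2 funding levels); f = 0.15 is an assumed
# small-to-medium effect, not necessarily the preregistered value.
n_total = FTestAnovaPower().solve_power(
    effect_size=0.15,  # Cohen's f (assumed)
    alpha=0.05,
    power=0.80,
    k_groups=6,
)
print(f"required total N: {round(n_total)}")
```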
The sample was recruited by the same panel provider as in Study 1, who also ensured that only participants who had not participated in Study 1 were invited for Study 2. Eligibility criteria, informed consent, and all other procedures surrounding the data collection were identical to Study 1, and the study was also conducted in January 2021. In total,
Measures
The main outcome variable of Study 2 was trust in the respective study. As outlined above, this was measured separately for each of the four vignettes. After a short introductory text (“Now please give us a brief assessment of this study”), participants were asked to rate three statements (“This study is trustworthy”; “I immediately believe the result of this study”; and “I trust that this study was conducted correctly”; translation by the authors) on a 6-point scale ranging from 1 (
Manipulation checks
OSP factor
A forced-choice question was used to test whether the experimental between-person manipulation on OSPs was successful (“Do the researchers described in the texts make their study materials and their dataset and analysis code openly accessible?”; response options: “Yes, they do,” “No, they do not,” “The text does not make any statement on this”; see preregistration for the original German wordings). According to our preregistered criteria, the manipulation check was successful: As expected, (1) the frequency of responses indicating that the researchers employed OSPs was significantly higher in the OSP condition (
Funding factor
Another forced-choice question was used to test whether the experimental (between-person) manipulation on public versus private funding was successful (“Where were the four studies conducted?”; response options: “in a private company (e.g. a firm),” “at a public institution (e.g. a university)”; see preregistration for the original German wordings). As expected, the frequency of responses indicating that the research was funded privately was significantly higher in the private funding condition (
Results
No major protocol deviations occurred, which is why all calculations were performed using the full dataset (
Confirmatory analyses
As specified in our preregistration, we used the standard
Descriptive results and a visualization of results can be found in Table 2 and Figure 2. While testing Hypothesis 4, no significant effects of the between-factor

Visualization of Study 2 results (violin plots combined with notched box plots; McGill et al., 1978). The notches in the box plots indicate 95% confidence intervals of the median.
Exploratory analyses
To further investigate these results, we performed a number of exploratory analyses. First, the mixed ANOVA conducted to test Hypothesis 4 had revealed a significant interaction between the within-factor (text) and the between-factor OSPs (
As an additional exploratory analysis, we re-ran all confirmatory analyses after removing participants who had failed the respective manipulation checks. When re-testing Hypothesis 4 within the subsample of participants who had correctly indicated the OSP status of their texts (
5. Discussion
In two studies, we aimed to assess whether OSPs, such as making materials, data, and analysis code of a study openly accessible, positively affect trust in science. Furthermore, we investigated whether the trust-damaging effects of research being funded privately may be buffered by such practices. After preregistering six hypotheses, we conducted a survey study (Study 1) and an experimental study (Study 2), with around 500 participants each. Both samples approximately corresponded to the German general population regarding gender distribution and age.
Key findings
The online survey of Study 1 directly asked participants about their perceived importance of OSPs, and whether seeing scientists employ such practices would make them more trustworthy. An overwhelming majority of our sample found it important that researchers make their findings openly accessible (Hypothesis 1) and that they implement OSPs (Hypothesis 2). Furthermore, a large proportion of participants indicated that their trust in a scientific study would increase if they saw that researchers made their materials, data, and code openly accessible (Hypothesis 3). Correspondingly, all three hypotheses investigated in Study 1 were supported, thus corroborating our expectations that OSPs are a trust-increasing factor.
In contrast to these results, Study 2 findings were less straightforward. Using an experimental design, we administered four vignettes describing fictitious studies to our participants. We experimentally manipulated the vignettes with regard to (1) whether the scientists responsible for the study had employed OSPs (OSP vs no OSP vs OSP not mentioned) and (2) the funding of the respective study (public vs private). Although our manipulation checks were successful, our preregistered criteria were not fulfilled for Hypotheses 4 and 6. After eliminating all participants who had failed the manipulation checks in an exploratory analysis, we did, however, find significant differences between the group that read texts describing scientists who employed OSPs and the group whose texts explicitly mentioned that no OSPs were employed. Thus, while Hypothesis 4 is not supported, there are some indications in our data that the use of OSPs may increase trust—although it should also be noted that the corresponding effect sizes were rather small. With regard to Hypothesis 5, our analyses yielded evidence for effects of a study’s funding type on trust, such that publicly funded studies are trusted more than privately funded ones. This finding is also a precondition for testing our last confirmatory hypothesis, namely that the trust-damaging effects of research being privately funded may be buffered by OSPs (Hypothesis 6). However, this hypothesis was clearly not supported by our data.
Implications and future directions
When comparing the results from both studies, a contrast between the strong empirical support for our hypotheses in Study 1 and the small effect sizes in Study 2 becomes notable, especially given that the hypotheses are very similar on a conceptual level. A first potential explanation for this discrepancy is the rather high trust level in Study 2. While this is perfectly in line with prior research (e.g. Mede et al., 2021; Pew Research Center, 2019; Wissenschaft im Dialog/Kantar Emnid, 2018), it could mean that a ceiling effect (Wang et al., 2009) hindered us from finding the hypothesized effects. We further believe that the differences between Study 1 and Study 2 findings may be partly due to social desirability in Study 1, which was controlled for in Study 2 through its experimental design. Another possible explanation would be that it is easier to identify differences in research practices when directly asked about them (Study 1) than when they are embedded within a scenario (Study 2). In this regard, it should be noted that our second study’s design focused on written descriptions of empirical studies. Possibly, including more tangible indicators of adherence to OSPs—for example, visual quality seals such as open science badges—may lead to stronger effect sizes. In this regard, our results are in line with prior research on the effects of OSPs on trust: A comparison of the studies by Anvari and Lakens (2018), Schneider et al. (2022), and Wingen et al. (2020) reveals that more specific manipulations of the OSP “status” of a study (e.g. in terms of badges; Schneider et al., 2022) lead to stronger effects on trust than the more general approach of informing participants about proposed reforms in writing (e.g. Anvari and Lakens, 2018; Wingen et al., 2020).
Since the strength of our vignette-based manipulation is somewhere between these two, it comes as no surprise that we did find some indications in our data suggesting positive effects of OSPs on trust, but no overwhelming support for our expectations. Taken together, the findings from both studies thus imply that people may well recognize open science as a trust-increasing factor, especially when directly asked about it, but that other factors such as communication strategies may play a comparatively stronger role in the development of trust in science.
With regard to funding as a factor influencing trust, results were more straightforward. First, the majority of Study 1 participants indicated that their trust in a scientific study would increase if they saw that it was funded publicly. Furthermore, our Study 2 analyses revealed experimental evidence for higher trust in publicly funded studies in comparison to privately funded ones. However, it should also be pointed out that the effect sizes for the funding factor were rather small, which might be due to at least some people trusting the private sector
However, as outlined above, Hypothesis 6 was supported neither in confirmatory nor in additional exploratory analyses. We believe that these negative results may have been caused by the smaller-than-expected effects of OSPs on trust. In fact, simply telling people that scientists employ OSPs may not be enough to reduce a more general mistrust in privately funded research, thus underlining the need for additional efforts. For example, recent science communication frameworks emphasize the role of members of the general public as active participants in science communication, who bring their own perspectives to the interpretation of scientific findings (e.g. Akin and Scheufele, 2017; Dietz, 2013) or even participate in the production of scientific knowledge (citizen science; Silvertown, 2009). Combining increased transparency with such participatory approaches may thus be even more promising for increasing trust in science than transparency alone.
Considering the small effect sizes in our experimental setup, we suggest that future studies employing vignette-based manipulations recruit a larger number of participants to increase the likelihood of discovering small-to-moderate effects. In addition, should one decide to employ the same vignettes as in this study, some fine-tuning is advisable to reduce the number of participants failing the manipulation checks—or one might consider changing the analysis plan so that only participants passing the manipulation checks will be included in the analysis (although it should be noted that this may result in statistical challenges due to unequal group sizes). In addition, future research should investigate whether our results are indeed generalizable to samples other than the German population. Given that our vignettes are largely independent of any national context (e.g. no references to a specific study country are given, and the covered topics have an international scope), we expect our results to be generalizable. This is especially so considering that the aforementioned studies on trust in science yielded similar results regardless of whether they were carried out in a German or an international context (e.g. Ipsos MORI, 2011; Mede et al., 2021; Pew Research Center, 2019). Nevertheless, future research should try to substantiate this claim with empirical evidence.
Conclusion
In sum, our results suggest that OSPs may well contribute to increasing trust in science and scientists. However, the importance of making the use of such practices visible and tangible should not be underestimated. Furthermore, considering the rate of incorrect answers on our OSP manipulation check, people might need to be taught what open science really means before they can identify it. In addition, while we do recognize the potential role of OSPs in attenuating the negative effects of the replication and trust crisis, it should be noted that the public’s trust in science is generally high (e.g. Mede et al., 2021; Pew Research Center, 2019). Nevertheless, building trust is especially important in the small but significant proportion of the population that denies scientific evidence (e.g. Lazić and Žeželj, 2021), and a widespread adoption of open science principles may be a promising approach to do so. We, however, concede that before this can happen, considerable changes in science’s incentivizing structures are necessary, so that adopting open science principles is not only rewarded by public trust but also benefits the individual scientist.
