Abstract
Organizational research scholars are often interested in developing a thorough understanding of the processes that produce an effect, and thereby investigate the mechanisms through which one phenomenon exerts its influence on another. This is called a mediation analysis (Kenny, 2008). Mediation, in its simplest form, explains how or by what means an independent variable (X) affects a dependent variable (Y) through one or more intervening variables, or mediators (M).
Deviations from normality, such as skewness, heavy tails, and outliers, are common in data used for organizational research.
Researchers often resort to nonlinear transformations (NLTs) to deal with nonnormality. Nevertheless, NLTs not only induce interpretation, validity, and generalization problems, but also prevent discovery of substantive information about variables and their distributions (Becker et al., 2019). NLTs are criticized for fostering flawed hypothesis testing (i.e., a misalignment between the hypotheses and the tests; Becker et al., 2019) and for masking real relationships while revealing spurious ones (Cohen et al., 2003). Similarly, scholars employ various outlier treatment techniques against nonnormality. However, common practices regarding outliers in organizational research have been found to be vague, nontransparent, and even inconsistent in outlier definition, identification, and treatment (Aguinis et al., 2013). More importantly, the empirical literature often omits proper reporting of NLTs and outlier treatment (Aguinis et al., 2013; Becker et al., 2019), and such negligence threatens the foundation of empirically built organization theories.
Despite the importance of nonnormal distributions and outliers, so far no clear guidelines have been developed for mediation methods to deal with these issues properly. Existing literature often tackles these issues separately and does not address mediation analysis specifically (e.g., Aguinis et al., 2013; Aguinis et al., 2019; Becker et al., 2019; Gibbert et al., 2020). For mediation analysis, Zu and Yuan (2010) focus on outliers and propose procedures based on data cleaning, while Yuan and MacKinnon (2014) propose a procedure based on median regression and study various nonnormal error distributions. However, both methods can result in considerable bias and unreliable significance tests (cf. our simulations). While both studies stress the need for robust mediation methods, they are not optimal from a robustness point of view.
We introduce a novel procedure for mediation analysis, ROBMED, that is robust against deviations from normality including outliers, heavy tails, or skewness. ROBMED is an integrated set of procedures that builds upon the widely used bootstrap test for mediation (Bollen & Stine, 1990; MacKinnon et al., 2004; Preacher & Hayes, 2004, 2008; Shrout & Bolger, 2002). ROBMED utilizes the robust MM-regression estimator (Salibián-Barrera & Yohai, 2006; Yohai, 1987) instead of the ordinary least squares (OLS) estimator for regression, and runs bootstrap tests with the fast-and-robust bootstrap methodology (Salibián-Barrera & Van Aelst, 2008; Salibián-Barrera & Zamar, 2002). We illustrate the use of ROBMED in an empirical case where the data show deviations from normality and compare the results with state-of-the-art methods for mediation analysis. Our simulation studies, which cover a wide range of situations, suggest that ROBMED systematically outperforms other methods in estimating the true effect size and reliably assessing its significance. Furthermore, we discuss how ROBMED improves and integrates current best-practice recommendations for outliers and nonnormality, and we provide researchers with freely available software for ROBMED in R and SPSS. As such, our novel method serves as a useful and accessible tool for scholars who engage in mediation analysis.
Mediation Analysis
The simple mediation model can be formalized by the following equations:

Y = i1 + cX + e1,    (1)
M = i2 + aX + e2,    (2)
Y = i3 + c'X + bM + e3,    (3)

where X denotes the independent variable, M the mediator, and Y the dependent variable; i1, i2, i3 are intercepts and e1, e2, e3 are error terms. The coefficient a captures the effect of X on M, b the effect of M on Y given X, c the total effect of X on Y, and c' the direct effect of X on Y after accounting for M. The indirect effect of X on Y through M is given by the product ab.
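A minimal computational sketch of this estimation (using generic simulated data and plain OLS, not the paper's data or notation) may clarify how the indirect effect is obtained:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=n)
m = 0.4 * x + rng.normal(size=n)             # mediator model: M depends on X
y = 0.4 * m + 0.2 * x + rng.normal(size=n)   # outcome model: Y depends on M and X

def ols(columns, target):
    """OLS coefficients (intercept first) via least squares."""
    X = np.column_stack([np.ones(len(target))] + [np.asarray(c) for c in columns])
    return np.linalg.lstsq(X, target, rcond=None)[0]

a_hat = ols([x], m)[1]       # slope of X in the mediator regression
b_hat = ols([m, x], y)[1]    # partial slope of M in the outcome regression
indirect = a_hat * b_hat     # estimated indirect effect a*b
```

The true indirect effect in this simulation is 0.4 * 0.4 = 0.16, and the product estimator recovers a value in that vicinity.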
Deviations from model assumptions pose serious threats to mediation testing based on normal-theory MLE (i.e., OLS regression), as illustrated in Figure 1. The plot on the top left contains 100 simulated observations that follow the model assumptions, whereas the plot on the top right uses the same data except for one single outlier being added (indicated with an arrow). With the introduction of the outlier, the OLS estimate of the indirect effect ab changes drastically, and the conclusion of the mediation test may be reversed.

Figure 1. The effect of a single outlier on mediation analysis.
Numerous methods have been proposed to test the significance of the indirect effect in the literature (for reviews, see MacKinnon et al., 2004; Wood et al., 2008). A comprehensive review of these methods is beyond the scope of this study, yet we note that the bootstrap—a computer-intensive resampling technique first introduced by Efron (1979)—is found to be superior to other methods. Traditional tests for mediation often make incorrect assumptions, such as a normal distribution of the indirect effect. Since the bootstrap makes fewer assumptions, it is applicable in a wider variety of situations, especially when analytical formulas for the standard errors are not available. As such, the bootstrap provides generic ways to reliably construct confidence intervals for the indirect effect (MacKinnon et al., 2007; Preacher & Hayes, 2004, 2008).
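The percentile bootstrap for the indirect effect can be sketched as follows (simulated data; note that the paper's preferred interval is bias-corrected and accelerated, while this sketch uses the simpler percentile version):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
x = rng.normal(size=n)
m = 0.4 * x + rng.normal(size=n)
y = 0.4 * m + rng.normal(size=n)

def indirect_effect(x, m, y):
    """Product-of-coefficients estimate of the indirect effect via OLS."""
    Xa = np.column_stack([np.ones(len(x)), x])
    a = np.linalg.lstsq(Xa, m, rcond=None)[0][1]
    Xb = np.column_stack([np.ones(len(x)), m, x])
    b = np.linalg.lstsq(Xb, y, rcond=None)[0][1]
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)   # resample cases with replacement
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])   # 95% percentile interval
significant = not (lo <= 0.0 <= hi)         # reject H0 if 0 is outside the CI
```

The bootstrap distribution of the product ab is generally asymmetric, which is exactly why interval-based bootstrap tests outperform normal-theory tests here.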
While the bootstrap is a nonparametric technique and can therefore handle nonnormal distributions, it is sensitive to outliers. Outliers may be oversampled, which can corrupt the obtained bootstrap distribution of the estimator (Salibián-Barrera & Zamar, 2002). Thus, the size and significance of the indirect effect can be severely influenced and may lead to incorrect conclusions regarding the mediation relationships between the variables. Consequently, mediation analysis that is robust against both nonnormal distributions and outliers requires not only a robust estimator of the mediation model, but also a robust bootstrap procedure.
Treatment of Nonnormality in the Empirical Literature
In empirical research, two-step procedures are frequently used for the treatment of nonnormality. For general deviations in the observed distributions, researchers first transform the data before applying traditional statistical methods (Becker et al., 2019). For outliers, researchers first identify and remove them from the data, then apply traditional methods to the cleaned data set (Aguinis et al., 2013). However, these two-step procedures have several drawbacks.
Regarding transformations, Becker et al. (2019) report serious problems with NLT selection, reporting, interpretation, and justification: scholars usually adopt NLTs without (a) any justification of their use (more than 50% of the surveyed articles), (b) reporting the effects on results (more than 95%), and (c) alignment of hypotheses and tests in terms of the transformed variables (more than 90%). For instance, the log transformation, the most commonly used NLT, changes the scale of a variable in a way that the relationship between the transformed variable and the dependent variables implies diminishing returns on the original scale of the variable (Becker et al., 2019). Such changes in the scale due to NLTs may also mask real relationships while revealing spurious ones (Cohen et al., 2003). Ideally, the use of NLTs should be motivated by theory rather than the observed distributions of variables. For example, if a log transformation of income is used as the dependent variable, then the theory should justify a relative change in income based on the independent variables, rather than a change in fixed amounts. Researchers sometimes apply NLTs that are designed to remove skewness (e.g., the log transformation) with the intention to reduce the effect of outliers (Becker et al., 2019). However, when the main part of the data is already close to normal, this would introduce left skewness, thus actually making the data less normal.
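The last point can be checked numerically: applying a log transformation to data whose bulk is already close to normal introduces left skewness (an illustrative simulation, not part of the original analysis):

```python
import numpy as np

def skewness(v):
    """Sample skewness: third standardized central moment."""
    d = v - v.mean()
    return float((d**3).mean() / (d**2).mean() ** 1.5)

rng = np.random.default_rng(42)
# Data whose main part is close to normal (and positive, so the log is defined)
x = rng.normal(loc=20.0, scale=2.0, size=10_000)
skew_raw = skewness(x)           # close to 0: roughly symmetric
skew_log = skewness(np.log(x))   # negative: the log transform introduces left skewness
```

The concave log compresses the upper tail and stretches the lower tail, so a symmetric distribution becomes left-skewed after transformation.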
Regarding outliers, the empirical use of outlier treatment techniques is documented to be ambiguous, inconsistent, and often dismissed in manuscripts (Aguinis et al., 2013). In addition, when statistical methods are applied to the cleaned data, the resulting standard errors do not include the additional uncertainty from the initial data-cleaning step. For instance, Chen and Bien (2020) show that OLS regression after outlier removal results in confidence intervals that are much too small. In some of their simulations, the coverage of the true parameter is as low as 75%, as opposed to the nominal coverage of 95%. Consequently, the reported significance of effects after outlier removal can be overstated, increasing the risk of false positive findings.
In practice, it is often unclear if deviating observations are produced by skewness or a heavy tail in the distribution, or whether they are outliers. Different treatment methods (NLTs or outlier removal) may lead to very different results and conclusions. Therefore, NLTs and outlier removal can also be abused as dangerous post hoc practices to increase the chances of finding what the researcher wants to find (Becker et al., 2019; Cortina, 2002), which threatens the base of empirically tested theory (Bettis, 2012).
Robust Statistics
Statistical methods are traditionally designed to be as efficient as possible under a certain model and assume that all data points strictly follow this model. However, the corresponding models typically make quite strong assumptions about the data, which are often violated in empirical settings. When this is the case, such methods can give unreliable results that may yield incorrect conclusions. The field of robust statistics develops methods that still give reliable results when such assumptions are violated, while losing little efficiency when the assumptions do hold.
Modern robust methods typically aim for a gradual downweighting of deviating observations: rather than a binary decision to keep or discard each data point, every observation receives a weight between 0 and 1 that reflects its degree of deviation.
In the following, we discuss how various deviations from normality assumptions affect estimation, whether downweighting deviating observations is a suitable treatment, and how downweighting aligns with current best-practice recommendations.
Heavy Tails, Skewness, and Their Effect on Estimation
Empirically, skewness does not manifest itself much in the central part of the data but in the tails (Raymaekers & Rousseeuw, 2020). By definition, the same holds for heavy tails. Therefore, when one is interested in central tendencies such as regression coefficients, the main issue is that deviating observations due to skewness or heavy tails have disproportional influence on estimators that assume normality. A gradual downweighting of these influential observations decreases their disproportional influence, resulting in more reliable significance tests. In case of skewed distributions, a gradual downweighting of the longer tail may introduce bias, for instance when interpreting regression coefficients as changes in the mean of the dependent variable. Nevertheless, this bias is often small (cf. our simulations).
In a regression setting, the gradual downweighting of deviations can be based on the residuals, leaving the observed variables untouched. This solves estimation issues due to nonnormality while still allowing for interpretation on the original scale of the variables—unlike NLTs, where the transformed model parameters are difficult to interpret (see also Becker et al., 2019). In that regard, the robust approach of downweighting deviations is in line with the best-practice recommendation that researchers should consider other treatments instead of assuming NLTs are the best option (Becker et al., 2019).
Outliers and Their Effect on Estimation
An important concept in robust statistics is that of an outlier: a data point that deviates markedly from the pattern set by the majority of the data. Following the typology of Aguinis et al. (2013), outliers can be classified as error outliers, interesting outliers, or influential outliers.
Examples of error outliers are measurement or recording errors, or observations from a different population that is not of interest to the researcher. According to the best-practice recommendations of Aguinis et al. (2013), such outliers should be corrected if possible, or excluded from analysis. The latter is equivalent to assigning a weight of 0 for estimation. Examples of interesting outliers are a rare, extreme value that is part of the population of interest, or an observation from a subpopulation that is of interest to the researcher but that is otherwise not represented in the sample. Aguinis et al. (2013) strongly recommend studying those observations separately with appropriate qualitative or quantitative approaches.
Crucially, the last two examples above are also examples of influential outliers in a regression setting.
Consider, for example, a sample in which a single extreme observation is present: because OLS minimizes squared residuals, even that one observation can dominate the estimates, as the example in Figure 1 illustrates.
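The leverage effect of a single extreme point on the OLS slope can be seen in a small simulation (illustrative values only):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(scale=0.5, size=n)   # true slope: 0.5

def ols_slope(x, y):
    """Slope from a simple OLS regression with intercept."""
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

clean_slope = ols_slope(x, y)
# Add ONE high-leverage outlier: extreme in x, with a conflicting y value
x_out = np.append(x, 8.0)
y_out = np.append(y, -8.0)
outlier_slope = ols_slope(x_out, y_out)
```

A single point far out in the explanatory variable is enough to pull the slope from roughly 0.5 toward zero or even below, which is why such leverage points are considered the most harmful type of outlier in regression.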
Modern robust methods make it possible to detect outliers by reporting observations with weights close to 0. The researcher should then further investigate the type of each detected outlier. For error and influential outliers, the robust method has already applied the correct treatment by downweighting them. Interesting outliers should be studied further in detail. For instance, an interesting outlier that turns out to be an extreme observation can be studied with statistical tools from extreme value theory (e.g., de Haan & Ferreira, 2006). If an interesting outlier is an observation from a subpopulation that is otherwise not represented in the sample, the researcher can study this observation qualitatively, collect more data to model the heterogeneities in the population, or design a follow-up study to analyze the small subpopulation. For a general roadmap on how to gain knowledge from outliers, we refer to Gibbert et al. (2020).
Robust Statistics and Mediation Analysis
Despite the common presence of deviations from model assumptions and the sensitivity of mediation results to such deviations, we could only find two articles on robust mediation analysis. Zu and Yuan (2010) focus on outliers and propose to clean the data via winsorization, which is neither as robust nor as efficient as modern robust regression methods (cf. our simulations). Yuan and MacKinnon (2014) propose a bootstrap procedure that replaces OLS regression with median regression, and study nonnormal error distributions and outliers in the errors. While median regression is robust in those settings, it is not robust when even a single outlier occurs in the explanatory variables (Koenker, 2005, p. 46). Yet outliers in the explanatory variables are considered to be the most harmful type of outliers in regression due to their high leverage effect on the estimates (cf. Figure 1).
Both studies pinpoint the need for robust methods for mediation analysis and propose valuable potential alternatives, but both suffer from the aforementioned disadvantages. In that sense, although these methods clearly are more robust than OLS-based procedures, they still need to be improved upon from a robustness point of view.
ROBMED: Robust Mediation Analysis
We propose a robust test for mediation, ROBMED, that builds on bootstrapping the indirect effect via linear regression. First, linear regression analysis is the most widely used mediation technique in empirical studies (Wood et al., 2008). Second, the bootstrap test is the state-of-the-art method for testing the indirect effect in mediation models, as its distribution is in general asymmetric. Accordingly, ROBMED constitutes a combination of two essential building blocks.
The first building block is to use the robust MM-regression estimator (Salibián-Barrera & Yohai, 2006; Yohai, 1987) rather than the OLS estimator. Instead of the quadratic loss function of the OLS estimator, the MM-estimator uses a loss function that is quadratic for small residuals, but smoothly levels off for larger residuals. This ensures that the coefficient estimates are determined by the central part of the data and that the influence of deviations from normality is limited. The left panel in Figure 2 illustrates this loss function. The MM-estimator can be seen as a weighted least-squares estimator with data dependent weights. A compelling feature of the estimator is that the weights that are assigned to the data points can take any value between 0 and 1, where a lower weight indicates a higher degree of deviation. An illustration of this weight function is given in the right panel in Figure 2.

Figure 2. Loss function (left) and assigned weights (right) for OLS regression and the robust MM-regression estimator.
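The shape of such a loss function and its implied weights can be sketched with Tukey's bisquare function, a common choice for MM-estimators (an illustrative sketch; the tuning constant below is the conventional value for roughly 95% efficiency and need not match the paper's exact settings):

```python
import numpy as np

K = 4.685  # common tuning constant for ~95% efficiency under normal errors

def bisquare_loss(r, k=K):
    """Tukey bisquare loss: approximately quadratic near 0, levels off beyond k."""
    r = np.asarray(r, dtype=float)
    return np.where(
        np.abs(r) <= k,
        (k**2 / 6.0) * (1.0 - (1.0 - (r / k) ** 2) ** 3),
        k**2 / 6.0,
    )

def bisquare_weight(r, k=K):
    """Implied robustness weight: 1 at r = 0, smoothly decreasing, 0 beyond k."""
    r = np.asarray(r, dtype=float)
    return np.where(np.abs(r) <= k, (1.0 - (r / k) ** 2) ** 2, 0.0)

print(bisquare_weight([0.0, 2.0, 10.0]))  # roughly [1.0, 0.67, 0.0]
```

Small residuals get weights near 1 (behaving like OLS), while gross outliers receive weight 0, so the fit is determined by the central part of the data.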
The second building block is to adopt the fast-and-robust bootstrap (Salibián-Barrera & Van Aelst, 2008; Salibián-Barrera & Zamar, 2002) instead of the standard bootstrap. There are two issues with the standard bootstrap. The first issue is that it is not robust to outliers. It draws so-called bootstrap samples of the same size as the original sample via random sampling with replacement and estimates the model on each of those bootstrap samples. Even if a robust method can reliably estimate the model in the original sample, it may happen that outliers are oversampled in some bootstrap samples. If those bootstrap samples contain more outliers than the robust method can handle, bootstrap confidence intervals become unreliable. The second issue is that robust methods typically come with increased computational complexity. While this is less of an issue in most applications due to modern computing power, there can be a noticeable increase in computing time compared to traditional methods, in particular when combined with computer-intensive procedures such as the bootstrap.
To solve the two issues, Salibián-Barrera and Zamar (2002) developed the fast-and-robust bootstrap. Keep in mind that the MM-regression estimator can be seen as a weighted least-squares estimator, where the weights depend on how much an observation deviates from the rest. The essence of the fast-and-robust bootstrap is that on each bootstrap sample, first a weighted least-squares estimator is computed (using the robustness weights from the original sample), followed by a linear correction of the coefficients. The purpose of this correction is to account for the additional uncertainty of obtaining the robustness weights.
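A simplified sketch of this idea follows (the linear correction step is omitted for brevity, and the robustness weights here are placeholder values rather than actual MM-regression output):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(scale=0.5, size=n)

# Robustness weights from the original fit (placeholder weights for illustration;
# in practice these come from the MM-regression on the full sample)
w = np.ones(n)
w[np.abs(x) > 2.5] = 0.1  # downweight a few extreme points

def weighted_ls(x, y, w):
    """Weighted least squares via rescaled ordinary least squares."""
    X = np.column_stack([np.ones(len(x)), x])
    s = np.sqrt(w)
    return np.linalg.lstsq(X * s[:, None], y * s, rcond=None)[0]

boot_slopes = []
for _ in range(1000):
    idx = rng.integers(0, n, size=n)
    # Reuse the ORIGINAL weights for the resampled observations: only a cheap
    # weighted LS fit is needed per bootstrap sample, and oversampled outliers
    # keep their low weights instead of corrupting the fit
    boot_slopes.append(weighted_ls(x[idx], y[idx], w[idx])[1])
se = float(np.std(boot_slopes))
```

Because the expensive robust fit is computed only once, each bootstrap replicate is fast, and outliers that happen to be oversampled remain downweighted.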
The integration of the robust MM-regression estimator with the fast-and-robust bootstrap procedure allows us to construct a test for mediation analysis that follows the same principles as the widely used OLS bootstrap test. However, our proposed test is more reliable under deviations from the model assumptions such as outliers, heavy tails, and skewness. Technical derivations and a brief discussion of our software can be found in the appendix.
Revisiting Figure 1, the bottom plots depict the same mediation model estimated with ROBMED. Without the outlier (left), the estimated effects are nearly identical to those of OLS. When the outlier is included (right), the fitted regression lines remain virtually unchanged and all effects are accurately estimated, illustrating the merit of ROBMED.
Illustrative Empirical Case
In order to show the role of deviations from the model assumptions in mediation analysis and how ROBMED overcomes those challenges, we test an illustrative hypothesis. It is not our aim to build and test theory with this example; therefore we do not interpret the indirect effect or discuss its effect size. The data contain information on teams participating in a business simulation game.

Illustrative Hypothesis: Task conflict (M) mediates the relationship between value diversity (X) and team commitment (Y).
More information on data collection, scales, and the underlying theory is presented in the Online Appendix 1. Tables 1 and 2 contain descriptive statistics and correlations for the studied variables.
Descriptive Statistics of the Variables Used in the Illustrative Empirical Case.
Note: The median is a more robust measure of centrality than the mean, and the median absolute deviation is a more robust measure of dispersion than the standard deviation (e.g., Maronna et al., 2006).
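A small numeric example (made-up values) illustrates the point made in the note: a single gross outlier distorts the mean and standard deviation, but not the median and the median absolute deviation (MAD):

```python
import numpy as np

x = np.array([4.2, 4.8, 5.0, 5.1, 5.3, 5.6, 50.0])  # one gross outlier
mean, sd = float(x.mean()), float(x.std(ddof=1))
med = float(np.median(x))
# MAD, scaled by 1.4826 to be consistent with the SD under normality
mad = 1.4826 * float(np.median(np.abs(x - med)))
```

Here the mean (about 11.4) is pulled far above the bulk of the data, while the median (5.1) and the scaled MAD (about 0.44) still describe the majority of observations.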
Correlation Table of the Variables Used in the Illustrative Empirical Case.
Note: The reported correlations are Spearman’s rank correlations, transformed to be consistent with the Pearson correlation coefficient (Croux & Dehon, 2010). Those provide more robust estimates than the sample Pearson correlation.
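The transformation in question is commonly r = 2 sin(π ρS / 6), which makes the Spearman rank correlation consistent with the Pearson correlation under bivariate normality. A sketch (generic simulated data, assuming no ties):

```python
import numpy as np
from math import pi, sin

def spearman_rho(x, y):
    """Spearman rank correlation as Pearson correlation of ranks (no ties)."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return float(np.corrcoef(rx, ry)[0, 1])

def transformed_spearman(x, y):
    """Spearman correlation transformed to be consistent with Pearson
    under bivariate normality: r = 2 * sin(pi/6 * rho_S)."""
    return 2.0 * sin(pi / 6.0 * spearman_rho(x, y))

rng = np.random.default_rng(0)
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=5000)
est = transformed_spearman(z[:, 0], z[:, 1])  # close to the true correlation 0.5
```

Because ranks are unaffected by monotone transformations and single extreme values, this estimator is far less sensitive to outliers than the sample Pearson correlation.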
We focus on a comparison between ROBMED and the OLS bootstrap, but we also apply other state-of-the-art methods for mediation testing. Table 3 gives an overview of these methods and the abbreviations we use to refer to them.
All bootstrap tests report a bias-corrected and accelerated percentile-based confidence interval (Davison & Hinkley, 1997) for the indirect effect. Table 4 reports results on all coefficients in the mediation analyses. The estimate of the indirect effect ab and its confidence interval vary considerably across the methods, as do the conclusions regarding its significance.
Methods Included in the Illustrative Empirical Case and the Simulation Study, as Well as the Abbreviations Used to Refer to Them.
Results from the Illustrative Empirical Case: Comparison of ROBMED to Various Methods.
Note: Variables are value diversity (X), task conflict (M), and team commitment (Y).
Figure 3 shows a scatterplot of task conflict (M) against value diversity (X), together with tolerance ellipses based on classical and robust estimates. A few observations deviate clearly from the majority of the data: the classical tolerance ellipse is visibly affected by them, whereas the robust tolerance ellipse captures the main point cloud.

Figure 3. Scatterplot of value diversity and task conflict with tolerance ellipses.
To further investigate the deviations from normality, Figure 4 shows a diagnostic plot of the robust regression weights. For varying thresholds on the horizontal axis, the vertical axis displays how many observations in each tail of the residual distribution have a weight below this threshold. For comparison, a reference line is drawn for the expected percentages under normal errors. Clearly, there are more downweighted observations with positive residuals than expected and fewer with negative residuals, confirming skewness with a heavy upper tail.

Figure 4. Diagnostic plot of weights from robust regression of task conflict on value diversity.
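The counting behind such a diagnostic plot can be sketched as follows (toy residuals and weights, not the actual data):

```python
import numpy as np

def tail_weight_shares(residuals, weights, threshold):
    """Share of observations in each residual tail whose robustness
    weight falls below the given threshold."""
    residuals = np.asarray(residuals)
    weights = np.asarray(weights)
    neg = residuals < 0
    pos = ~neg
    share_neg = float(np.mean(weights[neg] < threshold)) if neg.any() else 0.0
    share_pos = float(np.mean(weights[pos] < threshold)) if pos.any() else 0.0
    return share_neg, share_pos

# Toy example: heavier downweighting in the upper tail suggests right skewness
res = np.array([-1.2, -0.5, -0.1, 0.3, 0.9, 2.5, 3.8])
w = np.array([0.9, 1.0, 1.0, 1.0, 0.95, 0.4, 0.1])
lo_tail, hi_tail = tail_weight_shares(res, w, threshold=0.5)
```

Comparing these shares against the percentages expected under normal errors, for a range of thresholds, reveals whether one tail is heavier than the model assumes.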
Based on the two plots, ROBMED better captures the main trend in the data and can be considered more reliable than the OLS bootstrap. The SNT bootstrap, which explicitly models skewness in the errors, yields similar results to ROBMED. Other methods give somewhat different results: for instance, the Box-Cox bootstrap and the winsorized bootstrap come to opposite conclusions regarding the significance of some coefficients, which underscores that the choice of method can alter substantive conclusions.
Simulations
We simulate data from the simple mediation model in Equations (1)–(3) under four settings for the error distributions and outliers, which are summarized in Table 5.
Settings Regarding the Error Distributions and Outliers in the Simulations.
Simulations with Mediation
Figure 5 shows the average estimates of the indirect effect (top row), as well as a measure of realized power of the tests on the indirect effect (bottom row). This measure of realized power is taken as the rate of how often the methods reject the null hypothesis and the corresponding estimate of the indirect effect has the correct sign. The columns of the figure correspond to the four investigated settings for error distributions and outliers.

Figure 5. Results from 1,000 simulation runs for the simulation setting with mediation (nonzero indirect effect).
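The realized-power measure described above is straightforward to compute (toy values for illustration):

```python
import numpy as np

def realized_power(estimates, rejected, true_effect):
    """Rate of simulation runs where the null hypothesis is rejected AND the
    estimated indirect effect has the same sign as the true effect."""
    estimates = np.asarray(estimates)
    rejected = np.asarray(rejected, dtype=bool)
    correct_sign = np.sign(estimates) == np.sign(true_effect)
    return float(np.mean(rejected & correct_sign))

# Toy example: 3 of 5 runs reject with the correct (positive) sign
est = np.array([0.15, 0.20, -0.05, 0.18, 0.02])
rej = np.array([True, True, True, True, False])
power = realized_power(est, rej, true_effect=0.16)  # 3/5 = 0.6
```

Requiring the correct sign guards against counting rejections that point in the wrong direction, which would otherwise inflate the apparent power of a biased method.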
For normal error terms, all methods estimate the indirect effect very accurately. In addition, all tests show a realized power close to 100%, except for the median bootstrap with a realized power slightly below 90%. In the presence of outliers, ROBMED is the only method that still gives accurate estimates of the indirect effect. The OLS-based methods are the most affected by the outliers, but the median bootstrap, the winsorized bootstrap, and the SNT bootstrap also show a considerable bias. The results from estimation clearly carry over to the realized power of the tests, with ROBMED remaining close to 100%. The winsorized bootstrap is the only method not too far behind with a realized power slightly below 90%, despite its bias in effect size. While the median bootstrap and the SNT bootstrap have realized power of about 60–65%, the remaining tests perform poorly.
For skew-normal error terms, all methods are very accurate in estimating the indirect effect and have realized power close to 100%. Clearly, the SNT bootstrap has the smallest variance in the estimates of the indirect effect. There are again more differences among the methods for heavy-tailed error distributions, where ROBMED remains accurate and retains high realized power.
Simulations with No Mediation
The top row of Figure 6 again shows box plots of the estimates of the indirect effect, while the bottom row displays the rejection rate of the tests. Since the tests are performed with nominal size of 5%, rejection rates close to 5% are desirable, whereas substantially higher rates indicate inflated Type I error.

Figure 6. Results from 1,000 simulation runs for the simulation setting with no mediation (zero indirect effect).
Under normal error terms, all methods again yield very accurate estimates of the indirect effect, and the rejection rates of all bootstrap tests are close to the nominal size. As expected, the OLS Sobel test has a slightly lower rejection rate than the bootstrap tests. When outliers are introduced, ROBMED again yields the most accurate estimates of the indirect effect, and its rejection rate is the closest to the nominal size. All other methods suffer from considerable bias, in particular the OLS-based methods. The median bootstrap is the only other method that is not too far off the nominal size with a rejection rate of 12%, whereas all other tests have too large rejection rates in the range of 20–35%.
As in the setting with mediation, all methods perform very well for skew-normal error terms. The most notable result is again that the SNT bootstrap has much lower variance in the estimates of the indirect effect. For heavy-tailed error terms, differences among the methods are again more pronounced, with the rejection rate of ROBMED staying close to the nominal size.
Additional Simulations and Concluding Discussion
We extended our simulation design with a wide range of sample sizes, effect sizes, outlier configurations, and error distributions with various levels of skewness and kurtosis. The results presented here are a representative selection, while all results from our extensive simulations with 700 different parameter settings can be found in the Online Appendix 2. Overall, ROBMED clearly outperforms the other methods. It is the only method that remains accurate in estimating the indirect effect and powerful in hypothesis testing across all investigated deviations from normality. In addition, only ROBMED effectively protects against false mediation discoveries (inflated Type I errors) in the presence of outliers.
Practical Guidelines for Using ROBMED
ROBMED is robust to deviations from normality, which makes it a useful tool to detect such deviations in the first place. We recommend researchers to estimate their hypothesized model with ROBMED, and to investigate the diagnostic plot of the weights from the robust regressions (cf. Figure 4). Depending on detected deviations from normality, we recommend taking the actions discussed below. A detailed flowchart for our practical guidelines regarding the use of ROBMED is given in Figure 7.

Figure 7. Flowchart of practical guidelines for using ROBMED.
Heavy tails and skewness:
If the diagnostic plot reveals more downweighted observations than expected, but roughly the same amounts in both tails, the distribution is symmetric but with heavy tails. We recommend reporting the results of ROBMED, since ROBMED outperformed the other methods in our simulations. The winsorized bootstrap (Zu & Yuan, 2010) could be used as an additional robustness check.
If the diagnostic plot shows that observations in one tail are downweighted more heavily than those in the other tail, the distribution is skewed. We advise verifying the findings of ROBMED by following the recommendations of Becker et al. (2019) as a robustness check.
Finally, we strongly recommend against the automatic treatment of outliers as harmful data points. Outliers may help clarify inconsistencies in emerging theories, provide chances to integrate theoretical predictions with real-life observations (Lieberson, 1992), and may reveal essential contingency factors and boundary conditions to theories (Gerring, 2007). We refer readers to the established body of knowledge on outlier identification and treatment (Aguinis et al., 2013; Lewin, 1992; Nair & Gibbert, 2016; Pearce, 2002) and theory building by using outliers (Gibbert et al., 2020; for empirical examples, also see Gittell, 2001; Hitt et al., 1998; Pisano, 1994).
Discussion and Conclusion
Existing methods for mediation analysis are sensitive to nonnormality. The proposed procedure ROBMED integrates the robust MM-estimator (Yohai, 1987) and the fast-and-robust bootstrap (Salibián-Barrera & Zamar, 2002) in a mediation setting to overcome the widespread problem of deviations from normality assumptions. Indeed, ROBMED is shown to be more reliable than established methods for testing mediation under a variety of deviations. The key technical property that gives ROBMED its edge is that it continuously downweights deviating data points. Not only does this yield robust estimates, it also yields a stable procedure, as it does not require different approaches for different deviations from normality, or any decision to fully include or exclude a data point. Instead, the weights indicate the degree of deviation of an observation. Downweighting observations based on the residuals avoids transformations of the variables that are not supported by theory, ensuring interpretability of the coefficients and correct alignment of the analysis with the hypotheses.
We stress that ROBMED should not be viewed as a tool that absolves researchers from verifying model assumptions or checking for outliers. Instead, researchers should view ROBMED as a tool that allows them to reliably estimate the model while simultaneously detecting outliers and deviations from the model assumptions. It is crucial to follow up on any detected deviations. This last step must not be skipped, and findings should be transparently described. As such, ROBMED plays an integral part in ensuring robust findings in empirical research—and therefore reproducibility.
While ROBMED is designed to handle deviations from normality, one cannot expect ROBMED to work under all possible distributions. Deviations from normality are often characterized by skewness (a measure of asymmetry) and kurtosis (a measure of tail-heaviness). ROBMED can effectively compensate for various levels of skewness and kurtosis, but it is not suitable for extremely heavy tails (e.g., tails that should be modeled with extreme value distributions) or extreme skewness (e.g., if not even a log transformation would suffice to remove right-skewness). Furthermore, as mediation analysis is concerned with the central tendencies in a population, namely the coefficients in Equations (1)–(3), ROBMED is not suitable if extreme values are of primary interest. However, it does report deviating observations that, if appropriate, should be further analyzed with statistical tools from extreme value theory (e.g., de Haan & Ferreira, 2006). It is also not intended for dynamic time series analysis, where it is often of interest to study how extreme shocks travel through systems over time.
While we focus on the simple mediation model in this paper, our robust approach can be used for any other mediation model that can be estimated via linear regressions, such as models including multiple mediators. Furthermore, ROBMED can easily be extended to cover moderated mediation or mediated moderation models (see, e.g., Muller et al., 2005, for an overview). Granting all this, ROBMED currently focuses on mediation models with continuous dependent variables and mediators. Developing robust methods for mediation models with binary, nominal, ordinal, or count variables (e.g., Huang et al., 2004; Preacher, 2015; VanderWeele & Vansteelandt, 2010) is a fruitful avenue for further research.
On a final note, our R implementation for ROBMED and the R extension bundle for SPSS are freely available from https://cran.r-project.org/package=robmed and https://github.com/aalfons/ROBMED-RSPSS, respectively, making ROBMED easily accessible to empirical researchers. Users can run our code by following simple steps in the accompanying documentation and code examples. Given its technical strengths and practicality, we strongly encourage scholars to adopt ROBMED to test mediation.
Supplemental Material
Supplemental Material, sj-pdf-1-orm-10.1177_1094428121999096, for A Robust Bootstrap Test for Mediation Analysis by Andreas Alfons, Nüfer Yasin Ateş and Patrick J. F. Groenen in Organizational Research Methods.
