Abstract
Meta-analysis is a statistical procedure that is easy to understand and easy to perform, and the necessary statistical software is freely available. As a result, many early-career researchers use the systematic review and meta-analysis framework to produce scientific publications and meet academic targets. However, to perform meta-analysis competently, investigators need to know which statistical model to select, when, and why.
This article is meant to guide investigators in choosing between fixed-effect and random-effects models. The article will also help journal readers understand how to interpret results obtained from fixed effect as opposed to random effects models. Readers will notice some repetitiveness in the explanations. This is deliberate because explaining technical concepts in different ways helps foster understanding. Readers who want to know more about the subject are referred to other sources.1–3
Fixed Effect Model
Imagine that the studies that we plan to pool in meta-analysis were conducted in largely the same manner in patients who were largely similar in sociodemographic and clinical characteristics. So, we can reasonably assume that the samples in the different studies represent the same clinical population.
In this situation, for the outcome of interest, there is a single effect in the population. This is called the true effect, and the values of the outcome obtained in the different studies are variations of this single true effect. Because different studies examined different samples, the variations are due to differences in the samples; that is, due to sampling error.
Returning to the subject, the single true effect in the population is the fixed effect that the meta-analysis seeks to estimate; hence the name of the model. Note that the word “effect” is singular because there is a single population with a single true effect.
Expressed otherwise, we use a fixed effect model when we believe that there is only a single true effect and that different findings in different studies represent variations of this single true effect.
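As a minimal sketch of what a fixed effect analysis computes, the pooled estimate is an inverse-variance weighted average of the study results; the effect sizes and within-study variances below are hypothetical:

```python
# Minimal sketch of fixed effect (inverse-variance) pooling.
# Effect sizes and within-study variances are hypothetical.
effects = [0.42, 0.35, 0.50]
variances = [0.04, 0.09, 0.02]

weights = [1.0 / v for v in variances]   # w_i = 1 / v_i
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_var = 1.0 / sum(weights)          # variance of the pooled estimate
```

Note that the most precise study (variance 0.02) contributes the largest weight; precision alone drives the weighting under this model.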
Random Effects Model
Imagine that the studies that we plan to pool in meta-analysis were conducted in different ways and in patients who were dissimilar in important sociodemographic and clinical characteristics. That is, there is meaningful diversity across studies. There will therefore be sampling error within studies as well as genuine differences in true effects between studies.
As examples, consider studies conducted with high versus low doses of medication, in ordinary versus treatment-resistant samples, in pediatric versus adult versus geriatric subjects, and in patients with versus without medical, neuropsychiatric, or substance use disorder comorbidities. It is reasonable to expect that study outcomes (magnitude of effects) would vary with such study-related differences. So, the samples in these studies will not represent a single population, and the study outcomes will not represent a single true effect. Rather, different studies will represent different populations, each with its own true effect. In this situation, we select a random effects model to pool the studies in meta-analysis. Note that the word “effects” is plural because there is more than one population, each with its own true effect.
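Under the random effects assumption, the between-study variance (commonly denoted tau-squared) must be estimated and added to each study's within-study variance before weighting. A sketch using the DerSimonian–Laird moment estimator, with hypothetical numbers:

```python
# Sketch of random effects pooling via the DerSimonian-Laird estimator
# of tau^2 (the between-study variance). All numbers are hypothetical.
effects = [0.10, 0.45, 0.80]     # study effect sizes
variances = [0.05, 0.04, 0.06]   # within-study sampling variances

w = [1.0 / v for v in variances]                         # fixed effect weights
fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)

# Cochran's Q and the DL moment estimate of tau^2 (truncated at zero)
q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(effects) - 1)) / c)

# Random effects weights add tau^2 to every study's variance
w_re = [1.0 / (v + tau2) for v in variances]
pooled_re = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
```

Because tau-squared is added to every study's variance, the random effects weights are closer to one another than the fixed effect weights are.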
When Should the Model Be Chosen?
The choice between fixed effect and random effects models is best made a priori, at the time that the systematic review and meta-analysis protocol is registered with PROSPERO. This is because it is often possible to predict in advance that, for example, existing studies are very likely to represent different populations and that, in consequence, there will be many true effects, necessitating the use of a random effects model. However, the choice can also be made after a preliminary literature search is completed; that is, after identifying relevant studies and examining their design, methods, and sample characteristics. This allows the investigators to make a more reasoned choice.
The choice of model is based on conceptual assumptions, and not on eyeballing the results. So, for example, the choice of model should not be made after running the analyses and examining the magnitude of heterogeneity. This is because low or high heterogeneity can occur by chance, regardless of the conceptual assumptions. Heterogeneity can be falsely low or high for other reasons, too, based on the number of studies, their precision, and other factors.
Importantly, the choice of model should not be made after running the analyses and deciding that the results from one model align better with the study hypotheses than the results from the other model; this is research malpractice.
When to Prefer a Fixed Effect Model
A fixed effect model, to assign study weights, uses sampling error (variance) from a single source: within studies in a single population. A random effects model, to assign study weights, uses sampling error from two sources: within studies plus between studies. When the number of studies is small (e.g., fewer than five to seven), the between-study variance is hard to estimate accurately, and so the results of a random effects model will be imprecise. It may then be better to select a fixed effect model. If a random effects model is nevertheless chosen, the results should be interpreted with caution. In any case, results should be interpreted with caution whenever the number of studies in a forest plot is small.
Sometimes, the investigators may want to generate results that are applicable only to the set of studies that were identified in the search. That is, the investigators do not want to generalize their findings to a wider population. In such a situation, they can opt for a fixed effect model, regardless of the number of studies being pooled and regardless of the conceptual assumptions about the distribution of the true effect(s). This approach is unusual because investigators typically want to generalize their results for clinical relevance.
Choice of Model and Impact on Forest Plot Study Weights
Meta-analysis does not simply average results across studies. Rather, it computes a weighted average of the results of the individual studies. In general, studies with more precise results (i.e., studies with larger samples and narrower confidence intervals) receive larger weights than studies with less precise results (i.e., studies with smaller samples and wider confidence intervals). This is especially so in fixed effect meta-analysis, where the results of one or more large studies may dominate the pooled effect size.
In random effects meta-analysis, however, the assumption is that there are many true effects, and so variation in study results could genuinely reflect different population characteristics rather than imprecision alone. Studies are therefore penalized less for apparent imprecision. The result is that small and large studies, or studies with imprecise and precise results, tend to be weighted more equally. For this reason, it is wrong in principle to use a random effects model when a fixed effect model is more appropriate; the converse is also true.
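The equalizing effect of random effects weighting can be illustrated with hypothetical numbers: once a between-study variance (the tau-squared value below is assumed, not estimated) is added, a single large study's share of the total weight shrinks considerably:

```python
# Sketch: a large study's share of the total weight under fixed effect
# versus random effects weighting. tau^2 and all variances are hypothetical.
variances = [0.01, 0.25, 0.25, 0.25]   # one precise study, three imprecise
tau2 = 0.20                            # assumed between-study variance

w_fe = [1.0 / v for v in variances]
w_re = [1.0 / (v + tau2) for v in variances]

share_fe = w_fe[0] / sum(w_fe)   # large study's weight share, fixed effect
share_re = w_re[0] / sum(w_re)   # large study's weight share, random effects
```

With these numbers, the precise study carries roughly 89% of the weight under the fixed effect model but only about 42% under the random effects model.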
Choice of Model and Effect on Results
From the previous section, it is clear that differences in study weights will result in differences in the value of the pooled estimate in the forest plot. Of note, when heterogeneity is low, fixed effect and random effects models will yield similar results. When heterogeneity is zero, fixed effect and random effects models will yield the same results.
As explained earlier, there is only one source of sampling error in fixed effect models and two sources of sampling error in random effects models. That is, in fixed effect analysis, the variance is defined as the variance within studies; in random effects analysis, the variance is defined as the variance within studies plus the variance between studies. So, there is greater imprecision in random effects model results, as shown by wider confidence intervals. Consequently, it may be harder to obtain statistically significant results in random effects models.
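This widening of confidence intervals can be shown directly: adding a hypothetical between-study variance to each study's variance increases the pooled variance and hence the 95% confidence interval half-width:

```python
import math

# Sketch: adding a hypothetical tau^2 to each study's variance widens
# the 95% confidence interval of the pooled estimate.
variances = [0.04, 0.05, 0.06]   # within-study variances (hypothetical)
tau2 = 0.03                      # assumed between-study variance

var_fe = 1.0 / sum(1.0 / v for v in variances)            # fixed effect
var_re = 1.0 / sum(1.0 / (v + tau2) for v in variances)   # random effects

half_width_fe = 1.96 * math.sqrt(var_fe)
half_width_re = 1.96 * math.sqrt(var_re)
```

Because var_re is always at least as large as var_fe, the random effects confidence interval can never be narrower than the fixed effect one, and a result that is statistically significant under the fixed effect model may lose significance under the random effects model.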
Other Notes
The conceptual assumptions underlying the choice between fixed effect and random effects models are subjective and can be a matter of opinion. Therefore, even though investigators may prefer one model and report results using that model, it is customary, such as in supplementary materials, to present results using the other model, as well, so that readers who have contrary opinions about the choice of model can examine results in the model that they believe to be appropriate.
Convergence of results between random effects and fixed effect models may suggest robustness of the results and of the conclusions of the study.
A Realistic Comment
Although this article has focused on true effect(s) within and between populations defined by sample characteristics, differences in study design and methods can also give rise to more than one true effect.
As an example, if some studies underdose a drug and other studies dose the drug appropriately, there could be two true effects even if the samples had been drawn from the same population. A fixed effect meta-analysis would therefore be inappropriate. It would be better, of course, to exclude, a priori, studies that underdose the drug; however, it may not always be possible to know what variations in study methods will create variations in true effects. So, this concern applies to other methodological variations, as well, and not to dosing alone. As a realistic precaution, therefore, if studies vary substantially in design and methods, a random effects approach may be preferable. This precaution is frequently necessary in medical research.
Simple Summary
A fixed effect model assumes that all studies are estimating the same true effect (a single numerical value) and that differences in values across studies are due to chance alone. A random effects model assumes that all studies are measuring the same outcome, and that differences in values across studies are due to chance as well as to meaningful differences in contexts (e.g., study design, study methods, and/or sample characteristics).
Parting Notes
This article emphasizes the distinction between fixed effect (singular) and random effects (plural). However, some authors shun convention and use “effects” as a plural noun for both models.
This article did not hyphenate the terms “fixed effect” and “random effects.” Some articles do, presenting the terms as “fixed-effect” and “random-effects.” Whether or not to hyphenate is a matter of editorial style.
