Abstract
Attachment is seen in both a child’s protest and proximity-seeking behavior if he or she is distressed or involuntarily separated from a primary caregiver as well as in children’s confident exploration of novelty when they feel safe in the presence of their caregiver (Tottenham, Shapiro, Flannery, Caldera, & Sullivan, 2019). Theoretically, secure attachment relationships develop when caregivers are sensitively responsive to the signals and needs of their child, whereas insecure attachment relationships may develop when caregivers ignore or respond only intermittently to signals. Accordingly, research on attachment has examined predictors and outcomes of both secure and insecure attachment relationships. To synthesize the empirical evidence on these associations, attachment researchers were early adopters of meta-analytic methodology (e.g., Goldsmith & Alansky, 1987; van IJzendoorn & Kroonenberg, 1988).
In 1985, Main, Kaplan, and Cassidy proposed that caregivers’ own mental representations regarding attachment, identified as autonomous (secure), dismissing, preoccupied, or unresolved on the basis of their responses to the Adult Attachment Interview, predict the quality of children’s attachment relationships via the sensitivity of caregivers’ responses to children. The importance of intergenerational transmission for developmental and clinical psychology, as well as for developmental psychopathology, lies in what it can tell us about caregivers’ contributions to their children’s social functioning and mental health and about factors that interrupt this contribution. In 1995, van IJzendoorn published a meta-analysis of 18 studies examining the intergenerational transmission of attachment and found an effect size of a strength rarely observed in psychological science.
The purpose of the current article is to discuss how attachment researchers have turned to IPD to overcome the limitations of single studies with small sample sizes and traditional aggregate-data meta-analysis. Some hurdles on the road to creating IPD data sets, and their potential solutions, will also be discussed.
The Promise of Data Pooling
IPD meta-analysis has been a gold-standard method of meta-analysis for some time in the biomedical sciences (Tierney, Stewart, & Clarke, 2019), but it has only recently found its way to psychology (Roisman & van IJzendoorn, 2018). IPD meta-analysis involves obtaining, harmonizing, and synthesizing the raw data for the individual participants in studies pertaining to common research questions (Riley et al., 2010). Compared with meta-analysis based on study-level aggregate data, IPD meta-analysis thus adds the data on the level of the participants to the analyses (see Fig. 1).

Fig. 1. Schematic overview of traditional meta-analysis (purple) and individual-participant-data meta-analysis (blue).
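The difference between the two approaches in Figure 1 can be sketched with simulated data. In the minimal sketch below, an aggregate-data meta-analysis combines per-study effect sizes, whereas an IPD meta-analysis pools the raw dyad-level records and analyzes them directly; the study names, sample sizes, and effect value are illustrative assumptions, not data from any actual attachment study.

```python
import random
import statistics

random.seed(1)

# Simulate participant-level data for three hypothetical studies:
# each dyad has a parent attachment score and a child score that
# correlates with it (the true correlation of .3 is illustrative).
def simulate_study(n, true_r):
    data = []
    for _ in range(n):
        parent = random.gauss(0, 1)
        child = true_r * parent + (1 - true_r**2) ** 0.5 * random.gauss(0, 1)
        data.append((parent, child))
    return data

studies = {"study_a": simulate_study(40, 0.3),
           "study_b": simulate_study(60, 0.3),
           "study_c": simulate_study(80, 0.3)}

def pearson_r(pairs):
    xs, ys = zip(*pairs)
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Aggregate-data meta-analysis: each study contributes only its
# effect size, combined here with simple sample-size weighting.
per_study = {name: pearson_r(d) for name, d in studies.items()}
weights = {name: len(d) for name, d in studies.items()}
aggregate_r = (sum(per_study[n] * weights[n] for n in studies)
               / sum(weights.values()))

# IPD meta-analysis: pool the raw participant records and analyze
# them directly (here, one correlation across all dyads).
pooled = [pair for d in studies.values() for pair in d]
ipd_r = pearson_r(pooled)

print(f"aggregate-data estimate: r = {aggregate_r:.2f}")
print(f"IPD (pooled raw data) estimate: r = {ipd_r:.2f}")
```

In practice, one-stage IPD models include study-level terms (e.g., random intercepts per study) rather than naively pooling all records, since ignoring study membership can distort estimates; the sketch only shows where the participant-level data enter the analysis.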
The method of IPD meta-analysis is precisely what was needed in attachment research, as the field had hit the saturation stage foreseen by van IJzendoorn and Tavecchio in 1987. In this stage, many of the major questions seemed to have been settled and, despite countless but fragmented efforts, progress was slow in resolving the remaining gaps. Combining data from primary studies, whether large or small, capitalizes on the benefits of this saturation stage and offers exciting prospects for the renewal of the attachment-research paradigm (Duschinsky, 2020).
We started the Collaboration on Attachment Transmission Synthesis (CATS) both to overcome stagnation in understanding intergenerational transmission of attachment and to test the feasibility of IPD meta-analysis for our field. On the basis of discussions, the participating investigators drafted a protocol (see https://osf.io/9p3n4/) with the aim of advancing our insight into the mechanisms underlying intergenerational transmission of attachment. All authors of the studies identified in the Verhage et al. (2016) meta-analysis were invited to participate in this project. This led to a data set of 59 samples with 4,498 parent–child dyads, and new samples continue to be added.
Advantages of and Approaches to IPD Meta-Analyses for Attachment Research
The main advantage of a pooled set of raw data over a meta-analysis of aggregate data is the increase in power and degrees of freedom (Riley et al., 2010), so that increasingly complex models and auxiliary hypotheses may be tested. In attachment research, data collection through labor-intensive methods constrains sample sizes, resulting in few adequately powered studies (Stanley, Carter, & Doucouliagos, 2018). In the 2016 meta-analysis, only 18% (15/83) of the studies reached the .80 power threshold of 82 parent–child dyads required to assess secure–insecure attachment transmission (Verhage et al., 2016). Testing more complex models, such as those addressing pertinent questions about moderating or mediating factors of attachment transmission, requires a much larger sample for drawing replicable conclusions. The CATS data set makes this venture possible.
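The power threshold mentioned above can be approximated with a standard Fisher-z sample-size formula for a correlation-type effect. The effect size plugged in below (r = .31) is an assumption for illustration, chosen because it reproduces a required sample in the neighborhood of the 82-dyad figure; it is not taken verbatim from this article.

```python
import math
from statistics import NormalDist

def required_n(r, alpha=0.05, power=0.80):
    """Approximate number of dyads needed to detect a correlation of
    size r with a two-sided test, via the Fisher z transformation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)           # quantile for target power
    fisher_z = math.atanh(r)                       # variance-stabilized effect
    return math.ceil(((z_alpha + z_beta) / fisher_z) ** 2 + 3)

# An assumed transmission effect of r = .31 requires roughly 80 dyads,
# in line with the 82-dyad threshold cited above; halving the effect
# size multiplies the required sample several times over.
print(required_n(0.31))
print(required_n(0.15))
```

The steep growth of the required sample as effects shrink is exactly why moderator and mediator tests, whose interaction effects are typically small, were out of reach for single studies.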
Our first study on ecological factors that might affect intergenerational transmission of attachment is an example of moderator testing that would not have been possible without IPD (Verhage et al., 2018). In this study, we examined, for example, whether attachment transmission differed by age of the child. The preceding meta-analysis of aggregate data had looked into this issue as well, but given that attachment was measured with different instruments in studies with younger children versus older children, there was no way of separating the effects of age from the effects of using a different instrument (Verhage et al., 2016). In the IPD meta-analysis, we controlled for the type of instrument and found that the transmission effect was stronger for older children than for younger children (Verhage et al., 2018). This finding provides support for the theoretical notion that cumulative experiences with parents lead to more stable, ingrained attachment patterns and helps to recalibrate expected intergenerational transmission effect sizes. Several other manuscripts are currently under way. These report, for example, on patterns of nontransmission (e.g., from secure to insecure classifications or between different types of insecure classifications) using pooled data, a procedure that is necessary because of low base rates for these transmission patterns (Madigan et al., 2020), and on a moderated mediation model of attachment transmission explaining why the transmission gap could not be solved with additional mediators (Verhage et al., 2019).
Empirical findings may also be made more robust as a guide for theory development by controlling “researcher degrees of freedom,” which represent the diversity of choices a researcher makes during the research process (Simmons, Nelson, & Simonsohn, 2011, p. 1359). All choices made during study design, data collection, data analysis, and reporting may affect study outcomes and hence theory development on a given topic. In the field of attachment research, there are historical reasons that allow for a variety of ways in which attachment variables may be parsed. Originally, only three categories of parent–child attachments were identified (secure, avoidant, and resistant; Ainsworth, Blehar, Waters, & Wall, 1978), but later, Main and Solomon (1990) discovered insecure disorganized attachment. From that moment, researchers could parse their attachment variables as a secure/insecure dichotomy, an organized/disorganized dichotomy, three- or four-way categorical variables, or combinations of categories and a rating scale, a set of options that was also mirrored in the variables for adult attachment representations as assessed in the Adult Attachment Interview (Main et al., 1985). In all, 38 different ways of examining the intergenerational transmission of attachment have been described in the literature (Schuengel et al., 2019). The absence of substantive or statistical reasons for choosing one variant over the other may indicate underspecification in the theoretical model, making it harder to design tests that could expose the theory’s flaws and thus undermining the credibility of the theory. The meta-analytic finding that the effect size for unpublished data was lower than the effect size for published data, even within the same studies, also hints at selective reporting of significant findings (Verhage et al., 2016). Secondary analyses are just as vulnerable to these researcher degrees of freedom as primary studies, which underscores the value of preregistering analysis plans for IPD projects.
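The combinatorics behind such a proliferation of analytic variants can be made explicit by enumerating a specification grid. The options listed below are illustrative simplifications, not the actual 38 variants catalogued by Schuengel et al. (2019), but they show how quickly a few defensible choices multiply into a large analysis multiverse.

```python
from itertools import product

# Illustrative (not exhaustive) analytic choices a researcher faces
# when operationalizing intergenerational attachment transmission.
child_coding = ["secure/insecure", "organized/disorganized",
                "three-way", "four-way"]
adult_coding = ["secure/insecure", "three-way", "four-way"]
covariates = ["none", "child age", "instrument type"]

specifications = list(product(child_coding, adult_coding, covariates))
print(f"{len(specifications)} specifications from just three choice points")
# Running every specification and reporting the full distribution of
# results (a "multiverse" analysis) guards against selective reporting
# of whichever variant happened to reach significance.
```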
In attachment research, a categorical model of attachment was taken up early on as the most likely representation of reality, perhaps under the influence of emerging diagnostic classification systems (Duschinsky, 2020). The categorical assumption started to be put to the test much later (Fraley & Spieker, 2003; Roisman, Fraley, & Belsky, 2007). Latent structure analyses and taxometric analyses, however, require large data sets, so smaller studies supporting dimensional measurement models have thus far had limited impact on research practices. With the pooled data on parental attachment representations in the CATS data set, we were able to show that individual differences in adult attachment representations may also be consistent with a latent dimensional rather than categorical model (Raby et al., 2019), but incremental validity is still an outstanding question for which the IPD approach might be perfectly suited.
Researchers could go one step further and pool raw materials such as interview transcripts or video-recorded observations, which could also be beneficial for methodological refinement. This was shown very early on by Main and Solomon (1990), who made a case for the existence of disorganized attachment on the basis of videotapes that were impossible to code with the regular rating system shared with them by researchers working with high-risk samples. This project had an enormous impact on attachment research and the use of attachment constructs in clinical practice. Thirty years later, we aim to refine attachment measurements again by sharing the raw materials. A first project regarding the structure of the scale for unresolved loss or trauma in the Adult Attachment Interview has been registered (see https://osf.io/bu5cx).
The Challenges of IPD Meta-Analysis
Like any method, IPD meta-analysis comes with challenges and limitations. In this section, we describe three broad challenges for this type of research; for more practical challenges and concrete tips, see Table 1.
Practical Challenges to Data Pooling and How the Collaboration on Attachment Transmission Synthesis (CATS) Dealt With Them
First, pooling data is useful only when the same underlying constructs are measured in enough studies, whether or not they are measured with different instruments. Before requesting data from study authors, it is necessary to review the instruments used to measure the constructs of interest and assess the feasibility of harmonizing different ways of operationalizing a construct. The attachment field proved eminently suited for IPD meta-analysis because it has honed a limited and well-calibrated set of standard instruments, such as the Strange Situation procedure and the Adult Attachment Interview. Harmonizing the measures for parental sensitivity, however, already required making multiple assumptions.
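A harmonization step of this kind can be sketched as a set of recoding rules that map each instrument's native scheme onto a common variable. The mappings below are hypothetical simplifications for illustration (in particular, the Q-sort cutoff is an invented value), not the CATS harmonization protocol.

```python
# Map each instrument's native coding onto a common secure/insecure
# dichotomy before pooling. Categorical instruments use a lookup
# table; continuous ones use a cutoff rule (the .30 cutoff here is
# an illustrative assumption, not an established criterion).
TO_DICHOTOMY = {
    "strange_situation": {"A": "insecure", "B": "secure",
                          "C": "insecure", "D": "insecure"},
    "attachment_q_sort": lambda score: "secure" if score >= 0.30 else "insecure",
}

def harmonize(instrument, value):
    """Return the harmonized secure/insecure code for one record."""
    rule = TO_DICHOTOMY[instrument]
    return rule(value) if callable(rule) else rule[value]

print(harmonize("strange_situation", "B"))   # secure
print(harmonize("attachment_q_sort", 0.15))  # insecure
```

Every such rule embodies an assumption about construct equivalence across instruments, which is why harmonization decisions should be documented in the project protocol and, ideally, tested as moderators.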
Second, sharing participant data is increasingly regulated under privacy-protection laws, which vary across countries. Consulting with institutional privacy officers is key when setting up a data-pooling project to discuss the ethical and legal basis for data sharing and making data-sharing agreements. Furthermore, to ensure the privacy of participants, it is important to establish secure ways to transfer, store, and analyze anonymized data. In CATS, we have built a data commons for storage and analysis, which is a secured, remote-access information-technology infrastructure holding the pooled data set, syntax codes, and various analysis software packages (Grossman, 2019).
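One common safeguard before data leave a contributing lab is pseudonymization: replacing direct participant identifiers with salted one-way hashes so that pooled records cannot be traced back without the lab's secret salt. The sketch below is an illustrative approach using standard-library hashing, not a description of the actual CATS pipeline.

```python
import hashlib

def pseudonymize(participant_id: str, lab_salt: str) -> str:
    """Replace a direct identifier with a salted one-way hash.
    The salt stays with the contributing lab, so only that lab can
    re-link a pooled record to the original participant."""
    digest = hashlib.sha256((lab_salt + participant_id).encode("utf-8"))
    return digest.hexdigest()[:16]  # short, stable pseudonym

# The hypothetical record and salt below are illustrative values.
record = {"id": "dyad_0042", "attachment": "secure"}
record["id"] = pseudonymize(record["id"], lab_salt="lab-A-secret")
print(record)
```

Because the hash is deterministic per lab, repeated measurements of the same dyad still link up within the pooled data set, while the identifier itself never leaves the contributing site.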
A final challenge is that likely not all data from eligible studies can be acquired. This can occur for several reasons, such as authors who cannot be traced or data that have been destroyed, but also because of priority claims by authors who want to publish research on their arduously collected data sets before sharing the data. Fortunately, in CATS, authors of 67% of the original studies contributed their raw data, but sharing rates are often lower (Jaspers & DeGraeuwe, 2014). It is therefore important to decide in advance what percentage of data would be enough to proceed, to compare aggregate data from the missing studies with the IPD of the included studies, and, whenever possible, to include the aggregate data in the analyses if the two differ (Stewart et al., 2015).
Conclusion
Wide-scale collaboration among attachment researchers in CATS has brought rigorous testing of complex attachment-theoretical propositions within reach while enabling the exploration of the boundaries of these propositions. Capitalizing on the advantages of the saturation stage of attachment research offers new and exciting horizons that pull attachment research back into the stage of construction of attachment theory. Data pooling holds these same promises for other fields in psychology: addressing theoretical challenges, increasing methodological rigor and transparency, and strengthening the capacity to inform applied research. Together with other efforts to make psychological science more robust (Nelson, Simmons, & Simonsohn, 2018), IPD meta-analyses show both the value and the viability of moving to new levels of collaboration.
Recommended Reading
Riley, R. D., Lambert, P. C., & Abo-Zaid, G. (2010). (See References). An accessible description of what individual-participant-data (IPD) meta-analysis is, how it is different from meta-analysis of aggregate data, when it is the preferred method of meta-analysis, and how to conduct an IPD meta-analysis.
Stewart, L. A., Clarke, M., Rovers, M., Riley, R. D., Simmonds, M., Stewart, G., & Tierney, J. F. (2015). (See References). Contains the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines for reporting on IPD meta-analyses, which are originally from the medical field but can be applied to psychological research.
van IJzendoorn, M. H., & Bakermans-Kranenburg, M. J. (2019). (See References). Reviews previous research on the intergenerational transmission of attachment and the difficulty in explaining the “transmission gap”; also provides a novel theoretical framework including contextual factors and differential susceptibility to fill in the gap.
Verhage, M. L., Fearon, R. M. P., Schuengel, C., van IJzendoorn, M. H., Bakermans-Kranenburg, M. J., Madigan, S., . . . the Collaboration on Attachment Transmission Synthesis (2018). (See References). Describes the first IPD meta-analysis on the intergenerational transmission of attachment by the Collaboration on Attachment Transmission Synthesis.
Weston, S. J., Ritchie, S. J., Rohrer, J. M., & Przybylski, A. K. (2019). (See References). Explains the various uses of secondary data analysis as a tool for the generation of hypotheses, confirmatory work, methodological innovations, and analytical methods, with caveats for using secondary data.
