It has been argued that the process of evidence-based practice (EBP) will contribute to making informed decisions that help clients attain valued outcomes (Emparanza, Cabello, & Burls, 2015; Gambrill, 2006; Sackett, Richardson, Rosenberg, & Haynes, 1997; Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996). In EBP, compared to authority-based approaches (Gambrill, 1999), currently available research related to particular clients is sought, as is information about client circumstances and characteristics including their preferences and values, and clinical expertise is drawn on to integrate all of this information. Uncertainty and ignorance as well as knowledge are shared among professionals and clients. In authority-based approaches, by contrast, criteria such as consensus and tradition are relied on in making decisions. Ever since EBP was promoted in social work (Gambrill, 1999), it has sparked interest. Two different approaches emerged: (1) the process of EBP and (2) empirically supported treatments (ESTs; the promotion of specific interventions), also referred to as evidence-based interventions (EBIs) or evidence-based practices (EBPs). In the following, we refer to all of these terms as ESTs for readability. Since there are different views of EBP (Rubin & Parrish, 2007) and misconceptions about it (Gibbs & Gambrill, 2002), both approaches are addressed in this review. To date, little is known about how ESTs and/or the process of EBP are typically taught in social work education (or whether they are taught at all). Thus, the aim of this article is to systematically describe the state of research on how best to teach the process of EBP and/or ESTs to social work students and practitioners, as well as the quality of that research.
Evidence-Based Practice: Two Different Approaches
There are two main understandings of EBP. One is the process of EBP as described in original sources in medicine, designed to help practitioners make informed decisions (Haynes, Devereaux, & Guyatt, 2002; Sackett et al., 1996; Straus, Richardson, Glasziou, & Haynes, 2011). A second (ESTs) refers to interventions claimed to be effective by some individual or organization. Both approaches are briefly discussed next (see Thyer & Myers, 2011, for an elaborated distinction).
The process of EBP
The term “evidence-based medicine (EBM)” was coined by Guyatt (1991; see Sur & Dahm, 2011, for a description of the history of EBM). In the process of EBP, clinical expertise is drawn on to integrate relevant research findings with information about clients’ unique circumstances and characteristics, including their values, preferences, and hoped-for outcomes, in order to arrive at informed decisions. This process involves “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual [clients]” (Sackett et al., 1997, p. 2; see also Sackett et al., 1996). Clinical expertise includes basic skills of clinical practice, including relationship skills and the practitioner’s individual experience (Haynes et al., 2002). The process of EBP includes five steps as described in original sources (Sackett et al., 1996; Straus et al., 2011): (1) converting information needs related to practice decisions into well-structured questions; (2) tracking down, with maximum efficiency, the best evidence with which to answer those questions; (3) critically appraising that evidence for its validity, impact (size of effect), and applicability (usefulness in practice); (4) integrating this critical appraisal with clinical expertise and with a client’s unique circumstances and characteristics, including their values and preferences, and making a decision together with the client; and (5) evaluating the effectiveness and efficiency in carrying out Steps 1–4 and seeking ways to improve them in the future.
This approach requires a search for knowledge as well as for ignorance, such as a lack of relevant research concerning a problem (Gambrill, 2019). Results are shared with clients to enable informed decisions that are most likely to result in hoped-for outcomes for clients.
ESTs
The term “empirically supported treatments” (other terms include EBPs, empirically tested interventions, and EBIs) refers to manualized interventions (e.g., cognitive behavioral therapy, motivational interviewing) deemed to be “empirically supported” based on related research (Thyer & Myers, 2011). For example, the American Psychological Association 2005 Presidential Task Force on Evidence-Based Practice suggested criteria for categories such as “well-established” (at least two good group design studies or a large series of single case design studies, study conducted with treatment manual, clearly specified sample characteristics) and “probably efficacious” (e.g., two studies showing that a treatment is more effective than a waiting-list control group; Task Force, 1993).
Implementation of EBP in Social Work
Even though EBP has become an intensively discussed topic within social work, its implementation in social work practice still lags behind. With regard to EBP as a process, Pope, Rollins, Chaumba, & Risker (2011) found in a survey of social workers (
Research in a variety of professions has shown that implementation of EBP is difficult due to numerous barriers (e.g., Gray, Joy, Plath, & Webb, 2012; Scurlock-Evans & Upton, 2015). Skill and knowledge may be lacking. There may be insufficient preparation to use EBP (Teater & Chonody, 2018), unsound training (Bellamy, Bledsoe, & Traube, 2006), negative attitudes toward EBP (Murphy & McDonald, 2004), and diverse views of EBP (Rubin & Parrish, 2007). Therefore, social workers may be ill-prepared to use ESTs and/or the process of EBP. It is important to identify effective educational interventions (EIs) that help students and practitioners acquire and use related knowledge and skills, if these enhance success in helping clients.
Systematic Reviews on EBP Education in Other Areas
The production of systematic reviews has greatly increased over the past decades. Yet, many reviews have been criticized as flawed (Ioannidis, 2016). There are a number of systematic reviews concerning the process of EBP in medicine. Aglen (2016) conducted a systematic review with 39 articles to provide an overview of strategies used to teach the process of EBP to nursing students at the bachelor level. Most studies (
All of these reviews were conducted in fields different from social work. We could not find a review regarding teaching EBP in social work—neither with respect to the process of EBP nor the application of ESTs. Thus, a systematic review of research on how to teach EBP in social work is lacking.
How to Teach EBP: Instructional Approaches and Knowledge Application
The question of how best to teach the process of EBP and/or ESTs can be tackled from different perspectives. One research community that is particularly concerned with the teaching of complex skills is the Learning Sciences community (e.g., Fischer, Hmelo-Silver, Goldman, & Reimann, 2018; Sawyer, 2009; see Hoadley & van Heneghan, 2012, for a brief history of the Learning Sciences and their implications for instructional designs). To categorize different teaching approaches, Learning Sciences research has repeatedly differentiated between “teacher-centered” approaches on the one hand (approaches that view the teacher as the main authority regarding what and how to learn in the classroom) and more “learner- or student-centered” approaches on the other (approaches that provide learners with more freedom regarding how to structure their learning process). It is argued that these concepts provide a useful analytical distinction for empirical research on EIs and their potential implications (see Hmelo-Silver, Duncan, & Chinn, 2007; Kirschner, Sweller, & Clark, 2006; Sweller, Kirschner, & Clark, 2007, for a critical discussion of this dichotomy). Direct instruction (DI; e.g., Slavin, 2018) is an example of the teacher-centered approach. Problem-based learning (e.g., Hmelo-Silver, 2004) is an example of the student-centered approach. In the following, we describe the two approaches and their respective examples in more detail.
Teacher-centered instructional approaches
The basic idea of teacher-centered approaches is to have a teacher support student learning by providing information that explains concepts and procedures (Kirschner et al., 2006), ideally in a way that “fits” the human cognitive architecture (especially not overstraining learners’ working memory capacity; Sweller, Ayres, & Kalyuga, 2011). DI is an example of this approach. Slavin (2018) suggests seven steps to apply this approach in an ideal way: (1) define learning goals and provide a syllabus, (2) activate prior knowledge, (3) present new subject matter in a structured and efficient way, (4) use comprehension checks like questions, (5) let learners apply previously presented knowledge, (6) induce further elaboration (e.g., homework), and (7) assess performance and give feedback. In a meta-meta-analysis, Hattie (2009) reported an average effect size of
Student-centered instructional approaches
In student-centered instructional approaches, learners are granted a more active role. This is achieved by presenting learners with more complex and practical problems that they are supposed to solve either alone or in groups, though ideally guided by a teacher or tutor. One example is problem-based learning (PBL; Barrows & Tamblyn, 1980; Hmelo-Silver, 2004). In PBL, after the presentation of a problem, students discuss possible explanations for it. Discussing the problem before receiving any further instruction is important to activate and evaluate prior knowledge and to discover knowledge gaps that should trigger interest and motivation to find out more about the problem (Loyens & Rikers, 2011). In PBL, students learn by solving complex real-world problems and reflect on their experiences guided by a teacher or a tutor (Hmelo-Silver et al., 2007). In Hattie’s (2009) meta-meta-analysis, the average effect of PBL on student achievement compared to more traditional approaches was rather small (
Knowledge application
Since EBP can be considered application-oriented knowledge, it is important to explore how knowledge is applied within learning processes, for example, working with a fictional case or with real clients (or whether knowledge is applied at all). The concept of “situated cognition” addresses the importance of knowledge application during the learning process. The basic idea of situated cognition is not to focus only on isolated aspects like cognition, but to take into account the individuals and their actions as well as the situation in which practice takes place (Wilson & Myers, 2000). Proponents of situated cognition such as Lave (1988) assume that, during the learning process, knowledge cannot be decontextualized, transmitted, and then applied in another context (see Gruber, Law, Mandl, & Renkl, 1996, for an overview of situated learning models). Lave found that skills learned in informal environments are rarely generalized but remain connected to the contexts and circumstances in which they are acquired. She emphasized the importance of everyday practice and the necessity to embed learners in social communities that support participation and increasingly independent application of skills in relevant settings (see more recent research concerning the importance of deliberate practice in enhancing expertise, such as Rousmaniere, Goodyear, Miller, & Wampold, 2017).
Effects of EIs
Much research is concerned with studying the effects of certain EIs on desired outcomes. An effect is the difference between what happened when people received an intervention and what would have happened if they had not received it (Shadish, Cook, & Campbell, 2002). One important outcome is knowledge acquisition, which may be declarative and/or procedural (Anderson, 1996). Declarative knowledge (knowing what) refers to knowledge about facts, concepts, and principles. Procedural knowledge (knowing how) refers to skills and actions. Researchers are also interested in effects of EIs on other variables such as learners’ motivation to engage with the subject matter (e.g., Ruzafa-Martínez, López-Iborra, Armero Barranco, & Ramos-Morcillo, 2016). The development of standardized instruments to measure social workers’ attitudes toward and intentions to use EBP suggests that motivation toward EBP is an important construct related to EIs in social work (Aarons, 2004; Aarons et al., 2010; Rubin & Parrish, 2010). Finally, learners’ satisfaction with an EI is also an outcome that is often measured in EI studies.
Quality of Empirical Intervention Studies
To determine the effectiveness of an EI on relevant dependent variables, it is important to consider the methodological quality of related empirical studies. Study quality can be operationalized at different levels, including rigor in design and the reliability and validity of measures. Both concerns are affected by risk of bias, which we also discuss.
Rigor in design
Studies that lack a controlled design can be problematic in identifying effects and do not support strong causal inferences (Shadish et al., 2002). This does not mean that discovery of important aspects of learning is restricted to well-controlled experimental research (Hoadley & van Heneghan, 2012). Yet the inclusion of a control group is a sign of quality with regard to claimed effects, especially for quantitative methods. Nevertheless, Yaffe (2013) notes that most evaluation studies in social work education do not apply a controlled design. Qualitative research usually has goals other than detecting a causal relationship, such as reconstructing interpretative patterns or exploring learners’ individual adaptations of knowledge. Qualitatively oriented researchers may speculate about what would have happened if a causal factor were missing (Johnson & Christensen, 2013).
Reliability and validity
Reliability refers to how consistently a construct is measured. One way to estimate the reliability of a measure is to examine its internal consistency, that is, how closely items on a measure are related, typically by calculating Cronbach’s α. An alternative is examining the stability of a measure by administering it at different times and comparing scores (test–retest reliability). Validity refers to whether a measure actually reflects the construct of interest. Different kinds of validity include content validity (do items accurately reflect the domain of interest?); construct validity, including convergent validity (are two constructs that should theoretically be related in fact related?) and divergent/discriminant validity (are two constructs that should theoretically be unrelated in fact unrelated?); and criterion validity, which includes concurrent validity (the relationship between test scores and criterion scores obtained at the same time) and predictive validity (the relationship between test scores obtained at one point in time and criterion scores obtained at a later time). Self-report measures may not reflect behavior in real-life settings. Relying solely on learners’ perceived learning is problematic since we tend to overestimate our knowledge (Kruger & Dunning, 1999; Snibsøer et al., 2018). Instead, when assessing knowledge and its use, observation of performance in real-life or simulated work settings using valid measures is preferable (Johnson & Christensen, 2013). Thus, the “measurement strategy” (performance tests vs. self-report) of a study is a particularly important aspect of validity in our review.
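To illustrate the internal-consistency estimate mentioned above, the following sketch computes Cronbach's α for a small, invented set of item scores (the scale and data are hypothetical, not drawn from any reviewed study):

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a list of item-score columns (one list per item)."""
    k = len(item_scores)                                 # number of items
    item_var_sum = sum(variance(col) for col in item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]   # each respondent's scale sum
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Three items answered by five respondents (invented data), one list per item
items = [
    [4, 2, 5, 3, 1],  # item 1
    [5, 2, 4, 3, 2],  # item 2
    [4, 3, 5, 3, 1],  # item 3
]
alpha = cronbach_alpha(items)  # values near 1 indicate high internal consistency
```

Values of α around .70 or higher are commonly treated as acceptable, although the appropriate threshold depends on the purpose of the measure.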
Risk of bias
Bias refers to systematic error in one direction. Factors that may contribute to such bias include, for example, incomplete outcome data (attrition bias) and selective outcome reporting (reporting bias; Higgins & Green, 2011). Risk of bias assessment is closely connected to the type of empirical data used, the theoretical rationale drawn on, and the unique circumstances of a study. Different methods to assess risk of bias exist, and the method used in a particular review should be selected with reference to the methodological features relevant to the included studies (Liberati et al., 2009).
Objectives and Research Questions
The aim of this study is to describe and review research on EIs used to teach the process of EBP and/or ESTs to social work students and practitioners and their effects on various dependent variables (such as knowledge, motivation, and satisfaction), considering the quality of the studies. We investigated the following research questions:
We carried out a systematic review to answer these questions. Due to the varied means of data collection and analysis in research reports (qualitative, quantitative, and mixed methods) as well as the heterogeneity of designs, samples, and interventions, we did not conduct a meta-analysis. Instead, we provide a narrative synthesis.
Method
We largely followed the
Eligibility Criteria
We included studies that met the following criteria. First, the studies had to be empirical. Second, the studies had to include one or several interventions designed to help participants develop relevant declarative and/or procedural knowledge and/or motivation regarding ESTs and/or the process of EBP (studies that address both approaches are labeled as “Both”). Studies solely interested in learners’ satisfaction with a particular EI were not included. Third, the sample had to consist at least partially of social workers or social work students. Fourth, only studies in English or German were included (see Online Appendix Table A1 for a detailed description of the eligibility criteria).
Literature Search
We carried out a bibliographic search to locate relevant articles using search terms grouped into the following categories:
Search Terms Used to Find Relevant Studies for a Systematic Review of Educational Intervention Studies to Teach the Process of EBP and/or ESTs in Social Work.
Study Selection
Two independent coders used the described eligibility criteria to review abstracts of >10% of all potentially relevant articles using a binary code (study to be included vs. study not to be included), until a sufficient interrater reliability (IR; Cohen’s Kappa coefficient =
Interrater Reliability for Eligibility Criteria.
aThe low
After a sufficient

Flow diagram of the inclusion and exclusion process of studies for the review. Two of the 28 articles refer to one study; thus, we analyzed 27 studies.
Data Extraction
We defined a set of variables (see Table 3) to answer our research questions and extracted respective data from the articles. The procedure of data extraction differed with respect to different variables.
Variables.
Descriptive variables
Variables that are rather descriptive in nature such as location where a study was conducted or the duration of an EI were not coded but directly extracted.
Coded variables
To code variables that were not descriptive in nature, such as instructional approaches or knowledge application, we developed and iteratively refined a standardized data abstraction form. A number was allocated to each subcode, and studies were coded numerically. All studies were coded by two independent coders using >20% of the relevant articles until a sufficient IR (κ > 0.60) was attained. The remaining articles were coded by the first author. We encountered a great deal of vague or missing descriptions (we contacted eight authors to request additional information and three answered). Thus, for all variables for which attaining a sufficient IR proved difficult, ratings were double coded by two coders who resolved disagreements by consensus. Table 3 provides an overview of all coded variables, their operationalization, and their IR.
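The agreement statistic used for coding (Cohen's κ) corrects the raw proportion of agreement between two coders for the agreement expected by chance. A minimal sketch with invented binary codes:

```python
def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' categorical ratings of the same items."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    categories = set(coder_a) | set(coder_b)
    # Chance agreement: product of each coder's marginal proportions per category
    expected = sum(
        (coder_a.count(c) / n) * (coder_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Invented include (1) / exclude (0) decisions for ten items
a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
kappa = cohens_kappa(a, b)
```

Here eight of ten decisions agree (raw agreement .80), but because chance agreement is .50, κ is 0.60, which is the threshold treated as sufficient in this review.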
Risk of bias assessment
The Mixed Methods Appraisal Tool (MMAT) was used to assess the studies’ risk of bias (Pluye et al., 2011). The MMAT was developed for use with systematic reviews that include quantitative, qualitative, and mixed method studies. It has been validated with regard to content validity (Pace et al., 2012; Pluye, Gagnon, Griffiths, & Johnson-Lafleur, 2009, 2011). Its reliability ranges from
The variables “sample size,” “reliability,” and “effects” were simplified for evidence aggregation and/or easier readability, as follows.
Sample size
To determine a study’s sample size, we extracted the number of participants who completed the posttest. If a study involved a pretest, we extracted the number of the participants who completed both pre- and posttests. If the study reported more than one outcome of interest, we extracted the smallest of the provided numbers. For example, if a study measured “motivation” of 34 participants and also “satisfaction” of 31 participants, 31 was extracted as the sample size. The same was done for follow-up sample sizes.
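The extraction rule above amounts to taking the minimum number of completers across the reported outcomes of interest; a minimal sketch using the example numbers from the text:

```python
def extracted_sample_size(completers_per_outcome):
    """Smallest number of completers across all outcomes of interest."""
    return min(completers_per_outcome.values())

# The example from the text: 34 completed the motivation measure,
# 31 completed the satisfaction measure
n = extracted_sample_size({"motivation": 34, "satisfaction": 31})  # -> 31
```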
Reliability
If multiple values for measurements of internal consistency were reported for subscales relevant to a single dependent variable (DV), the range was reported (e.g., for feasibility, α = .76; attitude, α = .89; and intentions to use, α = .63, we reported α = .63–.89 for motivation). If multiple internal consistency values were reported for various points of measurement, we computed the mean (e.g., pretest, α = .90; posttest, α = .89; follow-up, α = .93; we reported α = .91).
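The two aggregation rules (range across subscales, mean across measurement points) can be sketched as follows, using the example values from the text:

```python
def summarize_subscales(alphas):
    """Report the range of alpha values across subscales of one DV."""
    return f"α = {min(alphas):.2f}–{max(alphas):.2f}"

def summarize_timepoints(alphas):
    """Report the mean alpha value across measurement points."""
    return f"α = {sum(alphas) / len(alphas):.2f}"

# The examples from the text
subscale_range = summarize_subscales([0.76, 0.89, 0.63])   # -> "α = 0.63–0.89"
timepoint_mean = summarize_timepoints([0.90, 0.89, 0.93])  # -> "α = 0.91"
```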
Effects
With respect to “effects” derived from quantitative results, we extracted the reported means, the standard deviations, and
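One common way to express such an effect from reported means and standard deviations is a standardized mean difference (Cohen's d with a pooled standard deviation). The sketch below uses invented group statistics purely for illustration; it is not the review's own extraction procedure:

```python
import math

def cohens_d(m1, m2, sd1, sd2, n1, n2):
    """Standardized mean difference between two groups, pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Invented post-test scores: intervention group vs. control group
d = cohens_d(m1=4.1, m2=3.6, sd1=0.8, sd2=0.9, n1=30, n2=28)
```

By the common rule of thumb, d around 0.2 is a small, 0.5 a medium, and 0.8 a large effect.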
Variables
To answer our research questions, the articles included in this analysis were coded with respect to a broad range of variables. Regarding Research Question 1, we coded them with respect to the instructional approach of the respective EI and how (or whether) learners had to apply EBP knowledge (knowledge application). With respect to Research Question 2, we coded declarative as well as procedural EBP knowledge, motivation toward EBP, and learners’ satisfaction with the EI. Concerning Research Question 3, we coded the studies’ design, their measurement strategy (whether data collected by an instrument reflected a performance test or learners’ self-assessment of their knowledge), as well as the studies’ analysis paradigm (the methods of analysis). Furthermore, we coded the articles with respect to a number of background variables to characterize the studies’ broader circumstances in a preliminary analysis. All variables are described in more detail in Table 3.
Results
Study Selection
Our search across the different databases revealed 2,085 hits. Handsearching the
One reason for the large discrepancy between the initially identified articles and the final sample lies in the large number of duplicates (
Preliminary Analysis
Fifteen (55.6%) studies were conducted solely with social workers and/or social work students. Eleven studies (40.7%) did not provide any information on the age of the participants and six (22.2%) did not provide information on gender. Table 4 provides an overview of the samples including information on age and gender for studies that represent the two different approaches (the process of EBP and ESTs).
Sample Characteristics.
Twenty-three studies were conducted in the United States, three in the UK, and one in Israel (see Table 5). All studies used nonprobability sampling. Sixteen studies used convenience samples, 10 studies used purposive samples (participants fulfilled certain eligibility criteria), and one study did not provide information on its sampling method. Nineteen studies provided information on the ethnicity of participants. All used diverse samples. The majority of the studies evaluated a university course (
Main Characteristics.
aOne group received a delayed intervention (after T2). bArticles refer to the same study. cMean value, computed from
Main Characteristics
In line with the PRISMA guidelines (Liberati et al., 2009), Table 5 provides an overview of the study characteristics.
Research Question 1: EIs
Our first research question concerned the characteristics of the EIs, including instructional approaches and knowledge application. As for instructional approaches, a “student–teacher” approach was used in 15 (55.6%) of the EIs and a “student” approach in 5 (18.5%), meaning that 74.1% of the EIs entailed, at least to some extent, student-centered elements. Two thirds of these studies concerned EBP as a process, focusing on certain steps in the process (see Figure 2).

Line graph of instructional approaches.
With respect to knowledge application such as working with a fictional case in classroom, working with simulated or real clients and so on, 13 (48.2%) studies asked learners to apply knowledge in real-world practice. Five (18.5%) studies did not use any case-based application (see Figure 3).

Line graph of knowledge application.
Research Question 2: Effects of EIs
Studies that addressed EBP as a process primarily measured motivation (12 effects, 26.4%) and perceived procedural knowledge (11 effects, 24.2%; see Table 6). Two negative effects were reported, both for motivation, and both occurred after a semester-long research course, one with real-world knowledge application (Bender, Altschul, Yoder, Parrish, & Nickels, 2014) and one with case-based knowledge application (Smith, Cohen-Callow, Harnek-Hall, & Hayward, 2007).
Coded Effects.
For studies that addressed ESTs, 43 (95.6%) of the 45 coded effects were positive. Table 6 provides an overview of the coded effects with respect to the different EBP approaches.
Research Question 3: Study Quality
The third research question concerned the quality of studies, specifically their designs, the reliability and validity of measures, and risk of bias. Three (11.1%) studies were “qualitative,” six (22.2%) “mixed methods,” and 18 (66.7%) “quantitative.” Twenty-one (77.8%) studies used a one-group design, 4 of which (two concerned the process of EBP and two ESTs) used only postmeasurements and 8 included follow-ups. About half of the studies applied a one-group pre-post design (48.1%), followed by one-group post-only (14.8%) and one-group pre-post follow-up (11.1%) designs. The designs were evenly distributed among the two EBP approaches. Only six studies (22.2%) used a controlled design. Figure 4 provides an overview of the studies’ designs.

Line graph of study designs. One-group designs were summarized (pre-post, post-only, and with follow-up measurements) to provide a more accessible overview. The same is true for two two-group pre-post and a three-group pre-post follow-up (quasi-experimental) and two randomized two-group pre-post follow-up and a randomized two-group prerepeated (experimental).
Regarding the reliability and validity of measurement instruments, 38 (67.9%) of 56 measurement instruments were quantitative such as use of a Likert-type scale and 18 (32.1%) were qualitative such as an interview. Of the 38 quantitative instruments, 21 (55.3%) provided data regarding internal consistency and 5 (13.2%) provided data concerning test–retest reliability. For eight (21.1%) quantitative instruments, some sort of validity was mentioned. Two (11.1%) of the 18 qualitative instruments provided a value for internal consistency and 6 (33.3%) provided data regarding interrater reliability. Others provided no such information. With regard to measurement, only one (1.8%) measure was a performance test that was based on observation (in role-play; Sacco et al., 2017). Twenty-five (92.6%) studies based their measures solely (51.9%) or partly (40.7%) on self-report data. Figure 5 provides an overview of the studies’ measurement strategies, that is, whether the participants’ knowledge was actually tested (e.g., multiple choice test) or if they were asked to provide a self-assessment of their knowledge, motivation, and satisfaction (e.g., a Likert-type scale questionnaire).

Line graph of the measurement strategies.
As for the risk of bias assessment, 1 study scored 0, 4 studies scored 1, 12 studies scored 2, and 12 studies scored 3. No study received an optimal rating of 4. Overall, 15 (55.6%) out of 27 studies scored 0, 1, or 2 (range 0–4).
Discussion
The aim of this article was to provide a comprehensive overview of empirical studies concerned with supporting social work students and/or professionals in their development and application of EBP. We distinguished between two EBP approaches, namely the process of EBP and ESTs. Our main goals were to find out (1) what kinds of interventions have been used so far to foster EBP in social work, (2) what the effects of these interventions are, and (3) to assess the methodological quality of those studies.
EIs and Their Effects
Research Questions 1 and 2 concerned the conceptualization of EIs and their effects, in order to find out how to teach the process of EBP and/or ESTs in social work in an effective way. Studies predominantly applied a guided student-centered instructional approach. This approach is viewed favorably for education in the process of EBP (Straus et al., 2011; Tian, Liu, Yang, & Shen, 2013). Based on a meta-analysis that supports the effectiveness of PBL regarding facilitation of application-oriented knowledge and skills (e.g., Dochy et al., 2003), this focus on student-centered teaching seems warranted. Most studies requested participants to apply EBP knowledge in real-world settings. Learning effects reported as a result of using guided student-centered instructional approaches were mostly positive, especially for studies attempting to foster ESTs (95.6%). Other instructional approaches were also reported to be successful. This may suggest to the uncritical reviewer that any kind of intervention may be effective (Dizon, Grimmer-Somers, & Kumar, 2012). However, reliance on self-report data and variable study quality make this difficult to determine. Notably, with one exception, there were no measures of actual use of the process of EBP or ESTs in real-life settings, or of the fidelity with which ESTs were implemented: Sacco et al. (2017) assessed fidelity of an EST used with standardized clients. Clearly, more research that includes the use of relevant declarative and procedural knowledge in real-world settings is needed to discover guidelines for teaching both the process of EBP and ESTs.
Assessment of Study Quality
Our third research question addresses study quality. We approached this question in three ways. First, we looked at the designs used in the studies we investigated. Only about one fifth of the studies used a controlled design that allowed for comparison of the effects of different types of instruction. The majority of the studies used a one-group pre-post design, followed by a one-group post-only design. Eleven percent of the studies were qualitative, none of which used a controlled design. Both controlled designs and qualitative research studies are important in educational research, and both are underrepresented in our sample. As previously noted, studies without a controlled design do not support causal inferences (Shadish et al., 2002). In summary, to date, studies investigating the effects of EIs on EBP in social work leave unanswered questions regarding the best teaching approach, for example, whether the teaching approach they used is superior to alternative approaches.
Second, we looked at the reliability and validity of the measures used. Only about 13% of quantitative measures provided data concerning test–retest reliability, about 20% concerning validity, and only one third of qualitative instruments were checked for reliability. To assess declarative and procedural knowledge, self-reports were much more prevalent than performance measures such as multiple choice tests or observation of performance during practice scenarios or in real-life settings. This is problematic for at least two reasons. First, as mentioned earlier, individuals tend to overestimate their knowledge (Kruger & Dunning, 1999; Snibsøer et al., 2018). Second, the goal of EIs related to EBP is (or at least should be) to help learners become more proficient in the use of the process of EBP and/or ESTs.
Third, we assessed the risk of bias of the investigated studies. More than half of the studies scored 0-2 (range 0-4). Thus, the very positive results need to be treated with caution. In fact, only one study (Smith et al., 2007) included a "test" as well as a "perception" measure regarding the same dependent variable (procedural knowledge). Even though students reported that they knew more about how to critically analyze research, tests of their knowledge showed no improvement in these skills. This result casts further doubt on relying solely on self-report measures for the assessment of declarative and procedural knowledge, which, as we have seen, seems to be the approach taken in most research on the effects of EIs on EBP in social work. Additional research is needed using reliable, valid performance measures of EBP knowledge and skills.
Recommendations
Given the findings of this review, it is difficult to offer firm recommendations for teaching the process of EBP and/or ESTs in social work. Until more rigorous evidence is available, we should draw on related studies from other areas to inform practices in social work.
Even though most studies in the social work context used EIs based on student-centered teaching approaches, we do not know whether these approaches are actually more effective than others, particularly more teacher-centered approaches. Perhaps certain kinds of learners (e.g., novices) benefit more from teacher-centered instruction, while others, such as more advanced students and practitioners, would learn more from student-centered instruction. Evidence from other research areas supports this hypothesis (Kalyuga, 2007). Thus, more research is needed in the social work context to discover which teaching methods are most effective, and under which circumstances, in facilitating the use of EBP by students and practitioners.
Nevertheless, social work educators of course cannot wait for this research to be carried out. In planning courses or other kinds of interventions, we therefore recommend that they carefully review and critically appraise the research evidence on which they want to base their teaching and also consider research from other areas. Based on the review by Aglen (2016), it might be valuable to include aspects of critical thinking (e.g., Gambrill, 2013; Gambrill & Gibbs, 2017) in EBP education. Multifaceted approaches (those using a combination of methods such as lectures, computer sessions, small group discussions, journal clubs, and assignments) might be more promising than interventions that offer only one of these methods or no intervention (Kyriakoulis et al., 2016; Patelarou et al., 2017).
Limitations and Conclusions
Our review has several limitations. First, even though all coding was based on reliability checks through double coding, coding remains a subjective endeavor. Coding was based on published descriptions, and some reports failed to provide detailed information, for example, regarding the EI and sample. This might have contributed to moderate interrater reliability values for several variables. Another consequence of this lack of detail was that a more specific investigation of the EIs was not possible. Second, in order to include qualitative, quantitative, and mixed methods studies, we applied broad operationalizations for the effect variables. This might have contributed to subjectivity in ratings, especially for qualitative results. Also, the classification of effects as "positive," "no," and "negative" is coarse-grained. Third, the broad inclusion criteria used in our review resulted in a study sample including a wide variety of EIs and designs, making comparison a challenge. Fourth, the almost exclusively positive results reported make it difficult to discover the most effective training methods for EBP in social work. Fifth, more than half of the studies achieved low scores (0, 1, or 2 out of 4) on risk of bias. Sixth, most studies relied on self-reports. Thus, results of research on EBP education in social work need to be treated with caution. We need more studies using controlled designs with measures that focus on performance rather than self-report.
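Interrater agreement from double coding of categorical variables, as described above, is commonly quantified with Cohen's kappa, which corrects raw agreement for agreement expected by chance. As an illustration only (the coders, codes, and ratings below are hypothetical, not data from this review), a minimal sketch in Python:

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on categorical codes."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: proportion of items both raters coded identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal code frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical double coding of four studies on one categorical variable
coder_1 = ["student-centered", "student-centered", "teacher-centered", "teacher-centered"]
coder_2 = ["student-centered", "teacher-centered", "teacher-centered", "teacher-centered"]
print(cohen_kappa(coder_1, coder_2))  # 0.5
```

By common rules of thumb, values around 0.41-0.60 are read as "moderate" agreement, which is why sparse study descriptions that force coder guesswork tend to depress kappa.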
To conclude, much remains to be done to make informed decisions regarding the design of EIs and measurement of their effects. We hope that our study stimulates additional related empirical research.
Supplemental Material
Supplemental Material, 03_EBP_Review_Spens_et_al._-_Appendix for How to Teach Evidence-Based Practice in Social Work: A Systematic Review by Florian Spensberger, Ingo Kollar, Eileen Gambrill, Christian Ghanem and Sabine Pankofer in Research on Social Work Practice
