Abstract
Keywords
Introduction
The Interprofessional Education Collaborative (IPEC) competencies represent desired outcomes for graduating health profession students and serve as a basis for the design, implementation, and assessment of interprofessional education (IPE) opportunities.1,2 While the ideal setting for students to observe, learn, and practice IPEC competency domains is a clinical practice setting, evidence supports the use of simulation as an effective method to prepare students for the necessary teamwork competencies for the clinical practice environment.3,4 Simulation has also demonstrated significant improvements in students’ attitudes toward cultural competence, understanding roles, and interprofessional communication and teamwork.5 Despite the agreed-upon importance of simulation-based experiences, more robust and validated evaluation and assessment tools are needed.6
Numerous rubrics have been used to assess student performance in simulated environments. Despite this armamentarium, the complex task of validating IPE rubrics means many such tools have only been applied to small samples.6 Similarly, most tools are designed to evaluate team function in these settings, leaving a notable gap in assessing individual students’ performance within the team.7 The few instruments that allow external raters to assess individual students require numerous items to be completed, making them difficult to deploy in a timed simulation, and lack high reliability.7
Bismilla et al8 used Delphi methodology to develop and achieve expert consensus (content validity) on a 16-item instrument, the Simulation-Based Interprofessional Teamwork Assessment Tool (SITAT), for assessing an individual on an interprofessional team. This tool aims to fill gaps in simulation-based education by providing individualized team member assessment through a manageable number of items to be completed by an observer. The SITAT offers the potential to examine several uninvestigated research questions related to IPE training globally, identify unique patterns of behavior within specialties of students, and lead faculty to develop and tailor IPE curricula to fit the training needs of individual students. With further review and evidence, the tool could be utilized to provide a standardized measure of competency for individual students involved in simulation-based interprofessional activities.9 The purpose of this study was to apply the SITAT to the individualized assessment of medicine, pharmacy, and nursing students working through an interprofessional simulation scenario in an interprofessional team, to explore potential differences between disciplines, and to provide internal consistency and interrater reliability evidence for utilization of the tool.
Methods
Instrument
In 2019, Bismilla et al8 developed a tool for the measurement of individual competence on an interprofessional team, in any role, during simulation-based activities. The tool was created based on the results of a 2016 survey of both simulation experts and pediatric program directors that recognized a need for competency-based assessment instruments that could be utilized in different settings, specifically simulation. The survey results identified interprofessional teamwork as a subcompetency that was difficult to assess with traditional assessments. During Phase 1, a systematic review of existing tools focused on interprofessionalism and/or teamwork (N = 31) was conducted to determine if the following criteria were met: (1) did it assess teamwork?; (2) was it generalizable across scenarios?; (3) was it adaptable for use to assess individual performance?; and (4) did it meet ACGME criteria for ease of use?8
Upon initial review, the qualifying tools were then moved to Phase 2, where they were rated on their difficulty of adaptation, scored on a scale from 1 (no adaptation needed) to 7 (extensive adaptation necessary). All tools with an adaptability rating greater than 3 were retained. All remaining items were extracted and included in the Delphi process with the expert panel. The Delphi method was utilized over 4 rounds with 22 pediatric experts. The consensus resulted in a 16-item assessment measured on a 5-point rating scale from Novice to Proficient, including a “not applicable” option.
In the current study, only SITAT items related to the simulation activity were utilized; competencies that could not be observed by the rater during the activity were excluded. The selected raters served on an expert panel that provided feedback on which items were relevant to the study. Consensus of the panel reduced the original 16-item instrument to 9 items. See Table 1 for a comparison of the original SITAT items and those assessed in the current study.
Table 1. Original SITAT Compared to Adapted.
Procedure
At a large, public Midwestern university, IPE is a priority supported among health professions programs across campus. To best provide IPE learning opportunities, a foundational interprofessional curriculum was created. The curriculum provides 4 live-learning events, or anchors, 2 of which are simulation-based activities. The second of the 2 simulation-based activities serves as a benchmark in the curriculum referred to as “a readiness checkpoint.” If students demonstrate satisfactory performance, as measured by a behavioral checklist, they are deemed ready to apply interprofessional collaboration skills in practice. The Anchor 4 experience is conducted in a simulation center with recording technology available. For this study, recordings of this simulation experience were utilized. Institutional Review Board exemption was granted for the study (IRB # 2007914518).
Calibration Training
Faculty and staff with both health professions (dentistry, medicine, pharmacy) education and IPE expertise were selected as raters. Upon selection, the raters completed a 3-hour calibration training with a simulation expert. During the training, example videos were viewed and the tool was utilized in its current 9-item format. At the end of each video, each rater shared their score for a particular individual, followed by discussion. This allowed the group to come to consensus around each of the rating responses and create a shared mental model among raters, with the goals of improving interrater reliability and protecting against bias.
Data Collection
After training was complete, each rater was randomly assigned 15 of the 20 possible videos to watch and was instructed to assess each individual member of the team utilizing the tool. Each student on the teams was assessed by 3 of the 4 possible raters. All raters were instructed to watch the videos at regular speed and, to avoid rater fatigue, were advised not to assess all videos in 1 sitting. A Qualtrics form was created to collect and manage rater data. Each rater completed 1 Qualtrics form per individual member of the team, providing 3 overall competency scores for each individual.
Simulation Scenario
The scenario simulated an in-person outpatient care visit between the interprofessional team and a simulated patient who portrayed a person of low socioeconomic status struggling with several complications of diabetes, including maintaining blood glucose control, neuropathy, antalgic gait, depression, and dental pain. The goals for the team during the encounter were to prioritize the patient's health challenges and develop actionable steps that build upon assets, utilize relevant community resources, and integrate care.
Data Analysis
Data were collected via a Qualtrics online survey form and then exported into SPSS 26. Frequencies were utilized to identify coding errors and missing data. Data were included in the analyses based on listwise deletion.10 Prior to analysis, the data were cleaned to ensure all assumptions for hypothesis testing were met. The 3 rater scores were averaged to compute a single overall competency score per student. A one-way analysis of variance (ANOVA) was utilized to determine if the dependent variable (overall competency) differed significantly based on the independent variable (profession). Cronbach's alpha provided a reliability estimate for the instrument and for each rater. An intraclass correlation coefficient provided evidence of interrater reliability across the 3 raters' overall competency scores for each individual student utilizing the SITAT.
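The analysis pipeline described above (mean rater score per student, one-way ANOVA by profession, Cronbach's alpha across raters, and a two-way mixed-effects, absolute-agreement, average-measures intraclass correlation) can be sketched outside SPSS. The snippet below is an illustrative re-implementation on synthetic ratings, not the authors' code; the data, seed, and group assignments are hypothetical.

```python
# Illustrative sketch of the study's analyses on synthetic data (not the
# authors' SPSS output): mean rater score per student, one-way ANOVA by
# profession, Cronbach's alpha across raters, and ICC(A,k) for 3 raters.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_students, n_raters = 89, 3

# Hypothetical ratings: a latent "true" competency plus rater noise (1-5 scale)
true_score = rng.normal(3.5, 0.5, n_students)
ratings = np.clip(
    true_score[:, None] + rng.normal(0, 0.3, (n_students, n_raters)), 1, 5
)
profession = rng.choice(["medicine", "nursing", "pharmacy"], n_students)

# One overall competency measure per student: the mean of the 3 rater scores
overall = ratings.mean(axis=1)

# One-way ANOVA: does overall competency differ by profession?
groups = [overall[profession == p] for p in ("medicine", "nursing", "pharmacy")]
f_stat, p_val = stats.f_oneway(*groups)

# Cronbach's alpha, treating the 3 raters as "items"
k = n_raters
alpha = k / (k - 1) * (1 - ratings.var(axis=0, ddof=1).sum()
                       / ratings.sum(axis=1).var(ddof=1))

# ICC(A,k): two-way model, absolute agreement, average of k raters,
# built from the row (student), column (rater), and residual mean squares
grand = ratings.mean()
msr = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n_students - 1)
msc = n_students * ((ratings.mean(axis=0) - grand) ** 2).sum() / (k - 1)
sse = ((ratings - ratings.mean(axis=1, keepdims=True)
        - ratings.mean(axis=0, keepdims=True) + grand) ** 2).sum()
mse = sse / ((n_students - 1) * (k - 1))
icc_ak = (msr - mse) / (msr + (msc - mse) / n_students)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.3f}")
print(f"Cronbach's alpha = {alpha:.3f}; ICC(A,k) = {icc_ak:.3f}")
```

The ICC formula here corresponds to what SPSS labels "Two-Way Mixed / Absolute Agreement / Average Measures," matching the design reported in the Results.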
Participants
Although 14 health professions programs participate in the IPE curriculum, selection for this sample was limited to 3 professions: medicine, nursing, and pharmacy. Only 20 student teams with members representing these professions were included in the sample. Some teams included learners from additional professions, but only students from these 3 professions were assessed in this study. The total sample included 94 participants: 36 medical students, 20 nursing students, and 38 pharmacy students. Of the 94 participants, 89 were identified and assessed in the data set, receiving an overall competency score from 3 raters. The 5 identified as missing could not be assessed because the student could not be identified during playback of the recording, typically due to the student's lack of participation or failure to be introduced to the patient during the scenario. Only students who could be clearly identified by profession were assessed with the SITAT.
Results
Descriptive statistics were run to determine the means and standard deviations for the overall competency score by profession and across all professions. Results of the descriptive statistics are presented in Table 2. Medical students, on average, were rated with the highest competency across the professions.
Table 2. Descriptive Statistics.
Results of the ANOVA provided evidence that there was no statistically significant difference among professions on overall competency.
Cronbach's alpha provided evidence of high internal consistency for the overall tool and for each rater.
The interrater reliability of the SITAT was also investigated. The intraclass correlation coefficient was computed to assess the level of agreement among the 3 faculty in rating the interprofessional competency of the 89 students. There was moderate absolute agreement among the 3 faculty using the 2-way mixed model design and “average” unit, kappa = 0.536.
Discussion
This study provides evidence that the SITAT demonstrates internal consistency and interrater reliability when used to assess individuals on an interprofessional team. Additionally, the assessment of students from 3 professions demonstrated no significant difference in overall competency ratings during team simulation scenarios; students from all professions were collectively rated as “competent.” In IPE team simulations, where assessments have traditionally been conducted on the team as a collective unit, this tool provides a potentially valuable resource for educators to assess individual team members and provide them with performance feedback on their interprofessional teamwork skills.12,13 The raters' observation of the videos in real time also demonstrates the ease of use of this instrument during a timed simulation event.
The SITAT demonstrated internal consistency for the overall tool and each rater, with estimates meeting standards for high internal consistency.10 The tool also demonstrated a moderate level of interrater reliability, suggesting that it should produce moderate agreement among multiple raters.11 This study provides evidence that the SITAT can be utilized across various health professions for both students and faculty.
Although mean differences on the SITAT were observed among medicine, pharmacy, and nursing students on overall competency scores, they were not statistically significant. The lack of statistical significance between groups is expected in light of the interprofessional curriculum provided to the students. The students were at the end of a longitudinal IPE curriculum that included multiple team-based learning experiences and simulation-based events, including debriefing with faculty. Had one or more groups of students from different professions had less exposure to or training in simulated interprofessional scenarios, we would expect a statistical difference to emerge.
This instrument is valuable to educators as it provides an otherwise missing tool in their toolkit: the ability to assess individual performance during a simulated interprofessional learning experience. Historically, groups of students have been assessed as a whole team.8 We postulate that if there are one or more very high performers in the group, the team competency score is likely a reflection of those individuals rather than the whole team. This may result in a missed opportunity to evaluate more reserved or less confident learners because of limited data on their individual contributions, although no studies to date have determined whether this is accurate. The utilization of this tool has many important future research implications. For example, with evidence that measurement of competency with the SITAT is consistent, both internally and across multiple raters, it could be utilized to examine IPE competency across various scenarios to determine which produces the highest competency among multiple professions. Such a study would allow educators to better understand which simulation scenarios create the best learning outcomes across various professions, ensuring IPE opportunities are valuable for the majority of professions. The SITAT could also be utilized to examine competency attainment in each student over time on the same scenario, in order to determine growth and provide evidence of learning.
The current study had several limitations. While the analysis included learners from multiple institutions, it evaluated a single simulation scenario. Additionally, the entire tool was not evaluated as 7 of the 16 items were not applicable to the study scenario. Furthermore, only learners from 3 professions were assessed. Finally, our raters were IPE “experts.” In future studies, the tool should be utilized more broadly, comparing individual scores longitudinally across the curriculum, disciplines, raters, and simulation modalities (eg, rapid cycle deliberate practice vs traditional simulation sessions).
Conclusion
The novel SITAT has demonstrated internal consistency and interrater reliability for the assessment of individual performance during IPE simulations. There was no significant difference in the observed overall competency of medical, nursing, and pharmacy students across several interprofessional competency domains. The SITAT provides value in the education and assessment of students engaged in an IPE curriculum.
