Abstract
Grounded in generalizability theory, this study examined performer, rater, occasion, and sequence as sources of variability in music performance assessment. Performers were 8 high school wind instrumentalists who had recently performed a solo. The author audio-recorded each performer playing excerpts from the solo three times, establishing an occasion variable. To establish a rater variable, 10 certified adjudicators rated the performances from 0 (poor) to 100 (excellent). Raters were randomly assigned to one of five performance sequences, thus nesting raters within a sequence variable. Two G (generalizability) studies established that occasion and sequence produced virtually no measurement error; raters were a strong source of error. D (decision) studies established the one-rater, one-occasion scenario as unreliable. Using the generalizability coefficient as a criterion, 5 hypothetical raters were necessary to reach the .80 benchmark; using the dependability index, 17 hypothetical raters were necessary to reach .80.
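The D-study logic summarized above can be sketched in code. The key distinction is that the generalizability (relative) coefficient excludes the rater main effect from error, while the dependability (absolute) index includes it, which is why the dependability criterion demands more raters. The variance components below are hypothetical illustrations, not the study's actual estimates:

```python
# Hypothetical variance components (illustrative only, not the
# study's estimates): performer, rater main effect, and the
# performer-by-rater interaction/residual.
VAR_P, VAR_R, VAR_PR = 120.0, 40.0, 60.0

def g_coefficient(n_raters: int) -> float:
    """Generalizability (relative) coefficient: only the
    interaction/residual enters relative error, averaged
    over the number of raters."""
    rel_error = VAR_PR / n_raters
    return VAR_P / (VAR_P + rel_error)

def dependability(n_raters: int) -> float:
    """Dependability (absolute) index: the rater main effect
    also counts toward error, so the index is lower for any
    fixed number of raters."""
    abs_error = (VAR_R + VAR_PR) / n_raters
    return VAR_P / (VAR_P + abs_error)

def raters_needed(coef, benchmark: float = 0.80) -> int:
    """Smallest number of hypothetical raters whose average
    rating reaches the benchmark under the given criterion."""
    n = 1
    while coef(n) < benchmark:
        n += 1
    return n
```

With these illustrative components, `raters_needed(dependability)` exceeds `raters_needed(g_coefficient)`, mirroring the abstract's pattern of 17 versus 5 raters under the two criteria.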
