Abstract
We respond to the letter of Nadery and Shahsavari regarding our paper entitled ‘Interobserver reliability of the ankle brachial index, toe–brachial index and distal pulse palpation in patients with diabetes’. In this letter, we conclude that, despite some limitations, the kappa coefficient is an informative measure of agreement in most circumstances and can be used in this type of clinical research.
Dear Editor,
We would like to respond to the letter of Nadery and Shahsavari regarding our paper entitled ‘Interobserver reliability of the ankle brachial index, toe–brachial index and distal pulse palpation in patients with diabetes’.1
The authors claim that no value of kappa can be universally accepted as a sign of good agreement; however, the kappa coefficient is an informative measure of agreement in most circumstances and is widely used in clinical research.2–7
The authors argue that the kappa value used to assess the agreement of a qualitative variable has two weaknesses. First, it depends on the prevalence in each category. This is true, especially when raters report a very high prevalence of the condition of interest, because some of the overlap in their diagnoses may reflect their common knowledge about the disease in the population being rated; this should be considered ‘true’ agreement, but it is attributed to chance agreement (i.e. kappa = 0).2 This situation did not occur in our study; for instance, regarding the prevalence of palpation of distal pulses, 13 (31%), 24 (57%) and 17 (40%) posterior tibial pulses were absent for the very experienced, medium-experienced and inexperienced clinicians, respectively. Nevertheless, we acknowledged several limitations in our study. First, the characteristics of our study population, recruited in a specialised diabetic foot unit, probably differ from those of a basic health-care unit, where the prevalence of peripheral artery disease (PAD) is lower.
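To illustrate this prevalence dependence, the following minimal sketch (with hypothetical counts, not our study data) computes Cohen's kappa for two 2 × 2 tables that share the same observed percentage agreement but differ in the prevalence of the condition; the more skewed table yields a much lower kappa.

```python
# Minimal sketch (hypothetical counts, not our study data): Cohen's kappa for
# two 2x2 tables with identical observed agreement but different prevalence.

def cohen_kappa_2x2(a, b, c, d):
    """Kappa for a 2x2 table:
                       rater 2: present   rater 2: absent
    rater 1: present         a                  b
    rater 1: absent          c                  d
    """
    n = a + b + c + d
    p_obs = (a + d) / n                         # observed agreement
    p_present = ((a + b) / n) * ((a + c) / n)   # chance agreement on 'present'
    p_absent = ((c + d) / n) * ((b + d) / n)    # chance agreement on 'absent'
    p_exp = p_present + p_absent
    return (p_obs - p_exp) / (1 - p_exp)

# Balanced prevalence, 85% observed agreement -> substantial kappa (~0.70)
print(round(cohen_kappa_2x2(40, 9, 6, 45), 2))
# Skewed prevalence, same 85% observed agreement -> much lower kappa (~0.32)
print(round(cohen_kappa_2x2(80, 9, 6, 5), 2))
```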
However, the authors also state that the kappa value depends on the number of categories, but we used only two categories: the variables were dichotomous (yes/no), not ordinal or ranked variables such as low, moderate or severe. In this situation, it is not necessary to apply a weighted kappa. Finally, the authors suggest applying Fleiss’ kappa because we have more than two observers. Nevertheless, when we calculated the kappa index between two clinicians, we observed results similar to those obtained with three clinicians (a sketch comparing pairwise Cohen’s kappa with Fleiss’ kappa is given after the tables below). For instance, Table 1 (very experienced–medium-experienced clinicians), Table 2 (medium-experienced–inexperienced clinicians) and Table 3 (very experienced–inexperienced clinicians) depict the contingency tables and the kappa index for the palpation of the posterior tibial pulses between two clinicians with different levels of experience. We observed a moderate agreement between clinicians with similar experience (K1 = 0.503,
Table 1. Contingency table of the palpation of the posterior tibial artery between the very experienced and medium-experienced clinicians.8
K1: kappa coefficient between very experienced and medium-experienced clinicians.
Strength of agreement (Landis and Koch criteria).
Table 2. Contingency table of the palpation of the posterior tibial artery between the medium-experienced and inexperienced clinicians.8
K2: kappa coefficient between medium-experienced and inexperienced clinicians.
Strength of agreement (Landis and Koch criteria).
Table 3. Contingency table of the palpation of the posterior tibial artery between the very experienced and inexperienced clinicians.8
K3: kappa coefficient between very experienced and inexperienced clinicians.
Strength of agreement (Landis and Koch criteria).
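As a complement to the tables, the sketch below (with hypothetical ratings, not our study data) shows how pairwise Cohen’s kappa values and Fleiss’ kappa for all three raters can be obtained for a dichotomous outcome such as pulse palpation; the helper functions and the example ratings matrix are illustrative only.

```python
# Minimal sketch (hypothetical ratings, not our study data): pairwise Cohen's
# kappa versus Fleiss' kappa for three raters and a dichotomous outcome
# (posterior tibial pulse: 1 = palpable, 0 = absent).
from itertools import combinations
import numpy as np

def cohen_kappa(x, y):
    """Cohen's kappa between two raters for nominal labels."""
    x, y = np.asarray(x), np.asarray(y)
    cats = np.union1d(x, y)
    p_obs = np.mean(x == y)                                       # observed agreement
    p_exp = sum(np.mean(x == c) * np.mean(y == c) for c in cats)  # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

def fleiss_kappa(ratings, n_cats=2):
    """Fleiss' kappa; ratings has shape (subjects, raters) with labels 0..n_cats-1."""
    ratings = np.asarray(ratings)
    n_subjects, n_raters = ratings.shape
    # counts[i, j] = number of raters assigning subject i to category j
    counts = np.stack([(ratings == j).sum(axis=1) for j in range(n_cats)], axis=1)
    p_i = (np.sum(counts ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()                                   # mean per-subject agreement
    p_j = counts.sum(axis=0) / (n_subjects * n_raters)   # category proportions
    p_exp = np.sum(p_j ** 2)                             # chance agreement
    return (p_bar - p_exp) / (1 - p_exp)

# Columns: very experienced, medium-experienced, inexperienced clinician
ratings = np.array([
    [1, 1, 1], [1, 1, 0], [0, 0, 0], [1, 1, 1], [0, 1, 0],
    [1, 1, 1], [0, 0, 1], [1, 0, 0], [1, 1, 1], [0, 0, 0],
])

for r1, r2 in combinations(range(3), 2):
    k = cohen_kappa(ratings[:, r1], ratings[:, r2])
    print(f"Cohen's kappa, raters {r1} vs {r2}: {k:.3f}")
print(f"Fleiss' kappa, all three raters: {fleiss_kappa(ratings):.3f}")
```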
In conclusion, despite some limitations, the kappa coefficient is an informative measure of agreement in most circumstances and can be used in this type of clinical research.
