TermGallery
1. Agreement between the two staging methods was assessed using the kappa statistic.
2. Main outcome measure: Inter-rater reliability measured using the kappa statistic for multiple raters.
3. Agreement between scores classifying participants as high risk was evaluated using the kappa statistic.
4. Inter-observer agreement was assessed using the kappa coefficient of agreement (k).
5. Strength of agreement was calculated using the kappa statistic.
6. Patient-provider agreement on the presence and severity of symptoms was assessed using the kappa statistic.
7. Using the kappa statistic, patient-based agreement was determined between the CT and cystoscopic findings.
8. Reliability was assessed using the kappa coefficient.
9. Concordance between the two methods in the seropositivity responses was evaluated using the kappa statistic and Spearman's rank correlation.
10. In addition, we tested the consistency of the low-accuracy loggers in detecting cows with elevated BT using the kappa coefficient of concordance.
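All of the examples above use the kappa statistic to quantify agreement between two raters or methods beyond chance. As an illustration (not drawn from any of the cited studies), the two-rater form, Cohen's kappa, can be sketched in plain Python; the rater labels here are made-up sample data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, assuming the raters label independently.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    # Kappa rescales observed agreement by what chance alone would give.
    return (p_o - p_e) / (1 - p_e)

# Hypothetical high/low risk classifications from two raters.
a = ["high", "high", "low", "low", "high", "low"]
b = ["high", "low", "low", "low", "high", "low"]
print(round(cohens_kappa(a, b), 3))  # → 0.667
```

A kappa of 1 indicates perfect agreement, 0 indicates agreement no better than chance, and negative values indicate worse-than-chance agreement. The multi-rater setting mentioned in example 2 is typically handled with Fleiss' kappa, which follows the same observed-versus-expected logic.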