1. Cohen's kappa was used as a measure of interrater reliability between the different prevalence estimates.
2. Cohen's kappa statistic was calculated to measure agreement between annotation services.
3. Agreement among teams was calculated using Cohen's kappa, sensitivity, and specificity.
4. Agreement was calculated using Pearson correlations, Cohen's kappa, and conditional probabilities.
5. Agreement between observers was evaluated using weighted Cohen's kappa statistics.
6. Agreement between the authors' scores was analyzed using Cohen's kappa.
7. Inter-item and item-total correlations were calculated, and inter-rater agreement was assessed using Cohen's kappa.
8. Inter-observer agreement for the hallmarks was assessed by the proportion of agreement and Cohen's kappa.
9. Agreement between indexes was evaluated using Cohen's kappa coefficient.
10. The data were analyzed based on their reliability indices, accuracy, and the Cohen's kappa coefficient.
11. Cohen's kappa coefficient was used to analyze interobserver consistency.
12. The responses were compared using Cohen's kappa (κ) to assess agreement corrected for chance.
13. Results: Cohen's kappa for the consensus between the two raters was 0.79.
14. Agreement was evaluated using Cohen's kappa.
15. Cohen's kappa statistic and Pearson's correlation coefficients were used to determine agreement between the system and the standard reference.
16. Cohen's kappa was calculated to determine agreement rates regarding depression and anxiety disorders; additionally, sensitivity and specificity were evaluated.
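All of the examples above report Cohen's kappa, the chance-corrected agreement statistic κ = (p_o − p_e) / (1 − p_e), where p_o is the observed proportion of agreement and p_e the proportion expected by chance. As a minimal sketch of how such values are typically computed, the snippet below uses scikit-learn's cohen_kappa_score for both the unweighted statistic and the weighted variant mentioned in example 5; the rating data and variable names are invented for illustration and do not come from any of the cited studies.

```python
# Minimal sketch: Cohen's kappa for two raters with scikit-learn.
# The rating lists below are hypothetical illustration data.
from sklearn.metrics import cohen_kappa_score

# Hypothetical categorical ratings from two independent raters.
rater_a = ["mild", "severe", "mild", "none", "severe", "mild", "none", "mild"]
rater_b = ["mild", "severe", "none", "none", "severe", "mild", "mild", "mild"]

# Unweighted kappa: chance-corrected agreement for nominal categories.
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")

# Weighted kappa (cf. example 5): penalizes larger disagreements more,
# which is appropriate only for ordinal category scales.
ordinal_a = [0, 2, 0, 1, 2, 0, 1, 0]
ordinal_b = [0, 2, 1, 1, 2, 0, 0, 0]
weighted_kappa = cohen_kappa_score(ordinal_a, ordinal_b, weights="quadratic")
print(f"Quadratic-weighted kappa: {weighted_kappa:.2f}")
```

Values close to 1 indicate near-perfect agreement; on the commonly cited Landis and Koch scale, the 0.79 reported in example 13 falls in the "substantial agreement" range (0.61 to 0.80).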