NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/65473
Full metadata record

DC Field: Value [Language]
dc.contributor.advisor: 陳宏 (Hung Chen)
dc.contributor.author: Yan-Ling Kuo [en]
dc.contributor.author: 郭晏伶 [zh_TW]
dc.date.accessioned: 2021-06-16T23:45:11Z
dc.date.available: 2012-07-27
dc.date.copyright: 2012-07-27
dc.date.issued: 2012
dc.date.submitted: 2012-07-23
dc.identifier.citation:
[1] AGRESTI, A. (1989). An agreement model with kappa as parameter. Statistics and Probability Letters 7 271-273.
[2] AGRESTI, A. (2002). Categorical Data Analysis. 2nd Edition. Wiley, New York.
[3] AICKIN, M. (1990). Maximum likelihood estimation of agreement in the constant predictive probability model, and its relation to Cohen's kappa. Biometrics 46 293-302.
[4] ALLEN, M. J. and YEN, W. M. (1979). Introduction to Measurement Theory. Brooks/Cole, Monterey, California.
[5] BAILEY, K. M. (1998). Learning about Language Assessment: Dilemmas, Decisions, and Directions. Heinle and Heinle, Boston, Massachusetts.
[6] BISHOP, Y. M. M., FIENBERG, S. E., and HOLLAND, P. W. (2007). Discrete Multivariate Analysis: Theory and Practice. Springer, New York.
[7] CHERNOFF, H. (1956). Large sample theory: parametric case. Ann. Math. Statist. 27 1-22.
[8] COHEN, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement 20 37-46.
[9] DE MAST, J. (2007). Agreement and kappa-type indices. The American Statistician 61 148-153.
[10] GOODMAN, L. A. (1979). Simple models for the analysis of association in cross-classifications having ordered categories. J. Amer. Statist. Assoc. 74 537-552.
[11] GROVE, W. M., ANDREASEN, N. C., MCDONALD-SCOTT, P., KELLER, M. B., and SHAPIRO, R. W. (1981). Reliability studies of psychiatric diagnosis: Theory and practice. Arch Gen Psychiatry 38 408-413.
[12] GUGGENMOOS-HOLZMANN, I., and VONK, R. (1998). Kappa-like indices of observer agreement viewed from a latent class perspective. Statistics in Medicine 17 797-812.
[13] GWET, K. L. (2008). Computing inter-rater reliability and its variance in the presence of high agreement. British Journal of Mathematical and Statistical Psychology 61 29-48.
[14] JOHNSON, N. L., KEMP, A. W., and KOTZ, S. (2005). Univariate Discrete Distributions. 3rd Edition. Wiley, New York.
[15] KRAEMER, H. C. (1979). Ramifications of a population model for κ as a coefficient of reliability. Psychometrika 44 461-472.
[16] PRATT, J. W. (1959). On a general concept of 'in probability'. Ann. Math. Statist. 30 549-558.
[17] SCHUSTER, C. (2002). A mixture model approach to indexing rater agreement. British J. Math. and Statist. Psychology 55 289-303.
[18] SIM, J. and WRIGHT, C. C. (2005). The kappa statistic in reliability studies: use, interpretation, and sample size requirements. Physical Therapy 85 257-268.
[19] SPITZNAGEL, E. L. and HELZER, J. E. (1985). A proposed solution to the base rate problem in the kappa statistic. Arch Gen Psychiatry 42 725-728.
[20] STEMLER, S. E. (2004). A comparison of consensus, consistency, and measurement approaches to estimating interrater reliability. Practical Assessment, Research and Evaluation 9 80-89.
[21] UEBERSAX, J. S. (1988). Validity inferences from interobserver agreement. Psychological Bulletin 104 405-416.
[22] UEBERSAX, J. S. and GROVE, W. M. (1990). Latent class analysis of diagnostic agreement. Statistics in Medicine 9 559-572.
[23] UEBERSAX, J. S. (1992). Modeling approaches for the analysis of observer agreement. Investigative Radiology 27 738-743.
[24] VIERA, A. J. and GARRETT, J. M. (2005). Understanding interobserver agreement: the kappa statistic. Family Medicine 37 360-363.
[25] WARRENS, M. J. (2010). A formal proof of a paradox associated with Cohen's kappa. Journal of Classification 27 322-332.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/65473
dc.description.abstract [zh_TW]: In behavioural research, quantifying the equivalence between the ratings given by different raters or by different measurement devices is an important research topic. For a given object, different raters may give different ratings, so the reliability among raters becomes an important issue. In practice, investigators want to know whether all raters behave consistently when rating. Cohen (1960) proposed the kappa coefficient (κ), an interrater agreement coefficient obtained by correcting the agreement between two raters for the agreement that arises by chance; κ has been widely used to quantify ratings on a nominal scale between two raters. In the literature, however, κ has been criticized for being influenced by the proportions of the latent classes (the base rate) and for failing to reasonably correct for raters behaving differently toward different latent classes. Gwet (2008), modelling the rating behaviour between two raters, proposed a new interrater rating model (Gwet's model) together with an agreement coefficient called the AC1 statistic (γ1).
De Mast (2007) pointed out that a reasonable agreement coefficient κ* should correct for the agreement expected by chance (chance-corrected agreement). In this thesis, we consider two models of raters' rating behaviour: the random rating model and a partly random rating model (Gwet's model). Under each model, we use asymptotic analysis to examine whether the two agreement coefficients κ and γ1 are consistent estimates of κ*, and we compare their behaviour with that of κ*.
dc.description.abstract [en]: In behavioural research applications, one often needs to quantify the homogeneity of agreement between responses given by two (or more) raters or by two (or more) measurement devices. A given object can receive different ratings from different raters, so the reliability among raters becomes an important issue. In particular, investigators would like to know whether all raters classify objects in a consistent manner. Cohen (1960) proposed the kappa coefficient, κ, which corrects the observed agreement between two raters for chance agreement. κ is widely used in the literature for quantifying agreement among raters on a nominal scale. However, Cohen's kappa coefficient has been criticized for its sensitivity to the prevalence or base rate of the latent classes in the population under study, and for not reflecting the raters' rating abilities across latent classes. Gwet (2008) proposed an alternative agreement coefficient based on interrater reliability, the AC1 statistic, γ1.
De Mast (2007) suggested an appropriate chance-corrected interrater agreement coefficient, κ*, obtained by correcting for the agreement due to chance. In this thesis, we use asymptotic analysis to evaluate whether κ or γ1 is a consistent estimate of κ* when both raters follow the random rating model or Gwet's (2008) model, and we compare the performance of κ and γ1 with κ*.
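For reference, the two coefficients compared above take the following standard chance-corrected forms (definitions as in Cohen, 1960, and Gwet, 2008, both cited in the references). The notation P̂_o, P̂_c, P̂*_c follows the thesis table of contents; with q categories and empirical marginal proportions p̂_{k+}, p̂_{+k} for the two raters:

    \[
      \hat{\kappa} = \frac{\hat{P}_o - \hat{P}_c}{1 - \hat{P}_c},
      \qquad
      \hat{P}_c = \sum_{k=1}^{q} \hat{p}_{k+}\,\hat{p}_{+k};
    \]
    \[
      \hat{\gamma}_1 = \frac{\hat{P}_o - \hat{P}_c^{*}}{1 - \hat{P}_c^{*}},
      \qquad
      \hat{P}_c^{*} = \frac{1}{q-1}\sum_{k=1}^{q}\hat{\pi}_k\,(1 - \hat{\pi}_k),
      \qquad
      \hat{\pi}_k = \frac{\hat{p}_{k+} + \hat{p}_{+k}}{2}.
    \]

A small two-category example (numbers chosen purely for illustration) shows the base-rate criticism: if P̂_o = 0.82 and both raters' marginals are 0.9/0.1, then P̂_c = 0.9² + 0.1² = 0.82, so κ̂ = 0 despite 82% observed agreement, while P̂*_c = 2(0.9)(0.1) = 0.18 gives γ̂₁ = (0.82 − 0.18)/(1 − 0.18) ≈ 0.78.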
dc.description.provenance [en]: Made available in DSpace on 2021-06-16T23:45:11Z (GMT). No. of bitstreams: 1. ntu-101-R97221046-1.pdf: 424084 bytes, checksum: 6351b25c52bbc8ad5de1ce1f5200bb4b (MD5). Previous issue date: 2012.
dc.description.tableofcontents:
Contents
Acknowledgements (誌謝)
Abstract in Chinese (摘要)
Abstract
Contents
List of Figures
List of Tables
1 Introduction
2 Interrater Reliability and Agreement Measurement
  2.1 The Myth of Chance-Corrected Agreement
  2.2 Chance Agreement: Kappa Coefficient
  2.3 Alternative Chance Agreement Based on Interrater Reliability: AC1 Statistic
  2.4 Examples and Discussions
3 Modelling Approaches to Interrater Reliability
  3.1 Random Rating Model
  3.2 Latent Class Models and Random Rating in Mixture Models: Gwet's Model
  3.3 Likelihood Ratio of Random Rating Model versus Gwet's Model with Objects from Two Latent Classes
4 Asymptotic Analysis of κ̂ and γ̂₁ under Random Rating Model
  4.1 High Agreement Observed among Two Raters under Random Rating Model
    4.1.1 Quantify the magnitude of 1 − P̂_c and P̂_o − P̂_c for κ̂
    4.1.2 Quantify the magnitude of 1 − P̂*_c and P̂_o − P̂*_c for γ̂₁
  4.2 High Disagreement Observed among Two Raters under Random Rating Model
    4.2.1 Quantify the magnitude of 1 − P̂_c and P̂_o − P̂_c for κ̂
    4.2.2 Quantify the magnitude of 1 − P̂*_c and P̂_o − P̂*_c for γ̂₁
  4.3 Two Raters Agree on Half of Objects and Disagree on the Other Half of Objects under Random Rating Model
    4.3.1 Quantify the magnitude of 1 − P̂_c and P̂_o − P̂_c for κ̂
    4.3.2 Quantify the magnitude of 1 − P̂*_c and P̂_o − P̂*_c for γ̂₁
  4.4 Conclusion for κ̂ and γ̂₁ under Random Rating Model
5 Asymptotic Analysis of κ̂ and γ̂₁ under Gwet's Model
  5.1 Asymptotic Analysis of κ̂ under Gwet's Model
  5.2 Asymptotic Analysis of γ̂₁ under Gwet's Model
  5.3 Conclusion for κ̂ and γ̂₁ under Gwet's Model
  5.4 Consistency of AC1 Statistic for κ* under Gwet's Model
6 Simulation Results
  6.1 Random Rating Model
    6.1.1 κ̂ and γ̂₁ for High Agreement Observed among Two Raters
    6.1.2 κ̂ and γ̂₁ for High Disagreement Observed among Two Raters
    6.1.3 κ̂ and γ̂₁ for Two Raters Agree on Half of Objects and Disagree on the Other Half of Objects
  6.2 Gwet's Model
    6.2.1 κ̂ and γ̂₁ for Equal Probabilities of Certain Rating among Two Raters
7 Conclusion
Appendix
References
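As a rough illustration of the simulations outlined in Chapter 6, here is a minimal sketch assuming a random rating model in which two raters independently assign each of n objects to one of q categories uniformly at random, then computing κ̂ and γ̂₁ from the resulting ratings. The function and variable names (kappa_and_ac1, a, b, n, q) are illustrative, not the thesis's own, and the uniform setup is my assumption for the pure-chance case of that model.

    import numpy as np

    def kappa_and_ac1(a, b, q):
        """Cohen's kappa and Gwet's AC1 for two raters' labels in {0, ..., q-1}."""
        n = len(a)
        p_o = np.mean(a == b)                       # observed agreement P̂_o
        pa = np.bincount(a, minlength=q) / n        # rater A marginal proportions
        pb = np.bincount(b, minlength=q) / n        # rater B marginal proportions
        p_c = float(np.sum(pa * pb))                # kappa's chance term P̂_c
        pi = (pa + pb) / 2
        p_c_star = float(np.sum(pi * (1 - pi)) / (q - 1))  # AC1's chance term P̂*_c (Gwet, 2008)
        kappa = (p_o - p_c) / (1 - p_c)
        ac1 = (p_o - p_c_star) / (1 - p_c_star)
        return kappa, ac1

    # Random rating model: each rater assigns one of q categories uniformly at
    # random, so any observed agreement is pure chance and both estimates
    # should be near 0 for large n.
    rng = np.random.default_rng(0)
    n, q = 100_000, 4
    a = rng.integers(0, q, size=n)
    b = rng.integers(0, q, size=n)
    print(kappa_and_ac1(a, b, q))

Under this setup P̂_o, P̂_c, and P̂*_c all concentrate near 1/q, so both statistics fluctuate around zero, which is the sanity check one would expect before moving to the high-agreement and high-disagreement regimes the thesis studies.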
dc.language.iso: zh-TW
dc.subject [zh_TW]: 隨機期望修正量 (chance correction)
dc.subject [zh_TW]: AC1統計量 (AC1 statistic)
dc.subject [zh_TW]: 一致性量測係數 (interrater agreement coefficient)
dc.subject [zh_TW]: kappa係數 (kappa coefficient)
dc.subject [zh_TW]: 隨機評分 (random rating)
dc.subject [zh_TW]: 潛在類別 (latent classes)
dc.subject [zh_TW]: 評分者 (raters)
dc.subject [en]: interrater agreement coefficient
dc.subject [en]: random rating
dc.subject [en]: raters
dc.subject [en]: latent classes
dc.subject [en]: chance-corrected
dc.subject [en]: AC1 statistic
dc.subject [en]: kappa coefficient
dc.title [zh_TW]: 一致性量測中隨機期望修正量之合理性 (On the appropriateness of the chance correction in agreement measurement)
dc.title [en]: Study on appropriateness of interrater chance-corrected agreement coefficients
dc.type: Thesis
dc.date.schoolyear: 100-2
dc.description.degree: Master (碩士)
dc.contributor.oralexamcommittee: 江金倉, 陳佩君, 陳定立
dc.subject.keyword [zh_TW]: 一致性量測係數, kappa係數, AC1統計量, 隨機期望修正量, 潛在類別, 評分者, 隨機評分 (interrater agreement coefficient, kappa coefficient, AC1 statistic, chance correction, latent classes, raters, random rating)
dc.subject.keyword [en]: interrater agreement coefficient, kappa coefficient, AC1 statistic, chance-corrected, latent classes, raters, random rating
dc.relation.page: 55
dc.rights.note: Paid-access license (有償授權)
dc.date.accepted: 2012-07-24
dc.contributor.author-college: College of Science (理學院) [zh_TW]
dc.contributor.author-dept: Graduate Institute of Mathematics (數學研究所) [zh_TW]
Appears in Collections: Department of Mathematics (數學系)

Files in This Item:
File: ntu-101-1.pdf (access restricted; not authorized for public release)
Size: 414.14 kB
Format: Adobe PDF


All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
