Reliability and Validity Evidence of Diagnostic Methods : Comparison of Diagnostic Classification Models and Item Response Theory-Based Methods.
Record type: Bibliographic - electronic resource : Monograph/item
Title/Author: Reliability and Validity Evidence of Diagnostic Methods :
Other title: Comparison of Diagnostic Classification Models and Item Response Theory-Based Methods.
Author: Jang, Yoo Jeong.
Extent: 1 online resource (154 pages)
Note: Source: Dissertations Abstracts International, Volume: 84-04, Section: B.
Contained by: Dissertations Abstracts International, 84-04B.
Subject: Educational tests & measurements.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=29255643 (click for full text, PQDT)
ISBN: 9798352908341
Dissertation note: Thesis (Ph.D.)--University of Minnesota, 2022.
Bibliography note: Includes bibliographical references.
Despite the increasing demand for diagnostic information, observed subscores have often been reported to lack adequate psychometric qualities such as reliability, distinctiveness, and validity. Therefore, several statistical techniques based on classical test theory (CTT) and item response theory (IRT) frameworks have been proposed to improve the quality of subscores. More recently, the diagnostic classification model (DCM) has also attracted increasing attention as a powerful diagnostic tool that can provide fine-tuned diagnostic feedback. Despite its potential, there has been a dearth of research evaluating the psychometric quality of DCM, especially in comparison with diagnostic methods from other psychometric frameworks. Therefore, in this simulation study, DCM was compared with two IRT-based subscore estimation methods in terms of classification accuracy, distinctiveness, and incremental criterion-related validity evidence of subscores. Manipulated factors included diagnostic methods, subscale length, item difficulty distribution, intercorrelations of subscores, and criterion validity coefficients. For classification accuracy, all diagnostic methods yielded comparable results when the center of item difficulty coincided with mean examinee ability and cut-scores. However, when average item difficulty was mismatched with mean examinee ability and cut-scores, DCM yielded substantially higher or lower classification accuracy than IRT-based methods, with the direction and magnitude of the discrepancy depending on the type of agreement measure employed. For subscore distinctiveness, compared to IRT-based methods, DCM yielded subscores more distinct from each other and from overall scores when continuous rather than discrete subscores were utilized. Lastly, regarding incremental criterion-related validity evidence, the contribution of DCM estimates over and above overall scores tended to be comparable to but slightly smaller than that of IRT-based methods.
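The abstract notes that the direction of the DCM/IRT discrepancy depended on the type of agreement measure. As a toy illustration of two agreement measures commonly used for diagnostic classifications (this is not code from the dissertation; the data and function names are invented), raw proportion agreement and the chance-corrected Cohen's kappa can be computed for a single attribute as:

```python
# Illustrative sketch: agreement between true and estimated mastery
# classifications (1 = master, 0 = non-master) for one attribute.

def proportion_agreement(true, est):
    """Fraction of examinees classified identically (raw agreement)."""
    return sum(t == e for t, e in zip(true, est)) / len(true)

def cohens_kappa(true, est):
    """Chance-corrected agreement: (p_o - p_e) / (1 - p_e)."""
    n = len(true)
    p_o = proportion_agreement(true, est)
    p_true1 = sum(true) / n          # marginal rate of "master" (true)
    p_est1 = sum(est) / n            # marginal rate of "master" (estimated)
    p_e = p_true1 * p_est1 + (1 - p_true1) * (1 - p_est1)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

true_state = [1, 1, 0, 0, 1, 0, 1, 0]   # toy data
estimated  = [1, 0, 0, 0, 1, 0, 1, 1]
print(proportion_agreement(true_state, estimated))   # 0.75
print(cohens_kappa(true_state, estimated))           # 0.5
```

Because kappa discounts agreement expected by chance, the two measures can rank competing diagnostic methods differently, which is consistent with the abstract's observation.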
Additionally, higher classification accuracy was associated with longer subscales, an item difficulty distribution more closely aligned with the examinee ability distribution and cut-scores, and higher intercorrelations of subscores. The same conditions, except for higher intercorrelations of subscores, also tended to be associated with higher subscore distinctiveness. In contrast, incremental criterion-related validity evidence of subscores was largely a function of the intercorrelations of subscores and the magnitude of the criterion validity coefficients: it increased with lower intercorrelations of subscores and higher criterion validity coefficients. In general, the results of this study suggested that IRT-based methods would be preferable to DCM as diagnostic means when item responses are obtained from IRT-based assessment forms.
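The incremental criterion-related validity discussed above is the gain in explained criterion variance when a subscore is added to a regression that already contains the overall score. A minimal sketch (toy data, not from the dissertation; for two predictors the full-model R-squared follows directly from the pairwise correlations):

```python
# Illustrative sketch: incremental R^2 of a subscore over the overall score.

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def incremental_r2(overall, subscore, criterion):
    r_yo = pearson(criterion, overall)    # criterion validity of overall score
    r_ys = pearson(criterion, subscore)   # criterion validity of subscore
    r_os = pearson(overall, subscore)     # intercorrelation of the predictors
    # Two-predictor R^2 from pairwise correlations:
    r2_full = (r_yo**2 + r_ys**2 - 2 * r_yo * r_ys * r_os) / (1 - r_os**2)
    return r2_full - r_yo**2              # gain over the overall score alone

overall   = [10, 12, 9, 15, 11, 14, 8, 13]   # toy data
subscore  = [4, 5, 3, 6, 5, 4, 2, 6]
criterion = [20, 25, 18, 30, 23, 26, 15, 27]
print(round(incremental_r2(overall, subscore, criterion), 4))
```

The formula makes the abstract's finding visible: as the intercorrelation r_os approaches 1 the subscore becomes redundant and the increment shrinks, while larger criterion validity coefficients enlarge it.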
Reproduction note: Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2023.
Mode of access: World Wide Web.
ISBN: 9798352908341
Subjects--Topical Terms: Educational tests & measurements.
Subjects--Index Terms: Classification accuracy
Index Terms--Genre/Form: Electronic books.
LDR  04235nmm a2200397K 4500
001  2360845
005  20231015185433.5
006  m o d
007  cr mn ---uuuuu
008  241011s2022 xx obm 000 0 eng d
020    $a 9798352908341
035    $a (MiAaPQ)AAI29255643
035    $a AAI29255643
040    $a MiAaPQ $b eng $c MiAaPQ $d NTU
100  1  $a Jang, Yoo Jeong. $3 3701481
245  10 $a Reliability and Validity Evidence of Diagnostic Methods : $b Comparison of Diagnostic Classification Models and Item Response Theory-Based Methods.
264   0 $c 2022
300    $a 1 online resource (154 pages)
336    $a text $b txt $2 rdacontent
337    $a computer $b c $2 rdamedia
338    $a online resource $b cr $2 rdacarrier
500    $a Source: Dissertations Abstracts International, Volume: 84-04, Section: B.
500    $a Advisor: Rodriguez, Michael C.; Davison, Mark L.
502    $a Thesis (Ph.D.)--University of Minnesota, 2022.
504    $a Includes bibliographical references
520    $a Despite the increasing demand for diagnostic information, observed subscores have been often reported to lack adequate psychometric qualities such as reliability, distinctiveness, and validity. Therefore, several statistical techniques based on CTT and IRT frameworks have been proposed to improve the quality of subscores. More recently, DCM has also attracted increasing attention as a powerful diagnostic tool that can provide fine-tuned diagnostic feedback. Despite its potential, there has been a dearth of research evaluating the psychometric quality of DCM, especially in comparison with diagnostic methods from other psychometric frameworks. Therefore, in this simulation study, DCM was compared with two IRT-based subscore estimation methods in terms of classification accuracy, distinctiveness, and incremental criterion-related validity evidence of subscores. Manipulated factors included diagnostic methods, subscale length, item difficulty distribution, intercorrelations of subscores, and criterion validity coefficients. For classification accuracy, all diagnostic methods yielded comparable results when the center of item difficulty coincided with mean examinee ability and cut-scores. However, when average item difficulty was mismatched with mean examinee ability and cut-scores, DCM yielded substantially higher/lower classification accuracy than IRT-based methods with direction and magnitude of discrepancy depending on the type of agreement measures employed. For subscore distinctiveness, compared to IRT-based methods, DCM yielded subscores more distinct from each other and overall scores when continuous rather than discrete subscores were utilized. Lastly, regarding incremental criterion-related validity evidence, the contribution of DCM estimates over and above overall scores tended to be comparable to but slightly smaller than that of IRT-based methods. Additionally, higher classification accuracy was associated with longer subscales, item difficulty distribution more aligned with examinee ability distribution and cut-scores, and higher intercorrelations of subscores. The same conditions except for higher intercorrelations of subscores also tended to be associated with higher subscore distinctiveness. In contrast, incremental criterion-related validity evidence of subscores was largely a function of intercorrelations of subscores and magnitude of criterion validity coefficients: it increased with lower intercorrelations of subscores and higher criterion validity coefficients. In general, the results of this study suggested that IRT-based methods would be preferable over DCM as diagnostic means when item responses are obtained from IRT-based assessment forms.
533    $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2023
538    $a Mode of access: World Wide Web
650   4 $a Educational tests & measurements. $3 3168483
650   4 $a Quantitative psychology. $3 2144748
650   4 $a Educational psychology. $3 517650
653    $a Classification accuracy
653    $a Criterion-related validity
653    $a Diagnostic classification models
653    $a Multidimensional item response theory
653    $a Subscore augmentation
655   7 $a Electronic books. $2 lcsh $3 542853
690    $a 0288
690    $a 0632
690    $a 0525
710  2  $a ProQuest Information and Learning Co. $3 783688
710  2  $a University of Minnesota. $b Educational Psychology. $3 1023204
773  0  $t Dissertations Abstracts International $g 84-04B.
856  40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=29255643 $z click for full text (PQDT)
Holdings
Barcode: W9483201
Location: Electronic resources
Circulation category: 11.線上閱覽_V (online viewing)
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Hold status: 0