Examining ways to improve quality of ratings.
Record type: Bibliographic, electronic resource : Monograph/item
Title/Author: Examining ways to improve quality of ratings. / Kim, Inyoung.
Extent: 1 online resource (243 pages)
Note: Source: Dissertations Abstracts International, Volume: 57-05, Section: A.
Contained by: Dissertations Abstracts International, 57-05A.
Subject: Educational evaluation.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=9544602 (click for full text, PQDT)
ISBN: 9798209394860
Thesis (Ph.D.)--The Ohio State University, 1995.
Includes bibliographical references
In the area of performance ratings, a persistent problem has been rating errors. Obtaining accurate and reliable performance ratings is a challenge in most educational and employment settings. Many studies have proposed criteria for evaluating the quality of ratings (Dunbar, Koretz, & Hoover, 1991; Haertel, 1990; Linn, Baker, & Dunbar, 1991; Mehrens, 1992; Moss, 1992). Although there is some confusion in the literature regarding the appropriate indices for evaluating the quality of ratings in terms of rater bias and rating errors, there is also a lack of knowledge about how rater characteristics affect the quality of rating data. Only a few studies in the literature evaluate, compare, and attempt to improve the quality of performance ratings through rater training, improved scoring rubrics, and the use of multiple raters. How can rater errors be detected, and what strategies can minimize their effects on performance ratings? What is the effect of a rater's background knowledge on rater reliability? Do rater characteristics (e.g., gender, age, experience, position) affect ratings? Do raters' motivation, trust, and confidence affect ratings? Do trained raters really achieve higher reliability and fewer rating errors? Are trained raters more consistent across ratings and over time? The purpose of this study is to improve the quality of ratings. The study examines the quality of ratings in the following ways: (1) identifying conceptual and operational definitions of rating errors in the literature; (2) training raters to reduce rating errors; (3) using appropriate criteria in scoring rubrics; (4) using multiple raters; and (5) choosing qualified raters based on their backgrounds and their fit to the purpose of the study.
This study has implications for research on rater types and practical implications for rater training, and it raises the possibility of adjusting marks to account for rater effects. If society and the measurement community plan to increase their use of assessment methods that rely on performance ratings, then problems related to rater errors, rater training, and scoring rubrics need to be addressed so that performance rating data can make a valuable contribution to the decisions for which they are used.
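The reliability questions raised in the abstract are commonly quantified with chance-corrected agreement indices. As a minimal illustrative sketch (not drawn from the dissertation itself), Cohen's kappa compares the observed agreement between two raters against the agreement expected by chance from their marginal rating distributions:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' categorical ratings."""
    n = len(rater_a)
    # Observed proportion of items on which the two raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal category rates.
    marg_a, marg_b = Counter(rater_a), Counter(rater_b)
    expected = sum(marg_a[c] * marg_b[c] for c in marg_a) / (n * n)
    if expected == 1:  # degenerate case: both raters use a single category
        return 1.0
    return (observed - expected) / (1 - expected)

# Hypothetical data: two raters scoring the same eight essays on a 1-3 rubric.
a = [1, 2, 2, 3, 3, 1, 2, 3]
b = [1, 2, 3, 3, 2, 1, 2, 3]
print(round(cohens_kappa(a, b), 3))  # → 0.619
```

Values near 1 indicate strong agreement beyond chance, while values near 0 indicate agreement no better than chance — one practical signal of the rater errors the study discusses.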
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2023. Mode of access: World Wide Web.
ISBN: 9798209394860
Subjects--Topical Terms: Educational evaluation.
Subjects--Index Terms: rater training
Index Terms--Genre/Form: Electronic books.
MARC record:
LDR  03670nmm a2200373K 4500
001  2364687
005  20231130105911.5
006  m o d
007  cr mn ---uuuuu
008  241011s1995 xx obm 000 0 eng d
020  $a 9798209394860
035  $a (MiAaPQ)AAI9544602
035  $a AAI9544602
040  $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1  $a Kim, Inyoung. $3 3705509
245 10  $a Examining ways to improve quality of ratings.
264  0  $c 1995
300  $a 1 online resource (243 pages)
336  $a text $b txt $2 rdacontent
337  $a computer $b c $2 rdamedia
338  $a online resource $b cr $2 rdacarrier
500  $a Source: Dissertations Abstracts International, Volume: 57-05, Section: A.
500  $a Publisher info.: Dissertation/Thesis.
500  $a Advisor: Loadman, William E.
502  $a Thesis (Ph.D.)--The Ohio State University, 1995.
504  $a Includes bibliographical references.
520  $a In the area of performance ratings, a persistent problem has been rating errors. Obtaining accurate and reliable performance ratings is a challenge in most educational and employment settings. Many studies have proposed criteria for evaluating the quality of ratings (Dunbar, Koretz, & Hoover, 1991; Haertel, 1990; Linn, Baker, & Dunbar, 1991; Mehrens, 1992; Moss, 1992). Although there is some confusion in the literature regarding the appropriate indices for evaluating the quality of ratings in terms of rater bias and rating errors, there is also a lack of knowledge about how rater characteristics affect the quality of rating data. Only a few studies in the literature evaluate, compare, and attempt to improve the quality of performance ratings through rater training, improved scoring rubrics, and the use of multiple raters. How can rater errors be detected, and what strategies can minimize their effects on performance ratings? What is the effect of a rater's background knowledge on rater reliability? Do rater characteristics (e.g., gender, age, experience, position) affect ratings? Do raters' motivation, trust, and confidence affect ratings? Do trained raters really achieve higher reliability and fewer rating errors? Are trained raters more consistent across ratings and over time? The purpose of this study is to improve the quality of ratings. The study examines the quality of ratings in the following ways: (1) identifying conceptual and operational definitions of rating errors in the literature; (2) training raters to reduce rating errors; (3) using appropriate criteria in scoring rubrics; (4) using multiple raters; and (5) choosing qualified raters based on their backgrounds and their fit to the purpose of the study. This study has implications for research on rater types and practical implications for rater training, and it raises the possibility of adjusting marks to account for rater effects. If society and the measurement community plan to increase their use of assessment methods that rely on performance ratings, then problems related to rater errors, rater training, and scoring rubrics need to be addressed so that performance rating data can make a valuable contribution to the decisions for which they are used.
533  $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2023
538  $a Mode of access: World Wide Web.
650  4  $a Educational evaluation. $3 526425
650  4  $a Educational psychology. $3 517650
650  4  $a Educational tests & measurements. $3 3168483
653  $a rater training
655  7  $a Electronic books. $2 lcsh $3 542853
690  $a 0288
690  $a 0525
690  $a 0688
690  $a 0443
710 2  $a ProQuest Information and Learning Co. $3 783688
710 2  $a The Ohio State University. $3 718944
773 0  $t Dissertations Abstracts International $g 57-05A.
856 40  $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=9544602 $z click for full text (PQDT)
Holdings (1 item):
Barcode: W9487043
Location: Electronic resources
Circulation category: 11.線上閱覽_V (online reading)
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0