Comparability and linking in direct writing assessment: Benchmarks, discourse mode, and grade level.
Record type: Bibliographic record, electronic resource : Monograph/item
Title/Author: Comparability and linking in direct writing assessment: Benchmarks, discourse mode, and grade level.
Author: Osborn Popp, Sharon Elizabeth.
Pages: 169 p.
Notes: Source: Dissertation Abstracts International, Volume: 62-11, Section: A, page: 3753.
Contained by: Dissertation Abstracts International, 62-11A.
Subject: Education, Tests and Measurements.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3031474
ISBN: 0493437991
Dissertation note: Thesis (Ph.D.)--Arizona State University, 2001.
Abstract:
Increasingly, direct assessments of writing performance are being included in large-scale testing programs despite concerns regarding reliability and validity. Issues regarding assessing student writing across discourse modes and measuring growth across grade level have generated interest as well as concern. The purpose of this study was to examine the effects of: (a) different scoring benchmarks on scores for the same papers, (b) discourse modes on scores for papers by the same students, (c) grade level on scores for papers written in a single discourse mode, and (d) grade level on scores for papers written in different discourse modes. Raters scored writing samples from students in Grades 3, 5, and 8 against a common rubric. Raw ratings were analyzed using multi-facet Rasch models. Raw ratings and Rasch-estimated student abilities, trait difficulties, and rater leniency-severity parameters were examined. Ratings of the same essays differed in magnitude and relative rank when scored against different sets of benchmarks. Ratings of papers written in different discourse modes by the same students had similar features such as similarly rank-ordered analytic trait difficulties. However, ratings for different modes led to substantial inconsistencies in how students were classified based on various performance standards. Ratings of student writing in a single mode increased with grade level. Comparisons of writing ability on a common task appear to be possible across grade levels, given benchmarks chosen from the multi-grade set of sample papers. The validity of comparing ratings of student writing in different modes across grade levels remains questionable. Results indicate that directly adjusting for discourse mode may be a promising approach to assess general writing quality across modes, but adjusting for mode may not be sufficient to allow for successful linking across grade levels. The benchmark papers used to operationalize the rubric score points strongly influenced the ratings of students' papers as well. Results of this work add to growing cautions and concerns regarding the use and interpretation of large-scale writing assessment scores and suggest the need for careful research on the nature of benchmark papers and the processes used to select them.
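For reference, the many-facet Rasch model named in the abstract is conventionally written as follows (this is Linacre's standard rating-scale formulation; the record does not state the exact parameterization used in the dissertation, so treat it as an illustrative assumption):

\ln\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k

where P_{nijk} is the probability that student n receives rating category k, rather than k-1, from rater j on analytic trait i; B_n is the student's writing ability, D_i the difficulty of trait i, C_j the severity of rater j, and F_k the difficulty of the step from category k-1 to k. The "rater leniency-severity parameters" examined in the study correspond to the C_j facet of this model.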
MARC record:
LDR    03309nmm 2200301 4500
001    1858288
005    20040927073705.5
008    130614s2001 eng d
020    $a 0493437991
035    $a (UnM)AAI3031474
035    $a AAI3031474
040    $a UnM $c UnM
100 1  $a Osborn Popp, Sharon Elizabeth. $3 1945984
245 10 $a Comparability and linking in direct writing assessment: Benchmarks, discourse mode, and grade level.
300    $a 169 p.
500    $a Source: Dissertation Abstracts International, Volume: 62-11, Section: A, page: 3753.
500    $a Co-Chairs: John T. Behrens; Joseph M. Ryan.
502    $a Thesis (Ph.D.)--Arizona State University, 2001.
520    $a Increasingly, direct assessments of writing performance are being included in large-scale testing programs despite concerns regarding reliability and validity. Issues regarding assessing student writing across discourse modes and measuring growth across grade level have generated interest as well as concern. The purpose of this study was to examine the effects of: (a) different scoring benchmarks on scores for the same papers, (b) discourse modes on scores for papers by the same students, (c) grade level on scores for papers written in a single discourse mode, and (d) grade level on scores for papers written in different discourse modes. Raters scored writing samples from students in Grades 3, 5, and 8 against a common rubric. Raw ratings were analyzed using multi-facet Rasch models. Raw ratings and Rasch-estimated student abilities, trait difficulties, and rater leniency-severity parameters were examined. Ratings of the same essays differed in magnitude and relative rank when scored against different sets of benchmarks. Ratings of papers written in different discourse modes by the same students had similar features such as similarly rank-ordered analytic trait difficulties. However, ratings for different modes led to substantial inconsistencies in how students were classified based on various performance standards. Ratings of student writing in a single mode increased with grade level. Comparisons of writing ability on a common task appear to be possible across grade levels, given benchmarks chosen from the multi-grade set of sample papers. The validity of comparing ratings of student writing in different modes across grade levels remains questionable. Results indicate that directly adjusting for discourse mode may be a promising approach to assess general writing quality across modes, but adjusting for mode may not be sufficient to allow for successful linking across grade levels. The benchmark papers used to operationalize the rubric score points strongly influenced the ratings of students' papers as well. Results of this work add to growing cautions and concerns regarding the use and interpretation of large-scale writing assessment scores and suggest the need for careful research on the nature of benchmark papers and the processes used to select them.
590    $a School code: 0010.
650  4 $a Education, Tests and Measurements. $3 1017589
650  4 $a Education, Language and Literature. $3 1018115
650  4 $a Psychology, Psychometrics. $3 1017742
690    $a 0288
690    $a 0279
690    $a 0632
710 20 $a Arizona State University. $3 1017445
773 0  $t Dissertation Abstracts International $g 62-11A.
790 10 $a Behrens, John T., $e advisor
790 10 $a Ryan, Joseph M., $e advisor
790    $a 0010
791    $a Ph.D.
792    $a 2001
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3031474
Holdings (1 item):
Barcode: W9176988
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0