Investigating Reliability and Construct Validity of a Source-Based Academic Writing Test for Placement Purposes.
Record type:
Bibliographic record - Electronic resource : Monograph/item
Title/Author:
Investigating Reliability and Construct Validity of a Source-Based Academic Writing Test for Placement Purposes.
Author:
Nguyen, Phuong Thi Tuyet.
Publisher:
Ann Arbor : ProQuest Dissertations & Theses, 2021
Pagination:
397 p.
Notes:
Source: Dissertations Abstracts International, Volume: 83-01, Section: A.
Contained By:
Dissertations Abstracts International, 83-01A.
Subject:
Linguistics.
Electronic resource:
https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28416653
ISBN:
9798516902871
Nguyen, Phuong Thi Tuyet.
Investigating Reliability and Construct Validity of a Source-Based Academic Writing Test for Placement Purposes.
- Ann Arbor : ProQuest Dissertations & Theses, 2021 - 397 p.
Source: Dissertations Abstracts International, Volume: 83-01, Section: A.
Thesis (Ph.D.)--Iowa State University, 2021.
This item must not be sold to any third party vendors.
Source-based writing, in which writers read or listen to academic content before writing, has been considered to better assess academic writing skills than independent writing tasks (Read, 1990; Weigle, 2004). Because scores resulting from ratings of test takers' source-based writing task responses are treated as indicators of their academic writing ability, researchers have begun to investigate the meaning of scores on source-based academic writing tests in an attempt to define the construct measured on such tests. Although this research has resulted in insights about source-based writing constructs and the rating reliability of such tests, it has been limited in its research perspective, the methods for collecting data about the rating process, and the clarity of the connection between reliability and construct validity. This study aimed to collect and analyze evidence regarding the reliability and construct validity of a source-based academic English test for placement purposes, called the EPT Writing, and to show the relationship between these two parts of the study by presenting the evidence in a validity argument (Kane, 1992, 2006, 2013). Specifically, important reliability aspects, including the appropriateness of the rating rubric based on raters' opinions and statistical evidence, the performance of the raters in terms of severity, consistency, and bias, as well as test score reliability, were examined. Also, the construct of academic source-based writing assessed by the EPT Writing was explored by analysis of the writing features that raters attended to while rating test takers' responses. The study employed a mixed-methods multiphase research design (Creswell & Plano Clark, 2012) in which both quantitative and qualitative data were collected and analyzed in two sequential phases to address the research questions. In Phase 1, quantitative data, consisting of 1,300 operational ratings provided by the EPT Office, were analyzed using Many-Facets Rasch Measurement (MFRM) and Generalizability theory to address the research questions related to the rubric's functionality, raters' performance, and score reliability. In Phase 2, 630 experimental ratings, 90 stimulated recalls collected with the assistance of eye-tracking records, as well as nine interviews with nine raters were analyzed to address the research questions pertaining to raters' opinions of the rubric and the writing features that attracted raters' attention during rating. The findings were presented in a validity argument to show the connection between the reliability of the ratings and the construct validity, which needs to be taken into account in research on rating processes. Overall, the raters' interviews and MFRM analysis of the operational ratings showed that the rubric was mostly appropriate for providing evidence of variation in source-based academic writing ability. Regarding raters' performance, MFRM analysis revealed that while most raters maintained their comparability and consistency in terms of severity, and impartiality towards the writing tasks, some of them were significantly more generous, inconsistent, and biased against task types. The score reliability estimate for a 2-task × 2-rater design was found to be below the desired level, suggesting that more tasks and raters are needed to increase reliability. Additionally, analysis of the verbal reports indicated that the raters attended to the writing features aligned with the source-based academic writing construct that the test aims to measure.
The conclusion presents a partial validity framework for the EPT Writing, in addition to implications for construct definition of source-based academic writing tests, cognition research methods, and language assessment validation research. Recommendations for the EPT Writing include a clearer definition of the test construct, revision of the rubric, and more rigorous rater training. Suggested directions for future research include further research investigating raters' cognition in source-based writing assessment and additional validation studies for other inferences of the validity framework for the EPT Writing.
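A note on the reliability projection mentioned above: under generalizability theory, adding tasks or raters shrinks the relative error variance, which is why a 2-task × 2-rater design that falls short can be improved by expanding either facet. The Python sketch below illustrates that logic with hypothetical variance components; none of the numbers come from the EPT Writing data.

```python
# D-study sketch for a persons x tasks x raters (p x T x R) design, following
# generalizability theory. All variance components are hypothetical values for
# illustration only; they are not estimates from the EPT Writing study.

def g_coefficient(var_p, var_pt, var_pr, var_ptr_e, n_tasks, n_raters):
    """G coefficient for relative decisions: var_p / (var_p + relative error variance)."""
    relative_error = (var_pt / n_tasks
                      + var_pr / n_raters
                      + var_ptr_e / (n_tasks * n_raters))
    return var_p / (var_p + relative_error)

# Hypothetical variance components: person, person-by-task, person-by-rater, residual.
components = dict(var_p=0.50, var_pt=0.20, var_pr=0.05, var_ptr_e=0.30)

# Projecting reliability for alternative numbers of tasks and raters shows why
# adding tasks (the larger interaction component here) helps more than adding raters.
for n_tasks, n_raters in [(2, 2), (2, 3), (3, 2), (4, 3)]:
    g = g_coefficient(**components, n_tasks=n_tasks, n_raters=n_raters)
    print(f"{n_tasks} tasks x {n_raters} raters: G = {g:.2f}")
```

Swapping in variance components estimated from an actual G study would turn the same loop into a real D-study projection.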
ISBN: 9798516902871
Subjects--Topical Terms: Linguistics.
Subjects--Index Terms: Construct validity
LDR  05464nmm a2200421 4500
001  2282043
005  20210927083526.5
008  220723s2021 ||||||||||||||||| ||eng d
020    $a 9798516902871
035    $a (MiAaPQ)AAI28416653
035    $a AAI28416653
040    $a MiAaPQ $c MiAaPQ
100  1  $a Nguyen, Phuong Thi Tuyet. $0 (orcid)0000-0002-6799-6397 $3 3560785
245  10 $a Investigating Reliability and Construct Validity of a Source-Based Academic Writing Test for Placement Purposes.
260  1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2021
300    $a 397 p.
500    $a Source: Dissertations Abstracts International, Volume: 83-01, Section: A.
500    $a Advisor: Chapelle, Carol.
502    $a Thesis (Ph.D.)--Iowa State University, 2021.
506    $a This item must not be sold to any third party vendors.
520    $a Source-based writing, in which writers read or listen to academic content before writing, has been considered to better assess academic writing skills than independent writing tasks (Read, 1990; Weigle, 2004). Because scores resulting from ratings of test takers' source-based writing task responses are treated as indicators of their academic writing ability, researchers have begun to investigate the meaning of scores on source-based academic writing tests in an attempt to define the construct measured on such tests. Although this research has resulted in insights about source-based writing constructs and the rating reliability of such tests, it has been limited in its research perspective, the methods for collecting data about the rating process, and the clarity of the connection between reliability and construct validity. This study aimed to collect and analyze evidence regarding the reliability and construct validity of a source-based academic English test for placement purposes, called the EPT Writing, and to show the relationship between these two parts of the study by presenting the evidence in a validity argument (Kane, 1992, 2006, 2013). Specifically, important reliability aspects, including the appropriateness of the rating rubric based on raters' opinions and statistical evidence, the performance of the raters in terms of severity, consistency, and bias, as well as test score reliability, were examined. Also, the construct of academic source-based writing assessed by the EPT Writing was explored by analysis of the writing features that raters attended to while rating test takers' responses. The study employed a mixed-methods multiphase research design (Creswell & Plano Clark, 2012) in which both quantitative and qualitative data were collected and analyzed in two sequential phases to address the research questions. In Phase 1, quantitative data, consisting of 1,300 operational ratings provided by the EPT Office, were analyzed using Many-Facets Rasch Measurement (MFRM) and Generalizability theory to address the research questions related to the rubric's functionality, raters' performance, and score reliability. In Phase 2, 630 experimental ratings, 90 stimulated recalls collected with the assistance of eye-tracking records, as well as nine interviews with nine raters were analyzed to address the research questions pertaining to raters' opinions of the rubric and the writing features that attracted raters' attention during rating. The findings were presented in a validity argument to show the connection between the reliability of the ratings and the construct validity, which needs to be taken into account in research on rating processes. Overall, the raters' interviews and MFRM analysis of the operational ratings showed that the rubric was mostly appropriate for providing evidence of variation in source-based academic writing ability. Regarding raters' performance, MFRM analysis revealed that while most raters maintained their comparability and consistency in terms of severity, and impartiality towards the writing tasks, some of them were significantly more generous, inconsistent, and biased against task types. The score reliability estimate for a 2-task × 2-rater design was found to be below the desired level, suggesting that more tasks and raters are needed to increase reliability. Additionally, analysis of the verbal reports indicated that the raters attended to the writing features aligned with the source-based academic writing construct that the test aims to measure.
The conclusion presents a partial validity framework for the EPT Writing, in addition to implications for construct definition of source-based academic writing tests, cognition research methods, and language assessment validation research. Recommendations for the EPT Writing include a clearer definition of the test construct, revision of the rubric, and more rigorous rater training. Suggested directions for future research include further research investigating raters' cognition in source-based writing assessment and additional validation studies for other inferences of the validity framework for the EPT Writing.
590    $a School code: 0097.
650  4  $a Linguistics. $3 524476
650  4  $a Language. $3 643551
650  4  $a Educational tests & measurements. $3 3168483
650  4  $a Educational evaluation. $3 526425
650  4  $a Educational administration. $3 2122799
650  4  $a Language arts. $3 532624
653    $a Construct validity
653    $a Eye tracking
653    $a Mixed methods
653    $a Validity argument
653    $a Rating reliability
653    $a Human scoring
690    $a 0290
690    $a 0679
690    $a 0288
690    $a 0443
690    $a 0279
690    $a 0514
710  2  $a Iowa State University. $b English. $3 1020984
773  0  $t Dissertations Abstracts International $g 83-01A.
790    $a 0097
791    $a Ph.D.
792    $a 2021
793    $a English
856  40 $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28416653
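For readers unfamiliar with the MARC layout shown above (tag, indicators, then $-prefixed subfields), the following sketch pulls the title (245 $a), the topical subjects (650 $a), and the electronic-resource URL (856 $u) out of text formatted like this listing. It is a simplified illustration that assumes one value per subfield code and one field per line, not a general MARC parser.

```python
# Minimal illustration of the MARC field layout above: "TAG  INDICATORS  $code value ...".
# Simplifying assumptions: each field fits on one line and each subfield code appears once.

def parse_line(line):
    """Split one display line into (tag, {subfield_code: value})."""
    tag_part, _, rest = line.partition("$")
    subfields = {}
    for chunk in ("$" + rest).split(" $"):
        chunk = chunk.lstrip("$").strip()
        if chunk:
            code, _, value = chunk.partition(" ")
            subfields[code] = value.strip()
    return tag_part.split()[0], subfields

record_text = """\
245  10 $a Investigating Reliability and Construct Validity of a Source-Based Academic Writing Test for Placement Purposes.
650  4  $a Linguistics. $3 524476
856  40 $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28416653"""

fields = [parse_line(line) for line in record_text.splitlines()]
title = next(sub["a"] for tag, sub in fields if tag == "245")
subjects = [sub["a"] for tag, sub in fields if tag == "650"]
url = next(sub["u"] for tag, sub in fields if tag == "856")
print(title)
print(subjects)
print(url)
```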
Holdings
Barcode: W9433776
Location: Electronic resources
Circulation category: 11.線上閱覽_V (online viewing)
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0