Faking and the Validity of Personality Tests: Using New Faking-Resistant Measures to Study Some Old Questions.
Record type: Bibliographic - electronic resource : Monograph/item
Title: Faking and the Validity of Personality Tests: Using New Faking-Resistant Measures to Study Some Old Questions.
Author: Huber, Christopher R.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2017
Extent: 294 p.
Note: Source: Dissertation Abstracts International, Volume: 78-08(E), Section: B.
Contained by: Dissertation Abstracts International, 78-08B(E).
Subject: Occupational psychology.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10255935
ISBN: 9781369680058
Thesis (Ph.D.)--University of Minnesota, 2017.
Despite strong evidence supporting the validity of personality measures for personnel selection, their susceptibility to faking has been a persistent concern. Research has found that many job applicants exaggerate their possession of desirable traits, and there are reasons to believe that this distortion reduces criterion-related validity. However, the lack of studies that combine experimental control with real-world generalizability makes it difficult to isolate the effects of applicant faking. Experimental studies have typically induced faking using explicit instructions to fake, which elicit unusually extreme faking compared to typical applicant settings. A variety of non-experimental approaches have also been employed, but these approaches are largely inadequate for establishing cause-and-effect relationships. Thus, researchers continue to debate whether applicant faking substantially attenuates the validity of personality tests. The present study used a new experimental framework to study this question and related methodological issues in the faking literature. First, it included a subtle incentive to fake in addition to explicit instructions to respond honestly or fake good. Second, it compared faking on standard Likert scales to faking on multidimensional forced choice (MFC) scales designed to resist deception. Third, it compared more and less fakable versions of the same MFC inventory to eliminate confounding differences between MFC and Likert scales. The result was a 3 × 3 design that simultaneously manipulated the motivation and ability to fake, allowing for a more rigorous examination of the faking-validity relationship. Results indicated complex relationships between faking and the validity of personality scores. Directed fakers were much better at raising their scores on Likert scales than on MFC measures of the same traits. However, MFC scales failed to retain more validity than Likert scales when participants faked.
Supplemental analyses suggested that extreme faking decimated the construct validity of all scales regardless of their fakability. Faking also added new common method variance to the Likert scales, which in turn contributed to the scales' criterion-related validity. In addition to the effects of faking, the present study investigated two recurring methodological issues in the faking literature. First, I investigated the claim that directed faking is fundamentally different from typical faking by comparing results from directed and incentivized fakers. Directed faking results generally replicated using a subtle incentive to fake, but the effects were much smaller and less consistent. Second, some have argued that traditional criterion-related validity coefficients fail to capture the negative effects of faking on actual selection decisions. I investigated this possibility by creating simulated selection pools in which fakers and honest responders competed for limited positions. The simulation results generally indicated reasonable correspondence between validity estimates and selected group performance, suggesting that validity coefficients adequately reflected the effects of faking. Results are interpreted using existing theories of faking, and new methodologies are proposed to advance the study of typical faking behavior.
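The simulated selection pools described in the abstract, where fakers and honest responders compete for a limited number of positions, can be sketched roughly as follows. This is a toy illustration only, not the author's code: the pool size, faker rate, score inflation, and assumed validity of .30 are all invented parameters.

```python
import random
import statistics

random.seed(0)

def correlation(xs, ys):
    """Pearson correlation using population standard deviations."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.pstdev(xs), statistics.pstdev(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) * sx * sy)

# Hypothetical pool: honest responders report scores near their true trait
# level; fakers inflate their observed score, weakening the score-performance link.
N, FAKER_RATE, INFLATION, SELECT_K = 1000, 0.3, 1.0, 100

applicants = []
for _ in range(N):
    true_trait = random.gauss(0, 1)
    performance = 0.3 * true_trait + random.gauss(0, 1)  # assumed true validity ~.30
    faker = random.random() < FAKER_RATE
    observed = true_trait + (INFLATION if faker else 0.0) + random.gauss(0, 0.3)
    applicants.append((observed, performance))

# Criterion-related validity coefficient in the mixed pool
validity = correlation([obs for obs, _ in applicants],
                       [perf for _, perf in applicants])

# Top-down selection: hire the SELECT_K applicants with the highest observed scores,
# then compare the selected group's performance to the pool average.
selected = sorted(applicants, key=lambda a: -a[0])[:SELECT_K]
selected_gain = (statistics.mean(p for _, p in selected)
                 - statistics.mean(p for _, p in applicants))

print(f"validity r = {validity:.3f}")
print(f"selected-group performance gain = {selected_gain:.3f}")
```

Comparing `validity` against `selected_gain` across conditions is the kind of check the study describes: if the two track each other, validity coefficients adequately reflect the practical effect of faking on selection decisions.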
Subjects--Topical Terms: Occupational psychology.
LDR  04274nmm a2200301 4500
001  2117717
005  20170530090539.5
008  180830s2017 ||||||||||||||||| ||eng d
020    $a 9781369680058
035    $a (MiAaPQ)AAI10255935
035    $a AAI10255935
040    $a MiAaPQ $c MiAaPQ
100 1  $a Huber, Christopher R. $3 3279511
245 10 $a Faking and the Validity of Personality Tests: Using New Faking-Resistant Measures to Study Some Old Questions.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2017
300    $a 294 p.
500    $a Source: Dissertation Abstracts International, Volume: 78-08(E), Section: B.
500    $a Adviser: Nathan R. Kuncel.
502    $a Thesis (Ph.D.)--University of Minnesota, 2017.
590    $a School code: 0130.
650  4 $a Occupational psychology. $3 2122852
650  4 $a Personality psychology. $3 2144789
650  4 $a Organizational behavior. $3 516683
690    $a 0624
690    $a 0625
690    $a 0703
710 2  $a University of Minnesota. $b Psychology. $3 1024075
773 0  $t Dissertation Abstracts International $g 78-08B(E).
790    $a 0130
791    $a Ph.D.
792    $a 2017
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10255935
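A MARC field rendered in compact text form, such as `100 1  $a Huber, Christopher R. $3 3279511`, can be split back into its tag, indicators, and subfields mechanically. The sketch below assumes that compact `TAG IND $a value` layout; it is an illustration for this record's text form, not a full MARC-21 parser (it does not handle binary records or escaped `$` characters).

```python
def parse_marc_line(line):
    """Split 'TAG IND $a value $b value' into (tag, indicators, subfields).

    Control fields (001-008) carry no subfields and are returned as raw strings.
    """
    tag = line[:3]
    rest = line[3:].strip()
    if "$" not in rest:                      # control field: no subfield codes
        return tag, None, rest
    ind, _, subs = rest.partition("$")
    subfields = []
    for chunk in ("$" + subs).split("$")[1:]:
        code, value = chunk[0], chunk[1:].strip()
        subfields.append((code, value))
    return tag, ind.strip() or None, subfields

# Example lines taken from the record above
for line in [
    "001  2117717",
    "100 1  $a Huber, Christopher R. $3 3279511",
    "260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2017",
]:
    print(parse_marc_line(line))
```

Splitting on `$` works here because none of this record's subfield values contain a dollar sign; a production parser would instead read ISO 2709 delimiters or use an established MARC library.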
Holdings (1 item):
Barcode: W9328335
Location: Electronic resources
Circulation category: 01.外借(書)_YB (loanable book)
Material type: E-book
Call number: EB
Use type: Normal (一般使用)
Loan status: On shelf
Holds: 0