Three Essays on Ethical Issues in Natural Language Use for the Design and Implementation of Artificial Intelligence (AI) Systems.
Record type: Bibliographic - electronic resource : Monograph/item
Title/Author: Three Essays on Ethical Issues in Natural Language Use for the Design and Implementation of Artificial Intelligence (AI) Systems.
Author: Arhin, Kofi.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2023
Description: 133 p.
Notes: Source: Dissertations Abstracts International, Volume: 85-03, Section: B.
Contained by: Dissertations Abstracts International, 85-03B.
Subject: Computer science.
Electronic resource: https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30530536
ISBN: 9798380411011

Arhin, Kofi.
Three Essays on Ethical Issues in Natural Language Use for the Design and Implementation of Artificial Intelligence (AI) Systems. - Ann Arbor : ProQuest Dissertations & Theses, 2023. - 133 p.
Source: Dissertations Abstracts International, Volume: 85-03, Section: B.
Thesis (Ph.D.)--Rensselaer Polytechnic Institute, 2023.
This item must not be sold to any third party vendors.

This dissertation consists of three essays on ethical issues in natural language use for the design and implementation of artificial intelligence (AI) systems. Several ethical concerns have emerged regarding the use of AI to support human decision-making. The three studies highlight concerns related to confirmation bias and algorithmic fairness in both human and AI systems. Algorithmic fairness refers to efforts to ensure that AI systems are designed and deployed in a manner that does not discriminate against particular groups of people, specifically underrepresented group members. Confirmation bias is the tendency to make decisions based on existing beliefs. The dissertation examines these themes in the context of hiring and news veracity recommendations. The first two essays propose solutions to address algorithmic discrimination; the third proposes strategies to shape fairness judgments about algorithms in news veracity assessments and thereby improve the acceptance of algorithmic recommendations.

Essay 1. Human resources (HR) platforms using advanced information technology (IT) and artificial intelligence (AI) tools to assist with HR tasks are rapidly being adopted. Because these platforms have the potential both to remove human bias (e.g., confirmation bias) from personnel selection by standardizing the evaluation process and to codify discrimination in seemingly objective algorithms, there is a need to better understand their impact on diversity, equity, and inclusion (DEI). To this end, we use a real-world dataset of 2,506 applicants' interview responses on an HR platform to examine how applicants' job-irrelevant dialect use (i.e., African American English; AAE) affects selection decisions. We find that (1) greater use of AAE reduces an applicant's chances of getting hired, (2) the negative impact of AAE use is stronger in unstructured interview questions, and (3) this negative effect is strongest among Black applicants. In addition, we find that machine learning models are more discriminatory against Black applicants when predicting selection outcomes from unstructured questions, and that removing AAE words from machine learning models can enhance their fairness by making them less discriminatory against Black applicants. These findings have important implications for personnel selection processes in organizations seeking to improve social justice and algorithmic fairness.

Essay 2. Machine learning (ML) and artificial intelligence (AI) are increasingly playing a role in personnel assessment and selection. However, the use of ML and AI to support human resources (HR) tasks in organizations has a short history rife with significant challenges. For example, algorithms may reinforce existing inequalities in human processes (which may themselves result from confirmation bias) when the data used to train them reflects such inequalities. To address this challenge, we propose a loose-coupling algorithmic fairness framework that uses multiple sources of ground-truth labels (i.e., decentralization), thereby decoupling the relationship between predictors and target outcomes (i.e., reducing directness) in ML pipelines. We investigate this approach using a real-world dataset of 2,506 applicants' interview responses to predict candidate selection on a hiring platform. Models based on our framework estimate the similarity of candidates' interview responses to human-validated exemplar answers collected from HR websites. Compared to directly predicting historical hiring decisions, the proposed approach (1) leads to fairer outcomes for underrepresented group members and (2) is less able to predict candidate race. These results highlight how the procedural and distributive fairness of ML and AI systems in organizations can be enhanced through human and algorithmic collaboration.

Essay 3. Advances in information and internet technologies have made information sharing pervasive on platforms such as social media and blogs. Unfortunately, this has also enabled the effortless dissemination of false information (fake news) on these platforms. While several fact-checking tools and resources have been developed to prevent the spread of fake news, existing studies have found that these tools are not always effective because of cognitive biases such as confirmation bias: people reject a fact-checking tool's recommendation when it does not align with their beliefs. To address this challenge, the third study proposes a conceptual framework explaining how the similarity between AI tools and users, users' perceived fairness of AI tools, and the autonomy AI tools give users during interaction are key factors that can improve the acceptance of algorithmic advice in the presence of confirmation bias. The study posits that similarity leads to perceived fairness and that perceived fairness improves the acceptance of algorithmic advice. Additionally, the relationship between perceived fairness and the decision to accept or reject an algorithm's recommendation is expected to be moderated by the degree of autonomy the AI tool gives the user during the recommendation task. Future work will focus on gathering data to test the arguments advanced in the study.
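Essay 2's scoring idea, comparing a candidate's answer to validated exemplar answers rather than to historical hiring outcomes, can be illustrated with a short sketch. This is a minimal illustration only: the abstract does not specify the text representation, so TF-IDF vectors and cosine similarity are assumed here, and the exemplar and response texts are invented.

    # Minimal sketch of exemplar-similarity scoring (assumed representation:
    # TF-IDF + cosine similarity; all texts below are invented).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Human-validated exemplar answers for one interview question (hypothetical).
    exemplars = [
        "I broke the project into milestones and checked in with the team weekly.",
        "I prioritized tasks by impact and communicated delays early.",
    ]

    # Candidate responses to the same question (hypothetical).
    responses = [
        "I set milestones, tracked progress, and kept the team informed.",
        "I prefer to work alone and report progress only at the end.",
    ]

    vectorizer = TfidfVectorizer().fit(exemplars + responses)
    E = vectorizer.transform(exemplars)
    R = vectorizer.transform(responses)

    # Score each response by its best match against any exemplar; the score
    # never sees historical hiring decisions, which is the decoupling
    # ("loose coupling") the abstract describes.
    scores = cosine_similarity(R, E).max(axis=1)
    for text, score in zip(responses, scores):
        print(f"{score:.2f}  {text}")

Because the score depends only on the exemplar texts and not on past decisions, a model built this way cannot directly inherit bias encoded in historical hiring labels.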
ISBN: 9798380411011
Subjects--Topical Terms: Computer science.
Subjects--Index Terms: Adverse impact
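The index term "Adverse impact" has a standard operationalization in US personnel selection: the EEOC four-fifths rule, which flags a selection procedure when a protected group's selection rate falls below 80% of the highest group's rate. A minimal sketch with invented counts:

    # Four-fifths (80%) rule check for adverse impact; the applicant and
    # hire counts below are invented for illustration.
    def selection_rate(hired: int, applicants: int) -> float:
        return hired / applicants

    def impact_ratio(protected_rate: float, reference_rate: float) -> float:
        # Ratio of the protected group's selection rate to the highest
        # (reference) group's rate; values below 0.8 suggest adverse impact
        # under the EEOC four-fifths guideline.
        return protected_rate / reference_rate

    protected = selection_rate(hired=30, applicants=100)   # 0.30
    reference = selection_rate(hired=50, applicants=100)   # 0.50

    ratio = impact_ratio(protected, reference)
    print(f"impact ratio = {ratio:.2f} -> {'flag' if ratio < 0.8 else 'ok'}")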
LDR    06717nmm a2200409 4500
001    2396871
005    20240618081806.5
006    m o d
007    cr#unu||||||||
008    251215s2023 ||||||||||||||||| ||eng d
020    $a 9798380411011
035    $a (MiAaPQ)AAI30530536
035    $a AAI30530536
040    $a MiAaPQ $c MiAaPQ
100 1  $a Arhin, Kofi. $3 3665636
245 10 $a Three Essays on Ethical Issues in Natural Language Use for the Design and Implementation of Artificial Intelligence (AI) Systems.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2023
300    $a 133 p.
500    $a Source: Dissertations Abstracts International, Volume: 85-03, Section: B.
500    $a Advisor: Kuruzovich, Jason.
502    $a Thesis (Ph.D.)--Rensselaer Polytechnic Institute, 2023.
506    $a This item must not be sold to any third party vendors.
590    $a School code: 0185.
650  4 $a Computer science. $3 523869
650  4 $a Information technology. $3 532993
653    $a Adverse impact
653    $a Algorithmic fairness
653    $a Confirmation bias
653    $a Diversity equity
653    $a Machine learning
690    $a 0454
690    $a 0800
690    $a 0489
690    $a 0984
710 2  $a Rensselaer Polytechnic Institute. $b Management. $3 2105500
773 0  $t Dissertations Abstracts International $g 85-03B.
790    $a 0185
791    $a Ph.D.
792    $a 2023
793    $a English
856 40 $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30530536
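The record above follows the MARC 21 bibliographic format: a three-digit tag, up to two indicator characters, and subfields introduced by $ codes. A small, dependency-free sketch of how a few of these fields could be represented and queried in code; the values are copied from the record above, but the dict layout and the subfield helper are illustrative, not a standard API:

    # Hypothetical in-memory form of a few MARC fields from this record:
    # tag -> list of (indicators, {subfield_code: value}).
    record = {
        "020": [("  ", {"a": "9798380411011"})],
        "100": [("1 ", {"a": "Arhin, Kofi."})],
        "245": [("10", {"a": "Three Essays on Ethical Issues in Natural "
                             "Language Use for the Design and Implementation "
                             "of Artificial Intelligence (AI) Systems."})],
        "856": [("40", {"u": "https://pqdd.sinica.edu.tw/twdaoapp/servlet/"
                             "advanced?query=30530536"})],
    }

    def subfield(rec, tag, code):
        """Return the first value of subfield `code` in field `tag`, or None."""
        for _indicators, subfields in rec.get(tag, []):
            if code in subfields:
                return subfields[code]
        return None

    print("ISBN: ", subfield(record, "020", "a"))
    print("Title:", subfield(record, "245", "a"))

Production code would normally use a MARC library such as pymarc rather than hand-rolled dicts.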
Holdings
Barcode: W9505191
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0