False News: A Digital Conceptualization and Potential Mitigation Using Algorithmic Advice.
Record type:
Bibliographic - electronic resource : Monograph/item
Title / Author:
False News: A Digital Conceptualization and Potential Mitigation Using Algorithmic Advice. / Khandoozi, Seyedali.
Author:
Khandoozi, Seyedali.
Publisher:
Ann Arbor : ProQuest Dissertations & Theses, 2022
Extent:
315 p.
Notes:
Source: Dissertations Abstracts International, Volume: 84-06, Section: B.
Contained by:
Dissertations Abstracts International, 84-06B.
Subject:
Behavior.
Electronic resource:
https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30168774
ISBN:
9798358424630
Thesis (Ph.D.)--Queen's University (Canada), 2022.
This item must not be sold to any third party vendors.
Across three conceptual and empirical studies, we respond to urgent calls from all corners of society to address the wicked and universal problem of false messages circulating on the Internet, better known as "fake news". In the first paper, we problematize the use of the term "fake news", highlighting issues around its different meanings in prior academic research, its current meaning in the vernacular, and its adequacy to cover the entirety of the phenomena of online falsehoods. Building on existing attempts at addressing the conceptualization problem, we offer our own solution based on the literature on the ontology of digital objects, proposing the concept of "false messages", of which "false news" is a subset. We also situate this new concept in its broader technical and social context. In the second paper, we shift our attention to mitigating the problem of false news and compare the effects of two algorithmic advisors on individuals' judgment about news facticity. A large number of algorithms are being developed to identify false news based on the content of news articles (content-based algorithms) or social reaction to news articles (social-based algorithms), which, we argue, can act as algorithmic advisors to humans about news facticity. Based on the theory of technology dominance (TTD), Judge-Advisor System (JAS) studies, and the computers-are-social-actors (CASA) paradigm, we hypothesize and find some empirical evidence that content-based and social-based algorithmic advisors differ in their ability to influence individuals' judgments about news facticity. In the final paper, we compare two algorithmic advisors that differ in their source of training data, with one advisor trained using data from a fact-checker with liberal political attitudes and the other trained with data from a fact-checker with conservative political attitudes. Extending the TTD by linking it to similarity-attraction studies, we find different patterns of advice taking from the two algorithmic advisors among US-based Democrats, Republicans, and independents: Democrats utilized advice from the algorithmic advisor with liberal training data, Republicans did not utilize advice from either algorithmic advisor, and independents utilized advice from the liberal algorithmic advisor with more nuance than the Democrats.
MARC record:
LDR    03531nmm a2200385 4500
001    2394442
005    20240422070857.5
006    m o d
007    cr#unu||||||||
008    251215s2022 ||||||||||||||||| ||eng d
020    $a 9798358424630
035    $a (MiAaPQ)AAI30168774
035    $a (MiAaPQ)QueensUCan_197430454
035    $a AAI30168774
040    $a MiAaPQ $c MiAaPQ
100 1  $a Khandoozi, Seyedali. $3 3763916
245 10 $a False News: A Digital Conceptualization and Potential Mitigation Using Algorithmic Advice.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2022
300    $a 315 p.
500    $a Source: Dissertations Abstracts International, Volume: 84-06, Section: B.
500    $a Advisor: Brohman, Kathryn.
502    $a Thesis (Ph.D.)--Queen's University (Canada), 2022.
506    $a This item must not be sold to any third party vendors.
520    $a Across three conceptual and empirical studies, we respond to urgent calls from all corners of society to address the wicked and universal problem of false messages circulating on the Internet, better known as "fake news". In the first paper, we problematize the use of the term "fake news", highlighting issues around its different meanings in prior academic research, its current meaning in the vernacular, and its adequacy to cover the entirety of the phenomena of online falsehoods. Building on existing attempts at addressing the conceptualization problem, we offer our own solution based on the literature on the ontology of digital objects, proposing the concept of "false messages", of which "false news" is a subset. We also situate this new concept in its broader technical and social context. In the second paper, we shift our attention to mitigating the problem of false news and compare the effects of two algorithmic advisors on individuals' judgment about news facticity. A large number of algorithms are being developed to identify false news based on the content of news articles (content-based algorithms) or social reaction to news articles (social-based algorithms), which, we argue, can act as algorithmic advisors to humans about news facticity. Based on the theory of technology dominance (TTD), Judge-Advisor System (JAS) studies, and the computers-are-social-actors (CASA) paradigm, we hypothesize and find some empirical evidence that content-based and social-based algorithmic advisors differ in their ability to influence individuals' judgments about news facticity. In the final paper, we compare two algorithmic advisors that differ in their source of training data, with one advisor trained using data from a fact-checker with liberal political attitudes and the other trained with data from a fact-checker with conservative political attitudes. Extending the TTD by linking it to similarity-attraction studies, we find different patterns of advice taking from the two algorithmic advisors among US-based Democrats, Republicans, and independents: Democrats utilized advice from the algorithmic advisor with liberal training data, Republicans did not utilize advice from either algorithmic advisor, and independents utilized advice from the liberal algorithmic advisor with more nuance than the Democrats.
590    $a School code: 0283.
650  4 $a Behavior. $3 532476
650  4 $a Computer science. $3 523869
650  4 $a Political parties. $3 516328
650  4 $a User generated content. $3 3562474
650  4 $a Political science. $3 528916
650  4 $a Censorship. $3 572838
650  4 $a Presidential elections. $3 3560858
650  4 $a Information technology. $3 532993
650  4 $a Mass communications. $3 3422380
650  4 $a Web studies. $3 2122754
690    $a 0984
690    $a 0800
690    $a 0615
690    $a 0489
690    $a 0646
690    $a 0708
710 2  $a Queen's University (Canada). $3 1017786
773 0  $t Dissertations Abstracts International $g 84-06B.
790    $a 0283
791    $a Ph.D.
792    $a 2022
793    $a English
856 40 $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30168774
Holdings (1 item):
Barcode: W9502762
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0