False News: A Digital Conceptualization and Potential Mitigation Using Algorithmic Advice.
Record Type:
Electronic resources : Monograph/item
Title/Author:
False News: A Digital Conceptualization and Potential Mitigation Using Algorithmic Advice.
Author:
Khandoozi, Seyedali.
Published:
Ann Arbor : ProQuest Dissertations & Theses, 2022.
Description:
315 p.
Notes:
Source: Dissertations Abstracts International, Volume: 84-06, Section: B.
Contained By:
Dissertations Abstracts International, 84-06B.
Subject:
Behavior.
Online resource:
https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30168774
ISBN:
9798358424630
Khandoozi, Seyedali.
False News: A Digital Conceptualization and Potential Mitigation Using Algorithmic Advice.
- Ann Arbor : ProQuest Dissertations & Theses, 2022 - 315 p.
Source: Dissertations Abstracts International, Volume: 84-06, Section: B.
Thesis (Ph.D.)--Queen's University (Canada), 2022.
This item must not be sold to any third party vendors.
Across three conceptual and empirical studies, we respond to urgent calls from all corners of society to address the wicked and universal problem of false messages circulating on the Internet, better known as "fake news". In the first paper, we problematize the use of the term "fake news", highlighting issues around its different meanings in prior academic research, its current meaning in the vernacular, and its adequacy to cover the entirety of the phenomena of online falsehoods. Building on existing attempts at addressing the conceptualization problem, we offer our own solution based on the literature on the ontology of digital objects, proposing the concept of "false messages", of which "false news" is a subset. We also situate this new concept in its broader technical and social context. In the second paper, we shift our attention to mitigating the problem of false news and compare the effects of two algorithmic advisors on individuals' judgments about news facticity. A large number of algorithms are being developed to identify false news based on the content of news articles (content-based algorithms) or the social reaction to news articles (social-based algorithms), which, we argue, can act as algorithmic advisors to humans about news facticity. Based on the theory of technology dominance (TTD), Judge-Advisor System (JAS) studies, and the computers-are-social-actors (CASA) paradigm, we hypothesize and find some empirical evidence that content-based and social-based algorithmic advisors differ in their ability to influence individuals' judgments about news facticity. In the final paper, we compare two algorithmic advisors that differ in their source of training data: one advisor trained using data from a fact-checker with liberal political attitudes and the other trained with data from a fact-checker with conservative political attitudes.
Extending the TTD by linking it to similarity-attraction studies, we find different patterns of advice-taking from the two algorithmic advisors among US-based Democrats, Republicans, and independents: Democrats utilized advice from the algorithmic advisor with liberal training data, Republicans utilized advice from neither advisor, and independents utilized advice from the liberal algorithmic advisor with more nuance than the Democrats.
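The "content-based algorithms" mentioned in the abstract classify an article from its text alone. As a purely illustrative sketch (this is not the dissertation's method; the training corpus, labels, and word features below are invented), a minimal bag-of-words Naive Bayes classifier of this kind might look like:

```python
from collections import Counter
import math

# Toy labeled corpus (invented for illustration only).
TRAIN = [
    ("shocking miracle cure doctors hate", "false"),
    ("you wont believe this secret trick", "false"),
    ("government study finds modest rise in inflation", "true"),
    ("researchers publish peer reviewed climate data", "true"),
]

def train(corpus):
    """Count word frequencies per label for Naive Bayes."""
    counts = {"false": Counter(), "true": Counter()}
    totals = Counter()
    for text, label in corpus:
        words = text.split()
        counts[label].update(words)
        totals[label] += len(words)
    vocab = {w for c in counts.values() for w in c}
    return counts, totals, vocab

def classify(text, counts, totals, vocab):
    """Pick the label with the higher smoothed log-likelihood."""
    scores = {}
    for label in counts:
        score = 0.0
        for w in text.split():
            # Laplace smoothing over the shared vocabulary.
            p = (counts[label][w] + 1) / (totals[label] + len(vocab))
            score += math.log(p)
        scores[label] = score
    return max(scores, key=scores.get)

counts, totals, vocab = train(TRAIN)
print(classify("secret miracle trick doctors hate", counts, totals, vocab))
# → false
```

A social-based advisor, by contrast, would replace the word features with signals about how users react to and share the article; the dissertation compares how people take advice from these two kinds of classifier, not the classifiers' internals.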
ISBN: 9798358424630
Subjects--Topical Terms: Behavior.
LDR    03531nmm a2200385 4500
001    2394442
005    20240422070857.5
006    m o d
007    cr#unu||||||||
008    251215s2022 ||||||||||||||||| ||eng d
020    $a 9798358424630
035    $a (MiAaPQ)AAI30168774
035    $a (MiAaPQ)QueensUCan_197430454
035    $a AAI30168774
040    $a MiAaPQ $c MiAaPQ
100 1  $a Khandoozi, Seyedali. $3 3763916
245 10 $a False News: A Digital Conceptualization and Potential Mitigation Using Algorithmic Advice.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2022
300    $a 315 p.
500    $a Source: Dissertations Abstracts International, Volume: 84-06, Section: B.
500    $a Advisor: Brohman, Kathryn.
502    $a Thesis (Ph.D.)--Queen's University (Canada), 2022.
506    $a This item must not be sold to any third party vendors.
520    $a Across three conceptual and empirical studies, we respond to urgent calls from all corners of society to address the wicked and universal problem of false messages circulating on the Internet, better known as "fake news". In the first paper, we problematize the use of the term "fake news", highlighting issues around its different meanings in prior academic research, its current meaning in the vernacular, and its adequacy to cover the entirety of the phenomena of online falsehoods. Building on existing attempts at addressing the conceptualization problem, we offer our own solution based on the literature on the ontology of digital objects, proposing the concept of "false messages", of which "false news" is a subset. We also situate this new concept in its broader technical and social context. In the second paper, we shift our attention to mitigating the problem of false news and compare the effects of two algorithmic advisors on individuals' judgment about news facticity. A large number of algorithms are being developed to identify false news based on the content of news articles (content-based algorithms) or social reaction to news articles (social-based algorithms), which we argue, can act as algorithmic advisors to humans about news facticity. Based on the theory of technology dominance (TTD), Judge-Advisor System (JAS) studies, and computers are social actors (CASA) paradigm, we hypothesize and find some empirical evidence that content-based and social-based algorithmic advisors differ in their ability to influence individuals' judgments about news facticity. In the final paper, we compare two algorithmic advisors that differ in their source of training data, with one advisor trained using data from a fact-checker with liberal political attitudes and the other trained with data from a fact-checker with conservative political attitudes. Extending the TTD by linking it to similarity-attraction studies, we find different patterns of advice taking from the two algorithmic advisors among US-based Democrats, Republicans, and independents, with Democrats utilizing advice from the algorithmic advisor with liberal training data and Republicans not utilizing advice from either algorithmic advisor, while independents utilized advice from the liberal algorithmic advisor with more nuances compared to the Democrats.
590    $a School code: 0283.
650  4 $a Behavior. $3 532476
650  4 $a Computer science. $3 523869
650  4 $a Political parties. $3 516328
650  4 $a User generated content. $3 3562474
650  4 $a Political science. $3 528916
650  4 $a Censorship. $3 572838
650  4 $a Presidential elections. $3 3560858
650  4 $a Information technology. $3 532993
650  4 $a Mass communications. $3 3422380
650  4 $a Web studies. $3 2122754
690    $a 0984
690    $a 0800
690    $a 0615
690    $a 0489
690    $a 0646
690    $a 0708
710 2  $a Queen's University (Canada). $3 1017786
773 0  $t Dissertations Abstracts International $g 84-06B.
790    $a 0283
791    $a Ph.D.
792    $a 2022
793    $a English
856 40 $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=30168774
Items (1 record):
Inventory Number: W9502762
Location Name: 電子資源 (Electronic resources)
Item Class: 11.線上閱覽_V (Online reading)
Material Type: 電子書 (E-book)
Call Number: EB
Usage Class: 一般使用 (Normal)
Loan Status: On shelf
No. of Reservations: 0