Navigating Uncertainty: Enhancing Decision-Making in Healthcare and Interpretable AI Adoption.
Record Type:
Electronic resources : Monograph/item
Title/Author:
Navigating Uncertainty: Enhancing Decision-Making in Healthcare and Interpretable AI Adoption. / Altintas, Onur.
Author:
Altintas, Onur.
Published:
Ann Arbor : ProQuest Dissertations & Theses, 2024.
Description:
210 p.
Notes:
Source: Dissertations Abstracts International, Volume: 85-11, Section: B.
Contained By:
Dissertations Abstracts International, 85-11B.
Subject:
Behavioral sciences.
Online resource:
https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=31298491
ISBN:
9798382599472
LDR 04667nmm a2200397 4500
001 2400002
005 20240916070039.5
006 m o d
007 cr#unu||||||||
008 251215s2024 ||||||||||||||||| ||eng d
020 $a 9798382599472
035 $a (MiAaPQ)AAI31298491
035 $a AAI31298491
040 $a MiAaPQ $c MiAaPQ
100 1 $a Altintas, Onur. $0 (orcid)0000-0001-5487-5273 $3 3769974
245 1 0 $a Navigating Uncertainty: Enhancing Decision-Making in Healthcare and Interpretable AI Adoption.
260 1 $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2024
300 $a 210 p.
500 $a Source: Dissertations Abstracts International, Volume: 85-11, Section: B.
500 $a Advisor: Seidmann, Abraham.
502 $a Thesis (Ph.D.)--Boston University, 2024.
520 $a In an increasingly complex and uncertain world, effective decision-making is crucial for success in various domains, from healthcare to business management. This dissertation explores two critical aspects of decision-making under uncertainty: continuous data monitoring in healthcare and the impact of interpretability on AI adoption in managerial decisions. By investigating these topics through three studies, we aim to provide valuable insights and recommendations for optimizing decision-making processes and outcomes in uncertain environments. The first study focuses on the importance of continuous data monitoring in managing patient care, specifically for individuals with implanted left ventricular assist devices (LVADs). LVADs support heart function in patients with advanced heart failure, but their long-term success is limited by serious adverse events such as stroke and death. By analyzing longitudinal blood pressure measurements and patients' clinical profiles, we demonstrate that maintaining low mean arterial pressure (MAP) is associated with an increased risk of adverse events, contrary to current clinical practices. This chapter proposes incorporating this low MAP ≤ 75 threshold into blood pressure management guidelines for LVAD recipients to potentially improve their survival rates and increase the attractiveness of this life-saving therapy. The second study shifts focus to managerial decision-making, empirically investigating the impact of AI interpretability on adoption and trust in uncertain business environments. We find that interpretability does not always improve AI adoption, and certain types of interpretability may even hinder adoption upon initial introduction. Moreover, while AI adoption is higher in more uncertain environments, trust in AI is generally lower. We observe that the adoption of AI increases under low uncertainty but decreases under high uncertainty, due to an incorrect assessment of AI's capabilities based on its accuracy. Notably, continuously presenting AI performance benchmarks alongside decision-makers' results promotes trust and mitigates the negative adoption trend under high uncertainty. Building upon these findings, the third study aims to isolate the effects of AI interpretability information from the impact of AI recommendations. By providing participants with interpretability information without explicit AI recommendations, we explore how different types of guidance affect perceptions, decision-making strategies, and performance. Our results indicate that while interpretability information influences perceptions and intended use of AI, it does not significantly improve decision-making performance in the absence of AI recommendations. This finding emphasizes the importance of providing AI decision support to enhance decision quality, even when interpretability information is available. The insights gained from these studies contribute to the growing body of knowledge in healthcare management and AI-driven decision-making under uncertainty. Our findings highlight the need for continuous data monitoring, application of interpretability techniques, and the provision of AI decision support to optimize decision-making processes and outcomes. By bridging the gap between data-driven insights and human decision-making, we can navigate the complexities of uncertain environments and drive positive change in patient care and managerial decision-making effectiveness.
590 $a School code: 0017.
650 4 $a Behavioral sciences. $3 529833
653 $a Mean arterial pressure
653 $a Left ventricular assist devices
653 $a Decision-making processes
653 $a AI adoption
653 $a Data monitoring
690 $a 0454
690 $a 0602
690 $a 0769
690 $a 0800
710 2 $a Boston University. $b Management QSB. $3 3288601
773 0 $t Dissertations Abstracts International $g 85-11B.
790 $a 0017
791 $a Ph.D.
792 $a 2024
793 $a English
856 4 0 $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=31298491
Items
Inventory Number: W9508322
Location Name: Electronic resources (電子資源)
Item Class: 11. Online reading_V (11.線上閱覽_V)
Material type: E-book (電子書)
Call number: EB
Usage Class: Normal (一般使用)
Loan Status: On shelf
No. of reservations: 0
Opac note:
Attachments: