Reinforcement learning design for cancer clinical trials.
紀錄類型:
書目-電子資源 : Monograph/item
正題名/作者:
Reinforcement learning design for cancer clinical trials./
作者:
Zhao, Yufan.
出版者:
Ann Arbor : ProQuest Dissertations & Theses, : 2009,
面頁冊數:
119 p.
附註:
Source: Dissertation Abstracts International, Volume: 70-07, Section: B, page: 3862.
Contained By:
Dissertation Abstracts International70-07B.
標題:
Biostatistics. -
電子資源:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3366451
ISBN:
9781109277180
Thesis (Ph.D.)--The University of North Carolina at Chapel Hill, 2009.
There has been significant recent research activity in developing therapies that are tailored to each individual. Finding such therapies in treatment settings involving multiple decision times is a major challenge. In this dissertation, we develop reinforcement learning trials for discovering these optimal regimens for life-threatening diseases such as cancer. A temporal-difference learning method called Q-learning is utilized, which involves learning an optimal policy from a single training set of finite longitudinal patient trajectories. Approximating the Q-function with time-indexed parameters can be achieved by using support vector regression or extremely randomized trees. Within this framework, we demonstrate that the procedure can extract optimal strategies directly from clinical data without relying on the identification of any accurate mathematical models, unlike approaches based on adaptive design. We show that reinforcement learning has tremendous potential in clinical research because it can select actions that improve outcomes by taking into account delayed effects even when the relationship between actions and outcomes is not fully known.
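The approach the abstract describes — fitting a Q-function with extremely randomized trees over a training set of patient trajectories, then acting greedily — can be sketched roughly as below. This is an illustrative single-decision-time simplification with simulated data; the variable names, the outcome model, and the omission of the backward-induction step for earlier stages are all assumptions for exposition, not details taken from the dissertation.

```python
# Minimal sketch of fitted Q-learning with extremely randomized trees.
# All data here are simulated; nothing is from the dissertation itself.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(0)

# Simulated trajectories: each row is one decision point for one patient,
# recorded as (state, action, reward).
n = 500
state = rng.normal(size=(n, 1))        # e.g. a prognostic score
action = rng.integers(0, 2, size=n)    # two candidate treatments
# Hypothetical outcome model: treatment 1 helps only when the score is high.
reward = action * state[:, 0] + rng.normal(scale=0.1, size=n)

# At the final decision time the Q-function is fit directly to observed
# rewards; earlier stages would add the maximum of the fitted Q over
# actions at the next state (backward induction, omitted for brevity).
X = np.column_stack([state, action])
q_model = ExtraTreesRegressor(n_estimators=100, random_state=0).fit(X, reward)

def greedy_action(s):
    """Pick the action with the larger estimated Q-value at state s."""
    q_vals = [q_model.predict([[s, a]])[0] for a in (0, 1)]
    return int(np.argmax(q_vals))
```

With this simulated outcome model, the greedy policy should recover the planted rule: treatment 1 for high prognostic scores, treatment 0 for low ones, without any explicit mathematical model of the disease dynamics.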
LDR    03088nmm a2200325 4500
001    2164198
005    20181030085012.5
008    190424s2009 ||||||||||||||||| ||eng d
020    $a 9781109277180
035    $a (MiAaPQ)AAI3366451
035    $a (MiAaPQ)unc:10396
035    $a AAI3366451
040    $a MiAaPQ $c MiAaPQ
100 1  $a Zhao, Yufan. $3 3352243
245 10 $a Reinforcement learning design for cancer clinical trials.
260  1 $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2009
300    $a 119 p.
500    $a Source: Dissertation Abstracts International, Volume: 70-07, Section: B, page: 3862.
500    $a Adviser: Michael R. Kosorok.
502    $a Thesis (Ph.D.)--The University of North Carolina at Chapel Hill, 2009.
520    $a There has been significant recent research activity in developing therapies that are tailored to each individual. Finding such therapies in treatment settings involving multiple decision times is a major challenge. In this dissertation, we develop reinforcement learning trials for discovering these optimal regimens for life-threatening diseases such as cancer. A temporal-difference learning method called Q-learning is utilized, which involves learning an optimal policy from a single training set of finite longitudinal patient trajectories. Approximating the Q-function with time-indexed parameters can be achieved by using support vector regression or extremely randomized trees. Within this framework, we demonstrate that the procedure can extract optimal strategies directly from clinical data without relying on the identification of any accurate mathematical models, unlike approaches based on adaptive design. We show that reinforcement learning has tremendous potential in clinical research because it can select actions that improve outcomes by taking into account delayed effects even when the relationship between actions and outcomes is not fully known.
520    $a To support our claims, the methodology's practical utility is first illustrated in a virtual simulated clinical trial. We then apply this general strategy, with significant refinements, to studying and discovering optimal treatments for advanced metastatic stage IIIB/IV non-small cell lung cancer (NSCLC). In addition to the complexity of the NSCLC problem of selecting optimal compounds for first- and second-line treatments based on prognostic factors, another primary scientific goal is to determine the optimal time to initiate second-line therapy, either immediately or delayed after induction therapy, yielding the longest overall survival time. We show that reinforcement learning not only successfully identifies optimal strategies for two lines of treatment from clinical data, but also reliably selects the best initial time for second-line therapy while taking into account heterogeneities of NSCLC across patients.
590    $a School code: 0153.
650  4 $a Biostatistics. $3 1002712
650  4 $a Artificial intelligence. $3 516317
650  4 $a Statistics. $3 517247
690    $a 0308
690    $a 0800
690    $a 0463
710 2  $a The University of North Carolina at Chapel Hill. $b Biostatistics. $3 1023527
773 0  $t Dissertation Abstracts International $g 70-07B.
790    $a 0153
791    $a Ph.D.
792    $a 2009
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3366451
Holdings (1 item):
Barcode: W9363745
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0