Improving Medical Image Segmentation by Designing Around Clinical Context.
Record type:
Bibliographic - Electronic resource : Monograph/item
Title / Author:
Improving Medical Image Segmentation by Designing Around Clinical Context.
Author:
Yi, Darvin.
Publisher:
Ann Arbor : ProQuest Dissertations & Theses, 2020
Physical description:
197 p.
Notes:
Source: Dissertations Abstracts International, Volume: 82-06, Section: B.
Contained by:
Dissertations Abstracts International, 82-06B.
Subject:
Medical imaging.
Electronic resource:
https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28121431
ISBN:
9798698538400
LDR    06599nmm a2200337 4500
001    2281873
005    20210927083417.5
008    220723s2020 ||||||||||||||||| ||eng d
020    $a 9798698538400
035    $a (MiAaPQ)AAI28121431
035    $a (MiAaPQ)STANFORDct300cb3464
035    $a AAI28121431
040    $a MiAaPQ $c MiAaPQ
100 1  $a Yi, Darvin. $3 3560583
245 10 $a Improving Medical Image Segmentation by Designing Around Clinical Context.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2020
300    $a 197 p.
500    $a Source: Dissertations Abstracts International, Volume: 82-06, Section: B.
500    $a Advisor: Rubin, Daniel; Langlotz, Curtis; Re, Christopher; Yeung, Serena.
502    $a Thesis (Ph.D.)--Stanford University, 2020.
506    $a This item must not be sold to any third party vendors.
520    $a
The rise of deep learning (DL) has produced many novel segmentation algorithms, which have in turn revolutionized the field of medical image segmentation. However, several distinctions between natural and medical computer vision necessitate specialized algorithms to optimize performance, including the multi-modality of medical data, the differences in imaging protocols between centers, and the limited amount of annotated data. These differences limit how well current state-of-the-art computer vision methods transfer to medical imaging. For segmentation, the major gaps our algorithms must bridge to become clinically useful are: (1) generalizing to different imaging protocols, (2) becoming robust to training on noisy labels, and (3) generally improving segmentation performance. Current deep learning architectures are not robust to missing input modalities at inference time, which leaves our networks unable to run on new data acquired with a different imaging protocol. By training our algorithms without accounting for the mutability of imaging protocols, we heavily limit their deployability. Our current training paradigm also requires pristine segmentation labels, which demand a large time investment from expert annotators. By training with harsh loss functions such as cross entropy, under the assumption that the labels contain no noise, we create a need for clean labels. This prevents our datasets from scaling to the size of natural computer vision datasets, since disease segmentations on medical images take far more time and effort to annotate than natural images with semantic classes. Finally, current state-of-the-art performance on difficult segmentation tasks such as brain metastases is simply not sufficient for clinical use. We will need to explore new ways of designing and ensembling networks to increase segmentation performance if we aim to deploy these algorithms in any clinically relevant environment.
We hypothesize that by changing neural network architectures and loss functions to account for noisy data, rather than assuming consistent imaging protocols and pristine labels, we can encode more robustness into our trained networks and improve segmentation performance on medical imaging tasks. In our experiments, we test several networks whose architectures and loss functions are motivated by realistic, clinically relevant situations. We chose brain metastases lesion detection and segmentation as the model system: a difficult problem due to the high count and small size of the lesions, and an important one because treatment response must be assessed by tracking changes in tumor burden. This dissertation presents the following specific aims: (1) optimizing deep learning performance on brain metastases segmentation, (2) training networks to be robust to coarse annotations and missing data, and (3) validating our methodology on three different secondary tasks. Our trained baseline (state of the art) performs brain metastases segmentation modestly, giving mAP values of 0.46±0.02 and DICE scores of 0.72. Changing our architectures to account for different pulse-sequence integration methods improves these values only slightly, raising mAP to 0.48±0.2 with no improvement in DICE score.
However, through investigating pulse-sequence integration, we developed a novel input-level dropout training scheme that randomly holds out certain pulse sequences during different training iterations of our deep network. This trains the network to be robust to missing pulse sequences in the future, at no cost to performance. We then developed two additional robustness training schemes that enable training on heavily noisy annotations. We show that we lose no performance when degrading 70% of our segmentation annotations with spherical approximations, and lose less than 5% performance when degrading 90% of them. Similarly, when we censor 50% of our annotated lesions (simulating a 50% false negative rate), we preserve more than 95% of the performance by using a novel lopsided bootstrap loss. Building on these ideas, we use the lesion-based censoring technique as the base of a novel ensembling method we named Random Bundle, which increased our mAP value to 0.65±0.01, an improvement of about 40%.
We validate our methods on three secondary datasets. By showing that our methods work on brain metastases data from Oslo University Hospital, we demonstrate robustness to cross-center data. By validating on the MICCAI BraTS dataset, we show robustness to magnetic resonance images of a different disorder. Finally, by validating on diabetic retinopathy microaneurysms in fundus photographs, we show that our methods transfer across imaging domains and organ systems. Our experiments support our claims that (1) designing architectures around how pulse sequences interact encodes robustness to different imaging protocols, (2) building custom loss functions around expected annotation errors makes our networks more robust to those errors, and (3) the overall performance of our networks can be improved by using these novel architectures and loss functions. (Illustrative sketches of the input-level dropout idea and a bootstrap-style loss appear after the MARC record below.)
590    $a School code: 0212.
650  4 $a Medical imaging. $3 3172799
653    $a Medical image segmentation
653    $a Clinical context
653    $a Computer vision
690    $a 0574
710 2  $a Stanford University. $3 754827
773 0  $t Dissertations Abstracts International $g 82-06B.
790    $a 0212
791    $a Ph.D.
792    $a 2020
793    $a English
856 40 $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28121431
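The 520 abstract above names two concrete techniques: an input-level dropout scheme that randomly holds out pulse sequences during training, and a "lopsided" bootstrap loss for training on lesion annotations with a high false-negative rate. The dissertation's actual implementations are not given in this record, so the following are minimal illustrative sketches only. The first assumes a PyTorch-style 5-D input tensor with one channel per pulse sequence; the tensor layout, function name, and drop probability are assumptions, not details from the source.

# Hypothetical sketch of input-level dropout over MRI pulse sequences.
# Assumes a multi-channel input (e.g. T1, T1c, T2, FLAIR stacked on dim 1);
# none of these names or values come from the dissertation itself.
import torch

def pulse_sequence_dropout(x: torch.Tensor, p_drop: float = 0.25) -> torch.Tensor:
    """Randomly zero out whole pulse-sequence channels during training.

    x      : batch of shape (N, C, D, H, W), one channel per pulse sequence
    p_drop : probability of holding out each individual sequence
    Returns a copy of x with some channels zeroed; at least one channel is
    always kept so the network never sees an empty input.
    """
    n, c = x.shape[0], x.shape[1]
    # Draw a keep/drop decision per (sample, channel).
    keep = torch.rand(n, c, device=x.device) >= p_drop
    # Guarantee at least one kept sequence per sample.
    empty = keep.sum(dim=1) == 0
    if empty.any():
        forced = torch.randint(0, c, (int(empty.sum()),), device=x.device)
        keep[empty, forced] = True
    mask = keep.view(n, c, *([1] * (x.dim() - 2))).to(x.dtype)
    return x * mask

# Usage during training only; inference uses the full (or partially missing) input as-is:
# imgs = pulse_sequence_dropout(imgs) if model.training else imgs
# preds = model(imgs)

The second sketch shows the standard soft bootstrap loss of Reed et al. (2015), on which a "lopsided" variant could plausibly be built. The asymmetric choice here, softening only background-labelled voxels (where censored lesions would hide), is purely an assumption and not the dissertation's actual loss function.

# Generic soft bootstrap loss for a binary segmentation map (Reed et al., 2015).
# The "lopsided" flag below is an illustrative guess at an asymmetric variant,
# not the formulation used in the dissertation.
import torch

def soft_bootstrap_bce(p: torch.Tensor, t: torch.Tensor, beta: float = 0.8,
                       lopsided: bool = True, eps: float = 1e-7) -> torch.Tensor:
    """p: predicted foreground probabilities in (0, 1); t: noisy binary labels."""
    p = p.clamp(eps, 1 - eps)
    t = t.to(p.dtype)
    # Soft bootstrap target: blend the given label with the model's own belief.
    boot = beta * t + (1 - beta) * p
    if lopsided:
        # Trust positive labels as-is; only soften voxels annotated as background,
        # since that is where missed (censored) lesions would appear.
        boot = torch.where(t > 0.5, t, boot)
    return -(boot * p.log() + (1 - boot) * (1 - p).log()).mean()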
Holdings (1 item):
Barcode: W9433606
Location: Electronic Resources (電子資源)
Circulation category: 11.線上閱覽_V (online reading)
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Hold status: 0