Deep Structural Learning for Fusion in Remote Sensing Applications.
Record type: Bibliographic--electronic resource : Monograph/item
Title/Author: Deep Structural Learning for Fusion in Remote Sensing Applications. / Tran, Kenneth Viet Lam.
Author: Tran, Kenneth Viet Lam.
Description: 1 online resource (86 pages)
Note: Source: Dissertations Abstracts International, Volume: 84-04, Section: B.
Contained by: Dissertations Abstracts International, 84-04B.
Subject: Decomposition.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=29420007 (click for full text, PQDT)
ISBN: 9798352653128
Thesis (Ph.D.)--North Carolina State University, 2022.
Includes bibliographical references.
TRAN, KENNETH VIET LAM. Deep Structural Learning for Fusion in Remote Sensing Applications. (Under the direction of Hamid Krim.) Data captured from satellite sensors around the Earth can be used in applications such as land classification, illicit-activity detection, and environmental monitoring. With recent advancements in remote sensing technologies, some satellites are able to capture images at such high spatial resolution that it is possible to count the number of cars on the road, or even see street markers, with the human eye. However, capturing images at this high resolution comes with the trade-off of covering less surface area per day, due to resource limitations. At the expense of much lower spatial resolution, PlanetScope is able to capture almost the entire Earth's surface daily. These are just two examples of the many diverse satellites currently orbiting the Earth. When these heterogeneous sensors align on the same region of Earth, they provide complementary information that can enhance analytic power. In practice, it is almost impossible to have this alignment, since new satellite images are produced daily and need to be processed in real time. This makes learning models based on traditional fusion methods very challenging.

For this dissertation, we focus on circumventing this challenge by developing frameworks that fuse information from satellite sensors during training, but assume that we only have access to one modality for testing. In our first work, we are motivated by the situation that arises from the WV3 and PlanetScope sensors, where one sensor captures images at a much higher resolution, while the other captures images more frequently. We developed a single-image super-resolution model using a multi-scale progressive super-resolution GAN to perform up to 8x super-resolution. Our second work is motivated by the spectral differences among optical modalities (panchromatic, RGB, and multispectral data). We designed a framework that generates features of the missing modalities from the remaining modality at inference time. For our third work, we extended this generative fusion to panchromatic and synthetic aperture radar (SAR) data to take advantage of SAR's ability to penetrate weather and clouds. In our past work, we found that using traditional CNNs can lead to undesirable artifacts and smoothing in generative modeling. In our last work, we explore implicit neural representations (INRs), which have been shown to produce sharper features for image reconstruction. By applying this type of representation, in conjunction with better SAR de-noising, we can generate a better proxy optical feature that leads to improved performance on downstream tasks.
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2023.
Mode of access: World Wide Web.
ISBN: 9798352653128
Subjects--Topical Terms: Decomposition.
Index Terms--Genre/Form: Electronic books.
MARC record:

LDR    04093nmm a2200361K 4500
001    2354217
005    20230324111230.5
006    m o d
007    cr mn ---uuuuu
008    241011s2022 xx obm 000 0 eng d
020    $a 9798352653128
035    $a (MiAaPQ)AAI29420007
035    $a (MiAaPQ)NCState_Univ18402039793
035    $a AAI29420007
040    $a MiAaPQ $b eng $c MiAaPQ $d NTU
100 1  $a Tran, Kenneth Viet Lam. $3 3694564
245 10 $a Deep Structural Learning for Fusion in Remote Sensing Applications.
264  0 $c 2022
300    $a 1 online resource (86 pages)
336    $a text $b txt $2 rdacontent
337    $a computer $b c $2 rdamedia
338    $a online resource $b cr $2 rdacarrier
500    $a Source: Dissertations Abstracts International, Volume: 84-04, Section: B.
500    $a Advisor: Sakla, Wesam; Wu, Tianfu; Lobaton, Edgar; Chi, Min; Krim, Hamid.
502    $a Thesis (Ph.D.)--North Carolina State University, 2022.
504    $a Includes bibliographical references.
520    $a TRAN, KENNETH VIET LAM. Deep Structural Learning for Fusion in Remote Sensing Applications. (Under the direction of Hamid Krim.) Data captured from satellite sensors around the Earth can be used in applications such as land classification, illicit-activity detection, and environmental monitoring. With recent advancements in remote sensing technologies, some satellites are able to capture images at such high spatial resolution that it is possible to count the number of cars on the road, or even see street markers, with the human eye. However, capturing images at this high resolution comes with the trade-off of covering less surface area per day, due to resource limitations. At the expense of much lower spatial resolution, PlanetScope is able to capture almost the entire Earth's surface daily. These are just two examples of the many diverse satellites currently orbiting the Earth. When these heterogeneous sensors align on the same region of Earth, they provide complementary information that can enhance analytic power. In practice, it is almost impossible to have this alignment, since new satellite images are produced daily and need to be processed in real time. This makes learning models based on traditional fusion methods very challenging. For this dissertation, we focus on circumventing this challenge by developing frameworks that fuse information from satellite sensors during training, but assume that we only have access to one modality for testing. In our first work, we are motivated by the situation that arises from the WV3 and PlanetScope sensors, where one sensor captures images at a much higher resolution, while the other captures images more frequently. We developed a single-image super-resolution model using a multi-scale progressive super-resolution GAN to perform up to 8x super-resolution. Our second work is motivated by the spectral differences among optical modalities (panchromatic, RGB, and multispectral data). We designed a framework that generates features of the missing modalities from the remaining modality at inference time. For our third work, we extended this generative fusion to panchromatic and synthetic aperture radar (SAR) data to take advantage of SAR's ability to penetrate weather and clouds. In our past work, we found that using traditional CNNs can lead to undesirable artifacts and smoothing in generative modeling. In our last work, we explore implicit neural representations (INRs), which have been shown to produce sharper features for image reconstruction. By applying this type of representation, in conjunction with better SAR de-noising, we can generate a better proxy optical feature that leads to improved performance on downstream tasks.
533    $a Electronic reproduction. $b Ann Arbor, Mich. : $c ProQuest, $d 2023
538    $a Mode of access: World Wide Web.
650  4 $a Decomposition. $3 3561186
650  4 $a Deep learning. $3 3554982
650  4 $a Remote sensing. $3 535394
650  4 $a Wavelet transforms. $3 3681479
650  4 $a Satellites. $3 924316
650  4 $a Sensors. $3 3549539
650  4 $a Classification. $3 595585
650  4 $a Aerospace engineering. $3 1002622
650  4 $a Mathematics. $3 515831
655  7 $a Electronic books. $2 lcsh $3 542853
690    $a 0799
690    $a 0538
690    $a 0800
690    $a 0405
710 2  $a ProQuest Information and Learning Co. $3 783688
710 2  $a North Carolina State University. $3 1018772
773 0  $t Dissertations Abstracts International $g 84-04B.
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=29420007 $z click for full text (PQDT)
Holdings (1 item):
Barcode: W9476573
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: Electronic book
Call number: EB
Use type: Normal
Loan status: On shelf
Reservations: 0