Deep Learning Methods for 3D Aerial and Satellite Data.
Record type:
Bibliographic record, electronic resource : Monograph/item
Title / Author:
Deep Learning Methods for 3D Aerial and Satellite Data.
Author:
Yousefhussien, Mohammed A.
Publisher:
Ann Arbor : ProQuest Dissertations & Theses, 2019.
Pagination:
103 p.
Notes:
Source: Dissertations Abstracts International, Volume: 80-12, Section: B.
Contained By:
Dissertations Abstracts International, 80-12B.
Subject:
Remote sensing.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=13881840
ISBN:
9781392157787
LDR  03715nmm a2200325 4500
001  2207157
005  20190913102458.5
008  201008s2019 ||||||||||||||||| ||eng d
020  $a 9781392157787
035  $a (MiAaPQ)AAI13881840
035  $a (MiAaPQ)rit:13254
035  $a AAI13881840
040  $a MiAaPQ $c MiAaPQ
100 1  $a Yousefhussien, Mohammed A. $3 3434098
245 1 0  $a Deep Learning Methods for 3D Aerial and Satellite Data.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2019
300  $a 103 p.
500  $a Source: Dissertations Abstracts International, Volume: 80-12, Section: B.
500  $a Publisher info.: Dissertation/Thesis.
500  $a Advisor: Salvaggio, Carl.
502  $a Thesis (Ph.D.)--Rochester Institute of Technology, 2019.
506  $a This item must not be sold to any third party vendors.
520  $a Recent advances in digital electronics have led to an overabundance of observations from electro-optical (EO) imaging sensors spanning high spatial, spectral, and temporal resolutions. This unprecedented volume, variety, and velocity are overwhelming our capacity to manage and translate the data into actionable information. Although decades of image processing research have taken the human out of the loop for many important tasks, the human analyst is still an irreplaceable link in the image exploitation chain, especially for more complex tasks requiring contextual understanding, memory, discernment, and learning. If knowledge discovery is to keep pace with the growing availability of data, new processing paradigms are needed to automate the analysis of Earth observation imagery and ease the burden of manual interpretation. To address this gap, this dissertation advances fundamental and applied research in deep learning for aerial and satellite imagery. We show how deep learning, a computational model inspired by the human brain, can be used for (1) tracking, (2) classifying, and (3) modeling from a variety of data sources, including full-motion video (FMV), Light Detection and Ranging (LiDAR), and stereo photogrammetry. First, we assess the ability of a bio-inspired tracking method to track small targets in aerial videos. The tracker uses three kinds of saliency maps: appearance, location, and motion. Our approach achieves the best overall performance, including being the only method capable of handling long-term occlusions. Second, we evaluate the classification accuracy of a multi-scale fully convolutional network to label individual points in LiDAR data. Our method uses only the 3D coordinates and corresponding low-dimensional spectral features for each point. Evaluated on the ISPRS 3D Semantic Labeling Contest, our method scored second place with an overall accuracy of 81.6%. Finally, we validate the prediction capability of our neighborhood-aware network to model the bare-earth surface of LiDAR and stereo photogrammetry point clouds. The network bypasses traditionally used ground classifications and seamlessly integrates neighborhood features with point-wise and global features to predict a per-point Digital Terrain Model (DTM). We compare our results with two widely used software tools for DTM extraction, ENVI and LAStools. Together, these efforts have the potential to alleviate the manual burden associated with some of the most challenging and time-consuming geospatial processing tasks, with implications for improving our response to issues of global security, emergency management, and disaster response.
590  $a School code: 0465.
650 4  $a Remote sensing. $3 535394
650 4  $a Artificial intelligence. $3 516317
690  $a 0799
690  $a 0800
710 2  $a Rochester Institute of Technology. $b Imaging Science. $3 1019498
773 0  $t Dissertations Abstracts International $g 80-12B.
790  $a 0465
791  $a Ph.D.
792  $a 2019
793  $a English
856 4 0  $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=13881840
Holdings (1 record):
Barcode: W9383706
Location: Electronic resources
Circulation category: 11. Online viewing_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0