Robust Pose Estimation of a Robotic Navigation Aid for the Visually Impaired by Multimodal Data Fusion.
Record type: Bibliographic, electronic resource : Monograph/item
Title/Author: Robust Pose Estimation of a Robotic Navigation Aid for the Visually Impaired by Multimodal Data Fusion.
Author: Zhang, He.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2018
Description: 99 p.
Note: Source: Dissertations Abstracts International, Volume: 79-12, Section: B.
Contained by: Dissertations Abstracts International, 79-12B.
Subject: Computer Engineering.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10811646
ISBN: 9780355969962
Zhang, He. Robust Pose Estimation of a Robotic Navigation Aid for the Visually Impaired by Multimodal Data Fusion. Ann Arbor : ProQuest Dissertations & Theses, 2018. 99 p.
Source: Dissertations Abstracts International, Volume: 79-12, Section: B.
Thesis (Ph.D.)--University of Arkansas at Little Rock, 2018.
This item must not be sold to any third party vendors.
For a visually impaired individual, planning and following a path to a destination is a challenging task. This task is referred to as wayfinding. In this work, the functions of wayfinding are defined as localization (i.e., pose estimation) in an indoor environment and finding a way to the destination by using the location information. To address this issue, vision-based navigation systems that use a camera for pose estimation have been studied intensively, and several vision-based robotic navigation aids (RNAs) for the visually impaired have been developed. However, these RNAs are not reliable enough to assist the visually impaired in wayfinding because of featureless scenes, occlusions, abrupt motion, and illumination changes. The objective of this dissertation is to develop a robust 6-DOF pose estimation method for a particular RNA, the Co-Robotic Cane (CRC), for wayfinding for the visually impaired. The main contributions of this research are as follows. First, a new pose estimation method is proposed that uses the geometric information of the operating environment (extracted from the range data of a 3D time-of-flight (ToF) camera) to reduce accumulative pose error. Based on this method, an indoor wayfinding system is developed and validated by experiments, and the developed RNA has been tested in human subject experiments. Second, a new factor-graph-based multimodal data fusion algorithm is proposed that integrates the visual information and range data from a 3D camera with the inertial data from an inertial measurement unit (IMU) for robust pose estimation in indoor environments. Pose estimation that couples the measurements from a camera and an IMU is termed visual-inertial odometry (VIO) or visual-inertial SLAM (VI-SLAM). To improve the accuracy of existing VIO approaches, a new method, called plane-aided visual-inertial odometry (PAVIO), is proposed. The method uses plane features of the operating environment to identify accurate VO outputs and thereby improve VIO pose estimation accuracy. Third, the suitability and performance of three state-of-the-art tightly coupled VI-SLAM methods are investigated and compared in the context of CRC navigation. Based on the results, the most suitable method is selected and extended for wayfinding, informing future improvements in RNA development.
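The factor-graph-style fusion described in the abstract can be illustrated with a toy example. The sketch below is an assumption-laden illustration of the general idea only, not the dissertation's PAVIO algorithm: it fuses per-step displacement measurements from two sources (standing in for visual odometry and IMU dead reckoning) with a soft constraint that the trajectory stays on a known wall plane, which is roughly how planar structure can bound drift. All function names, noise parameters, and measurements are invented for illustration.

# Hypothetical sketch (not the dissertation's PAVIO implementation): fuse VO and
# IMU step measurements with a plane (corridor wall) constraint by solving one
# weighted linear least-squares problem, i.e. a tiny linear factor graph.
import numpy as np

def fuse(vo_deltas, imu_deltas, wall_y, s_vo=0.05, s_imu=0.10, s_plane=0.01):
    """Estimate 2D positions x_1..x_n (x_0 fixed at the origin) from two sets of
    noisy per-step displacements plus a soft constraint that every pose lies on
    a wall at y = wall_y (the 'plane' factor). Sigmas are assumed noise levels."""
    n = len(vo_deltas)                    # unknowns: 2*n coordinates
    rows, rhs, wts = [], [], []

    def add_factor(row, measurement, sigma):
        rows.append(row); rhs.append(measurement); wts.append(1.0 / sigma)

    for k in range(n):
        for delta, sigma in ((vo_deltas[k], s_vo), (imu_deltas[k], s_imu)):
            for axis in range(2):         # odometry factor: x_{k+1} - x_k = delta
                row = np.zeros(2 * n)
                row[2 * k + axis] = 1.0
                if k > 0:
                    row[2 * (k - 1) + axis] = -1.0
                add_factor(row, delta[axis], sigma)
        row = np.zeros(2 * n)             # plane factor on the y coordinate of x_{k+1}
        row[2 * k + 1] = 1.0
        add_factor(row, wall_y, s_plane)

    A = np.array(rows) * np.array(wts)[:, None]   # whitened Jacobian
    b = np.array(rhs) * np.array(wts)             # whitened measurement vector
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x.reshape(n, 2)                # fused positions, one row per step

if __name__ == "__main__":
    vo = [(1.00, 0.02), (1.10, -0.03), (0.90, 0.05)]   # drifting VO displacements
    imu = [(1.00, 0.10), (1.00, 0.08), (1.00, 0.12)]   # noisier IMU displacements
    print(fuse(vo, imu, wall_y=0.0))      # y estimates are pulled back toward the wall

In the full problem the states are 6-DOF poses and the factors are nonlinear, so the corresponding system is solved iteratively, but the structure (odometry factors from each sensor plus plane factors) is the same.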
ISBN: 9780355969962
Subjects--Topical Terms: Computer Engineering.
LDR  03478nmm a2200337 4500
001  2206437
005  20190828120133.5
008  201008s2018 ||||||||||||||||| ||eng d
020    $a 9780355969962
035    $a (MiAaPQ)AAI10811646
035    $a (MiAaPQ)ualr:10767
035    $a AAI10811646
040    $a MiAaPQ $c MiAaPQ
100 1  $a Zhang, He. $3 1256334
245 10 $a Robust Pose Estimation of a Robotic Navigation Aid for the Visually Impaired by Multimodal Data Fusion.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2018
300    $a 99 p.
500    $a Source: Dissertations Abstracts International, Volume: 79-12, Section: B.
500    $a Publisher info.: Dissertation/Thesis.
500    $a Liu, Xian.
502    $a Thesis (Ph.D.)--University of Arkansas at Little Rock, 2018.
506    $a This item must not be sold to any third party vendors.
520    $a For a visually impaired individual, planning and following a path to a destination is a challenging task. This task is referred to as wayfinding. In this work, the functions of wayfinding are defined as localization (i.e., pose estimation) in an indoor environment and finding a way to the destination by using the location information. To address this issue, vision-based navigation systems that use a camera for pose estimation have been studied intensively, and several vision-based robotic navigation aids (RNAs) for the visually impaired have been developed. However, these RNAs are not reliable enough to assist the visually impaired in wayfinding because of featureless scenes, occlusions, abrupt motion, and illumination changes. The objective of this dissertation is to develop a robust 6-DOF pose estimation method for a particular RNA, the Co-Robotic Cane (CRC), for wayfinding for the visually impaired. The main contributions of this research are as follows. First, a new pose estimation method is proposed that uses the geometric information of the operating environment (extracted from the range data of a 3D time-of-flight (ToF) camera) to reduce accumulative pose error. Based on this method, an indoor wayfinding system is developed and validated by experiments, and the developed RNA has been tested in human subject experiments. Second, a new factor-graph-based multimodal data fusion algorithm is proposed that integrates the visual information and range data from a 3D camera with the inertial data from an inertial measurement unit (IMU) for robust pose estimation in indoor environments. Pose estimation that couples the measurements from a camera and an IMU is termed visual-inertial odometry (VIO) or visual-inertial SLAM (VI-SLAM). To improve the accuracy of existing VIO approaches, a new method, called plane-aided visual-inertial odometry (PAVIO), is proposed. The method uses plane features of the operating environment to identify accurate VO outputs and thereby improve VIO pose estimation accuracy. Third, the suitability and performance of three state-of-the-art tightly coupled VI-SLAM methods are investigated and compared in the context of CRC navigation. Based on the results, the most suitable method is selected and extended for wayfinding, informing future improvements in RNA development.
590    $a School code: 1204.
650  4 $a Computer Engineering. $3 1567821
650  4 $a Electrical engineering. $3 649834
650  4 $a Systems science. $3 3168411
690    $a 0464
690    $a 0544
690    $a 0790
710 2  $a University of Arkansas at Little Rock. $b Systems Engineering. $3 3170796
773 0  $t Dissertations Abstracts International $g 79-12B.
790    $a 1204
791    $a Ph.D.
792    $a 2018
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10811646
Holdings (1 item):
Barcode: W9382986
Location: Electronic resources
Circulation category: 11. Online access_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0