Machine Vision for Improved Human-Robot Cooperation in Adverse Underwater Conditions.
Record type:
Bibliographic - electronic resource : Monograph/item
Title/Author:
Machine Vision for Improved Human-Robot Cooperation in Adverse Underwater Conditions.
Author:
Islam, Md Jahidul.
Publisher:
Ann Arbor : ProQuest Dissertations & Theses, 2021
Description:
253 p.
Notes:
Source: Dissertations Abstracts International, Volume: 83-02, Section: B.
Contained by:
Dissertations Abstracts International, 83-02B.
Subject:
Computer science.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28545283
ISBN:
9798538127993
Thesis (Ph.D.)--University of Minnesota, 2021.
This item must not be sold to any third party vendors.
Visually-guided underwater robots are deployed alongside human divers for cooperative exploration, inspection, and monitoring tasks in numerous shallow-water and coastal-water applications. The most essential capability of such companion robots is to visually interpret their surroundings and assist the divers during various stages of an underwater mission. Despite recent technological advancements, the existing systems and solutions for real-time visual perception are greatly affected by marine artifacts such as poor visibility, lighting variation, and the scarcity of salient features. The difficulties are exacerbated by a host of non-linear image distortions caused by the vulnerabilities of underwater light propagation (e.g., wavelength-dependent attenuation, absorption, and scattering). In this dissertation, we present a set of novel and improved visual perception solutions to address these challenges for effective underwater human-robot cooperation. The research outcomes entail novel design and efficient implementation of the underlying vision and learning-based algorithms with extensive field experimental validations and real-time feasibility analyses for single-board deployments. The dissertation is organized into three parts. The first part focuses on developing practical solutions for autonomous underwater vehicles (AUVs) to accompany human divers during an underwater mission. These include robust vision-based modules that enable AUVs to understand human swimming motion, hand gesture, and body pose for following and interacting with them while maintaining smooth spatiotemporal coordination. A series of closed-water and open-water field experiments demonstrate the utility and effectiveness of our proposed perception algorithms for underwater human-robot cooperation. We also identify and quantify their performance variability over a diverse set of operating constraints in adverse visual conditions. 
The second part of this dissertation is devoted to designing efficient techniques to overcome the effects of poor visibility and optical distortions in underwater imagery by restoring their perceptual and statistical qualities. We further demonstrate the practical feasibility of these techniques as pre-processors in the autonomy pipeline of visually-guided AUVs. Finally, the third part of this dissertation develops methodologies for high-level decision-making such as modeling spatial attention for fast visual search, learning to identify when image enhancement and super-resolution modules are necessary for a detailed perception, etc. We demonstrate that these methodologies facilitate up to 45% faster processing of the on-board visual perception modules and enable AUVs to make intelligent navigational and operational decisions, particularly in autonomous exploratory tasks. In summary, this dissertation delineates our attempts to address the environmental and operational challenges of real-time machine vision for underwater human-robot cooperation. Aiming at a variety of important applications, we develop robust and efficient modules for AUVs to 'follow and interact' with companion divers by accurately perceiving their surroundings while relying on noisy visual sensing alone. Moreover, our proposed perception solutions enable visually-guided robots to 'see better' in noisy conditions, and 'do better' with limited computational resources and real-time constraints. In addition to advancing the state-of-the-art, the proposed methodologies and systems take us one step closer toward bridging the gap between theory and practice for improved human-robot cooperation in the wild.
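The abstract's gating idea (run costly enhancement only on frames that need it, saving on-board compute) can be sketched as follows. This is a hypothetical illustration, not the dissertation's actual method: the contrast heuristic, the threshold, and the stand-in enhancement step are all assumptions.

```python
# Hypothetical sketch: a cheap quality check gates an expensive
# enhancement step, so clear frames skip it entirely.

def contrast_score(pixels):
    """Crude quality proxy: standard deviation of grayscale intensities."""
    n = len(pixels)
    mean = sum(pixels) / n
    variance = sum((p - mean) ** 2 for p in pixels) / n
    return variance ** 0.5

def enhance(pixels):
    """Stand-in for a costly enhancement model: stretch to full range."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return pixels
    return [255 * (p - lo) / (hi - lo) for p in pixels]

def perceive(pixels, threshold=30.0):
    """Enhance only when the frame looks degraded (low contrast)."""
    if contrast_score(pixels) < threshold:
        pixels = enhance(pixels)
    return pixels  # would feed the downstream detection modules

murky = [100, 105, 110, 108, 102]   # low-contrast frame: gets enhanced
clear = [10, 200, 50, 240, 120]     # high-contrast frame: passes through
print(perceive(murky))
print(perceive(clear))
```

The design point is that the gate itself must be far cheaper than the module it guards; in practice such a decision would likely be made by a small learned classifier rather than a fixed statistic.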
Subjects--Topical Terms: Computer science.
Subjects--Index Terms: Deep learning
MARC record:
LDR    04845nmm a2200385 4500
001    2347478
005    20220801062212.5
008    241004s2021 ||||||||||||||||| ||eng d
020    $a 9798538127993
035    $a (MiAaPQ)AAI28545283
035    $a AAI28545283
040    $a MiAaPQ $c MiAaPQ
100 1  $a Islam, Md Jahidul. $3 3686735
245 10 $a Machine Vision for Improved Human-Robot Cooperation in Adverse Underwater Conditions.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2021
300    $a 253 p.
500    $a Source: Dissertations Abstracts International, Volume: 83-02, Section: B.
500    $a Advisor: Sattar, Junaed.
502    $a Thesis (Ph.D.)--University of Minnesota, 2021.
506    $a This item must not be sold to any third party vendors.
520    $a
Visually-guided underwater robots are deployed alongside human divers for cooperative exploration, inspection, and monitoring tasks in numerous shallow-water and coastal-water applications. The most essential capability of such companion robots is to visually interpret their surroundings and assist the divers during various stages of an underwater mission. Despite recent technological advancements, the existing systems and solutions for real-time visual perception are greatly affected by marine artifacts such as poor visibility, lighting variation, and the scarcity of salient features. The difficulties are exacerbated by a host of non-linear image distortions caused by the vulnerabilities of underwater light propagation (e.g., wavelength-dependent attenuation, absorption, and scattering). In this dissertation, we present a set of novel and improved visual perception solutions to address these challenges for effective underwater human-robot cooperation. The research outcomes entail novel design and efficient implementation of the underlying vision and learning-based algorithms with extensive field experimental validations and real-time feasibility analyses for single-board deployments. The dissertation is organized into three parts. The first part focuses on developing practical solutions for autonomous underwater vehicles (AUVs) to accompany human divers during an underwater mission. These include robust vision-based modules that enable AUVs to understand human swimming motion, hand gesture, and body pose for following and interacting with them while maintaining smooth spatiotemporal coordination. A series of closed-water and open-water field experiments demonstrate the utility and effectiveness of our proposed perception algorithms for underwater human-robot cooperation. We also identify and quantify their performance variability over a diverse set of operating constraints in adverse visual conditions. 
The second part of this dissertation is devoted to designing efficient techniques to overcome the effects of poor visibility and optical distortions in underwater imagery by restoring their perceptual and statistical qualities. We further demonstrate the practical feasibility of these techniques as pre-processors in the autonomy pipeline of visually-guided AUVs. Finally, the third part of this dissertation develops methodologies for high-level decision-making such as modeling spatial attention for fast visual search, learning to identify when image enhancement and super-resolution modules are necessary for a detailed perception, etc. We demonstrate that these methodologies facilitate up to 45% faster processing of the on-board visual perception modules and enable AUVs to make intelligent navigational and operational decisions, particularly in autonomous exploratory tasks. In summary, this dissertation delineates our attempts to address the environmental and operational challenges of real-time machine vision for underwater human-robot cooperation. Aiming at a variety of important applications, we develop robust and efficient modules for AUVs to 'follow and interact' with companion divers by accurately perceiving their surroundings while relying on noisy visual sensing alone. Moreover, our proposed perception solutions enable visually-guided robots to 'see better' in noisy conditions, and 'do better' with limited computational resources and real-time constraints. In addition to advancing the state-of-the-art, the proposed methodologies and systems take us one step closer toward bridging the gap between theory and practice for improved human-robot cooperation in the wild.
590    $a School code: 0130.
650  4 $a Computer science. $3 523869
650  4 $a Computer engineering. $3 621879
650  4 $a Robotics. $3 519753
650  4 $a Autonomous underwater vehicles. $3 3444520
650  4 $a Underwater exploration. $3 598925
650  4 $a Accuracy. $3 3559958
650  4 $a Datasets. $3 3541416
650  4 $a Cooperation. $3 594090
650  4 $a Vision systems. $3 3685322
650  4 $a Dissertations & theses. $3 3560115
650  4 $a Robots. $3 529507
650  4 $a Algorithms. $3 536374
650  4 $a Visual perception. $3 529664
653    $a Deep learning
653    $a Human-robot cooperation
653    $a Machine vision
653    $a Robot perception
653    $a Underwater robotics
653    $a Visual perception
690    $a 0984
690    $a 0464
690    $a 0771
710 2  $a University of Minnesota. $b Computer Science. $3 1018528
773 0  $t Dissertations Abstracts International $g 83-02B.
790    $a 0130
791    $a Ph.D.
792    $a 2021
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28545283
Holdings (1 item):
Barcode: W9469916
Location: Electronic Resources (電子資源)
Circulation category: 11.線上閱覽_V (online reading)
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Attachments: 0