Efficient and Robust Video Understanding for Human-Robot Interaction and Detection.
Record Type:
Electronic resources : Monograph/item
Title/Author:
Efficient and Robust Video Understanding for Human-Robot Interaction and Detection.
Author:
Li, Ying.
Published:
Ann Arbor : ProQuest Dissertations & Theses, 2018.
Description:
125 p.
Notes:
Source: Dissertations Abstracts International, Volume: 80-06, Section: B.
Contained By:
Dissertations Abstracts International, 80-06B.
Subject:
Computer Engineering.
Online resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=11005571
ISBN:
9780438592209
Li, Ying.
Efficient and Robust Video Understanding for Human-Robot Interaction and Detection. - Ann Arbor : ProQuest Dissertations & Theses, 2018 - 125 p.
Source: Dissertations Abstracts International, Volume: 80-06, Section: B.
Thesis (Ph.D.)--The Ohio State University, 2018.
This item must not be sold to any third party vendors.
Video understanding supports a range of tasks that are fundamental to human-robot interaction and detection, including object tracking, action recognition, object detection, and segmentation. However, because of the large data volume of video sequences and the high complexity of visual algorithms, most visual algorithms sacrifice robustness to maintain high efficiency, especially in real-time applications. Achieving both high robustness and high efficiency in video understanding is therefore challenging. This dissertation explores efficient and robust video understanding for human-robot interaction and detection, focusing on two important applications: health-risk behavior detection and human tracking for human-following robots.

As a large portion of the world population approaches old age, an increasing number of healthcare issues arise from unsafe abnormal behaviors such as falling and staggering. A system that can detect such health-threatening abnormal behavior in the elderly is thus of significant importance. To detect abnormal behavior with high accuracy and timely response, visual action recognition is explored and integrated with inertial-sensor-based behavior detection, which not only selects a small volume of the video sequence for analysis but also provides a likelihood guide for different behaviors. The system works in a trigger-verification manner: an elder-carried mobile device equipped with an inertial sensor, either a dedicated design or a smartphone, triggers the selection of relevant video data, which is then fed into a visual verification module. This selective use of video data guarantees efficiency and allows the system to perform more complex visual analysis, achieving higher accuracy.

A novel approach for robust human tracking by robots is then proposed. To maintain a close distance between the human and the robot during interaction, we propose tracking part of the human body, specifically the feet. Because the two feet are closely located objects with similar appearance, tracking both with high accuracy and robustness is challenging. An adaptive model of the human walking pattern is formulated so that natural human-body information guides the tracking of the target: by decomposing foot motion into local and global components, a locomotion model is proposed and integrated into an existing tracking algorithm, such as particle filtering, to improve accuracy and efficiency. In addition to the locomotion model, a phase-labeled exemplar pool, which associates each motion phase with a foot appearance, is built to further improve tracking performance.

Human-robot interaction in a critical environment, specifically a nuclear environment, is also studied. In a nuclear environment, radiation hazards make the assistance of a robot necessary, yet radiation effects on the robot's components degrade its performance. To design human-robot interaction algorithms adapted to the radiation environment, this dissertation studies how robot performance changes under radiation.
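As an illustration of the trigger-verification structure described in the abstract, here is a minimal sketch in Python. It assumes a simple acceleration-magnitude trigger and a dummy visual verifier; the threshold, buffer size, and every function name here are hypothetical illustrations, not the dissertation's actual design.

```python
import numpy as np
from collections import deque

# Sketch of a trigger-verification loop: a cheap inertial check runs on
# every IMU sample, and only the video buffered around a trigger is
# handed to the expensive visual stage.  All constants and names below
# are assumptions for illustration.

ACC_THRESHOLD = 2.5 * 9.81    # hypothetical spike threshold (m/s^2)
WINDOW = 30                   # hypothetical number of frames kept per trigger

video_buffer = deque(maxlen=WINDOW)   # ring buffer of the most recent frames

def on_frame(frame):
    """Per-frame work stays cheap: just buffer the frame."""
    video_buffer.append(frame)

def verify_clip(clip):
    """Placeholder for the visual verification stage; a real system
    would run action recognition on the selected frames."""
    return {"behavior": "unknown", "confidence": 0.0}   # dummy result

def on_imu_sample(acc_xyz):
    """Trigger stage: only an acceleration spike causes any video work."""
    if np.linalg.norm(acc_xyz) > ACC_THRESHOLD:
        return verify_clip(list(video_buffer))   # selective use of video data
    return None
```

The point of the structure is that the per-sample check is nearly free, so the heavy visual model only ever sees the short slice of video selected by an inertial event, which matches the abstract's account of how efficiency is guaranteed.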
ISBN: 9780438592209
Subjects--Topical Terms: Computer Engineering.
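For the foot-tracking work summarized in the abstract, the sketch below shows one plausible way a local-plus-global locomotion prior could be wired into a standard particle filter. The sinusoidal swing term, the patch-difference likelihood, and all constants are assumptions made for illustration; the dissertation's actual locomotion model is not specified in this record.

```python
import numpy as np

def patch_similarity(frame, center, template):
    """Illustrative appearance score: inverse mean absolute difference
    between the grayscale patch at `center` and a foot template."""
    x, y = int(center[0]), int(center[1])
    h, w = template.shape
    if x < 0 or y < 0:
        return 1e-6                          # particle fell off the image
    patch = frame[y:y + h, x:x + w]
    if patch.shape != template.shape:
        return 1e-6
    return 1.0 / (1.0 + np.abs(patch.astype(float) - template).mean())

def propagate(particles, body_velocity, phase, swing_amp=8.0, noise=2.0):
    """Locomotion-style motion prior: a global term shared with the body
    plus a gait-phase-dependent local swing, then diffusion noise."""
    local = swing_amp * np.sin(phase) * np.array([1.0, 0.0])   # swing along x
    return particles + body_velocity + local + noise * np.random.randn(*particles.shape)

def track_step(particles, frame, template, body_velocity, phase):
    """One particle-filter step: propagate, weight by appearance, resample,
    and return the resampled cloud plus a point estimate."""
    particles = propagate(particles, body_velocity, phase)
    weights = np.array([patch_similarity(frame, p, template) for p in particles])
    weights /= weights.sum()
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    return particles, particles.mean(axis=0)
```

Read this way, the phase term is what keeps two nearly identical feet apart: their predicted positions diverge with the gait phase even when appearance alone cannot distinguish them, which is one reading of the abstract's claim about the locomotion model.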
LDR  04579nmm a2200337 4500
001  2207847
005  20190923114241.5
008  201008s2018 ||||||||||||||||| ||eng d
020  $a 9780438592209
035  $a (MiAaPQ)AAI11005571
035  $a (MiAaPQ)OhioLINK:osu152207324664654
035  $a AAI11005571
040  $a MiAaPQ $c MiAaPQ
100 1  $a Li, Ying. $3 1036447
245 1 0  $a Efficient and Robust Video Understanding for Human-Robot Interaction and Detection.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2018
300  $a 125 p.
500  $a Source: Dissertations Abstracts International, Volume: 80-06, Section: B.
500  $a Publisher info.: Dissertation/Thesis.
500  $a Advisor: Zheng, Yuan.
502  $a Thesis (Ph.D.)--The Ohio State University, 2018.
506  $a This item must not be sold to any third party vendors.
506  $a This item must not be added to any third party search indexes.
520  $a Video understanding supports a range of tasks that are fundamental to human-robot interaction and detection, including object tracking, action recognition, object detection, and segmentation. However, because of the large data volume of video sequences and the high complexity of visual algorithms, most visual algorithms sacrifice robustness to maintain high efficiency, especially in real-time applications. Achieving both high robustness and high efficiency in video understanding is therefore challenging. This dissertation explores efficient and robust video understanding for human-robot interaction and detection, focusing on two important applications: health-risk behavior detection and human tracking for human-following robots. As a large portion of the world population approaches old age, an increasing number of healthcare issues arise from unsafe abnormal behaviors such as falling and staggering. A system that can detect such health-threatening abnormal behavior in the elderly is thus of significant importance. To detect abnormal behavior with high accuracy and timely response, visual action recognition is explored and integrated with inertial-sensor-based behavior detection, which not only selects a small volume of the video sequence for analysis but also provides a likelihood guide for different behaviors. The system works in a trigger-verification manner: an elder-carried mobile device equipped with an inertial sensor, either a dedicated design or a smartphone, triggers the selection of relevant video data, which is then fed into a visual verification module. This selective use of video data guarantees efficiency and allows the system to perform more complex visual analysis, achieving higher accuracy. A novel approach for robust human tracking by robots is then proposed. To maintain a close distance between the human and the robot during interaction, we propose tracking part of the human body, specifically the feet. Because the two feet are closely located objects with similar appearance, tracking both with high accuracy and robustness is challenging. An adaptive model of the human walking pattern is formulated so that natural human-body information guides the tracking of the target: by decomposing foot motion into local and global components, a locomotion model is proposed and integrated into an existing tracking algorithm, such as particle filtering, to improve accuracy and efficiency. In addition to the locomotion model, a phase-labeled exemplar pool, which associates each motion phase with a foot appearance, is built to further improve tracking performance. Human-robot interaction in a critical environment, specifically a nuclear environment, is also studied. In a nuclear environment, radiation hazards make the assistance of a robot necessary, yet radiation effects on the robot's components degrade its performance. To design human-robot interaction algorithms adapted to the radiation environment, this dissertation studies how robot performance changes under radiation.
590  $a School code: 0168.
650  4  $a Computer Engineering. $3 1567821
650  4  $a Computer science. $3 523869
690  $a 0464
690  $a 0984
710 2  $a The Ohio State University. $b Electrical and Computer Engineering. $3 1672495
773 0  $t Dissertations Abstracts International $g 80-06B.
790  $a 0168
791  $a Ph.D.
792  $a 2018
793  $a English
856 4 0  $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=11005571
Items:
Inventory Number: W9384396
Location Name: Electronic resources (電子資源)
Item Class: 11.線上閱覽_V (online reading)
Material type: E-book (電子書)
Call number: EB
Usage Class: Normal (一般使用)
Loan Status: On shelf
No. of reservations: 0