Monocular human pose tracking and action recognition in dynamic environments.
Record type: Bibliographic, electronic resource : Monograph/item
Title/Author: Monocular human pose tracking and action recognition in dynamic environments. / Singh, Vivek Kumar.
Author: Singh, Vivek Kumar.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2011.
Description: 137 p.
Notes: Source: Dissertation Abstracts International, Volume: 73-04, Section: B, page: 2318.
Contained by: Dissertation Abstracts International, 73-04B.
Subject: Computer science.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3487994
ISBN: 9781267077523
LDR  03958nmm a2200325 4500
001  2156308
005  20180517123959.5
008  190424s2011 ||||||||||||||||| ||eng d
020    $a 9781267077523
035    $a (MiAaPQ)AAI3487994
035    $a (MiAaPQ)usc:12756
035    $a AAI3487994
040    $a MiAaPQ $c MiAaPQ
100 1  $a Singh, Vivek Kumar. $3 3344075
245 10 $a Monocular human pose tracking and action recognition in dynamic environments.
260  1 $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2011
300    $a 137 p.
500    $a Source: Dissertation Abstracts International, Volume: 73-04, Section: B, page: 2318.
500    $a Adviser: Ramakant Nevatia.
502    $a Thesis (Ph.D.)--University of Southern California, 2011.
520    $a The objective of this work is to develop an efficient method for finding humans in videos captured from a single camera and recognizing the action being performed. Automatic detection of humans in a scene and understanding of the ongoing activities have been studied extensively, as solutions to this problem find applications in diverse areas such as surveillance, video summarization, content mining, and human-computer interaction, among others.
520    $a Though significant advances have been made toward finding humans in specific poses, such as upright poses in cluttered scenes, finding a human in an arbitrary pose in an unknown environment is still a challenge. We address the problem of estimating human pose with a part-based approach that first finds body part candidates using part detectors and then enforces kinematic constraints using a tree-structured graphical model. For inference, we present a collaborative branch and bound algorithm that searches for each part with branch and bound, using kinematics from neighboring parts to guide the branching behavior and to compute bounds on the best part estimate. We use multiple heterogeneous part detectors with varying accuracy and computation requirements, ordered in a hierarchy, to achieve more accurate and efficient pose estimation.
520    $a While the above approach deals well with pose articulations, it still fails to find humans in poses with heavy self-occlusion, such as a crouch, because it does not model inter-part occlusion; recognizing actions from such inferred poses would therefore be unreliable. To deal with this issue, we propose a joint tracking and recognition approach that tracks the actor's pose by sampling from 3D action models and localizing each pose sample; this also allows view-invariant action recognition. We model an action as a sequence of transformations between keyposes. These action models can be obtained by annotating only a few keyposes in 2D, which avoids the need for large training data and motion capture (MoCap). To localize a sampled pose efficiently, we generate a Pose-Specific Part Model (PSPM) that captures appropriate kinematic and occlusion constraints in a tree structure. In addition, our approach does not require pose silhouettes and thus works well in the presence of background motion. We show improvements over previous results on two publicly available datasets as well as on a novel, augmented dataset with dynamic backgrounds.
520    $a Since the poses are sampled from action models, the above activity-driven approach works well only when the actor performs actions for which models are available; it does not generalize to unseen poses and actions. We address this by proposing an activity-assisted tracking framework that combines activity-driven tracking with bottom-up pose estimation, using pose samples obtained from part models in addition to those sampled from action models. We demonstrate the effectiveness of our approach on long video sequences with hand gestures.
590    $a School code: 0208.
650  4 $a Computer science. $3 523869
690    $a 0984
710 2  $a University of Southern California. $b Computer Science. $3 1023331
773 0  $t Dissertation Abstracts International $g 73-04B.
790    $a 0208
791    $a Ph.D.
792    $a 2011
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3487994
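For orientation, the abstract above describes exact inference over a tree-structured part model: unary part-detector scores combined with pairwise kinematic scores between connected parts. Below is a minimal, hypothetical sketch of plain max-sum dynamic programming on such a tree. The dissertation's actual method is a collaborative branch and bound search, which this sketch does not implement; every name and the scoring interface here are assumptions made for illustration.

```python
# Illustrative sketch only (not the dissertation's code): exact max-sum
# inference on a tree-structured part model. Each body part has candidate
# locations scored by a detector (unary); each parent-child edge scores
# kinematic compatibility (pairwise). A leaves-to-root dynamic program
# finds the jointly best configuration, then backtracks it.

def infer_pose(parts, children, unary, pairwise, root):
    """parts: {part: [candidate, ...]}
    children: {part: [child part, ...]} (tree rooted at `root`)
    unary: {part: {candidate: score}}
    pairwise(parent, child, parent_cand, child_cand) -> score"""
    best = {}    # best[part][cand]: best score of the subtree rooted at part
    choice = {}  # choice[(parent, child)][parent_cand]: best child candidate

    def up(p):
        best[p] = dict(unary[p])
        for ch in children.get(p, []):
            up(ch)
            choice[(p, ch)] = {}
            for pc in parts[p]:
                score, cc = max(
                    ((best[ch][k] + pairwise(p, ch, pc, k), k) for k in parts[ch]),
                    key=lambda t: t[0],
                )
                best[p][pc] += score
                choice[(p, ch)][pc] = cc

    up(root)
    # Pick the best root candidate, then backtrack the argmax configuration.
    pose = {root: max(parts[root], key=lambda c: best[root][c])}
    stack = [root]
    while stack:
        p = stack.pop()
        for ch in children.get(p, []):
            pose[ch] = choice[(p, ch)][pose[p]]
            stack.append(ch)
    return pose

# Toy example: a torso-head chain where candidates are (x, y) positions and
# the pairwise score prefers the head one unit above the torso.
parts = {"torso": [(0, 0), (1, 0)], "head": [(0, 1), (5, 5)]}
children = {"torso": ["head"]}
unary = {"torso": {(0, 0): 1.0, (1, 0): 0.5}, "head": {(0, 1): 0.8, (5, 5): 0.9}}
prox = lambda p, c, pc, cc: -abs(cc[0] - pc[0]) - abs(cc[1] - pc[1] - 1)
print(infer_pose(parts, children, unary, prox, "torso"))
# -> {'torso': (0, 0), 'head': (0, 1)}
```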
Holdings (1 item)
Barcode: W9355855
Location: Electronic resources
Circulation category: 11.線上閱覽_V (online reading)
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0