Understanding Human Actions in Video.
Record type:
Bibliographic record - Electronic resource : Monograph/item
Title/Author:
Understanding Human Actions in Video.
Author:
Stroud, Jonathan.
Publisher:
Ann Arbor : ProQuest Dissertations & Theses, 2020
Description:
144 p.
Notes:
Source: Dissertations Abstracts International, Volume: 82-07, Section: B.
Contained By:
Dissertations Abstracts International, 82-07B.
Subject:
Remote sensing.
Electronic resource:
https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28240318
ISBN:
9798684620706
Stroud, Jonathan.
Understanding Human Actions in Video.
- Ann Arbor : ProQuest Dissertations & Theses, 2020 - 144 p.
Source: Dissertations Abstracts International, Volume: 82-07, Section: B.
Thesis (Ph.D.)--University of Michigan, 2020.
This item must not be sold to any third party vendors.
Understanding human behavior is crucial for any autonomous system which interacts with humans. For example, assistive robots need to know when a person is signaling for help, and autonomous vehicles need to know when a person is waiting to cross the street. However, identifying human actions in video is a challenging and unsolved problem. In this work, we address several of the key challenges in human action recognition. To enable better representations of video sequences, we develop novel deep learning architectures which improve representations both at the level of instantaneous motion as well as at the level of long-term context. In addition, to reduce reliance on fixed action vocabularies, we develop a compositional representation of actions which allows novel action descriptions to be represented as a sequence of sub-actions. Finally, we address the issue of data collection for human action understanding by creating a large-scale video dataset, consisting of 70 million videos collected from internet video sharing sites and their matched descriptions. We demonstrate that these contributions improve the generalization performance of human action recognition systems on several benchmark datasets.
ISBN: 9798684620706
Subjects--Topical Terms:
Remote sensing.
Subjects--Index Terms:
Computer vision
LDR    02624nmm a2200457 4500
001    2283751
005    20211115071640.5
008    220723s2020 ||||||||||||||||| ||eng d
020    $a 9798684620706
035    $a (MiAaPQ)AAI28240318
035    $a (MiAaPQ)umichrackham003355
035    $a AAI28240318
040    $a MiAaPQ $c MiAaPQ
100 1  $a Stroud, Jonathan. $3 3562775
245 10 $a Understanding Human Actions in Video.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2020
300    $a 144 p.
500    $a Source: Dissertations Abstracts International, Volume: 82-07, Section: B.
500    $a Advisor: Deng, Jia;Mihalcea, Rada.
502    $a Thesis (Ph.D.)--University of Michigan, 2020.
506    $a This item must not be sold to any third party vendors.
506    $a This item must not be added to any third party search indexes.
520    $a Understanding human behavior is crucial for any autonomous system which interacts with humans. For example, assistive robots need to know when a person is signaling for help, and autonomous vehicles need to know when a person is waiting to cross the street. However, identifying human actions in video is a challenging and unsolved problem. In this work, we address several of the key challenges in human action recognition. To enable better representations of video sequences, we develop novel deep learning architectures which improve representations both at the level of instantaneous motion as well as at the level of long-term context. In addition, to reduce reliance on fixed action vocabularies, we develop a compositional representation of actions which allows novel action descriptions to be represented as a sequence of sub-actions. Finally, we address the issue of data collection for human action understanding by creating a large-scale video dataset, consisting of 70 million videos collected from internet video sharing sites and their matched descriptions. We demonstrate that these contributions improve the generalization performance of human action recognition systems on several benchmark datasets.
590    $a School code: 0127.
650  4 $a Remote sensing. $3 535394
650  4 $a Computer science. $3 523869
650  4 $a Technical communication. $3 3172863
650  4 $a Automotive engineering. $3 2181195
650  4 $a Information technology. $3 532993
650  4 $a Artificial intelligence. $3 516317
653    $a Computer vision
653    $a Action recognition
653    $a Autonomous system
653    $a Autonomous vehicles
653    $a Video sequences
653    $a Human action recognition technology
653    $a Video dataset
690    $a 0984
690    $a 0800
690    $a 0489
690    $a 0643
690    $a 0799
690    $a 0540
710 2  $a University of Michigan. $b Computer Science & Engineering. $3 3285590
773 0  $t Dissertations Abstracts International $g 82-07B.
790    $a 0127
791    $a Ph.D.
792    $a 2020
793    $a English
856 40 $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28240318
Holdings:
Barcode: W9435484
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0