Understanding Human Actions in Video.
Stroud, Jonathan.
Understanding Human Actions in Video.
Record Type:
Electronic resources : Monograph/item
Title/Author:
Understanding Human Actions in Video. / Stroud, Jonathan.
Author:
Stroud, Jonathan.
Published:
Ann Arbor : ProQuest Dissertations & Theses, 2020
Description:
144 p.
Notes:
Source: Dissertations Abstracts International, Volume: 82-07, Section: B.
Contained By:
Dissertations Abstracts International, 82-07B.
Subject:
Remote sensing.
Online resource:
https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28240318
ISBN:
9798684620706
Understanding Human Actions in Video.
LDR
:02624nmm a2200457 4500
001
2283751
005
20211115071640.5
008
220723s2020 ||||||||||||||||| ||eng d
020
$a
9798684620706
035
$a
(MiAaPQ)AAI28240318
035
$a
(MiAaPQ)umichrackham003355
035
$a
AAI28240318
040
$a
MiAaPQ
$c
MiAaPQ
100
1
$a
Stroud, Jonathan.
$3
3562775
245
1 0
$a
Understanding Human Actions in Video.
260
1
$a
Ann Arbor :
$b
ProQuest Dissertations & Theses,
$c
2020
300
$a
144 p.
500
$a
Source: Dissertations Abstracts International, Volume: 82-07, Section: B.
500
$a
Advisor: Deng, Jia; Mihalcea, Rada.
502
$a
Thesis (Ph.D.)--University of Michigan, 2020.
506
$a
This item must not be sold to any third party vendors.
506
$a
This item must not be added to any third party search indexes.
520
$a
Understanding human behavior is crucial for any autonomous system which interacts with humans. For example, assistive robots need to know when a person is signaling for help, and autonomous vehicles need to know when a person is waiting to cross the street. However, identifying human actions in video is a challenging and unsolved problem. In this work, we address several of the key challenges in human action recognition. To enable better representations of video sequences, we develop novel deep learning architectures which improve representations both at the level of instantaneous motion and at the level of long-term context. In addition, to reduce reliance on fixed action vocabularies, we develop a compositional representation of actions which allows novel action descriptions to be represented as a sequence of sub-actions. Finally, we address the issue of data collection for human action understanding by creating a large-scale video dataset, consisting of 70 million videos collected from internet video sharing sites and their matched descriptions. We demonstrate that these contributions improve the generalization performance of human action recognition systems on several benchmark datasets.
590
$a
School code: 0127.
650
4
$a
Remote sensing.
$3
535394
650
4
$a
Computer science.
$3
523869
650
4
$a
Technical communication.
$3
3172863
650
4
$a
Automotive engineering.
$3
2181195
650
4
$a
Information technology.
$3
532993
650
4
$a
Artificial intelligence.
$3
516317
653
$a
Computer vision
653
$a
Action recognition
653
$a
Autonomous system
653
$a
Autonomous vehicles
653
$a
Video sequences
653
$a
Human action recognition technology
653
$a
Video dataset
690
$a
0984
690
$a
0800
690
$a
0489
690
$a
0643
690
$a
0799
690
$a
0540
710
2
$a
University of Michigan.
$b
Computer Science & Engineering.
$3
3285590
773
0
$t
Dissertations Abstracts International
$g
82-07B.
790
$a
0127
791
$a
Ph.D.
792
$a
2020
793
$a
English
856
4 0
$u
https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28240318
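For reference, the MARC fields displayed above can be flattened into a simple key/value structure. The sketch below is illustrative only: it is plain Python rather than any export format used by this catalog, and it copies a subset of the tags, subfield codes, and values from the record as displayed.

# Illustrative sketch: a plain-Python view of the main MARC fields shown above.
# The dict layout is an assumption made for illustration, not a catalog export
# format; tags, subfield codes, and values are copied from the displayed record.
record = {
    "020": {"a": "9798684620706"},                          # ISBN
    "100": {"a": "Stroud, Jonathan."},                      # main entry (author)
    "245": {"a": "Understanding Human Actions in Video."},  # title
    "260": {"a": "Ann Arbor :", "b": "ProQuest Dissertations & Theses,", "c": "2020"},
    "300": {"a": "144 p."},                                 # physical description
    "502": {"a": "Thesis (Ph.D.)--University of Michigan, 2020."},
    "650": [{"a": "Remote sensing."}, {"a": "Computer science."},
            {"a": "Artificial intelligence."}],             # topical subjects (subset)
    "653": [{"a": "Computer vision"}, {"a": "Action recognition"}],  # index terms (subset)
    "856": {"u": "https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28240318"},
}

# Example lookup: assemble a short citation line from the flattened record.
print(f'{record["100"]["a"]} {record["245"]["a"]} '
      f'{record["260"]["b"]} {record["260"]["c"]}.')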
Items
1 record
Inventory Number: W9435484
Location Name: Electronic resources (電子資源)
Item Class: 11.線上閱覽_V (online reading)
Material type: E-book (電子書)
Call number: EB
Usage Class: Normal (一般使用)
Loan Status: On shelf
No. of reservations: 0
Opac note:
Attachments: