Towards Automated Recognition of Bodily Expression of Emotion in the Wild.
Record type: Bibliographic, electronic resource : Monograph/item
Title/Author: Towards Automated Recognition of Bodily Expression of Emotion in the Wild.
Author: Luo, Yu.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2021
Extent: 127 p.
Note: Source: Dissertations Abstracts International, Volume: 83-03, Section: B.
Contained by: Dissertations Abstracts International, 83-03B.
Subject: Cameras.
Electronic resource: https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28841647
ISBN: 9798460447480
MARC record:
LDR      04127nmm a2200361 4500
001      2283881
005      20211115071709.5
008      220723s2021 ||||||||||||||||| ||eng d
020      $a 9798460447480
035      $a (MiAaPQ)AAI28841647
035      $a (MiAaPQ)PennState_24462yzl5709
035      $a AAI28841647
040      $a MiAaPQ $c MiAaPQ
100 1    $a Luo, Yu. $3 1903375
245 10   $a Towards Automated Recognition of Bodily Expression of Emotion in the Wild.
260 1    $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2021
300      $a 127 p.
500      $a Source: Dissertations Abstracts International, Volume: 83-03, Section: B.
500      $a Advisor: Wang, James; Li, Jia.
502      $a Thesis (Ph.D.)--The Pennsylvania State University, 2021.
506      $a This item must not be sold to any third party vendors.
520      $a Humans are arguably innately prepared to comprehend others' emotional expressions from subtle body movements. If robots or computers can be empowered with this capability, a number of robotic applications become possible. Automatically recognizing human bodily expression in unconstrained situations, however, is daunting given the incomplete understanding of the relationship between emotional expressions and body movements. The current research, as a multidisciplinary effort among computer and information sciences, psychology, and statistics, proposes a scalable and reliable crowdsourcing approach for collecting in-the-wild perceived emotion data for computers to learn to recognize body languages of humans. To accomplish this task, a large and growing annotated dataset with 9,876 video clips of body movements and 13,239 human characters, named BoLD (Body Language Dataset), has been created. Comprehensive statistical analysis of the dataset revealed many interesting insights. A system to model the emotional expressions based on bodily movements, named ARBEE (Automated Recognition of Bodily Expression of Emotion), has also been developed and evaluated. Our analysis shows the effectiveness of Laban Movement Analysis (LMA) features in characterizing arousal, and our experiments using LMA features further demonstrate computability of bodily expression. We report and compare results of several other baseline methods which were developed for action recognition based on two different modalities, body skeleton and raw image. The dataset and findings presented in this work will likely serve as a launchpad for future discoveries in body language understanding that will enable future robots to interact and collaborate more effectively with humans. Computationally representing human body movements from images is another aspect towards automated recognition of bodily expression. A fine-grained mesh of human pose and shape provides rich geometric information that enables many applications including bodily expression recognition. Estimating an accurate 3D human mesh from an image captured by a passive sensor is a highly challenging research problem. The mainstream approach, which uses deep learning, requires large-scale human pose/shape annotations in the training process. Currently, those annotations are mostly created from expensive indoor motion capture systems, thus both diversity and quantity are limited. We propose a new method to train a deep human mesh estimation model using a large quantity of unlabeled RGB-D images, which are inexpensive and convenient to collect. Depth information encoded in the data is used in the training process to achieve higher model accuracy. Our method is easy-to-implement and amenable to any other state-of-the-art parametric mesh modeling framework. We empirically demonstrate the effectiveness of this method based on real-world datasets, validating the value of the proposed "learning from depth" approach.
590      $a School code: 0176.
650  4   $a Cameras. $3 524039
650  4   $a Human performance. $3 3562051
650  4   $a Data collection. $3 3561708
650  4   $a Emotions. $3 524569
650  4   $a Crowdsourcing. $3 3377825
650  4   $a Retouching. $3 3562950
650  4   $a Computer science. $3 523869
650  4   $a Artificial intelligence. $3 516317
650  4   $a Experimental psychology. $3 2144733
650  4   $a Information technology. $3 532993
653      $a Body language
653      $a Automated recognition
690      $a 0489
690      $a 0984
690      $a 0800
690      $a 0623
710 2    $a The Pennsylvania State University. $3 699896
773 0    $t Dissertations Abstracts International $g 83-03B.
790      $a 0176
791      $a Ph.D.
792      $a 2021
793      $a English
856 40   $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28841647
Holdings:
Barcode: W9435614
Location: Electronic resources
Circulation category: Online viewing (11.線上閱覽_V)
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0