Computational Models of the Production and Perception of Facial Expressions.
Record type: Bibliographic - Electronic resource : Monograph/item
Title/Author: Computational Models of the Production and Perception of Facial Expressions. / Srinivasan, Ramprakash.
Author: Srinivasan, Ramprakash.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2018
Physical description: 147 p.
Notes: Source: Dissertations Abstracts International, Volume: 80-06, Section: B.
Contained by: Dissertations Abstracts International, 80-06B.
Subject: Social psychology.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=11013087
ISBN: 9780438668966
MARC record:
LDR   :04901nmm a2200361 4500
001   2205637
005   20190828120332.5
008   201008s2018 ||||||||||||||||| ||eng d
020   $a 9780438668966
035   $a (MiAaPQ)AAI11013087
035   $a (MiAaPQ)OhioLINK:osu1531239299392184
035   $a AAI11013087
040   $a MiAaPQ $c MiAaPQ
100 1 $a Srinivasan, Ramprakash. $3 3432500
245 1 0 $a Computational Models of the Production and Perception of Facial Expressions.
260 1 $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2018
300   $a 147 p.
500   $a Source: Dissertations Abstracts International, Volume: 80-06, Section: B.
500   $a Publisher info.: Dissertation/Thesis.
500   $a Martinez, Aleix.
502   $a Thesis (Ph.D.)--The Ohio State University, 2018.
506   $a This item must not be sold to any third party vendors.
506   $a This item must not be added to any third party search indexes.
520   $a By combining different facial muscle actions, called Action Units (AUs), humans can produce an extraordinarily large number of facial expressions. Computational models and studies in cognitive science have long hypothesized that the brain needs to visually interpret these action units to understand other people's actions and intentions. Surprisingly, no studies have identified the neural basis of the visual recognition of these action units. Here, using functional Magnetic Resonance Imaging (fMRI) and an innovative machine learning analysis approach, we identify a consistent and differential coding of action units in the brain. Crucially, in a brain region thought to be responsible for processing changeable aspects of the face, pattern analysis could decode the presence of specific action units in an image. This coding was found to be consistent across people, facilitating the estimation of the perceived action units in participants not used to train the pattern analysis decoder. Research in face perception and emotion theory requires very large annotated databases of images of facial expressions of emotion. Useful annotations include AUs and their intensities, as well as emotion category. Such annotation cannot practically be done manually. Herein, we present a novel computer vision algorithm to annotate a large database of a million images of facial expressions of emotion from the wild (i.e., face images downloaded from the Internet). Comparisons with state-of-the-art algorithms demonstrate the algorithm's high accuracy. We further use WordNet to download 1,000,000 images of facial expressions with associated emotion keywords from the Internet. The downloaded images are then automatically annotated with AUs, AU intensities, and emotion categories by our algorithm. The result is a highly useful database that can be readily queried using semantic descriptions for applications in computer vision, affective computing, and social and cognitive psychology. Color is a fundamental image feature of facial expressions. For example, when we furrow our eyebrows in anger, blood rushes in and a reddish color becomes apparent around that area of the face. Surprisingly, these image properties have not been exploited to recognize the facial action units (AUs) associated with these expressions. Herein, we present the first system to recognize AUs and their intensities using these functional color changes. These color features are shown to be robust to changes in identity, gender, race, ethnicity, and skin color. Because these image changes are given by functions rather than vectors, we use functional classifiers to identify the most discriminant color features of an AU and its intensities. We demonstrate that, using these discriminant color features, one can achieve results superior to those of the state of the art. Lastly, the study of emotion has reached an impasse that can only be addressed once we know which facial expressions are used within and across cultures in the wild, not in controlled lab conditions. Yet, no such studies exist. Here, we present the first large-scale study of the production and visual perception of facial expressions of emotion in the wild. We find that of the 16,384 possible facial configurations that people can produce, only 35 are successfully used to transmit emotive information across cultures, and 8 within a smaller number of cultures. Cross-cultural expressions successfully transmit emotion category and valence, but not arousal. Culture-specific expressions successfully transmit valence and arousal, but not categories. These unexpected findings cannot be fully explained by current models of emotion.
590   $a School code: 0168.
650  4 $a Social psychology. $3 520219
650  4 $a Computer Engineering. $3 1567821
650  4 $a Cognitive psychology. $3 523881
650  4 $a Computer science. $3 523869
690   $a 0451
690   $a 0464
690   $a 0633
690   $a 0984
710 2 $a The Ohio State University. $b Electrical and Computer Engineering. $3 1672495
773 0 $t Dissertations Abstracts International $g 80-06B.
790   $a 0168
791   $a Ph.D.
792   $a 2018
793   $a English
856 4 0 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=11013087
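For readers who want to reuse the record programmatically: the MARC lines above follow a simple pattern of a three-digit tag, optional indicators, and $-prefixed subfields. The sketch below is only an illustration of that layout, not a library API; it parses the plain-text rendering shown here (not a binary MARC file), the abridged record_text sample is copied from the fields above, and the parse_display helper is a hypothetical name introduced for this example. It pulls out the title (245 $a), subject headings (650 $a), and the electronic-resource URL (856 $u).

```python
import re
from collections import defaultdict

# Abridged copy of the plain-text MARC rendering above
# (one field per line: tag, optional indicators, then $-prefixed subfields).
record_text = """\
100 1 $a Srinivasan, Ramprakash. $3 3432500
245 1 0 $a Computational Models of the Production and Perception of Facial Expressions.
650  4 $a Social psychology. $3 520219
650  4 $a Computer science. $3 523869
856 4 0 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=11013087
"""

def parse_display(text):
    """Map each 3-digit tag to a list of {subfield code: value} dicts."""
    fields = defaultdict(list)
    for line in text.splitlines():
        m = re.match(r"^(\d{3})\s+[\d ]*?(\$.*)$", line)
        if not m:
            continue  # skips LDR/control fields (no subfields) and blank lines
        tag, rest = m.groups()
        subfields = {}
        # Each "$x value" chunk starts with a one-letter subfield code.
        for code, value in re.findall(r"\$(\w)\s*([^$]*)", rest):
            subfields[code] = value.strip()
        fields[tag].append(subfields)
    return fields

fields = parse_display(record_text)
print(fields["245"][0]["a"])             # title proper
print([f["a"] for f in fields["650"]])   # subject headings
print(fields["856"][0]["u"])             # electronic resource URL
```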
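A quick sanity check on the figures quoted in the abstract (520 field): 16,384 is exactly 2^14, so the count of possible facial configurations is consistent with treating each expression as a binary present/absent combination of 14 action units. The record itself does not state the number of AUs, so the value 14 below is an inference, not a figure from the dissertation; the snippet is just the arithmetic.

```python
# Assumption (not stated in the record): the 16,384 configurations correspond to
# binary present/absent combinations of 14 action units, since 2**14 == 16384.
N_AUS = 14
total = 2 ** N_AUS
print(total)  # 16384

# The abstract reports that only 35 of these configurations transmit
# emotive information across cultures.
cross_cultural = 35
print(f"{cross_cultural / total:.4%} of the configuration space")  # ~0.2136%
```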
Holdings (1 record):
Barcode: W9382186
Location: Electronic resources
Circulation category: 11.線上閱覽_V (online reading)
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf (available)
Hold status: 0