Privacy-Preserving Smart-Room Visual Analytics.
Record type: Bibliographic - Electronic resource : Monograph/item
Title/Author: Privacy-Preserving Smart-Room Visual Analytics. / Chen, Jiawei.
Author: Chen, Jiawei.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2019
Physical description: 136 p.
Note: Source: Dissertations Abstracts International, Volume: 81-05, Section: B.
Contained by: Dissertations Abstracts International, 81-05B.
Subjects: Electrical engineering. - Computer science.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=22623010
ISBN: 9781687992918
Dissertation note: Thesis (Ph.D.)--Boston University, 2019.
Restriction note: This item must not be sold to any third party vendors.
Abstract:
The proliferation of sensors in living spaces in the last few years has led to the concept of a smart room of the future - an environment that allows intelligent interaction with its occupants, be it a living room or a conference room. Among the promised benefits of future smart rooms are improved energy efficiency, health benefits, and increased productivity. To realize such benefits, accurate and reliable localization of occupants and recognition of their poses, activities, and facial expressions are crucial. Extensive research has been performed to date in these areas, primarily using video cameras. However, with increasing concerns about privacy, the use of standard video cameras seems ill-suited for smart spaces; alternative sensing modalities and visual analytics techniques that preserve privacy are urgently needed. Motivated by this demand, this thesis aims to develop image and video analysis methodologies that protect occupants' (visual) privacy while preserving utility for an inference task. We propose two distinct methodologies to accomplish this.

In the first one, we address privacy concerns by degrading the spatial resolution of images and videos to the point where they no longer provide visual utility to eavesdroppers. We conducted proof-of-concept studies for the problems of head pose estimation, indoor occupant localization, and human action recognition at extremely low resolutions (eLR), lower than 16x16 pixels. For the problem of pose estimation, specifically head pose, from a single image at resolutions as low as 10x10 pixels or even 3x3 pixels, we developed an estimation algorithm using a classical data-driven approach. For occupant localization based on data from overhead-mounted single-pixel visible-light sensors, we developed both coarse- and fine-grained estimation algorithms using classical machine learning techniques. For action recognition from eLR visual data, motivated by the success of deep learning in computer vision, we developed multiple two-stream Convolutional Neural Networks (ConvNets) that fuse spatial and temporal information. In particular, we proposed a novel semi-coupled, filter-sharing network that leverages high-resolution videos to train an eLR ConvNet. We demonstrated that practically useful inference performance can be achieved at eLR.

While the use of eLR data can mitigate visual privacy concerns, it can also significantly limit utility compared to full-resolution data. Thus, in addition to developing inference methods for eLR data, we took advantage of recent advances in representation learning to design an identity-invariant data representation that also permits synthesis of utility-equivalent, realistic full-resolution data with a different identity. To this end, we proposed two novel models tailored for 2D images. We tested our models on a number of visual analytics tasks, such as recognizing facial expressions, estimating head pose, and estimating illumination conditions. A thorough evaluation of the proposed approaches under various threat scenarios demonstrates that they strike a balance between preservation of privacy and data utility. As an additional benefit, our approach enables expression- and head-pose-preserving face morphing.
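The eLR action-recognition idea summarized in the abstract lends itself to a compact illustration. The Python (PyTorch) sketch below is a minimal, hypothetical rendering of two ingredients the abstract names: degrading frames to extremely low resolution (below 16x16 pixels) before any analysis, and a two-stream ConvNet that fuses a spatial (appearance) stream with a temporal (motion) stream. It is not the author's implementation: the 10x10 target resolution, the layer sizes, the number of action classes, and the use of frame differences as a stand-in for optical flow are assumptions made for illustration, and the thesis's semi-coupled, filter-sharing training with high-resolution videos is not reproduced here.

# Minimal illustrative sketch (not the thesis's implementation) of eLR degradation
# plus a two-stream ConvNet for action recognition at extremely low resolution.
import torch
import torch.nn as nn
import torch.nn.functional as F


def degrade_to_elr(frames: torch.Tensor, size: int = 10) -> torch.Tensor:
    """Downsample frames of shape (N, C, H, W) to an eLR grid, e.g. 10x10 pixels."""
    return F.interpolate(frames, size=(size, size), mode="bilinear", align_corners=False)


class StreamNet(nn.Module):
    """Tiny ConvNet for eLR inputs; the same architecture serves both streams."""

    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))


class TwoStreamELR(nn.Module):
    """Fuse a spatial stream (one eLR RGB frame) and a temporal stream
    (stacked eLR frame differences) by averaging their class scores."""

    def __init__(self, num_classes: int = 10, num_diffs: int = 5):
        super().__init__()
        self.spatial = StreamNet(in_channels=3, num_classes=num_classes)
        self.temporal = StreamNet(in_channels=num_diffs, num_classes=num_classes)

    def forward(self, rgb_frame: torch.Tensor, frame_diffs: torch.Tensor) -> torch.Tensor:
        return 0.5 * (self.spatial(rgb_frame) + self.temporal(frame_diffs))


if __name__ == "__main__":
    clip = torch.rand(6, 3, 128, 128)            # a short high-resolution clip
    elr_clip = degrade_to_elr(clip, size=10)     # 6 x 3 x 10 x 10, privacy-degraded
    gray = elr_clip.mean(dim=1)                  # grayscale eLR frames, 6 x 10 x 10
    diffs = (gray[1:] - gray[:-1]).unsqueeze(0)  # motion proxy, 1 x 5 x 10 x 10
    rgb = elr_clip[0].unsqueeze(0)               # appearance input, 1 x 3 x 10 x 10
    scores = TwoStreamELR(num_classes=10, num_diffs=5)(rgb, diffs)
    print(scores.shape)                          # torch.Size([1, 10])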
MARC record:
LDR  04203nmm a2200301 4500
001  2263280
005  20200214113230.5
008  220629s2019 ||||||||||||||||| ||eng d
020    $a 9781687992918
035    $a (MiAaPQ)AAI22623010
035    $a AAI22623010
040    $a MiAaPQ $c MiAaPQ
100 1  $a Chen, Jiawei. $3 1909659
245 10 $a Privacy-Preserving Smart-Room Visual Analytics.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2019
300    $a 136 p.
500    $a Source: Dissertations Abstracts International, Volume: 81-05, Section: B.
500    $a Advisor: Konrad, Janusz; Ishwar, Prakash.
502    $a Thesis (Ph.D.)--Boston University, 2019.
506    $a This item must not be sold to any third party vendors.
590    $a School code: 0017.
650  4 $a Electrical engineering. $3 649834
650  4 $a Computer science. $3 523869
690    $a 0544
690    $a 0984
710 2  $a Boston University. $b Electrical & Computer Engineering ENG. $3 3192614
773 0  $t Dissertations Abstracts International $g 81-05B.
790    $a 0017
791    $a Ph.D.
792    $a 2019
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=22623010
Holdings
Barcode: W9415514
Location: Electronic resources
Circulation category: 11.線上閱覽_V (online reading)
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0