Deep Representation Learning for Photorealistic Content Creation.
Record type: Bibliographic – electronic resource : Monograph/item
Title: Deep Representation Learning for Photorealistic Content Creation.
Author: Xia, Xide.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2021
Pagination: 169 p.
Note: Source: Dissertations Abstracts International, Volume: 83-02, Section: A.
Contained by: Dissertations Abstracts International, 83-02A.
Subject: Computer science.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28261388
ISBN: 9798522947774
LDR    03500nmm a2200337 4500
001    2343643
005    20220512072122.5
008    241004s2021 ||||||||||||||||| ||eng d
020    $a 9798522947774
035    $a (MiAaPQ)AAI28261388
035    $a AAI28261388
040    $a MiAaPQ $c MiAaPQ
100 1  $a Xia, Xide. $0 (orcid)0000-0002-9831-7000 $3 3682243
245 10 $a Deep Representation Learning for Photorealistic Content Creation.
260  1 $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2021
300    $a 169 p.
500    $a Source: Dissertations Abstracts International, Volume: 83-02, Section: A.
500    $a Advisor: Kulis, Brian.
502    $a Thesis (Ph.D.)--Boston University, 2021.
506    $a This item must not be sold to any third party vendors.
520    $a We study the problem of deep representation learning for photorealistic content creation. This is a critical component in many computer vision applications ranging from virtual reality and videography to retail and advertising. In this thesis, we use deep neural techniques to develop end-to-end models that are capable of generating photorealistic results. Our framework is applied in three applications. First, we study real-time universal photorealistic image style transfer. Photorealistic style transfer is the task of transferring the artistic style of an image onto a content target, producing a result that is plausibly taken with a camera. We propose a new end-to-end model for photorealistic style transfer that is both fast and inherently generates photorealistic results. The core of our approach is a feed-forward neural network that learns local edge-aware affine transforms that automatically obey the photorealism constraint. Our method produces visually superior results and is three orders of magnitude faster, enabling real-time performance at 4K on a mobile phone. Next, we learn real-time localized photorealistic video style transfer. We present a novel algorithm for transferring artistic styles of an image onto local regions of a target video while preserving its photorealism. Local regions may be selected either fully automatically from an image, by using video segmentation algorithms, or from casual user guidance such as scribbles. Our method is real-time and works on arbitrary inputs without runtime optimization once trained. We demonstrate our method on a variety of style images and target videos, including the ability to transfer different styles onto multiple objects simultaneously and smoothly transition between styles in time. Lastly, we tackle the problem of attribute-based fashion image retrieval and content creation. We present an effective approach for generating new outfits based on the input queries through generative adversarial learning. We address this challenge by decomposing the complicated process into two stages. In the first stage, we present a novel attribute-aware global ranking network for attribute-based fashion retrieval. In the second stage, a generative model is used to finalize the retrieved results conditioned on an individual's preferred style. We demonstrate promising results on standard large-scale benchmarks.
590    $a School code: 0017.
650  4 $a Computer science. $3 523869
650  4 $a Design. $3 518875
650  4 $a Accuracy. $3 3559958
650  4 $a Deep learning. $3 3554982
650  4 $a Ablation. $3 3562462
650  4 $a Image retrieval. $3 3562846
650  4 $a Optimization. $3 891104
650  4 $a Neural networks. $3 677449
653    $a Deep representation learning
653    $a Photorealistic content
653    $a Fashion image retrieval and content creation
690    $a 0984
690    $a 0389
710 2  $a Boston University. $b Computer Science GRS. $3 3169364
773 0  $t Dissertations Abstracts International $g 83-02A.
790    $a 0017
791    $a Ph.D.
792    $a 2021
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28261388
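The abstract above describes a feed-forward network that predicts local, edge-aware affine color transforms, which are then applied to the content image to produce a photorealistic result. As an illustration only (this is not the author's code), the sketch below shows just the application step: given a per-pixel 3×4 affine color transform, as such a network might output, map the content image through it. The function name and array layout are assumptions for the example.

```python
import numpy as np

def apply_local_affine(content, affine):
    """Apply a spatially varying affine color transform.

    content: (H, W, 3) float image with values in [0, 1].
    affine:  (H, W, 3, 4) per-pixel affine color transforms.
    Returns the transformed (H, W, 3) image, clipped to [0, 1].
    """
    H, W, _ = content.shape
    # Append a constant 1 channel so the transform is affine, not merely linear.
    ones = np.ones((H, W, 1), dtype=content.dtype)
    homog = np.concatenate([content, ones], axis=-1)       # (H, W, 4)
    # Per-pixel matrix-vector product: out[h, w, c] = sum_k A[h, w, c, k] * x[h, w, k]
    out = np.einsum("hwck,hwk->hwc", affine, homog)
    return np.clip(out, 0.0, 1.0)

# Sanity check: identity transforms leave the image unchanged.
H, W = 4, 5
content = np.random.rand(H, W, 3)
identity = np.zeros((H, W, 3, 4))
identity[..., :3, :3] = np.eye(3)
out = apply_local_affine(content, identity)
```

Because each output pixel is an affine function of the corresponding input pixel, and the predicted transforms vary smoothly except across edges, the output cannot introduce textures absent from the content image; this is one way the "photorealism constraint" mentioned in the abstract can be enforced by construction.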
Holdings (1 record • Page 1)

Barcode: W9466081
Location: Electronic Resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0