Machine Learning for Deep Image Manipulation.
Record type: Bibliographic - Electronic resource : Monograph/item
Title/Author: Machine Learning for Deep Image Manipulation. / Park, Taesung.
Author: Park, Taesung.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2021
Description: 154 p.
Notes: Source: Dissertations Abstracts International, Volume: 83-03, Section: B.
Contained by: Dissertations Abstracts International, 83-03B.
Subject: Computer science.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28540031
ISBN: 9798535552675
Machine Learning for Deep Image Manipulation.
Park, Taesung. Machine Learning for Deep Image Manipulation. - Ann Arbor : ProQuest Dissertations & Theses, 2021. - 154 p.
Source: Dissertations Abstracts International, Volume: 83-03, Section: B.
Thesis (D.Eng.)--University of California, Berkeley, 2021.
This item must not be sold to any third party vendors.
Common types of image editing methods focus on low-level characteristics. In this thesis, I leverage machine learning to enable image editing that operates at a higher conceptual level. Fundamentally, the proposed methods aim to factor out the visual information that must be maintained in the editing process from the information that may be edited by incorporating the generic visual knowledge. As a result, the new methods can transform images in human-interpretable ways, such as turning one object into another, stylizing photographs into a specific artist's paintings, or adding sunset to a photo taken in daylight. We explore designing such methods in different settings with varying amounts of supervision: per-pixel labels, per-image labels, and no labels. First, using per-pixel supervision, I propose a new deep neural network architecture that can synthesize realistic images from scene layouts and optional target styles. Second, using per-image supervision, I explore the task of domain translation, where an input image of one class is transformed into another. Lastly, I design a framework that can still discover disentangled manipulation of structure and texture from a collection of unlabeled images. We present convincing visuals in a wide range of applications including interactive photo drawing tools, object transfiguration, domain gap reduction between virtual and real environment, and realistic manipulation of image textures.
ISBN: 9798535552675
Subjects--Topical Terms:
Computer science.
Subjects--Index Terms:
Computational photography
Machine Learning for Deep Image Manipulation.
LDR    02580nmm a2200361 4500
001    2348601
005    20220912135615.5
008    241004s2021 ||||||||||||||||| ||eng d
020    $a 9798535552675
035    $a (MiAaPQ)AAI28540031
035    $a AAI28540031
040    $a MiAaPQ $c MiAaPQ
100 1  $a Park, Taesung. $3 3687965
245 1 0 $a Machine Learning for Deep Image Manipulation.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2021
300    $a 154 p.
500    $a Source: Dissertations Abstracts International, Volume: 83-03, Section: B.
500    $a Advisor: Efros, Alexei A.
502    $a Thesis (D.Eng.)--University of California, Berkeley, 2021.
506    $a This item must not be sold to any third party vendors.
520    $a Common types of image editing methods focus on low-level characteristics. In this thesis, I leverage machine learning to enable image editing that operates at a higher conceptual level. Fundamentally, the proposed methods aim to factor out the visual information that must be maintained in the editing process from the information that may be edited by incorporating the generic visual knowledge. As a result, the new methods can transform images in human-interpretable ways, such as turning one object into another, stylizing photographs into a specific artist's paintings, or adding sunset to a photo taken in daylight. We explore designing such methods in different settings with varying amounts of supervision: per-pixel labels, per-image labels, and no labels. First, using per-pixel supervision, I propose a new deep neural network architecture that can synthesize realistic images from scene layouts and optional target styles. Second, using per-image supervision, I explore the task of domain translation, where an input image of one class is transformed into another. Lastly, I design a framework that can still discover disentangled manipulation of structure and texture from a collection of unlabeled images. We present convincing visuals in a wide range of applications including interactive photo drawing tools, object transfiguration, domain gap reduction between virtual and real environment, and realistic manipulation of image textures.
590    $a School code: 0028.
650  4 $a Computer science. $3 523869
650  4 $a Research. $3 531893
650  4 $a Internships. $3 3560137
650  4 $a Photographs. $3 627415
650  4 $a Datasets. $3 3541416
650  4 $a Experiments. $3 525909
650  4 $a Adaptation. $3 3562958
650  4 $a Methods. $3 3560391
650  4 $a Advisors. $3 3560734
650  4 $a Ablation. $3 3562462
650  4 $a Semantics. $3 520060
650  4 $a Editing. $3 601456
650  4 $a Painting. $3 524049
650  4 $a Artificial intelligence. $3 516317
653    $a Computational photography
653    $a Computer vision
653    $a Deep learning
653    $a Image editing
653    $a Machine learning
690    $a 0984
690    $a 0800
710 2  $a University of California, Berkeley. $b Electrical Engineering & Computer Sciences. $3 1671057
773 0  $t Dissertations Abstracts International $g 83-03B.
790    $a 0028
791    $a D.Eng.
792    $a 2021
793    $a English
856 4 0 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28540031
Holdings (1 item)
Barcode: W9471039
Location: Electronic Resources
Circulation category: 11. Online Reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Hold status: 0