Learning to Generate 3D Training Data.
Yang, Dawei.
Record type: Bibliographic - electronic resource : Monograph/item
Title: Learning to Generate 3D Training Data.
Author: Yang, Dawei.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2020
Pagination: 97 p.
Notes: Source: Dissertations Abstracts International, Volume: 82-07, Section: B.
Contained by: Dissertations Abstracts International, 82-07B.
Subject: Materials science.
Electronic resource: https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28240370
ISBN: 9798684618659
Yang, Dawei.
Learning to Generate 3D Training Data.
- Ann Arbor : ProQuest Dissertations & Theses, 2020 - 97 p.
Source: Dissertations Abstracts International, Volume: 82-07, Section: B.
Thesis (Ph.D.)--University of Michigan, 2020.
This item must not be sold to any third party vendors.
Human-level visual 3D perception has long been pursued by researchers in computer vision, computer graphics, and robotics. Recent years have seen an emerging line of work that uses synthetic images to train deep networks for single-image 3D perception. Synthetic images rendered by graphics engines are a promising source of training data for deep neural networks because they come with perfect 3D ground truth for free. However, the 3D shapes and scenes to be rendered are still largely created manually. Moreover, it is challenging to ensure that synthetic images collected this way help a deep network perform well on real images, because graphics generation pipelines require numerous design decisions, such as the selection of 3D shapes and the placement of the camera.

In this dissertation, we propose automatic pipelines for generating synthetic data that aim to improve the task performance of a trained network. We explore both supervised and unsupervised directions for the automatic optimization of 3D decisions. For supervised learning, we demonstrate how to optimize 3D parameters so that a trained network generalizes well to real images. We first show that a purely synthetic 3D shape can be constructed to achieve state-of-the-art performance on a shape-from-shading benchmark. We then parameterize the design decisions as a vector and propose a hybrid gradient approach to efficiently optimize the vector toward usefulness; our hybrid gradient outperforms classic black-box approaches on a wide selection of 3D perception tasks. For unsupervised learning, we propose a novelty metric for 3D parameter evolution based on deep autoregressive models. We show that, without any extrinsic motivation, the novelty computed from autoregressive models alone is helpful: our novelty metric consistently encourages a random synthetic generator to produce more useful training data for downstream 3D perception tasks.
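The black-box baselines the abstract contrasts with its hybrid gradient can be illustrated with a toy sketch. Everything here is hypothetical: `score` stands in for the downstream validation performance of a network trained on data rendered from the parameter vector `theta`, and the loop is a generic evolutionary hill-climb, not the dissertation's actual method.

```python
import random

# Hypothetical stand-in: in the dissertation, the generator would render
# synthetic training images from theta and score() would be the validation
# performance of a network trained on them. Here score() is a toy quadratic
# that peaks when theta matches TARGET.
TARGET = [0.3, -0.7, 0.5]

def score(theta):
    # Higher is better; maximum (0.0) when theta equals TARGET.
    return -sum((t - g) ** 2 for t, g in zip(theta, TARGET))

def evolve(theta, iters=200, sigma=0.1, seed=0):
    """Black-box (evolutionary) baseline: perturb the parameter vector
    and keep the perturbation whenever it scores higher."""
    rng = random.Random(seed)
    best, best_s = list(theta), score(theta)
    for _ in range(iters):
        cand = [t + rng.gauss(0, sigma) for t in best]
        s = score(cand)
        if s > best_s:
            best, best_s = cand, s
    return best, best_s

theta0 = [0.0, 0.0, 0.0]
theta, s = evolve(theta0)
```

Each candidate costs a full train-and-evaluate cycle, which is why the abstract's hybrid gradient — using gradient information where it is available instead of pure random perturbation — can be markedly more sample-efficient.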
ISBN: 9798684618659
Subjects--Topical Terms: Materials science.
Subjects--Index Terms: Computer vision
LDR  03295nmm a2200445 4500
001  2277017
005  20210510092502.5
008  220723s2020 ||||||||||||||||| ||eng d
020  $a 9798684618659
035  $a (MiAaPQ)AAI28240370
035  $a (MiAaPQ)umichrackham003391
035  $a AAI28240370
040  $a MiAaPQ $c MiAaPQ
100 1  $a Yang, Dawei. $3 3555322
245 10 $a Learning to Generate 3D Training Data.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2020
300  $a 97 p.
500  $a Source: Dissertations Abstracts International, Volume: 82-07, Section: B.
500  $a Advisor: Deng, Jia; Fouhey, David Ford.
502  $a Thesis (Ph.D.)--University of Michigan, 2020.
506  $a This item must not be sold to any third party vendors.
506  $a This item must not be added to any third party search indexes.
520  $a Human-level visual 3D perception has long been pursued by researchers in computer vision, computer graphics, and robotics. Recent years have seen an emerging line of work that uses synthetic images to train deep networks for single-image 3D perception. Synthetic images rendered by graphics engines are a promising source of training data for deep neural networks because they come with perfect 3D ground truth for free. However, the 3D shapes and scenes to be rendered are still largely created manually. Moreover, it is challenging to ensure that synthetic images collected this way help a deep network perform well on real images, because graphics generation pipelines require numerous design decisions, such as the selection of 3D shapes and the placement of the camera. In this dissertation, we propose automatic pipelines for generating synthetic data that aim to improve the task performance of a trained network. We explore both supervised and unsupervised directions for the automatic optimization of 3D decisions. For supervised learning, we demonstrate how to optimize 3D parameters so that a trained network generalizes well to real images. We first show that a purely synthetic 3D shape can be constructed to achieve state-of-the-art performance on a shape-from-shading benchmark. We then parameterize the design decisions as a vector and propose a hybrid gradient approach to efficiently optimize the vector toward usefulness; our hybrid gradient outperforms classic black-box approaches on a wide selection of 3D perception tasks. For unsupervised learning, we propose a novelty metric for 3D parameter evolution based on deep autoregressive models. We show that, without any extrinsic motivation, the novelty computed from autoregressive models alone is helpful: our novelty metric consistently encourages a random synthetic generator to produce more useful training data for downstream 3D perception tasks.
590  $a School code: 0127.
650 4  $a Materials science. $3 543314
650 4  $a Computer science. $3 523869
650 4  $a Artificial intelligence. $3 516317
650 4  $a Design. $3 518875
650 4  $a Information technology. $3 532993
650 4  $a Optics. $3 517925
650 4  $a Robotics. $3 519753
653  $a Computer vision
653  $a Human-level visual 3D perception
653  $a Computer graphics
653  $a Synthetic data
653  $a Deep neural networks
690  $a 0984
690  $a 0771
690  $a 0389
690  $a 0752
690  $a 0489
690  $a 0794
690  $a 0800
710 2  $a University of Michigan. $b Computer Science & Engineering. $3 3285590
773 0  $t Dissertations Abstracts International $g 82-07B.
790  $a 0127
791  $a Ph.D.
792  $a 2020
793  $a English
856 40 $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28240370
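The record above is MARC 21 rendered as `TAG indicators $code value` display lines. A minimal, illustrative parser for that display layout — an assumption about this page's formatting, not a full MARC reader (control fields such as 001–008 carry no subfields and yield an empty list) — might look like:

```python
import re

def parse_marc_display(line):
    """Split a MARC display line like '245 10 $a Title.' into
    (tag, indicators, [(subfield_code, value), ...]).
    Assumes the simple 'TAG IND $x value' layout shown above,
    not binary ISO 2709 records."""
    tag, rest = line[:3], line[3:]
    ind, _, tail = rest.partition("$")
    subfields = [(m.group(1), m.group(2).strip())
                 for m in re.finditer(r"\$(\w) ?([^$]*)", "$" + tail)]
    return tag, ind.strip(), subfields

tag, ind, subs = parse_marc_display(
    "245 10 $a Learning to Generate 3D Training Data.")
# tag == "245", ind == "10",
# subs == [("a", "Learning to Generate 3D Training Data.")]
```

For production use, a dedicated library such as pymarc would be the usual choice; this sketch only shows how the tag, indicator, and subfield pieces of each line decompose.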
Holdings (1 item):
Barcode: W9428751
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0