Physics-based Dynamic Reconstruction of Deformable Objects.
Record type: Bibliographic - electronic resource : Monograph/item
Title/Author: Physics-based Dynamic Reconstruction of Deformable Objects.
Author: Li, Zhong.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2019
Description: 139 p.
Note: Source: Dissertation Abstracts International, Volume: 80-08(E), Section: B.
Contained by: Dissertation Abstracts International, 80-08B(E).
Subject: Computer science.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=13805507
ISBN: 9781392010549
LDR  04441nmm a2200337 4500
001  2203697
005  20190531105742.5
008  201008s2019 ||||||||||||||||| ||eng d
020    $a 9781392010549
035    $a (MiAaPQ)AAI13805507
035    $a (MiAaPQ)udel:13661
035    $a AAI13805507
040    $a MiAaPQ $c MiAaPQ
100 1  $a Li, Zhong. $3 1059017
245 10 $a Physics-based Dynamic Reconstruction of Deformable Objects.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2019
300    $a 139 p.
500    $a Source: Dissertation Abstracts International, Volume: 80-08(E), Section: B.
500    $a Adviser: Jingyi Yu.
502    $a Thesis (Ph.D.)--University of Delaware, 2019.
520    $a Recently, the renewed interest in virtual reality (VR) and augmented reality (AR) has created new demands for dynamic reconstruction, i.e., scanning real objects efficiently and accurately from all directions. But several challenges remain. Firstly, 3D reconstructions produced by traditional photogrammetry or multi-view geometry are heavily corrupted by occlusions, noise, limited field of view, etc. Secondly, because of the demand for high quality, rendering dynamic reconstruction results costs a significant amount of disk space and memory, which is not practical when a regular user wants to access a longer free-viewpoint 3D video. Thirdly, reliable human parts segmentation on images plays an important role in 3D reconstruction tasks. While significant achievements have been made on human pose estimation, performance on human parts segmentation remains unsatisfactory. Finally, recovering time-dependent volumetric 3D fluid flow is a challenging task, as the particles lie at different depths but have similar appearance, making it particularly difficult to track a large number of particles.
520    $a In this dissertation, for deformable shape completion, I present a graph-based non-rigid shape registration framework that can simultaneously recover 3D human body geometry and estimate pose/motion at high fidelity. My approach first generates a global full-body template by registering all poses in the acquired motion sequence, and then constructs a deformable graph utilizing the rigid components in the global template. The global template graph can be directly used to warp each motion frame as well as to fill in missing geometry. Specifically, I combine local rigidity and temporal coherence constraints to maintain motion and geometry consistency.
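The record gives only a prose summary of this registration framework; as a minimal sketch of the embedded-deformation style solve it suggests, the following Python (not the dissertation's code) warps a template with per-node translations under data, local-rigidity, and temporal-coherence terms. The deformation-graph construction, inverse-distance skinning weights, translation-only motion model, and all function names are illustrative assumptions.

```python
# Hypothetical sketch of embedded-deformation-style non-rigid registration.
import numpy as np

def build_graph(nodes, k=4):
    """Connect each graph node to its k nearest neighbours (edge list)."""
    d = np.linalg.norm(nodes[:, None] - nodes[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nbrs = np.argsort(d, axis=1)[:, :k]
    return [(j, int(n)) for j in range(len(nodes)) for n in nbrs[j]]

def skinning_weights(verts, nodes, k=4):
    """Attach every template vertex to its k nearest graph nodes."""
    d = np.linalg.norm(verts[:, None] - nodes[None, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]
    w = 1.0 / (d[np.arange(len(verts))[:, None], idx] + 1e-8)
    return idx, w / w.sum(axis=1, keepdims=True)

def register_frame(verts, nodes, targets, corr, t_prev,
                   w_rigid=1.0, w_temporal=0.5):
    """Solve per-node translations that warp the template toward `targets`.

    Data term     : skinned vertices with a correspondence move to the target.
    Rigidity term : neighbouring graph nodes deform consistently.
    Temporal term : node motion stays close to the previous frame (t_prev).
    """
    M = len(nodes)
    idx, w = skinning_weights(verts, nodes)
    rows, rhs = [], []
    for i in corr:                            # data term
        a = np.zeros(M); a[idx[i]] = w[i]
        rows.append(a); rhs.append(targets[i] - verts[i])
    for j, k_ in build_graph(nodes):          # local rigidity / smoothness
        a = np.zeros(M); a[j], a[k_] = w_rigid, -w_rigid
        rows.append(a); rhs.append(np.zeros(3))
    for j in range(M):                        # temporal coherence
        a = np.zeros(M); a[j] = w_temporal
        rows.append(a); rhs.append(w_temporal * t_prev[j])
    A, b = np.vstack(rows), np.vstack(rhs)
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    warped = verts + (w[..., None] * t[idx]).sum(axis=1)
    return t, warped

# Tiny end-to-end example with random geometry (stand-in for real scans).
rng = np.random.default_rng(0)
verts = rng.random((200, 3))
nodes = verts[::20]                                  # 10 sampled graph nodes
targets = verts + np.array([0.05, 0.0, 0.0])         # target frame: shifted copy
corr = np.arange(0, 200, 5)                          # vertices with correspondences
t, warped = register_frame(verts, nodes, targets, corr, t_prev=np.zeros((10, 3)))
```

Keeping the motion model translation-only makes the whole problem a single linear least-squares solve; the framework summarized above recovers full pose/motion, which would require per-node rotations and a non-linear solver.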
520    $a For deformable shape correspondence and compression, I present an end-to-end deep learning scheme to establish dense shape correspondences and subsequently compress the data. My approach uses a sparse set of panoramic depth maps (PDMs), each emulating an inward-viewing concentric mosaic (CM). It then develops a learning-based technique to learn pixel-wise feature descriptors on the PDMs. Finally, it feeds the results into an autoencoder-based network for compression.
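The abstract names the ingredients of this pipeline (panoramic depth maps, learned pixel-wise descriptors, an autoencoder-based compressor) without detailing the network, so the sketch below is a generic convolutional autoencoder over a single-channel PDM in PyTorch, meant only to illustrate the compression step. Layer widths, the latent size, the 256x1024 panorama resolution, and the L1 reconstruction loss are assumptions, not the dissertation's architecture.

```python
# Hypothetical sketch: a small convolutional autoencoder that compresses
# panoramic depth maps (PDMs). The actual network is not described in this
# record, so the architecture below is illustrative only.
import torch
import torch.nn as nn

class PDMAutoencoder(nn.Module):
    def __init__(self, latent_channels=8):
        super().__init__()
        # Encoder: downsample a 1-channel PDM to a compact latent code.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, latent_channels, kernel_size=3, padding=1),
        )
        # Decoder: reconstruct the PDM from the latent code.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_channels, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, pdm):
        code = self.encoder(pdm)              # compressed representation
        return self.decoder(code), code

# One illustrative training step on a random 256x1024 depth panorama.
model = PDMAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
pdm = torch.rand(1, 1, 256, 1024)             # stand-in for a real PDM
recon, code = model(pdm)
loss = nn.functional.l1_loss(recon, pdm)      # depth reconstruction error
opt.zero_grad()
loss.backward()
opt.step()
```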
520    $a In order to improve the reconstruction results, I present a novel technique, which I call Pose2Body, that can robustly conduct human parts segmentation based on pose estimation results. I partition an image into superpixels and assign to each superpixel the segment label most consistent with the pose. I then design special feature vectors for every superpixel-label assignment as well as for superpixel-superpixel pairs, and model optimal labeling as solving a conditional random field (CRF). In addition, the segmentation results can further improve 3D reconstruction by effectively removing outliers and accelerating feature matching.
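The CRF formulation is only named in the abstract, so the sketch below shows the general shape of such a labelling problem: per-superpixel unary costs (in the dissertation, derived from the estimated pose), a Potts penalty between adjacent superpixels, and simple iterated conditional modes (ICM) standing in for the unspecified inference method. The toy costs, the fixed pairwise weight, and the ICM solver are assumptions.

```python
# Hypothetical sketch of pose-guided superpixel labelling as a pairwise CRF.
import numpy as np

def icm_labeling(unary, adjacency, pairwise_weight=1.0, iters=10):
    """unary: (S, L) cost of giving superpixel s label l.
    adjacency: list of (s, t) pairs of neighbouring superpixels."""
    S, L = unary.shape
    labels = unary.argmin(axis=1)                 # independent initial guess
    nbrs = [[] for _ in range(S)]
    for s, t in adjacency:
        nbrs[s].append(t); nbrs[t].append(s)
    for _ in range(iters):                        # iterated conditional modes
        changed = False
        for s in range(S):
            costs = unary[s].copy()
            for t in nbrs[s]:                     # Potts smoothness term
                costs += pairwise_weight * (np.arange(L) != labels[t])
            best = costs.argmin()
            if best != labels[s]:
                labels[s] = best; changed = True
        if not changed:
            break
    return labels

# Toy example: 4 superpixels, 3 part labels (e.g. head / torso / background).
unary = np.array([[0.1, 0.9, 0.8],
                  [0.2, 0.7, 0.9],
                  [0.9, 0.1, 0.8],
                  [0.8, 0.6, 0.7]])
adjacency = [(0, 1), (1, 2), (2, 3)]
print(icm_labeling(unary, adjacency))
```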
520    $a Finally, I present a light-field-based 3D deformable particle reconstruction and matching scheme that I call light field PIV. I exploit the refocusing capability and focal symmetry constraint of the light field for reliable particle depth estimation. I further propose a new motion-constrained optical flow estimation scheme that enforces local motion rigidity and the Navier-Stokes constraint. Comprehensive experiments on synthetic and real data show that my technique can recover dense and accurate 3D fluid flows in small to medium volumes using a single light field camera.
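The abstract says the optical flow estimate enforces local motion rigidity and a Navier-Stokes constraint but does not give the energy; the sketch below shows two regularisers of that flavour on a volumetric velocity field, a smoothness (rigidity) penalty and a divergence penalty that is zero for incompressible flow. Reading the Navier-Stokes constraint as a divergence-free term, and the (3, D, H, W) flow layout, are simplifying assumptions.

```python
# Hypothetical sketch of two flow regularisers: local rigidity (smoothness)
# and incompressibility (zero divergence), both via finite differences.
import numpy as np

def divergence_penalty(flow):
    """flow: (3, D, H, W) volumetric velocity field (u, v, w components)."""
    du_dx = np.gradient(flow[0], axis=2)
    dv_dy = np.gradient(flow[1], axis=1)
    dw_dz = np.gradient(flow[2], axis=0)
    div = du_dx + dv_dy + dw_dz
    return float((div ** 2).mean())        # 0 for an incompressible flow

def rigidity_penalty(flow):
    """Penalise spatial variation of the flow (local motion rigidity)."""
    total = 0.0
    for c in range(3):
        for ax in range(3):
            total += (np.gradient(flow[c], axis=ax) ** 2).mean()
    return float(total)

# A constant translation is perfectly rigid and divergence-free ...
uniform = np.ones((3, 16, 16, 16))
# ... while random velocities violate both terms.
noisy = np.random.rand(3, 16, 16, 16)
for name, f in [("uniform", uniform), ("noisy", noisy)]:
    print(name, divergence_penalty(f), rigidity_penalty(f))
```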
590    $a School code: 0060.
650  4 $a Computer science. $3 523869
690    $a 0984
710 2  $a University of Delaware. $b Computer and Information Sciences. $3 3188280
773 0  $t Dissertation Abstracts International $g 80-08B(E).
790    $a 0060
791    $a Ph.D.
792    $a 2019
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=13805507
Holdings (1 record)
Barcode: W9380246
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Hold status: 0