Cinematic Virtual Reality with Head-Motion Parallax.
Record type:
Bibliographic - electronic resource : Monograph/item
Title/Author:
Cinematic Virtual Reality with Head-Motion Parallax. / Thatte, Jayant.
Author:
Thatte, Jayant.
Publisher:
Ann Arbor : ProQuest Dissertations & Theses, 2020
Pagination:
165 p.
Note:
Source: Dissertations Abstracts International, Volume: 82-10, Section: B.
Contained By:
Dissertations Abstracts International, 82-10B.
Subject:
Augmented reality.
Electronic resource:
https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28354084
ISBN:
9798597043371
Thatte, Jayant. Cinematic Virtual Reality with Head-Motion Parallax. - Ann Arbor : ProQuest Dissertations & Theses, 2020. - 165 p.
Source: Dissertations Abstracts International, Volume: 82-10, Section: B.
Thesis (Ph.D.)--Stanford University, 2020.
This item must not be sold to any third party vendors.
Even as virtual reality has rapidly gained popularity over the past decade, visual fatigue, an imperfect sense of immersion, and nausea remain significant barriers to its wide adoption. A key cause of this discomfort is the failure of current technology to render the accurate perspective changes, or parallax, that result from the viewer's head motion. This mismatch induces a visual-vestibular conflict. Moreover, rendering accurate head-motion parallax is essential for making the computer-generated experience immersive and more like reality. The lack of this perceptual cue degrades the feeling of presence and makes the overall experience less compelling. This work addresses the issue by proposing an end-to-end framework that can capture, store, and render natural scenery with accurate head-motion parallax.
At the core of the problem is the trade-off between storing enough scene information to facilitate fast, high-fidelity rendering of head-motion parallax and keeping the representation compact enough to be practically viable. In this regard, we explore several novel scene representations, compare them with qualitative and quantitative evaluations, and discuss their advantages and disadvantages. We demonstrate the practical applicability of the proposed representations by developing an end-to-end virtual reality system that can render real-time head-motion parallax for natural environments. To that end, we build a two-level camera rig and present an algorithm to construct the proposed representations using the images captured by our camera system. Furthermore, we develop a custom OpenGL renderer that uses the constructed intermediate representations to synthesize full-resolution, stereo frames in a head-mounted display, updating the rendered perspective in real time based on the viewer's head position and orientation.
Finally, we propose a theoretical model for understanding the disocclusion behavior in depth-based novel-view synthesis and analyze the impact of the choice of intermediate representation and camera geometry on the synthesized views in terms of quantitative image quality metrics and the occurrence of disocclusion holes.
ISBN: 9798597043371
Subjects--Topical Terms: Augmented reality.
Subjects--Index Terms: Head motion
LDR  03333nmm a2200373 4500
001  2281936
005  20210927083433.5
008  220723s2020 ||||||||||||||||| ||eng d
020    $a 9798597043371
035    $a (MiAaPQ)AAI28354084
035    $a (MiAaPQ)STANFORDgd337dj1396
035    $a AAI28354084
040    $a MiAaPQ $c MiAaPQ
100  1  $a Thatte, Jayant. $3 3560646
245  10 $a Cinematic Virtual Reality with Head-Motion Parallax.
260  1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2020
300    $a 165 p.
500    $a Source: Dissertations Abstracts International, Volume: 82-10, Section: B.
500    $a Advisor: Girod, Bernd; Wandell, Brian A.; Wetzstein, Gordon.
502    $a Thesis (Ph.D.)--Stanford University, 2020.
506    $a This item must not be sold to any third party vendors.
520    $a Even as virtual reality has rapidly gained popularity over the past decade, visual fatigue, an imperfect sense of immersion, and nausea remain significant barriers to its wide adoption. A key cause of this discomfort is the failure of current technology to render the accurate perspective changes, or parallax, that result from the viewer's head motion. This mismatch induces a visual-vestibular conflict. Moreover, rendering accurate head-motion parallax is essential for making the computer-generated experience immersive and more like reality. The lack of this perceptual cue degrades the feeling of presence and makes the overall experience less compelling. This work addresses the issue by proposing an end-to-end framework that can capture, store, and render natural scenery with accurate head-motion parallax. At the core of the problem is the trade-off between storing enough scene information to facilitate fast, high-fidelity rendering of head-motion parallax and keeping the representation compact enough to be practically viable. In this regard, we explore several novel scene representations, compare them with qualitative and quantitative evaluations, and discuss their advantages and disadvantages. We demonstrate the practical applicability of the proposed representations by developing an end-to-end virtual reality system that can render real-time head-motion parallax for natural environments. To that end, we build a two-level camera rig and present an algorithm to construct the proposed representations using the images captured by our camera system. Furthermore, we develop a custom OpenGL renderer that uses the constructed intermediate representations to synthesize full-resolution, stereo frames in a head-mounted display, updating the rendered perspective in real time based on the viewer's head position and orientation. Finally, we propose a theoretical model for understanding the disocclusion behavior in depth-based novel-view synthesis and analyze the impact of the choice of intermediate representation and camera geometry on the synthesized views in terms of quantitative image quality metrics and the occurrence of disocclusion holes.
590    $a School code: 0212.
650  4 $a Augmented reality. $3 1620831
650  4 $a Measurement techniques. $3 3560647
650  4 $a Computer graphics. $3 517127
650  4 $a Visual communication. $3 537612
650  4 $a Acoustics. $3 879105
650  4 $a Optics. $3 517925
650  4 $a Virtual reality. $3 527460
650  4 $a Signal processing. $3 533904
650  4 $a Pattern recognition. $3 3560648
653    $a Head motion
653    $a Parallax
653    $a Camera systems
653    $a Scene representations
690    $a 0544
690    $a 0464
690    $a 0435
710  2  $a Stanford University. $3 754827
773  0  $t Dissertations Abstracts International $g 82-10B.
790    $a 0212
791    $a Ph.D.
792    $a 2020
793    $a English
856  40 $u https://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28354084
Holdings (1 item):
Barcode: W9433669
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0