Computer vision - ECCV 2022 = 17th European Conference
European Conference on Computer Vision (2022 : Tel Aviv, Israel)

  • Computer vision - ECCV 2022 = 17th European Conference, Tel Aviv, Israel, October 23-27, 2022 : proceedings. Part I /
  • Record type: Bibliographic - Electronic resource : Monograph/item
    Title/Author: Computer vision - ECCV 2022 / edited by Shai Avidan ... [et al.].
    Other title: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022 : proceedings.
    Other author: Avidan, Shai.
    Corporate author: European Conference on Computer Vision
    Publisher: Cham : Springer Nature Switzerland, 2022.
    Description: lvi, 747 p. : ill. (chiefly color), digital ; 24 cm.
    Contents note: Learning Depth from Focus in the Wild -- Learning-Based Point Cloud Registration for 6D Object Pose Estimation in the Real World -- An End-to-End Transformer Model for Crowd Localization -- Few-Shot Single-View 3D Reconstruction with Memory Prior Contrastive Network -- DID-M3D: Decoupling Instance Depth for Monocular 3D Object Detection -- Adaptive Co-Teaching for Unsupervised Monocular Depth Estimation -- Fusing Local Similarities for Retrieval-Based 3D Orientation Estimation of Unseen Objects -- Lidar Point Cloud Guided Monocular 3D Object Detection -- Structural Causal 3D Reconstruction -- 3D Human Pose Estimation Using Möbius Graph Convolutional Networks -- Learning to Train a Point Cloud Reconstruction Network without Matching -- PanoFormer: Panorama Transformer for Indoor 360° Depth Estimation -- Self-supervised Human Mesh Recovery with Cross-Representation Alignment -- AlignSDF: Pose-Aligned Signed Distance Fields for Hand-Object Reconstruction -- A Reliable Online Method for Joint Estimation of Focal Length and Camera Rotation -- PS-NeRF: Neural Inverse Rendering for Multi-View Photometric Stereo -- Share with Thy Neighbors: Single-View Reconstruction by Cross-Instance Consistency -- Towards Comprehensive Representation Enhancement in Semantics-Guided Self-Supervised Monocular Depth Estimation -- AvatarCap: Animatable Avatar Conditioned Monocular Human Volumetric Capture -- Cross-Attention of Disentangled Modalities for 3D Human Mesh Recovery with Transformers -- GeoRefine: Self-Supervised Online Depth Refinement for Accurate Dense Mapping -- Multi-modal Masked Pre-training for Monocular Panoramic Depth Completion -- GitNet: Geometric Prior-Based Transformation for Birds-Eye View Segmentation -- Learning Visibility for Robust Dense Human Body Estimation -- Towards High-Fidelity Single-View Holistic Reconstruction of Indoor Scenes -- CompNVS: Novel View Synthesis with Scene Completion -- SketchSampler: Sketch-Based 3D Reconstruction via
View-Dependent Depth Sampling -- LocalBins: Improving Depth Estimation by Learning Local Distributions -- 2D GANs Meet Unsupervised Single-View 3D Reconstruction -- InfiniteNature-Zero: Learning Perpetual View Generation of Natural Scenes from Single Images -- Semi-Supervised Single-View 3D Reconstruction via Prototype Shape Priors -- Bilateral Normal Integration -- S2Contact: Graph-Based Network for 3D Hand-Object Contact Estimation with Semi-Supervised Learning -- SC-wLS: Towards Interpretable Feed-Forward Camera Re-localization -- FloatingFusion: Depth from ToF and Image-Stabilized Stereo Cameras -- DELTAR: Depth Estimation from a Light-Weight ToF Sensor and RGB Image -- 3D Room Layout Estimation from a Cubemap of Panorama Image via Deep Manhattan Hough Transform -- RBP-Pose: Residual Bounding Box Projection for Category-Level Pose Estimation -- Monocular 3D Object Reconstruction with GAN Inversion -- Map-Free Visual Relocalization: Metric Pose Relative to a Single Image -- Self-Distilled Feature Aggregation for Self-Supervised Monocular Depth Estimation -- Planes vs. Chairs: Category-Guided 3D Shape Learning without Any 3D Cues.
    Contained By: Springer Nature eBook
    Subject: Computer vision -- Congresses.
    Electronic resource: https://doi.org/10.1007/978-3-031-19769-7
    ISBN: 9783031197697
Holdings
W9446321 · Electronic resource · 11. Online Reading · E-book · EB TA1634 .E87 2022 · Normal use · On shelf · 0