Learning-Based Methods for Single Image Restoration and Translation.
Record type: Bibliographic, electronic resource : Monograph/item
Title: Learning-Based Methods for Single Image Restoration and Translation.
Author: Zhang, He.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2019
Pagination: 149 p.
Note: Source: Dissertations Abstracts International, Volume: 80-12, Section: B.
Contained by: Dissertations Abstracts International, 80-12B.
Subject: Artificial intelligence.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10982777
ISBN: 9781392221631
Zhang, He. Learning-Based Methods for Single Image Restoration and Translation. - Ann Arbor : ProQuest Dissertations & Theses, 2019. - 149 p.
Source: Dissertations Abstracts International, Volume: 80-12, Section: B.
Thesis (Ph.D.)--Rutgers The State University of New Jersey, School of Graduate Studies, 2019.
This item must not be sold to any third party vendors.

Abstract:
In many applications, such as drone-based video surveillance, self-driving cars, and recognition under night-time and low-light conditions, the captured images and videos contain undesirable degradations such as haze, rain, snow, and noise. Furthermore, the performance of many computer vision algorithms often degrades when they are presented with images containing such artifacts. Hence, it is important to develop methods that can automatically remove these artifacts. However, these are difficult problems to solve due to their inherent ill-posed nature. Existing approaches attempt to introduce prior information to convert them into well-posed problems. In this thesis, rather than relying purely on prior-based models, we propose to combine them with data-driven models for image restoration and translation. In particular, we develop new data-driven approaches for 1) single image de-raining, 2) single image dehazing, and 3) thermal-to-visible face synthesis.

In the first part of the thesis, we develop three different methods for single image de-raining. In the first approach, we develop novel convolutional coding-based methods for single image de-raining, where two different types of filters are learned via convolutional sparse and low-rank coding to characterize the background component and the rain-streak component separately. These pre-trained filters are then used to separate the rain component from the image. In the second approach, to ensure that the restored de-rained results are indistinguishable from their corresponding clear images, we propose a novel single image de-raining method called Image De-raining Conditional Generative Adversarial Network (ID-CGAN), which consists of a new refined perceptual loss function and a novel multi-scale discriminator. Finally, to deal with non-uniform rain densities, we present a novel density-aware multi-stream densely connected convolutional neural network-based algorithm that enables the network itself to automatically determine the rain-density information and then efficiently remove the corresponding rain streaks guided by the estimated rain-density label.

In the final part of the thesis, we develop an image-to-image translation method for generating high-quality visible images from polarimetric thermal faces. Since polarimetric images contain different Stokes images capturing various polarization-state information, we propose a Generative Adversarial Network-based multi-stream feature-level fusion technique to synthesize high-quality visible images from polarimetric thermal images. An application of this approach is presented in polarimetric thermal-to-visible cross-modal face recognition.

ISBN: 9781392221631
Subjects--Topical Terms: Artificial intelligence.
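The ID-CGAN approach summarized in the abstract pairs a generator that maps a rainy image to a de-rained estimate with a multi-scale discriminator, and augments the generator objective with a perceptual term. The following is a minimal, illustrative PyTorch sketch of that general setup; the layer configuration, the pooled-feature stand-in for the perceptual loss, and the loss weights are assumptions made for illustration and do not reproduce the architecture or losses developed in the thesis.

```python
# Illustrative sketch only: a conditional GAN for single-image de-raining with a
# multi-scale discriminator and a perceptual-style loss, loosely following the
# ideas summarized in the abstract. Sizes and weights are invented.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Maps a rainy image to an estimate of the clean image."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )
    def forward(self, rainy):
        # Predict a residual and add it back, a common choice for de-raining.
        return torch.clamp(rainy + self.net(rainy), 0.0, 1.0)

class MultiScaleDiscriminator(nn.Module):
    """Scores (rainy, candidate-clean) pairs at two image scales."""
    def __init__(self, ch=64):
        super().__init__()
        def one_scale():
            return nn.Sequential(
                nn.Conv2d(6, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
                nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
                nn.Conv2d(ch * 2, 1, 4, padding=1),  # patch-level real/fake logits
            )
        self.full = one_scale()
        self.half = one_scale()
    def forward(self, rainy, candidate):
        pair = torch.cat([rainy, candidate], dim=1)
        return [self.full(pair), self.half(F.avg_pool2d(pair, 2))]

def feature_loss(pred, target):
    # Stand-in "perceptual" term: L1 distance between average-pooled images.
    # A real implementation would compare activations of a pretrained network.
    return F.l1_loss(F.avg_pool2d(pred, 4), F.avg_pool2d(target, 4))

def generator_loss(d_outputs, pred, clean, lam_pix=1.0, lam_feat=0.5):
    adv = sum(F.binary_cross_entropy_with_logits(o, torch.ones_like(o)) for o in d_outputs)
    return adv + lam_pix * F.l1_loss(pred, clean) + lam_feat * feature_loss(pred, clean)

if __name__ == "__main__":
    G, D = Generator(), MultiScaleDiscriminator()
    rainy = torch.rand(2, 3, 64, 64)
    clean = torch.rand(2, 3, 64, 64)
    derained = G(rainy)
    loss_g = generator_loss(D(rainy, derained), derained, clean)
    print(derained.shape, float(loss_g))
```

In a full training loop, the discriminator would be updated with the opposite labels on real (rainy, clean) and generated (rainy, de-rained) pairs, and the pooled-feature term would typically be replaced by distances between activations of a pretrained network.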
LDR  03781nmm a2200325 4500
001  2209172
005  20191025102848.5
008  201008s2019 ||||||||||||||||| ||eng d
020  $a 9781392221631
035  $a (MiAaPQ)AAI10982777
035  $a (MiAaPQ)gsnb.rutgers:10001
035  $a AAI10982777
040  $a MiAaPQ $c MiAaPQ
100 1  $a Zhang, He. $3 1256334
245 1 0  $a Learning-Based Methods for Single Image Restoration and Translation.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2019
300  $a 149 p.
500  $a Source: Dissertations Abstracts International, Volume: 80-12, Section: B.
500  $a Publisher info.: Dissertation/Thesis.
500  $a Advisor: Patel, Vishal M.
502  $a Thesis (Ph.D.)--Rutgers The State University of New Jersey, School of Graduate Studies, 2019.
506  $a This item must not be sold to any third party vendors.
520  $a In many applications, such as drone-based video surveillance, self-driving cars, and recognition under night-time and low-light conditions, the captured images and videos contain undesirable degradations such as haze, rain, snow, and noise. Furthermore, the performance of many computer vision algorithms often degrades when they are presented with images containing such artifacts. Hence, it is important to develop methods that can automatically remove these artifacts. However, these are difficult problems to solve due to their inherent ill-posed nature. Existing approaches attempt to introduce prior information to convert them into well-posed problems. In this thesis, rather than relying purely on prior-based models, we propose to combine them with data-driven models for image restoration and translation. In particular, we develop new data-driven approaches for 1) single image de-raining, 2) single image dehazing, and 3) thermal-to-visible face synthesis. In the first part of the thesis, we develop three different methods for single image de-raining. In the first approach, we develop novel convolutional coding-based methods for single image de-raining, where two different types of filters are learned via convolutional sparse and low-rank coding to characterize the background component and the rain-streak component separately. These pre-trained filters are then used to separate the rain component from the image. In the second approach, to ensure that the restored de-rained results are indistinguishable from their corresponding clear images, we propose a novel single image de-raining method called Image De-raining Conditional Generative Adversarial Network (ID-CGAN), which consists of a new refined perceptual loss function and a novel multi-scale discriminator. Finally, to deal with non-uniform rain densities, we present a novel density-aware multi-stream densely connected convolutional neural network-based algorithm that enables the network itself to automatically determine the rain-density information and then efficiently remove the corresponding rain streaks guided by the estimated rain-density label. In the final part of the thesis, we develop an image-to-image translation method for generating high-quality visible images from polarimetric thermal faces. Since polarimetric images contain different Stokes images capturing various polarization-state information, we propose a Generative Adversarial Network-based multi-stream feature-level fusion technique to synthesize high-quality visible images from polarimetric thermal images. An application of this approach is presented in polarimetric thermal-to-visible cross-modal face recognition.
590  $a School code: 0190.
650  4  $a Artificial intelligence. $3 516317
650  4  $a Computer science. $3 523869
690  $a 0800
690  $a 0984
710 2  $a Rutgers The State University of New Jersey, School of Graduate Studies. $b Electrical and Computer Engineering. $3 3429082
773 0  $t Dissertations Abstracts International $g 80-12B.
790  $a 0190
791  $a Ph.D.
792  $a 2019
793  $a English
856 4 0  $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10982777
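The MARC fields above (100 main entry, 245 title, 650 subject headings, 856 electronic resource) can also be read programmatically. Below is a minimal sketch using the pymarc library, assuming the record has been exported as a binary MARC file; the filename record.mrc is hypothetical.

```python
# Illustrative sketch: read the bibliographic fields shown above from a binary
# MARC export using pymarc. "record.mrc" is a hypothetical filename.
from pymarc import MARCReader

with open("record.mrc", "rb") as fh:
    for record in MARCReader(fh):
        # 245 $a - title proper; 100 $a - main entry (author)
        title = record.get_fields("245")[0].get_subfields("a")[0]
        author = record.get_fields("100")[0].get_subfields("a")[0]
        # 650 $a - topical subject headings (field may repeat)
        subjects = [f.get_subfields("a")[0] for f in record.get_fields("650")]
        # 856 $u - electronic resource URL(s)
        urls = [u for f in record.get_fields("856") for u in f.get_subfields("u")]
        print(title, author, subjects, urls)
```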
Holdings (1 record)
Barcode: W9385721
Location: Electronic resources
Circulation category: 11. Online reading_V
Material type: E-book
Call number: EB
Use type: Normal
Loan status: On shelf
Holds: 0