Single Channel Auditory Source Separation with Neural Network.

Record type: Bibliographic - electronic resource : Monograph/item
Title/Author: Single Channel Auditory Source Separation with Neural Network. / Chen, Zhuo.
Author: Chen, Zhuo.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2017
Pagination: 120 p.
Note: Source: Dissertation Abstracts International, Volume: 78-09(E), Section: B.
Contained by: Dissertation Abstracts International, 78-09B(E).
Subject: Computer science.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10275945
ISBN: 9781369748413
LDR  04920nmm a2200373 4500
001  2160783
005  20180727125212.5
008  190424s2017 ||||||||||||||||| ||eng d
020    $a 9781369748413
035    $a (MiAaPQ)AAI10275945
035    $a (MiAaPQ)columbia:13979
035    $a AAI10275945
040    $a MiAaPQ $c MiAaPQ
100 1  $a Chen, Zhuo. $3 1681217
245 10 $a Single Channel Auditory Source Separation with Neural Network.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2017
300    $a 120 p.
500    $a Source: Dissertation Abstracts International, Volume: 78-09(E), Section: B.
500    $a Adviser: Nima Mesgarani.
502    $a Thesis (Ph.D.)--Columbia University, 2017.
520    $a Although distinguishing different sounds in a noisy environment is a relatively easy task for humans, source separation has long been extremely difficult in audio signal processing. The problem is challenging for three reasons: the large variety of sound types, the abundance of mixing conditions, and the unclear mechanism for distinguishing sources, especially similar sounds.
520    $a In recent years, neural network based methods have achieved impressive success on various problems, including speech enhancement, where the task is to separate clean speech from a noisy mixture. However, current deep learning based source separators do not perform well on real recorded noisy speech and, more importantly, are not applicable to more general source separation scenarios such as overlapped speech.
520    $a In this thesis, we first propose extensions to the current mask learning network for speech enhancement that address the scale mismatch problem commonly seen in real recorded audio. We solve this problem by adding two restoration layers to the existing mask learning network. We also propose a residual learning architecture for speech enhancement, further improving the network's generalization under different recording conditions. We evaluate the proposed speech enhancement models on CHiME 3 data. Without retraining the acoustic model, the best bi-directional LSTM with residual connections yields a 25.13% relative WER reduction on real data and 34.03% WER on simulated data. (A minimal sketch of a residual mask-learning network follows the MARC record below.)
520    $a Then we propose a novel neural network based model called "deep clustering" for more general source separation tasks. We train a deep network to assign contrastive embedding vectors to each time-frequency region of the spectrogram in order to implicitly predict the segmentation labels of the target spectrogram from the input mixtures. This yields a deep network-based analogue to spectral clustering, in that the embeddings form a low-rank pairwise affinity matrix that approximates the ideal affinity matrix, while enabling much faster performance. At test time, the clustering step "decodes" the segmentation implicit in the embeddings by optimizing K-means with respect to the unknown assignments. Experiments on single-channel mixtures from multiple speakers show that a speaker-independent model trained on two-speaker and three-speaker mixtures can improve signal quality for mixtures of held-out speakers by an average of over 10 dB. (The clustering objective and the K-means decode are sketched after the record below.)
520    $a We then propose an extension of deep clustering named the "deep attractor" network, which allows the system to perform efficient end-to-end training. In the proposed model, attractor points for each source are first created to pull together the time-frequency bins corresponding to that source, by finding the centroids of the sources in the embedding space; these attractors are subsequently used to determine the similarity of each bin in the mixture to each source. The network is then trained to minimize the reconstruction error of each source by optimizing the embeddings. We showed that this framework can achieve even better results. (An attractor-formation sketch follows the record below.)
520    $a Lastly, we introduce two applications of the proposed models: singing voice separation and a smart hearing aid device. For the former, we propose a multi-task architecture that combines deep clustering with a classification based network, achieving a new state-of-the-art separation result in which the signal-to-noise ratio is improved by 11.1 dB on music and 7.9 dB on singing voice. For the smart hearing aid device, we combine neural decoding with the separation network: the system first decodes the user's attention, which is then used to guide the separator toward the target source. Both objective and subjective studies show that the proposed system can accurately decode attention and significantly improve the user experience.
590    $a School code: 0054.
650  4 $a Computer science. $3 523869
650  4 $a Acoustics. $3 879105
650  4 $a Statistics. $3 517247
690    $a 0984
690    $a 0986
690    $a 0463
710 2  $a Columbia University. $b Electrical Engineering. $3 1675652
773 0  $t Dissertation Abstracts International $g 78-09B(E).
790    $a 0054
791    $a Ph.D.
792    $a 2017
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=10275945
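The residual mask-learning enhancer described in the thesis abstract (third 520 field) can be pictured as a stack of bi-directional LSTM layers whose outputs are added back to their inputs before a sigmoid mask is applied to the mixture spectrogram. The sketch below is a minimal PyTorch illustration under assumed shapes and hyperparameters (freq_bins, hidden, layers are placeholders); it is not the thesis implementation and omits the restoration layers mentioned in the abstract.

```python
# Minimal sketch: BLSTM mask estimator with residual connections (assumed design).
import torch
import torch.nn as nn

class ResidualBLSTMEnhancer(nn.Module):
    """Estimate a time-frequency mask from a (batch, frames, freq_bins) spectrogram."""
    def __init__(self, freq_bins=129, hidden=300, layers=2):
        super().__init__()
        self.blstms = nn.ModuleList()
        self.projs = nn.ModuleList()
        for _ in range(layers):
            self.blstms.append(nn.LSTM(freq_bins, hidden, batch_first=True,
                                       bidirectional=True))
            # Project the 2*hidden BLSTM output back to freq_bins so the
            # residual addition is shape-compatible.
            self.projs.append(nn.Linear(2 * hidden, freq_bins))
        self.mask_out = nn.Linear(freq_bins, freq_bins)

    def forward(self, spec):                      # spec: (B, T, F)
        x = spec
        for blstm, proj in zip(self.blstms, self.projs):
            h, _ = blstm(x)
            x = x + proj(h)                       # residual connection
        mask = torch.sigmoid(self.mask_out(x))    # mask values in [0, 1]
        return mask * spec                        # enhanced spectrogram

# usage: enhanced = ResidualBLSTMEnhancer()(torch.randn(4, 100, 129))
```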
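The "deep clustering" field describes training embeddings whose pairwise affinities approximate the ideal segmentation affinities, with K-means recovering per-bin source assignments at test time. Below is a minimal sketch of that idea under assumed shapes; the function names and the low-rank expansion of the Frobenius-norm objective are illustrative, not taken from the record.

```python
# Minimal sketch: deep clustering objective and test-time K-means decode (assumed shapes).
import torch

def deep_clustering_loss(V, Y):
    """V: (B, T*F, D) unit-norm embeddings; Y: (B, T*F, C) one-hot ideal assignments.
    Minimizes ||V V^T - Y Y^T||_F^2 without forming the (T*F x T*F) affinity matrices."""
    VtV = torch.bmm(V.transpose(1, 2), V)   # (B, D, D)
    VtY = torch.bmm(V.transpose(1, 2), Y)   # (B, D, C)
    YtY = torch.bmm(Y.transpose(1, 2), Y)   # (B, C, C)
    return VtV.pow(2).sum() - 2 * VtY.pow(2).sum() + YtY.pow(2).sum()

def kmeans_decode(V, n_sources, iters=20):
    """Cluster one mixture's embeddings (T*F, D) into per-bin source labels."""
    centroids = V[torch.randperm(V.shape[0])[:n_sources]].clone()
    for _ in range(iters):
        assign = torch.cdist(V, centroids).argmin(dim=1)   # nearest centroid per bin
        for c in range(n_sources):
            members = V[assign == c]
            if len(members):
                centroids[c] = members.mean(dim=0)
    return assign   # labels used to build binary separation masks
```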
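The deep attractor field describes forming one attractor per source as the centroid of that source's embeddings and deriving masks from embedding-to-attractor similarity, trained end to end against the reconstruction error. A minimal sketch under assumed shapes follows; the ideal membership Y would come from the reference sources during training, and all names are illustrative.

```python
# Minimal sketch: attractor formation and mask estimation (assumed shapes).
import torch

def attractor_masks(V, Y, X):
    """V: (T*F, D) embeddings; Y: (T*F, C) ideal source membership; X: (T*F,) mixture magnitude.
    Returns per-source masks and estimated source magnitudes."""
    # One attractor per source: membership-weighted centroid of the embeddings.
    attractors = (Y.t() @ V) / (Y.sum(dim=0, keepdim=True).t() + 1e-8)   # (C, D)
    sim = V @ attractors.t()                  # (T*F, C) similarity of each bin to each source
    masks = torch.softmax(sim, dim=1)         # soft assignment of each bin
    est_sources = masks * X.unsqueeze(1)      # (T*F, C) estimated source magnitudes
    return masks, est_sources

# Training would minimize a reconstruction loss such as
# ((est_sources - ref_sources) ** 2).mean(), back-propagated into the embedding network.
```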
Holdings:
Barcode: W9360330
Location: Electronic resources
Circulation category: 11.線上閱覽_V (online viewing)
Material type: E-book
Call number: EB
Use type: Normal use
Loan status: On shelf
Hold status: 0