Co-channel speech separation using state-space reconstruction and sinusoidal modelling.
Record type:
Bibliographic - Language material, printed : Monograph/item
Title/Author:
Co-channel speech separation using state-space reconstruction and sinusoidal modelling./
Author:
Mahgoub, Yasser.
Pagination:
183 p.
Notes:
Source: Dissertation Abstracts International, Volume: 72-01, Section: B, page: 0438.
Contained By:
Dissertation Abstracts International, 72-01B.
Subject:
Engineering, Computer.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=NR67893
ISBN:
9780494678930
Dissertation note:
Thesis (Ph.D.)--Carleton University (Canada), 2010.
Abstract:
This thesis deals with the separation of mixed speech signals from a single acquisition channel, a problem commonly referred to as co-channel speech separation. The goal of the thesis is to present contributions towards the design and implementation of a robust and enhanced co-channel speech separation system.
LDR    03780nam 2200301 4500
001    1400171
005    20111005095609.5
008    130515s2010 ||||||||||||||||| ||eng d
020    $a 9780494678930
035    $a (UMI)AAINR67893
035    $a AAINR67893
040    $a UMI $c UMI
100 1  $a Mahgoub, Yasser. $3 1679195
245 10 $a Co-channel speech separation using state-space reconstruction and sinusoidal modelling.
300    $a 183 p.
500    $a Source: Dissertation Abstracts International, Volume: 72-01, Section: B, page: 0438.
502    $a Thesis (Ph.D.)--Carleton University (Canada), 2010.
520    $a This thesis deals with the separation of mixed speech signals from a single acquisition channel, a problem commonly referred to as co-channel speech separation. The goal of the thesis is to present contributions towards the design and implementation of a robust and enhanced co-channel speech separation system.
520    $a The phenomenon of co-channel speech commonly occurs due to the combination of speech signals from simultaneous and independent sources into one signal at the receiving microphone, or when two speech signals are transmitted simultaneously over a single channel. An efficient co-channel speech separation system is an important front-end component in many applications such as Automatic Speech Recognition (ASR), Speaker Identification (SID), and hearing aids.
520    $a The separation process of co-channel speech consists mainly of three stages: Analysis, Separation, and Reconstruction. The central separation stage is the heart of the system, in which the target speech is separated from the interfering speech. Since the separation process works on one segment of co-channel speech at a time, a means must first be found in the analysis stage to accurately classify each segment as single- or multi-speaker before separation. Precise estimation of each speaker's speech model parameters is another important task in the analysis stage. The speech signal of the desired speaker is finally synthesized from its estimated parameters in the reconstruction stage. For a reliable overall speech separation system, improvements need to be achieved in all three stages.
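The three-stage flow described in this field can be sketched as a minimal frame-by-frame skeleton. This is an illustrative sketch only: the function names, overlap-add framing, and placeholder stage bodies are assumptions for exposition, not the thesis's implementation.

```python
import numpy as np

def analyze(frame):
    """Placeholder analysis stage: a real system would classify the
    frame (single/multi-speaker) and estimate per-speaker model
    parameters here."""
    return {"energy": float(np.dot(frame, frame))}

def separate(frame, params):
    """Placeholder separation stage: identity pass-through."""
    return frame

def separate_cochannel(x, frame_len=256, hop=128):
    """Illustrative three-stage skeleton: Analysis, Separation,
    Reconstruction, applied one segment at a time with overlap-add."""
    target = np.zeros(len(x), dtype=float)
    for start in range(0, len(x) - frame_len + 1, hop):
        frame = x[start:start + frame_len].astype(float)
        # Stage 1 -- Analysis: classify the segment and estimate
        # speech model parameters.
        params = analyze(frame)
        # Stage 2 -- Separation: isolate the target speaker.
        sep = separate(frame, params)
        # Stage 3 -- Reconstruction: synthesize and overlap-add.
        target[start:start + frame_len] += sep * np.hanning(frame_len)
    return target
```

The skeleton only fixes the control flow; the substance of the thesis lies in what replaces the two placeholder stages.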
520    $a This thesis introduces a classification algorithm capable of determining the voicing-state of co-channel speech. The algorithm uses features of the reconstructed state space of the speech data to identify the three voicing-states of co-channel speech: Unvoiced/Unvoiced (U/U), Voiced/Unvoiced (V/U), and Voiced/Voiced (V/V). The proposed method requires neither a priori information nor speech training data. Nonetheless, simulation results show enhanced performance in identifying the three voicing-states at different target-to-interference ratio (TIR) values, as well as at different levels of background noise, compared to other existing techniques.
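State-space reconstruction here refers to delay embedding of the scalar speech signal. The sketch below shows the standard embedding plus one simple trajectory feature; the specific feature is an illustrative assumption, not the thesis's actual discriminant.

```python
import numpy as np

def delay_embed(x, dim=3, tau=5):
    """Reconstruct the state space of a scalar signal by delay
    embedding: rows are [x[n], x[n+tau], ..., x[n+(dim-1)*tau]]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def trajectory_spread(x, dim=3, tau=5):
    """One simple scalar feature of the reconstructed trajectory: the
    mean distance of state points from the trajectory centroid.
    Periodic (voiced) segments trace compact, orbit-like trajectories,
    while noise-like (unvoiced) segments fill the space diffusely."""
    traj = delay_embed(np.asarray(x, dtype=float), dim, tau)
    centroid = traj.mean(axis=0)
    return float(np.linalg.norm(traj - centroid, axis=1).mean())
```

A voicing-state classifier would threshold or combine such trajectory features per segment of the co-channel mixture.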
520    $a A time-domain method to precisely estimate the sinusoidal model parameters of co-channel speech is also presented. The method requires neither the calculation of the discrete Fourier transform nor multiplication by a window function, both of which degrade the estimates of the sinusoidal model parameters. The method incorporates a least-squares estimator and an adaptive technique to model and separate the co-channel speech into its individual speakers. Applying the method to speech data demonstrates its effectiveness in separating co-channel speech signals with different TIRs.
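A time-domain least-squares fit of sinusoidal parameters, with no DFT and no analysis window, can be illustrated as follows. The sketch assumes the component frequencies are already known (in practice they must be estimated), which is an assumption for this example rather than the thesis's full method.

```python
import numpy as np

def ls_sinusoid_params(x, freqs, fs):
    """Time-domain least-squares estimate of sinusoidal amplitudes
    and phases for a frame x, given candidate frequencies in Hz.

    Solves x[n] ~ sum_k a_k cos(w_k n) + b_k sin(w_k n) directly in
    the time domain, then converts (a_k, b_k) to amplitude/phase so
    that x[n] ~ sum_k amp_k cos(w_k n + phase_k)."""
    x = np.asarray(x, dtype=float)
    n = np.arange(len(x))
    w = 2 * np.pi * np.asarray(freqs, dtype=float) / fs
    # Design matrix: one cosine and one sine column per frequency.
    A = np.hstack([np.cos(np.outer(n, w)), np.sin(np.outer(n, w))])
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    a, b = coef[: len(w)], coef[len(w):]
    amps = np.hypot(a, b)
    phases = np.arctan2(-b, a)
    return amps, phases
```

Because the fit is solved directly against the raw samples, no window function biases the estimates; separating two speakers amounts to fitting each speaker's frequency set and resynthesizing the target's components.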
590    $a School code: 0040.
650  4 $a Engineering, Computer. $3 1669061
650  4 $a Engineering, Electronics and Electrical. $3 626636
690    $a 0464
690    $a 0544
710 2  $a Carleton University (Canada). $3 1018407
773 0  $t Dissertation Abstracts International $g 72-01B.
790    $a 0040
791    $a Ph.D.
792    $a 2010
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=NR67893
Holdings (1 record):

Barcode  | Location             | Circulation category | Material type | Call number | Use type | Loan status | Reserve status | Remarks | Attachments
W9163310 | Electronic resources | 11. Online reading_V | E-book        | EB          | Normal   | On shelf    |                |         | 0