Syllables and concepts in large vocabulary speech recognition.
Record Type:
Language materials, printed : Monograph/item
Title/Author:
Syllables and concepts in large vocabulary speech recognition.
Author:
De Palma, Paul.
Description:
377 p.
Notes:
Source: Dissertation Abstracts International, Volume: 71-07, Section: A, page: 2436.
Contained By:
Dissertation Abstracts International, 71-07A.
Subject:
Language, Linguistics.
Online resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3409325
ISBN:
9781124067063
LDR 03400nam 2200349 4500
001 1400084
005 20111005095541.5
008 130515s2010 ||||||||||||||||| ||eng d
020     $a 9781124067063
035     $a (UMI)AAI3409325
035     $a AAI3409325
040     $a UMI $c UMI
100 1   $a De Palma, Paul. $3 631255
245 1 0 $a Syllables and concepts in large vocabulary speech recognition.
300     $a 377 p.
500     $a Source: Dissertation Abstracts International, Volume: 71-07, Section: A, page: 2436.
500     $a Advisers: George F. Luger; Caroline L. Smith.
502     $a Thesis (Ph.D.)--The University of New Mexico, 2010.
520     $a Transforming an acoustic signal to words is the gold standard in automatic speech recognition. While recognizing that orthographic transcription is a valuable technique for comparing speech recognition systems without respect to application, it must also be recognized that transcription is not something that human beings do with their language partners. In fact, transforming speech into words is not necessary to emulate human performance in many contexts. By relaxing the constraint that the output of speech recognition be words, we might at the same time effectively relax the bias toward writing in speech recognition research. This puts our work in the camp of those who have argued over the years that speech and writing differ in significant ways.
520     $a This study explores two hypotheses. The first is that a large vocabulary continuous speech recognition (LVCSR) system will perform more accurately if it were trained on syllables instead of words. Though several researchers have examined the use of syllables in the acoustic model of an LVCSR system, very little attention has been paid to their use in the language model. The second hypothesis has to do with adding a post-processing component to a recognizer equipped with a syllable language model. The first step is to group words that seem to mean the same thing into equivalence classes called concepts. The second step is to insert the equivalence classes into the output of a recognizer. The hypothesis is that by using this concept post-processor, we will achieve better results than with the syllable language model alone.
520     $a The study reports that the perplexity of a trigram syllable language model drops by half when compared to a trigram word language model using the same training transcript. The drop in perplexity carries over to error rate. The error rate of a recognizer equipped with syllable language model drops by over 14% when compared with one using a word language model. Nevertheless, the study reports a slight increase in error rate when a concept post-processor is added to a recognizer equipped with a syllable language model. We conjecture that this is the result of deterministic mapping from syllable strings to concepts. Consequently, we outline a probabilistic mapping scheme from concepts to syllable strings.
590     $a School code: 0142.
650   4 $a Language, Linguistics. $3 1018079
650   4 $a Artificial Intelligence. $3 769149
650   4 $a Computer Science. $3 626642
690     $a 0290
690     $a 0800
690     $a 0984
710 2   $a The University of New Mexico. $b Linguistics. $3 1679103
773 0   $t Dissertation Abstracts International $g 71-07A.
790 1 0 $a Luger, George F., $e advisor
790 1 0 $a Smith, Caroline L., $e advisor
790 1 0 $a Croft, William $e committee member
790 1 0 $a Wooters, Charles $e committee member
790     $a 0142
791     $a Ph.D.
792     $a 2010
856 4 0 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3409325
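
The second abstract note (MARC 520) above describes a two-step concept post-processor: words that seem to mean the same thing are grouped into equivalence classes ("concepts"), and those classes are then inserted into the recognizer's output. The sketch below is a minimal illustration of that idea only; the concept inventory, the sample hypothesis, and the names CONCEPTS and apply_concepts are assumptions made for this example and are not taken from the dissertation.

# Minimal sketch of the "concept post-processor" idea from the abstract:
# group near-synonymous words into equivalence classes ("concepts") and
# substitute those classes into a recognizer hypothesis. The concept
# table and names here are illustrative assumptions, not the
# dissertation's actual inventory.

# Hypothetical equivalence classes: each concept covers several surface words.
CONCEPTS = {
    "AFFIRM": {"yes", "yeah", "yep", "sure", "okay"},
    "NEGATE": {"no", "nope", "nah"},
    "GREET": {"hello", "hi", "hey"},
}

# Invert the table once so lookup runs word -> concept.
WORD_TO_CONCEPT = {
    word: concept for concept, words in CONCEPTS.items() for word in words
}

def apply_concepts(hypothesis: str) -> list[str]:
    """Replace each recognized word with its concept class when one exists."""
    return [WORD_TO_CONCEPT.get(w, w) for w in hypothesis.lower().split()]

if __name__ == "__main__":
    # Example recognizer hypothesis; words covered by a concept collapse to it.
    print(apply_concepts("yeah I would like a ticket"))
    # -> ['AFFIRM', 'i', 'would', 'like', 'a', 'ticket']

# Note that this lookup is deterministic; the abstract conjectures that a
# deterministic mapping of this kind caused the slight error-rate increase
# and outlines a probabilistic mapping scheme instead.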
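
The third abstract note reports that a trigram syllable language model roughly halves perplexity relative to a trigram word model trained on the same transcript, with the error rate dropping by over 14%. The snippet below only shows how per-token trigram perplexity can be computed over word tokens versus syllable tokens from the same text; the toy transcript, the hand syllabification, and the add-one smoothing are assumptions for illustration, not the dissertation's models or data, and the printed numbers do not reproduce the reported results.

# Illustration of a trigram perplexity comparison between word tokens and
# syllable tokens. Add-one smoothing and the toy data are assumptions made
# only to keep the example self-contained.
import math
from collections import Counter

def trigram_perplexity(tokens, vocab_size):
    """Per-token perplexity of an add-one-smoothed trigram model,
    trained and scored on the same token stream (illustration only)."""
    tri = Counter(zip(tokens, tokens[1:], tokens[2:]))
    bi = Counter(zip(tokens, tokens[1:]))
    log_prob = 0.0
    for h1, h2, w in zip(tokens, tokens[1:], tokens[2:]):
        p = (tri[(h1, h2, w)] + 1) / (bi[(h1, h2)] + vocab_size)
        log_prob += math.log2(p)
    return 2 ** (-log_prob / (len(tokens) - 2))

# The same toy transcript as word tokens and as hand-syllabified tokens.
words = "speech recognition maps the acoustic signal to words".split()
sylls = "speech re cog ni tion maps the a cous tic sig nal to words".split()

print("word perplexity    :", trigram_perplexity(words, len(set(words))))
print("syllable perplexity:", trigram_perplexity(sylls, len(set(sylls))))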
Items
Inventory Number: W9163223
Location Name: 電子資源 (electronic resources)
Item Class: 11.線上閱覽_V (online reading)
Material type: 電子書 (e-book)
Call number: EB
Usage Class: 一般使用 (Normal)
Loan Status: On shelf
No. of reservations: 0