Compression and Predictive Distributions for Large Alphabets.
Record type: Bibliographic – Electronic resource : Monograph/item
Title: Compression and Predictive Distributions for Large Alphabets.
Author: Yang, Xiao.
Description: 107 p.
Notes: Source: Dissertation Abstracts International, Volume: 76-11(E), Section: B.
Contained by: Dissertation Abstracts International, 76-11B(E).
Subject: Statistics.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3663558
ISBN: 9781321945300
LDR    03232nmm a2200337 4500
001    2063293
005    20151027100449.5
008    170521s2015 ||||||||||||||||| ||eng d
020    $a 9781321945300
035    $a (MiAaPQ)AAI3663558
035    $a AAI3663558
040    $a MiAaPQ $c MiAaPQ
100 1  $a Yang, Xiao. $3 1927968
245 10 $a Compression and Predictive Distributions for Large Alphabets.
300    $a 107 p.
500    $a Source: Dissertation Abstracts International, Volume: 76-11(E), Section: B.
500    $a Adviser: Andrew R. Barron.
502    $a Thesis (Ph.D.)--Yale University, 2015.
506    $a This item must not be sold to any third party vendors.
506    $a This item must not be added to any third party search indexes.
520    $a Data generated from large alphabets exist almost everywhere in our lives, for example text, images, and videos. Traditional universal compression algorithms mostly involve small alphabets and implicitly assume an asymptotic regime in which the extra bits induced in the compression process vanish as the amount of data grows without bound. In this thesis, we focus on compression and prediction for large alphabets, with the alphabet size comparable to or larger than the sample size.
520    $a We first consider sequences of random variables generated independently and identically from a large alphabet; in particular, the sample size is allowed to be variable. A product distribution based on Poisson sampling and tiling is proposed as the coding distribution, which greatly simplifies implementation and analysis through independence. Moreover, we characterize the behavior of the coding distribution through a condition on the tail sum of the ordered counts, and apply it to sequences satisfying this condition. Further, we apply this method to envelope classes. This coding distribution provides a convenient way to approximately compute Shtarkov's normalized maximum likelihood (NML) distribution, and the extra price paid for this convenience is small compared to the total cost. Furthermore, the coding distribution can also be used to compute the NML distribution exactly, and this calculation remains simple thanks to the independence of the coding distribution.
520    $a Further, we consider a more realistic class, the Markov class, and in particular tree sources. A context-tree-based algorithm is designed to describe the dependencies among the contexts. It is a greedy algorithm that seeks the greatest savings in codelength when constructing the tree. Compression and prediction of the individual counts associated with the contexts use the same coding distribution as in the i.i.d. case. Combining these two procedures, we demonstrate a compression algorithm based on the tree model.
520    $a Results of simulation and real-data experiments for both the i.i.d. model and the Markov model are included to illustrate the performance of the proposed algorithms.
590    $a School code: 0265.
650  4 $a Statistics. $3 517247
650  4 $a Electrical engineering. $3 649834
690    $a 0463
690    $a 0544
710 2  $a Yale University. $3 515640
773 0  $t Dissertation Abstracts International $g 76-11B(E).
790    $a 0265
791    $a Ph.D.
792    $a 2015
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3663558
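The third abstract paragraph describes a greedy context-tree construction that splits a context into longer child contexts whenever the split saves codelength. The thesis measures codelength with its own coding distribution; the sketch below is a simplified stand-in that uses empirical-entropy codelength and an assumed fixed per-split model cost (`split_cost`), so the names and the exact criterion are illustrative, not the thesis's.

```python
from math import log2

def codelength(counts):
    """n * empirical entropy, in bits: the ideal codelength of the
    symbols at a node under their own ML distribution."""
    n = sum(counts)
    if n == 0:
        return 0.0
    return -sum(c * log2(c / n) for c in counts if c > 0)

def next_counts(seq, alphabet, context):
    """Counts of the symbol following each occurrence of `context`."""
    d = len(context)
    counts = [0] * len(alphabet)
    for i in range(d, len(seq)):
        if seq[i - d:i] == context:
            counts[alphabet.index(seq[i])] += 1
    return counts

def grow(seq, alphabet, context="", max_depth=3, split_cost=2.0):
    """Greedily split `context` into children s + context when the
    codelength saving exceeds `split_cost`; returns a dict mapping
    each leaf context to its next-symbol count vector."""
    counts = next_counts(seq, alphabet, context)
    if len(context) >= max_depth or sum(counts) == 0:
        return {context: counts}
    child_ctxs = [s + context for s in alphabet]
    child_counts = [next_counts(seq, alphabet, c) for c in child_ctxs]
    saving = codelength(counts) - sum(codelength(cc) for cc in child_counts)
    if saving > split_cost:
        tree = {}
        for c in child_ctxs:
            tree.update(grow(seq, alphabet, c, max_depth, split_cost))
        return tree
    return {context: counts}
```

On the alternating string "ababab…" the root is split once, since knowing one symbol of context makes the next symbol deterministic (zero codelength at both children), and no deeper split saves anything further; per the abstract, the per-leaf counts would then be coded with the same distribution as in the i.i.d. case.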
Holdings (1 item):
Barcode:               W9295951
Location:              Electronic resources (電子資源)
Circulation category:  11.線上閱覽_V (online reading)
Material type:         E-book
Call number:           EB
Use type:              Normal
Loan status:           On shelf
Holds:                 0