Modern data mining algorithms in C++ and CUDA C = recent developments in feature extraction and selection algorithms for data science
Record type: Electronic resource : Monograph/item
Title/Author: Modern data mining algorithms in C++ and CUDA C / by Timothy Masters.
Other title: Recent developments in feature extraction and selection algorithms for data science
Author: Masters, Timothy.
Publisher: Berkeley, CA : Apress, 2020.
Description: ix, 228 p. : ill., digital ; 24 cm.
Contents: 1. Introduction -- 2. Forward Selection Component Analysis -- 3. Local Feature Selection -- 4. Memory in Time Series Features -- 5. Stepwise Selection on Steroids -- 6. Nominal-to-Ordinal Conversion.
Contained by: Springer eBooks
Subject: Data mining.
Electronic resource: https://doi.org/10.1007/978-1-4842-5988-7
ISBN: 9781484259887
Modern data mining algorithms in C++ and CUDA C [electronic resource] : recent developments in feature extraction and selection algorithms for data science / by Timothy Masters. - Berkeley, CA : Apress, 2020. - ix, 228 p. : ill., digital ; 24 cm.
As a serious data miner you will often be faced with thousands of candidate features for your prediction or classification application, with most of the features being of little or no value. You'll know that many of these features may be useful only in combination with certain other features while being practically worthless alone or in combination with most others. Some features may have enormous predictive power, but only within a small, specialized area of the feature space. The problems that plague modern data miners are endless. This book helps you solve this problem by presenting modern feature selection techniques and the code to implement them. Some of these techniques are:

- Forward selection component analysis
- Local feature selection
- Linking features and a target with a hidden Markov model
- Improvements on traditional stepwise selection
- Nominal-to-ordinal conversion

All algorithms are intuitively justified and supported by the relevant equations and explanatory material. The author also presents and explains complete, highly commented source code. The example code is in C++ and CUDA C, but Python or other code can be substituted; the algorithm is important, not the code used to write it. You will:

- Combine principal component analysis with forward and backward stepwise selection to identify a compact subset of a large collection of variables that captures the maximum possible variation within the entire set.
- Identify features that may have predictive power over only a small subset of the feature domain. Such features can be profitably used by modern predictive models but may be missed by other feature selection methods.
- Find an underlying hidden Markov model that controls the distributions of feature variables and the target simultaneously. The memory inherent in this method is especially valuable in high-noise applications such as prediction of financial markets.
- Improve traditional stepwise selection in three ways: examine a collection of 'best-so-far' feature sets; test candidate features for inclusion with cross validation to automatically and effectively limit model complexity; and at each step estimate the probability that the results so far, and the improvement obtained by adding a new variable, could be just the product of random good luck.
- Take a potentially valuable nominal variable (a category or class membership) that is unsuitable for input to a prediction model, and assign to each category a sensible numeric value that can be used as a model input.
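The forward selection component analysis named in the contents (chapter 2) greedily picks variables that reproduce as much of the variance of the whole variable set as possible. The book's implementation is in C++ and CUDA C; the sketch below is only a minimal, single-threaded C++ illustration of the greedy idea (standardize columns, score each remaining column by how much of the set's variance it explains, then deflate), not the author's code:

```cpp
#include <cassert>
#include <cmath>
#include <numeric>
#include <vector>

using Matrix = std::vector<std::vector<double>>;  // column-major: data[j] is variable j

// Center a column to zero mean and scale it to unit norm.
static void standardize(std::vector<double>& col) {
    double mean = std::accumulate(col.begin(), col.end(), 0.0) / col.size();
    for (double& v : col) v -= mean;
    double norm = std::sqrt(std::inner_product(col.begin(), col.end(), col.begin(), 0.0));
    if (norm > 0.0) for (double& v : col) v /= norm;
}

// Greedy forward selection: at each step pick the column that explains the
// most remaining variance across all columns, then deflate (project it out
// of the columns not yet chosen).
std::vector<int> forward_select(Matrix data, int n_keep) {
    for (auto& col : data) standardize(col);
    std::vector<int> chosen;
    std::vector<bool> used(data.size(), false);
    for (int step = 0; step < n_keep; ++step) {
        int best = -1;
        double best_score = -1.0;
        for (std::size_t k = 0; k < data.size(); ++k) {
            if (used[k]) continue;
            double norm2 = std::inner_product(data[k].begin(), data[k].end(),
                                              data[k].begin(), 0.0);
            if (norm2 < 1e-12) continue;  // already explained by chosen columns
            double score = 0.0;  // variance of the whole set explained by column k
            for (std::size_t j = 0; j < data.size(); ++j) {
                double dot = std::inner_product(data[k].begin(), data[k].end(),
                                                data[j].begin(), 0.0);
                score += dot * dot / norm2;
            }
            if (score > best_score) { best_score = score; best = static_cast<int>(k); }
        }
        if (best < 0) break;
        used[best] = true;
        chosen.push_back(best);
        // Deflate: remove the chosen column's direction from every remaining column.
        double norm2 = std::inner_product(data[best].begin(), data[best].end(),
                                          data[best].begin(), 0.0);
        for (std::size_t j = 0; j < data.size(); ++j) {
            if (used[j]) continue;
            double dot = std::inner_product(data[best].begin(), data[best].end(),
                                            data[j].begin(), 0.0);
            for (std::size_t i = 0; i < data[j].size(); ++i)
                data[j][i] -= (dot / norm2) * data[best][i];
        }
    }
    return chosen;
}
```

Given two identical columns and one independent column, this picks one copy of the duplicated column first, then the independent column, since the duplicate adds no new variance after deflation.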
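One common way to "assign to each category a sensible numeric value", as the nominal-to-ordinal item above describes, is smoothed target-mean encoding. The book develops its own assignment procedure, so treat this C++ fragment as an illustrative sketch of the general idea, not the author's method:

```cpp
#include <map>
#include <string>
#include <vector>

// Map each category to the mean of the target over that category, smoothed
// toward the global mean so that rare categories are not overfit.
std::map<std::string, double> encode_nominal(const std::vector<std::string>& cats,
                                             const std::vector<double>& target,
                                             double smoothing = 1.0) {
    double global_sum = 0.0;
    std::map<std::string, std::pair<double, int>> stats;  // per-category (sum, count)
    for (std::size_t i = 0; i < cats.size(); ++i) {
        global_sum += target[i];
        auto& s = stats[cats[i]];
        s.first += target[i];
        s.second += 1;
    }
    double global_mean = global_sum / target.size();
    std::map<std::string, double> code;
    for (const auto& [cat, s] : stats)
        code[cat] = (s.first + smoothing * global_mean) / (s.second + smoothing);
    return code;
}
```

The resulting numeric codes can then be fed to any prediction model in place of the raw category labels; in practice the encoding should be fit on training data only, to avoid target leakage.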
ISBN: 9781484259887
Standard No.: 10.1007/978-1-4842-5988-7 (doi)
Subjects--Topical Terms: Data mining.
LC Class. No.: QA76.9.D343 M378 2020
Dewey Class. No.: 006.312
MARC record:
LDR    03884nmm a2200337 a 4500
001    2221399
003    DE-He213
005    20201103153833.0
006    m d
007    cr nn 008maaau
008    201216s2020 cau s 0 eng d
020    $a 9781484259887 $q (electronic bk.)
020    $a 9781484259870 $q (paper)
024 7  $a 10.1007/978-1-4842-5988-7 $2 doi
035    $a 978-1-4842-5988-7
040    $a GP $c GP
041 0  $a eng
050 4  $a QA76.9.D343 $b M378 2020
072 7  $a UNF $2 bicssc
072 7  $a COM021030 $2 bisacsh
072 7  $a UNF $2 thema
072 7  $a UYQE $2 thema
082 04 $a 006.312 $2 23
090    $a QA76.9.D343 $b M423 2020
100 1  $a Masters, Timothy. $3 683540
245 10 $a Modern data mining algorithms in C++ and CUDA C $h [electronic resource] : $b recent developments in feature extraction and selection algorithms for data science / $c by Timothy Masters.
260    $a Berkeley, CA : $b Apress : $b Imprint: Apress, $c 2020.
300    $a ix, 228 p. : $b ill., digital ; $c 24 cm.
505 0  $a 1. Introduction -- 2. Forward Selection Component Analysis -- 3. Local Feature Selection -- 4. Memory in Time Series Features -- 5. Stepwise Selection on Steroids -- 6. Nominal-to-Ordinal Conversion.
520    $a [Publisher's summary; given in full above.]
650  0 $a Data mining. $3 562972
650  0 $a C++ (Computer program language) $3 527229
650 14 $a Data Mining and Knowledge Discovery. $3 898250
650 24 $a Professional Computing. $3 3201325
650 24 $a Statistics, general. $3 896933
650 24 $a Programming Languages, Compilers, Interpreters. $3 891123
710 2  $a SpringerLink (Online service) $3 836513
773 0  $t Springer eBooks
856 40 $u https://doi.org/10.1007/978-1-4842-5988-7
950    $a Professional and Applied Computing (Springer-12059)
Holdings (1 item):
Barcode: W9394978
Location: Electronic resources
Circulation category: 11. Online reading
Material type: E-book
Call number: EB QA76.9.D343 M378 2020
Use type: Normal
Loan status: On shelf
Holds: 0