Signal processing and machine learning with applications
Record Type:
Bibliographic, electronic resource : Monograph/item
Title/Author:
Signal processing and machine learning with applications / by Michael M. Richter ... [et al.].
Added Author:
Richter, Michael M.
Publisher:
Cham : Springer International Publishing : Imprint: Springer, 2022.
Description:
xli, 607 p. : ill., digital ; 24 cm.
Contents:
Part I Realms of Signal Processing -- 1 Digital Signal Representation -- 1.1 Introduction -- 1.2 Numbers -- 1.2.1 Numbers and Numerals -- 1.2.2 Types of Numbers -- 1.2.3 Positional Number Systems -- 1.3 Sampling and Reconstruction of Signals -- 1.3.1 Scalar Quantization -- 1.3.2 Quantization Noise -- 1.3.3 Signal-To-Noise Ratio -- 1.3.4 Transmission Rate -- 1.3.5 Nonuniform Quantizer -- 1.3.6 Companding -- 1.4 Data Representations -- 1.4.1 Fixed-Point Number Representations -- 1.4.2 Sign-Magnitude Format -- 1.4.3 One's-Complement Format -- 1.4.4 Two's-Complement Format -- 1.5 Fix-Point DSP's -- 1.6 Fixed-Point Representations Based on Radix-Point -- 1.7 Dynamic Range -- 1.8 Precision -- 1.9 Background Information -- 1.10 Exercises -- 2 Signal Processing Background -- 2.1 Basic Concepts -- 2.2 Signals and Information -- 2.3 Signal Processing -- 2.4 Discrete Signal Representations -- 2.5 Delta and Impulse Function -- 2.6 Parseval's Theorem -- 2.7 Gibbs Phenomenon -- 2.8 Wold Decomposition -- 2.9 State Space Signal Processing -- 2.10 Common Measurements -- 2.10.1 Convolution -- 2.10.2 Correlation -- 2.10.3 Auto Covariance -- 2.10.4 Coherence -- 2.10.5 Power Spectral Density (PSD) -- 2.10.6 Estimation and Detection -- 2.10.7 Central Limit Theorem -- 2.10.8 Signal Information Processing Types -- 2.10.9 Machine Learning -- 2.10.10 Exercises -- 3 Fundamentals of Signal Transformations -- 3.1 Transformation Methods -- 3.1.1 Laplace Transform -- 3.1.2 Z-Transform -- 3.1.3 Fourier Series -- 3.1.4 Fourier Transform -- 3.1.5 Discrete Fourier Transform and Fast Fourier Transform -- 3.1.6 Zero Padding -- 3.1.7 Overlap-Add and Overlap-Save Convolution Algorithms -- 3.1.8 Short Time Fourier Transform (STFT) -- 3.1.9 Wavelet Transform -- 3.1.10 Windowing Signal and the DCT Transforms -- 3.2 Analysis and Comparison of Transformations -- 3.3 Background Information -- 3.4 Exercises -- 3.5 References -- 4 Digital Filters -- 4.1 Introduction -- 4.1.1 FIR and IIR Filters -- 4.1.2 Bilinear Transform -- 4.2 Windowing for Filtering -- 4.3 Allpass Filters -- 4.4 Lattice Filters -- 4.5 All-Zero Lattice Filter -- 4.6 Lattice Ladder Filters -- 4.7 Comb Filter -- 4.8 Notch Filter -- 4.9 Background Information -- 4.10 Exercises -- 5 Estimation and Detection -- 5.1 Introduction -- 5.2 Hypothesis Testing -- 5.2.1 Bayesian Hypothesis Testing -- 5.2.2 MAP Hypothesis Testing -- 5.3 Maximum Likelihood (ML) Hypothesis Testing -- 5.4 Standard Analysis Techniques -- 5.4.1 Best Linear Unbiased Estimator (BLUE) -- 5.4.2 Maximum Likelihood Estimator (MLE) -- 5.4.3 Least Squares Estimator (LSE) -- 5.4.4 Linear Minimum Mean Square Error Estimator (LMMSE) -- 5.5 Exercises -- 6 Adaptive Signal Processing -- 6.1 Introduction -- 6.2 Parametric Signal Modeling -- 6.2.1 Parametric Estimation -- 6.3 Wiener Filtering -- 6.4 Kalman Filter -- 6.4.1 Smoothing -- 6.5 Particle Filter -- 6.6 Fundamentals of Monte Carlo -- 6.6.1 Importance Sampling (IS) -- 6.7 Non-Parametric Signal Modeling -- 6.8 Non-Parametric Estimation -- 6.8.1 Correlogram -- 6.8.2 Periodogram -- 6.9 Filter Bank Method -- 6.10 Quadrature Mirror Filter Bank (QMF) -- 6.11 Background Information -- 6.12 Exercises -- 7 Spectral Analysis -- 7.1 Introduction -- 7.2 Adaptive Spectral Analysis -- 7.3 Multivariate Signal Processing -- 7.3.1 Sub-band Coding and Subspace Analysis -- 7.4 Wavelet Analysis -- 7.5 Adaptive Beam Forming -- 7.6 Independent Component Analysis (ICA) -- 7.7 Principal Component Analysis (PCA) -- 7.8 Best Basis Algorithms -- 7.9 Background Information -- 7.10 Exercises -- Part II Machine Learning and Recognition -- 8 General Learning -- 8.1 Introduction to Learning -- 8.2 The Learning Phases -- 8.2.1 Search and Utility -- 8.3 Search -- 8.3.1 General Search Model -- 8.3.2 Preference relations -- 8.3.3 Different learning methods -- 8.3.4 Similarities -- 8.3.5 Learning to Recognize -- 8.3.6 Learning again -- 8.4 Background Information -- 8.5 Exercises -- 9 Signal Processes, Learning, and Recognition -- 9.1 Learning -- 9.2 Bayesian Formalism -- 9.2.1 Dynamic Bayesian Theory -- 9.2.2 Recognition and Search -- 9.2.3 Influences -- 9.3 Subjectivity -- 9.4 Background Information -- 9.5 Exercises -- 10 Stochastic Processes -- 10.1 Preliminaries on Probabilities -- 10.2 Basic Concepts of Stochastic Processes -- 10.2.1 Markov Processes -- 10.2.2 Hidden Stochastic Models (HSM) -- 10.2.3 HSM Topology -- 10.2.4 Learning Probabilities -- 10.2.5 Re-estimation -- 10.2.6 Redundancy -- 10.2.7 Data Preparation -- 10.2.8 Proper Redundancy Removal -- 10.3 Envelope Detection -- 10.3.1 Silence Threshold Selection -- 10.3.2 Pre-emphasis -- 10.4 Several Processes -- 10.4.1 Similarity -- 10.4.2 The Local-Global Principle -- 10.4.3 HSM Similarities -- 10.5 Conflict and Support -- 10.6 Examples and Applications -- 10.7 Predictions -- 10.8 Background Information -- 10.9 Exercises -- 11 Feature Extraction -- 11.1 Feature Extractions -- 11.2 Basic Techniques -- 11.2.1 Spectral Shaping -- 11.3 Spectral Analysis and Feature Transformation -- 11.3.1 Parametric Feature Transformations and Cepstrum -- 11.3.2 Standard Feature Extraction Techniques -- 11.3.3 Frame Energy -- 11.4 Linear Prediction Coefficients (LPC) -- 11.5 Linear Prediction Cepstral Coefficients (LPCC) -- 11.6 Adaptive Perceptual Local Trigonometric Transformation (APLTT) -- 11.7 Search -- 11.7.1 General Search Model -- 11.8 Predictions -- 11.8.1 Purpose -- 11.8.2 Linear Prediction -- 11.8.3 Mean Squared Error Minimization -- 11.8.4 Computation of Probability of an Observation Sequence -- 11.8.5 Forward and Backward Prediction -- 11.8.6 Forward-Backward Prediction -- 11.9 Background Information -- 11.10 Exercises -- 12 Unsupervised Learning -- 12.1 Generalities -- 12.2 Clustering Principles -- 12.3 Cluster Analysis Methods -- 12.4 Special Methods -- 12.4.1 K-means -- 12.4.2 Vector Quantization (VQ) -- 12.4.3 Expectation Maximization (EM) -- 12.4.4 GMM Clustering -- 12.5 Background Information -- 12.6 Exercises -- 13 Markov Model and Hidden Stochastic Model -- 13.1 Markov Process -- 13.2 Gaussian Mixture Model (GMM) -- 13.3 Advantages of using GMM -- 13.4 Linear Prediction Analysis -- 13.4.1 Autocorrelation Method -- 13.4.2 Yule-Walker Approach -- 13.4.3 Covariance Method -- 13.4.4 Comparison of Correlation and Covariance methods -- 13.5 The ULS Approach -- 13.6 Comparison of ULS and Covariance Methods -- 13.7 Forward Prediction -- 13.8 Backward Prediction -- 13.9 Forward-Backward Prediction -- 13.10 Baum-Welch Algorithm -- 13.11 Viterbi Algorithm -- 13.12 Background Information -- 13.13 Exercises -- 14 Fuzzy Logic and Rough Sets -- 14.1 Rough Sets -- 14.2 Fuzzy Sets -- 14.2.1 Basis Elements -- 14.2.2 Possibility and Necessity -- 14.3 Fuzzy Clustering -- 14.4 Fuzzy Probabilities -- 14.5 Background Information -- 14.6 Exercises -- 15 Neural Networks -- 15.1 Neural Network Types -- 15.1.1 Neural Network Training -- 15.1.2 Neural Network Topology -- 15.2 Parallel Distributed Processing -- 15.2.1 Forward and Backward Uses -- 15.2.2 Learning -- 15.3 Applications to Signal Processing -- 15.4 Background Information -- 15.5 Exercises -- Part III Real Aspects and Applications -- 16 Noisy Signals -- 16.1 Introduction -- 16.2 Noise Questions -- 16.3 Sources of Noise -- 16.4 Noise Measurement -- 16.5 Weights and A-Weights -- 16.6 Signal to Noise Ratio (SNR) -- 16.7 Noise Measuring Filters and Evaluation -- 16.8 Types of noise -- 16.9 Origin of noises -- 16.10 Box Plot Evaluation -- 16.11 Individual noise types -- 16.11.1 Residual -- 16.11.2 Mild -- 16.11.3 Steady-unsteady Time varying Noise -- 16.11.4 Strong Noise -- 16.12 Solution to Strong Noise: Matched Filter -- 16.13 Background Information -- 16.14 Exercises -- 17 Reasoning Methods and Noise Removal -- 17.1 Generalities -- 17.2 Special Noise Removal Methods -- 17.2.1 Residual Noise -- 17.2.2 Mild Noise -- 17.2.3 Steady-Unsteady Noise -- 17.2.4 Strong Noise -- 17.3 Poisson Distribution -- 17.3.1 Outliers and Shots -- 17.3.2 Underlying probability of Shots -- 17.4 Kalman Filter -- 17.4.1 Prediction Estimates -- 17.4.2 White noise Kalman filtering -- 17.4.3 Application of Kalman filter -- 17.5 Classification, Recognition and Learning -- 17.5.1 Summary of the used concepts -- 17.6 Principal Component Analysis (PCA) -- 17.7 Reasoning Methods -- 17.7.1 Case-Based Reasoning (CBR) -- 17.8 Background Information -- 17.9 Exercises -- 18 Audio Signals and Speech Recognition -- 18.1 Generalities of Speech -- 18.2 Categories of Speech Recognition -- 18.3 Automatic Speech Recognition -- 18.3.1 System Structure -- 18.4 Speech Production Model -- 18.5 Acoustics -- 18.6 Human Speech Production -- 18.6.1 The Human Speech Generation -- 18.6.2 Excitation -- 18.6.3 Voiced Speech -- 18.6.4 Unvoiced Speech -- 18.7 Silence Regions -- 18.8 Glottis -- 18.9 Lips -- 18.10 Plosive Speech Source -- 18.11 Vocal-Tract -- 18.12 Parametric and Non-Parametric Models -- 18.13 Formants -- 18.14 Strong Noise -- 18.15 Background Information -- 18.16 Exercises -- 19 Noisy Speech -- 19.1 Introduction -- 19.2 Colored Noise -- 19.2.1 Additional types of Colored Noise -- 19.3 Poisson Processes and Shots -- 19.4 Matched Filters -- 19.5 Shot Noise -- 19.6 Background Information -- 19.7 Exercises -- 20 Aspects Of Human Hearing -- 20.1 Human Ear -- 20.2 Human Auditory System -- 20.3 Critical Bands and Scales -- 20.3.1 Mel Scale -- 20.3.2 Bark Scale -- 20.3.3 Erb Scale -- 20.3.4 Greenwood Scale -- 20.4 Filter Banks -- 20.4.1 ICA Network -- 20.4.2 Auditory Filter Banks -- 20.4.3 Filter Banks -- 20.4.4 Mel Critical Filter Bank -- 20.5 Psycho-acoustic Phenomena -- 20.5.1 Perceptual Measurement -- 20.5.2 Human Hearing and Perception -- 20.5.3 Sound Pressure Level (SPL) -- 20.5.4 Absolute Threshold of Hearing (ATH) -- 20.6 Perceptual Adaptation -- 20.7 Auditory System and Hearing Model -- 20.8 Auditory Masking and Masking Frequency -- 20.
Contained By:
Springer Nature eBook
Subject:
Signal processing -- Digital techniques.
Electronic Resource:
https://doi.org/10.1007/978-3-319-45372-9
ISBN:
9783319453729
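
An ISBN-13 such as the one above carries a self-checking final digit: the digits are weighted alternately by 1 and 3, and a valid number has a weighted sum divisible by 10. A minimal sketch in plain Python (the standard ISBN-13 check, nothing specific to this catalog system assumed):

def isbn13_ok(isbn: str) -> bool:
    # Standard ISBN-13 check: alternate weights 1,3 across all 13 digits;
    # the number is valid when the weighted sum is a multiple of 10.
    digits = [int(c) for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    return sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits)) % 10 == 0

print(isbn13_ok("9783319453729"))  # electronic ISBN above -> True
print(isbn13_ok("9783319453712"))  # paper ISBN from the MARC 020 field -> True

Both ISBNs recorded for this title pass the check, which is a quick way to catch transcription errors in catalog data.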
Abstract:
Signal processing captures, interprets, describes and manipulates physical phenomena. Mathematics, statistics, probability, and stochastic processes are among the signal processing languages we use to interpret real-world phenomena, model them, and extract useful information. This book presents different kinds of signals humans use and applies them to human-machine interaction and communication. Signal Processing and Machine Learning with Applications presents methods used to perform various machine learning and artificial intelligence tasks, together with their applications. It is organized in three parts: Realms of Signal Processing; Machine Learning and Recognition; and Advanced Applications and Artificial Intelligence. The comprehensive coverage is accompanied by numerous examples, questions with solutions, and historical notes. The book is intended for advanced undergraduate and postgraduate students, researchers, and practitioners engaged with signal processing, machine learning, and their applications.
Standard No.: 10.1007/978-3-319-45372-9 (DOI)
LC Class. No.: TK5102.9
Dewey Class. No.: 621.3822
MARC Record:
LDR 12022nmm a2200337 a 4500
001 2303900
003 DE-He213
005 20220929203811.0
006 m d
007 cr nn 008maaau
008 230409s2022 sz s 0 eng d
020 $a 9783319453729 $q (electronic bk.)
020 $a 9783319453712 $q (paper)
024 7 $a 10.1007/978-3-319-45372-9 $2 doi
035 $a 978-3-319-45372-9
040 $a GP $c GP
041 0 $a eng
050 4 $a TK5102.9
072 7 $a UYQ $2 bicssc
072 7 $a COM004000 $2 bisacsh
072 7 $a UYQ $2 thema
082 0 4 $a 621.3822 $2 23
090 $a TK5102.9 $b .S578 2022
245 0 0 $a Signal processing and machine learning with applications $h [electronic resource] / $c by Michael M. Richter ... [et al.].
260 $a Cham : $b Springer International Publishing : $b Imprint: Springer, $c 2022.
300 $a xli, 607 p. : $b ill., digital ; $c 24 cm.
650 0 $a Signal processing $x Digital techniques. $3 624853
650 0 $a Machine learning. $3 533906
650 0 $a Application software. $3 527258
650 1 4 $a Artificial Intelligence. $3 769149
650 2 4 $a Digital and Analog Signal Processing. $3 3538815
650 2 4 $a Data Mining and Knowledge Discovery. $3 898250
700 1 $a Richter, Michael M. $3 1070010
710 2 $a SpringerLink (Online service) $3 836513
773 0 $t Springer Nature eBook
856 4 0 $u https://doi.org/10.1007/978-3-319-45372-9
950 $a Computer Science (SpringerNature-11645)
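
The MARC block above uses this page's flat display convention: a three-character tag, optional indicator digits, then subfields introduced by $ codes (the leader and control fields 001 through 008 carry raw data instead). A minimal, dependency-free Python sketch of how such a display can be read back into structured fields; the one-line-per-field layout is an assumption about this rendering, not the ISO 2709 exchange format:

import re

def parse_marc_display(text: str):
    # Parse "TAG [indicators] $a value $b value ..." display lines.
    # Control fields (LDR, 001-008) have no indicators or subfields.
    fields = []
    for line in text.strip().splitlines():
        tag, _, rest = line.partition(" ")
        rest = rest.strip()
        if tag == "LDR" or (tag.isdigit() and int(tag) < 10):
            fields.append((tag, None, rest))  # control field: raw data only
            continue
        inds, _, body = rest.partition("$")   # indicators precede the first $
        subs = re.findall(r"\$(\w)\s*([^$]*)", "$" + body)
        fields.append((tag, inds.strip(), [(c, v.strip()) for c, v in subs]))
    return fields

sample = """001 2303900
245 0 0 $a Signal processing and machine learning with applications $h [electronic resource] / $c by Michael M. Richter ... [et al.]."""
for tag, inds, data in parse_marc_display(sample):
    print(tag, inds, data)

Run on the 245 line, for example, the sketch separates the title ($a), medium ($h), and statement of responsibility ($c), which is the structure a cataloging system indexes.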
Holdings (1 item):
Barcode: W9445449
Location: Electronic Resources
Circulation Category: 11. Online Reading_V
Material Type: eBook
Call Number: EB TK5102.9
Use Type: Normal
Loan Status: On shelf
Attachments: 0