Pattern Classification, Part 1

"This unique text/professional reference provides the information you need to choose the most appropriate method for a given class of problems, presenting an in-depth, systematic account of the major topics in pattern recognition today. A new edition of a classic work that helped define the field for over a quarter century, this practical book updates and expands the original work, focusing on pattern classification and the immense progress it has experienced in recent years." --Book jacket
From inside the book
Page 138
... convergence criterion is met (e.g., sufficiently small change in the estimated values of the parameters on subsequent iterations). This is the Baum-Welch or forward-backward algorithm, an example of a generalized expectation ...
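The snippet describes the Baum-Welch (forward-backward) re-estimation loop for hidden Markov models, iterated until the change in the estimated parameters is sufficiently small. Below is a minimal sketch of that loop for a discrete-emission HMM; the function names and the unscaled recursions are our own simplifications, not the book's code, and the lack of scaling makes it numerically safe only for short observation sequences.

    import numpy as np

    def forward_backward(A, B, pi, obs):
        # E-step recursions for a discrete-emission HMM (no scaling; sketch only).
        # A: (n, n) transitions; B: (n, m) emissions; pi: (n,) initial
        # distribution; obs: (T,) observed symbol indices.
        T, n = len(obs), len(pi)
        alpha = np.zeros((T, n))
        beta = np.zeros((T, n))
        alpha[0] = pi * B[:, obs[0]]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        beta[-1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
        return alpha, beta

    def baum_welch(A, B, pi, obs, tol=1e-6, max_iter=200):
        # Iterate E- and M-steps until the largest parameter change
        # on subsequent iterations is sufficiently small.
        obs = np.asarray(obs)
        for _ in range(max_iter):
            alpha, beta = forward_backward(A, B, pi, obs)
            likelihood = alpha[-1].sum()
            gamma = alpha * beta / likelihood            # P(state at t | obs)
            T, n = alpha.shape
            xi = np.zeros((T - 1, n, n))                 # P(i at t, j at t+1 | obs)
            for t in range(T - 1):
                xi[t] = (alpha[t][:, None] * A
                         * B[:, obs[t + 1]] * beta[t + 1]) / likelihood
            A_new = xi.sum(0) / gamma[:-1].sum(0)[:, None]
            B_new = np.zeros_like(B)
            for k in range(B.shape[1]):
                B_new[:, k] = gamma[obs == k].sum(0)
            B_new /= gamma.sum(0)[:, None]
            pi_new = gamma[0]
            delta = max(np.abs(A_new - A).max(), np.abs(B_new - B).max())
            A, B, pi = A_new, B_new, pi_new
            if delta < tol:                              # convergence criterion met
                break
        return A, B, pi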
Page 166
... converge to the unknown density p(x). In discussing convergence, we must recognize that we are talking about the convergence of a sequence of random variables, because for any fixed x the value of p_n(x) depends on the random ...
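The excerpt concerns the convergence of the Parzen-window estimate p_n(x) to the true density p(x): for a fixed x, p_n(x) is a random variable because it depends on which samples happen to be drawn. The small sketch below illustrates this numerically for a one-dimensional Gaussian; the window-width schedule h_n = 1/sqrt(n) (so that h_n shrinks while n*h_n grows without bound) is a standard textbook choice, not taken from the snippet itself.

    import numpy as np

    def parzen_estimate(x, samples, h):
        # p_n(x) = (1/n) * sum_i (1/h) * phi((x - x_i)/h), Gaussian window phi.
        u = (x - samples) / h
        return np.mean(np.exp(-0.5 * u**2) / (h * np.sqrt(2.0 * np.pi)))

    rng = np.random.default_rng(0)
    x0 = 0.5                                   # fixed evaluation point
    p_true = np.exp(-0.5 * x0**2) / np.sqrt(2.0 * np.pi)
    for n in (100, 1_000, 10_000):
        h_n = 1.0 / np.sqrt(n)                 # h_n -> 0 while n * h_n -> infinity
        draws = rng.standard_normal(n)         # true p(x) is standard normal here
        est = parzen_estimate(x0, draws, h_n)
        print(f"n={n:>6}  p_n(x0)={est:.4f}  p(x0)={p_true:.4f}")

Rerunning with a different seed gives a different p_n(x0) for each n, which is exactly the sense in which the estimate is a sequence of random variables; as n grows, the spread of those values shrinks.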
Page 253
... converge to some limiting value ||e||^2. But for convergence to take place, e+(k) must converge to zero, so that all the positive components of e(k) must converge to zero. Because e^t(k)b = 0 for all k, it follows that ...
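This excerpt appears to come from the convergence argument for the Ho-Kashyap procedure, in which the error vector e(k) = Ya(k) - b(k) converges in norm and its positive part e+(k) must go to zero. A minimal sketch of that procedure follows, under the assumption that the snippet is indeed discussing Ho-Kashyap; the step size eta, initial margin b0, and stopping tolerance are illustrative choices, not values from the book.

    import numpy as np

    def ho_kashyap(Y, eta=0.5, b0=0.1, tol=1e-8, max_iter=10_000):
        # Seek a with Y a = b > 0 for normalized samples Y (rows are samples).
        # e(k) = Y a(k) - b(k); only its positive part e+ enlarges b,
        # so convergence requires e+(k) -> 0.
        n_samples = Y.shape[0]
        b = np.full(n_samples, b0)
        a = np.linalg.pinv(Y) @ b               # MSE solution for the current b
        for _ in range(max_iter):
            e = Y @ a - b
            e_plus = 0.5 * (e + np.abs(e))      # positive part of the error vector
            if np.abs(e).max() < tol:           # e -> 0: separating vector found
                break
            b = b + 2.0 * eta * e_plus          # margins only ever increase
            a = np.linalg.pinv(Y) @ b
        return a, b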
Contents
MAXIMUM-LIKELIHOOD AND BAYESIAN PARAMETER ESTIMATION    84
NONPARAMETRIC TECHNIQUES    161
LINEAR DISCRIMINANT FUNCTIONS    215
Copyright
10 other sections not shown
Other editions

Computer Manual in MATLAB to accompany Pattern Classification, by David G. Stork and Elad Yom-Tov (2004). No preview available.