Pattern Classification, Part 1
"This unique text/professional reference provides the information you need to choose the most appropriate method for a given class of problems, presenting an in-depth, systematic account of the major topics in pattern recognition today. A new edition of a classic work that helped define the field for over a quarter century, this practical book updates and expands the original work, focusing on pattern classification and the immense progress it has experienced in recent years." --Book jacket
From inside the book
Results 1-3 of 54
Page 127
... iteration leads to an improved estimate, labeled by the iteration number i; here, after three iterations the algorithm has converged. We must be careful and note that the EM algorithm leads to the greatest log-likelihood of the ...
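The caution in this snippet reflects a standard property of EM, sketched here for reference (this is the textbook-standard argument, not text quoted from the page above). Writing $\ell(\theta) = \log p(\mathcal{D}; \theta)$ for the observed-data log-likelihood with hidden variables $z$, Jensen's inequality gives, for any $\theta$,

$\ell(\theta) \;\ge\; Q(\theta, \theta^{(i)}) + H^{(i)}, \qquad Q(\theta, \theta^{(i)}) = \mathbb{E}_{z \sim p(z \mid \mathcal{D};\, \theta^{(i)})}\big[\log p(\mathcal{D}, z; \theta)\big],$

where $H^{(i)}$ is the entropy of $p(z \mid \mathcal{D}; \theta^{(i)})$ and does not depend on $\theta$, and the bound holds with equality at $\theta = \theta^{(i)}$. Choosing $\theta^{(i+1)} = \arg\max_\theta Q(\theta, \theta^{(i)})$ therefore guarantees $\ell(\theta^{(i+1)}) \ge \ell(\theta^{(i)})$: the log-likelihood never decreases from iteration to iteration, but the limit is in general only a local maximum of $\ell$, not the global one.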
Page 139
... iterative scheme to maximize model parameters, even when some data are missing. Each iteration employs two steps: the expectation or E step, which requires marginalizing over the missing variables given the current model, and the ...
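As a concrete instance of this two-step scheme, the following is a minimal sketch of EM for a two-component one-dimensional Gaussian mixture, in which the hidden variable is each point's component label. The mixture setting, component count, and all names (em_gmm_1d and so on) are illustrative assumptions, not code from the book; the page 139 discussion treats the more general missing-feature case.

import numpy as np

def em_gmm_1d(x, n_iter=50, seed=0):
    """EM for a two-component 1D Gaussian mixture (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    # Crude initialization: two random data points as means,
    # pooled variance, equal mixing weights.
    mu = rng.choice(x, size=2, replace=False)
    var = np.full(2, x.var())
    pi = np.full(2, 0.5)
    for _ in range(n_iter):
        # Component densities N(x_n; mu_k, var_k), shape (N, 2).
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        # E step: responsibilities r[n, k] = P(component k | x_n, current params),
        # i.e., marginalizing over the hidden labels given the current model.
        r = pi * dens
        r /= r.sum(axis=1, keepdims=True)
        # M step: re-estimate weights, means, and variances from the responsibilities.
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    # Observed-data log-likelihood at the final parameters.
    dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return pi, mu, var, np.log(dens @ pi).sum()

# Example on synthetic data drawn from two Gaussians:
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 200)])
print(em_gmm_1d(x))

Tracking the returned log-likelihood across iterations exhibits the monotone improvement noted in the page 127 snippet.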
Page 528
... iterations. ... the labels assigned to the data, the trajectories are symmetric about the line μ₁ = μ₂. The trajectories lead ... iteration of the classical k-means procedure, each data point is assumed to be in exactly one cluster, as ...
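For contrast with EM's soft responsibilities, here is a minimal sketch of the classical k-means procedure with its hard assignment, in which each data point belongs to exactly one cluster per iteration; the function and variable names are illustrative, not taken from the book.

import numpy as np

def kmeans(x, k, n_iter=100, seed=0):
    """Classical k-means on data x of shape (N, d) (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(n_iter):
        # Hard assignment: each point gets exactly one label,
        # that of its nearest cluster center.
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update each center to the mean of its assigned points
        # (keeping the old center if a cluster ends up empty).
        new_centers = np.array([
            x[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):  # converged
            break
        centers = new_centers
    return centers, labels

# Example: two well-separated 2D blobs.
rng = np.random.default_rng(1)
x = np.vstack([rng.normal(-2.0, 1.0, (100, 2)), rng.normal(3.0, 1.0, (100, 2))])
centers, labels = kmeans(x, k=2)

The hard argmin here is exactly the "exactly one cluster" assumption the snippet refers to; soft-assignment variants replace it with graded cluster memberships.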
Contents
MAXIMUM-LIKELIHOOD AND BAYESIAN PARAMETER ESTIMATION | 84
NONPARAMETRIC TECHNIQUES | 161
LINEAR DISCRIMINANT FUNCTIONS | 215
Copyright
10 other sections not shown
Other editions
Computer Manual in MATLAB to accompany Pattern Classification, David G. Stork, Elad Yom-Tov, No preview available - 2004
Common terms and phrases
analysis approach assume backpropagation Bayes Bayesian bias binary Boltzmann calculate Chapter cluster centers component classifiers Consider convergence corresponding covariance matrix criterion function d-dimensional data set decision boundary denote derivation discriminant function distance distribution entropy error rate feature space FIGURE Gaussian given gradient descent Hidden Markov Models hidden units independent input iteration jackknife estimate labeled large number learning algorithm maximum-likelihood estimate mean methods minimize minimum minimum description length mixture density nearest-neighbor neural networks node nonlinear normal number of clusters number of samples obtain optimal output units p(x|ω) parameters pattern recognition Perceptron points prior probabilities probability density problem procedure random variables randomly Section sequence shown shows simple solution split statistical statistically independent string Suppose target training data training error training patterns training set tree two-category unsupervised learning variance w₁ weight vector x₁ zero