Pattern Classification, Part 1

"This unique text/professional reference provides the information you need to choose the most appropriate method for a given class of problems, presenting an in-depth, systematic account of the major topics in pattern recognition today. A new edition of a classic work that helped define the field for over a quarter century, this practical book updates and expands the original work, focusing on pattern classification and the immense progress it has experienced in recent years." (Book Jacket)
From inside the book
Page 226
... HESSIAN MATRIX, NEWTON'S ALGORITHM: $J(\mathbf{a}) \approx J(\mathbf{a}(k)) + \nabla J^{t}(\mathbf{a} - \mathbf{a}(k)) + \tfrac{1}{2}(\mathbf{a} - \mathbf{a}(k))^{t} \mathbf{H} (\mathbf{a} - \mathbf{a}(k))$, (13) where $\mathbf{H}$ is the Hessian matrix of second partial derivatives $\partial^{2}J/\partial a_{i}\,\partial a_{j}$ evaluated at $\mathbf{a}(k)$ ...
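The second-order expansion in the snippet above is what drives Newton's algorithm: minimizing the quadratic approximation around a(k) yields the update a(k+1) = a(k) − H⁻¹∇J. A minimal sketch of one such step (the quadratic criterion J, and names like `newton_step`, are illustrative choices here, not taken from the book):

```python
import numpy as np

def newton_step(grad, hessian, a):
    """One Newton update: a - H^{-1} grad(J), solving H d = grad instead of inverting H."""
    return a - np.linalg.solve(hessian(a), grad(a))

# Illustrative quadratic criterion J(a) = 0.5 a'Qa - b'a,
# so grad J = Qa - b and the Hessian H = Q (constant).
Q = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda a: Q @ a - b
hessian = lambda a: Q

a = np.zeros(2)
a = newton_step(grad, hessian, a)
# For a quadratic criterion, a single Newton step lands exactly on the
# minimizer, i.e. the solution of Q a = b.
```

For non-quadratic criteria the step is iterated; the appeal of Newton's algorithm over plain gradient descent is that it uses the curvature in H, at the cost of forming and solving with the Hessian each iteration.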
Page 332
... Hessian matrix in line 3 is particularly simple for a diagonal matrix. The above algorithm terminates when the error is greater than a criterion initialized to be 0. Another approach is to change line 6 to terminate when the change in ...
Page 341
... Hessian matrix for a sum-squared-error criterion in a three-layer network, as given in Eqs. 47 and 48. 32. Repeat Problem 31, but for a cross-entropy error criterion. 33. Suppose a Hessian matrix for an error function is ...
Contents
MAXIMUM-LIKELIHOOD AND BAYESIAN | 84
NONPARAMETRIC TECHNIQUES | 161
LINEAR DISCRIMINANT FUNCTIONS | 215
Copyright
10 other sections not shown
Other editions
Computer Manual in MATLAB to accompany Pattern Classification, by David G. Stork and Elad Yom-Tov, 2004
Common terms and phrases
analysis approach assume backpropagation Bayes Bayesian bias binary Boltzmann calculate Chapter cluster centers component classifiers Consider convergence corresponding covariance matrix criterion function d-dimensional data set decision boundary denote derivation discriminant function distance distribution entropy error rate feature space FIGURE Gaussian given gradient descent Hidden Markov Models hidden units independent input iteration jackknife estimate labeled large number learning algorithm maximum-likelihood estimate mean methods minimize minimum minimum description length mixture density nearest-neighbor neural networks node nonlinear normal number of clusters number of samples obtain optimal output units p(x|ω) parameters pattern recognition Perceptron points prior probabilities probability density problem procedure random variables randomly Section sequence shown shows simple solution split statistical statistically independent string Suppose target training data training error training patterns training set tree two-category unsupervised learning variance w₁ weight vector x₁ zero