The Elements of Statistical Learning: Data Mining, Inference, and Prediction (Second Edition), Trevor Hastie

Preface to the Second Edition

In God we trust, all others bring data.
    -William Edwards Deming (1900-1993)[1]

We have been gratified by the popularity of the first edition of The Elements of Statistical Learning. This, along with the fast pace of research in the statistical learning field, motivated us to update our book with a second edition.

We have added four new chapters and updated some of the existing chapters. Because many readers are familiar with the layout of the first edition, we have tried to change it as little as possible. Here is a summary of the main changes:

Chapter: What's new
1. Introduction
2. Overview of Supervised Learning
3. Linear Methods for Regression: LAR algorithm and generalizations of the lasso
4. Linear Methods for Classification: Lasso path for logistic regression
5. Basis Expansions and Regularization: Additional illustrations of RKHS
6. Kernel Smoothing Methods
7. Model Assessment and Selection: Strengths and pitfalls of cross-validation
8. Model Inference and Averaging
9. Additive Models, Trees, and Related Methods
10. Boosting and Additive Trees: New example from ecology; some material split off to Chapter 16
11. Neural Networks: Bayesian neural nets and the NIPS 2003 challenge
12. Support Vector Machines and Flexible Discriminants: Path algorithm for SVM classifier
13. Prototype Methods and Nearest-Neighbors
14. Unsupervised Learning: Spectral clustering, kernel PCA, sparse PCA, non-negative matrix factorization, archetypal analysis, nonlinear dimension reduction, Google page rank algorithm, a direct approach to ICA
15. Random Forests: New
16. Ensemble Learning: New
17. Undirected Graphical Models: New
18. High-Dimensional Problems: New

Some further notes:

- Our first edition was unfriendly to colorblind readers; in particular, we tended to favor red/green contrasts, which are particularly troublesome. We have changed the color palette in this edition to a large extent, replacing the above with an orange/blue contrast.
- We have changed the name of Chapter 6 from "Kernel Methods" to "Kernel Smoothing Methods", to avoid confusion with the machine-learning kernel method that is discussed in the context of support vector machines (Chapter 12) and more generally in Chapters 5 and 14.
- In the first edition, the discussion of error-rate estimation in Chapter 7 was sloppy, as we did not clearly differentiate the notions of conditional error rates (conditional on the training set) and unconditional rates. We have fixed this in the new edition.

[1] On the Web, this quote has been widely attributed to both Deming and Robert W. Hayden; however, Professor Hayden told us that he can claim no credit for this quote, and ironically we could find no "data" confirming that Deming actually said this.