Stochastic Optimization Algorithms in Machine Learning

Abstract

For optimization problems in machine learning, traditional methods have difficulty with high-dimensional, large-scale data. In recent years there has been extensive research on large-scale machine learning problems, especially on stochastic algorithms. Broadly, stochastic methods divide into two families: first-order gradient methods and second-order Newton-type methods. First-order methods have seen more improvement and study, and are the more mature of the two. They in turn fall into two classes: primal methods, represented by SVRG, SAG, and SAGA, and dual methods, represented by SDCA and SPDC. In addition, acceleration schemes such as Catalyst and Katyusha, which achieve the optimal convergence rate for first-order methods, have been proposed in the last two years. Second-order methods are an important research area; they offer better convergence guarantees but often worse practical performance because they must compute the Hessian matrix. One useful second-order method is L-BFGS and its variants.

This paper introduces stochastic algorithms in machine learning in detail, and concludes with numerical experiments that compare several common algorithms to give readers a direct view.

Key words: large-scale machine learning, stochastic algorithms, optimization methods
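The abstract names SVRG as a representative primal first-order method. A minimal sketch of the SVRG update on a synthetic least-squares problem is given below; the problem setup, step size, and epoch counts are illustrative assumptions, not choices taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem: minimize (1/2n) * ||A w - b||^2,
# i.e. the average of n components f_i(w) = 0.5 * (a_i . w - b_i)^2.
n, d = 200, 10
A = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
b = A @ w_true

def grad_i(w, i):
    """Gradient of the i-th component f_i."""
    return (A[i] @ w - b[i]) * A[i]

def full_grad(w):
    """Gradient of the full objective (average over all components)."""
    return A.T @ (A @ w - b) / n

def svrg(w0, step=0.01, epochs=30, inner=None):
    """SVRG: each epoch takes a snapshot, computes one full gradient there,
    then runs cheap inner steps with a variance-reduced gradient estimate."""
    inner = inner if inner is not None else n
    w = w0.copy()
    for _ in range(epochs):
        w_snap = w.copy()          # snapshot point
        mu = full_grad(w_snap)     # full gradient at the snapshot
        for _ in range(inner):
            i = rng.integers(n)
            # Unbiased, variance-reduced stochastic gradient
            g = grad_i(w, i) - grad_i(w_snap, i) + mu
            w -= step * g
    return w

w_hat = svrg(np.zeros(d))
print(np.linalg.norm(w_hat - w_true))
```

As the iterates approach the optimum, the correction term `grad_i(w, i) - grad_i(w_snap, i)` shrinks, so the estimator's variance vanishes and a constant step size yields linear convergence, which is exactly the advantage over plain SGD that motivates the variance-reduced family (SVRG, SAG, SAGA) discussed in the abstract.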