Stochastic Optimization Algorithms in Machine Learning

Abstract

For optimization problems in machine learning, traditional methods have difficulty with high-dimensional, large-scale data. In recent years there has been much research on large-scale machine learning problems, especially stochastic algorithms. Broadly, stochastic methods can be divided into two classes: first-order gradient methods and second-order Newton-type methods. First-order methods have seen more improvement and research, and are more mature and well developed. They in turn fall into two classes: primal methods, represented by SVRG, SAG, and SAGA, and dual methods, represented by SDCA and SPDC. In addition, acceleration schemes such as Catalyst and Katyusha, which achieve the optimal convergence rate for first-order methods, have been put forward in the last two years. Second-order methods are an important research area; they have better convergence rates but not better practical performance, because they must compute the Hessian matrix. One useful approach is L-BFGS and its variants. In this paper, the author introduces stochastic algorithms in machine learning in detail. Finally, numerical experiments compare some common algorithms and give readers a direct view.

Key Words: Large-scale machine learning problem, Stochastic algorithm, Optimization method
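The primal variance-reduced methods named above can be illustrated with a minimal sketch of SVRG on a toy least-squares problem. The problem data, step size, and loop lengths below are illustrative assumptions for this sketch, not settings taken from the paper's experiments.

```python
import numpy as np

# Minimal SVRG sketch on a least-squares problem f(w) = (1/2n) ||Aw - b||^2.
# Data, step size eta, and loop lengths are illustrative assumptions.
rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true  # noiseless targets, so x_true is the exact minimizer

def grad_i(w, i):
    # Gradient of the i-th component f_i(w) = 0.5 * (a_i^T w - b_i)^2
    return A[i] * (A[i] @ w - b[i])

def full_grad(w):
    # Full gradient averaged over all n components
    return A.T @ (A @ w - b) / n

w = np.zeros(d)
eta, epochs, m = 0.01, 30, n  # step size, outer iterations, inner-loop length
for _ in range(epochs):
    w_snap = w.copy()
    mu = full_grad(w_snap)  # full gradient at the snapshot point
    for _ in range(m):
        i = rng.integers(n)
        # Variance-reduced stochastic gradient: unbiased, with variance
        # shrinking as w and w_snap approach the optimum
        g = grad_i(w, i) - grad_i(w_snap, i) + mu
        w = w - eta * g

print(np.linalg.norm(w - x_true))  # distance to the minimizer; shrinks with epochs
```

The key design point, shared by SAG and SAGA in different forms, is that the correction term `- grad_i(w_snap, i) + mu` keeps the stochastic gradient unbiased while driving its variance to zero, which allows a constant step size and linear convergence where plain SGD needs a decaying step size.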