ADMM algorithms in statistics and machine learning
Journal of the Korean Data & Information Science Society 2017;28:1229-44
Published online November 30, 2017
© 2017 Korean Data & Information Science Society.

Hosik Choi1 · Hyunjip Choi2 · Sangun Park3

1,2Department of Applied Statistics, Kyonggi University
3Department of Management Information System, Kyonggi University
Correspondence to: Hosik Choi
Assistant professor, Department of Applied Statistics, Kyonggi University, Gwanggyosan-ro, Yeongtong-gu, Suwon 16227, Korea. E-mail: choi.hosik@gmail.com
Received October 31, 2017; Revised November 14, 2017; Accepted November 21, 2017.
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
In recent years, as demand for data-driven analytical methodologies has increased across various fields, optimization methods have been developed to meet it. In particular, many constrained problems arising in statistics and machine learning can be solved by convex optimization. The alternating direction method of multipliers (ADMM) handles linear constraints effectively and can also be used as a parallel optimization algorithm. ADMM solves a complex original problem by splitting it into subproblems that are easier to optimize and then combining their solutions, which makes it useful for non-smooth or composite objective functions. It is widely used in statistics and machine learning because algorithms can be constructed systematically from duality theory and the proximal operator. In this paper, we examine applications of the ADMM algorithm in various fields related to statistics, focusing on two main points: (1) the splitting strategy for the objective function, and (2) the role of the proximal operator in the Lagrangian method and its dual problem. Along the way, we introduce methodologies that employ regularization. Simulation results are presented to demonstrate the effectiveness of ADMM for the lasso.
Keywords : Constraint, optimization, parallel computing, penalty function, regularization
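To make the splitting strategy and the role of the proximal operator concrete, the following is a minimal sketch of ADMM for the lasso, minimizing 0.5‖Ax − b‖² + λ‖z‖₁ subject to x − z = 0. It is an illustrative implementation, not the authors' code; the fixed penalty parameter rho and iteration count are simplifying assumptions (practical solvers adapt rho and use residual-based stopping rules).

```python
import numpy as np

def soft_threshold(v, k):
    """Proximal operator of k*||.||_1: elementwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """Lasso via ADMM with the splitting
        minimize 0.5*||Ax - b||^2 + lam*||z||_1  subject to  x - z = 0.
    """
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)  # scaled dual variable
    # Cache the Cholesky factor for the quadratic x-subproblem
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(n_iter):
        # x-update: solve (A^T A + rho I) x = A^T b + rho (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: proximal step on the l1 term (soft-thresholding)
        z = soft_threshold(x + u, lam / rho)
        # dual ascent on the scaled multiplier
        u = u + x - z
    return z
```

The x-update is a ridge-type linear solve, the z-update is the proximal operator of the l1 penalty, and the u-update enforces the consensus constraint x = z; this is the systematic "split, solve, combine" pattern the abstract describes.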