Adaptive stochastic gradient method under two mixing heterogenous models
Journal of the Korean Data & Information Science Society 2017;28:1245-55
Published online November 30, 2017
© 2017 Korean Data & Information Science Society.

Sang Jun Moon¹ · Jong-June Jeon²

¹,²Department of Statistics, University of Seoul
Correspondence to: Jong-June Jeon
Assistant professor, Department of Statistics, University of Seoul, Seoul 02504, Korea. E-mail:
Received October 31, 2017; Revised November 14, 2017; Accepted November 14, 2017.
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Online learning is the process of obtaining the solution of a given objective function as data accumulate in real time or in batch units. The stochastic gradient descent method is one of the most widely used methods for online learning. It is not only easy to implement, but also yields a solution with good properties under the assumption that the data-generating model is homogeneous. However, when this homogeneity is violated, the stochastic gradient method can severely mislead the online learning. We assume that the observations arise from two heterogeneous generating models, and we propose a new stochastic gradient method that mitigates the problem caused by the heterogeneous models. We introduce a robust mini-batch optimization method based on statistical tests and investigate the convergence radius of the solution obtained by the proposed method. The theoretical results are confirmed by numerical simulations.
Keywords : Mini-batch, on-line learning, robustness, stochastic gradient descent method
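To make the screening idea in the abstract concrete, the following Python sketch illustrates one way a statistical test can guard a mini-batch SGD update under a two-model contamination. It is not the authors' exact algorithm: the z-test on the batch loss, the threshold z_crit, the warm-up length, and the running-statistics bookkeeping are all illustrative assumptions.

```python
import numpy as np

def robust_sgd(stream, dim, lr=0.1, z_crit=2.58, warmup=10):
    """Mini-batch SGD for linear regression that skips mini-batches whose
    loss is statistically inconsistent with the running loss estimate
    (a sketch of robust screening, not the paper's exact procedure)."""
    w = np.zeros(dim)
    mean_loss, var_loss, n_acc = 0.0, 1.0, 0
    for X, y in stream:                            # one mini-batch per iteration
        resid = X @ w - y
        batch_loss = np.mean(resid ** 2)
        if n_acc > warmup:
            z = (batch_loss - mean_loss) / np.sqrt(var_loss + 1e-12)
            if abs(z) > z_crit:                    # flagged as heterogeneous
                continue                           # reject this mini-batch
        w -= lr * (2.0 / len(y)) * (X.T @ resid)   # SGD step on accepted batch
        n_acc += 1                                 # update running loss statistics
        delta = batch_loss - mean_loss
        mean_loss += delta / n_acc
        var_loss += (delta * (batch_loss - mean_loss) - var_loss) / n_acc
    return w

# Toy stream mixing two generating models: the main model w_true and a
# contaminating model with the coefficient signs flipped.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])

def batches(n=500, b=32, eps=0.1):
    for _ in range(n):
        X = rng.normal(size=(b, 2))
        w = -w_true if rng.random() < eps else w_true
        yield X, X @ w + rng.normal(scale=0.1, size=b)

print(robust_sgd(batches(), dim=2))   # close to w_true despite contamination
```

The sketch rejects a flagged mini-batch outright; an alternative design would down-weight its gradient instead of discarding it. Which choice is principled, and how far the iterates can drift, is exactly the kind of question the paper's convergence-radius analysis addresses.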