The effect of latent variable distribution and divergence on the performance of deep generative network based semi-supervised learning
Journal of the Korean Data & Information Science Society 2019;30:997-1009
Published online September 30, 2019.
© 2019 Korean Data and Information Science Society.

Younghyun Lee1 · Kyungha Seok2

1,2Department of Statistics, Inje University
Correspondence to: Professor, Institute of Statistical Information, Department of Statistics, Inje University, Kimhae 50834, Korea. E-mail:

This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2017R1E1A1A01075541).
Received August 23, 2019; Revised September 14, 2019; Accepted September 16, 2019.
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Semi-supervised learning uses unlabeled data in addition to labeled data, unlike supervised learning, which uses labeled data only. Because target values are difficult to obtain, research on semi-supervised learning is active. Recently, semi-supervised learning with deep generative models (SSL-DG) has attracted much interest. In this approach, the evidence lower bound, which together with the misclassification rate serves as the loss function, consists of a reconstruction error, a regularization term, and an entropy term. The latent variables required for the reconstruction error and the regularization term are assumed to follow a normal distribution, and the regularization term is computed with the Kullback-Leibler divergence. In this study, we investigate the effect of the latent variable distribution and the divergence on the performance of SSL-DG using the MNIST and Fashion-MNIST data. Experimental results show that SSL-DG performs well when the latent variables follow the normal distribution and the divergence is the Kullback-Leibler, Neyman, or Jeffrey divergence. The other divergences appear to be heavily influenced by the distribution of the latent variables, especially the beta distribution. In addition, when the latent variables follow the Cauchy distribution, SSL-DG yields more robust performance than with other distributions. We therefore conclude that the distribution of the latent variables and the choice of divergence can greatly influence the performance of SSL-DG.
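To make the loss components in the abstract concrete, the following is a minimal sketch of the two terms specific to the baseline setting described above: the closed-form Kullback-Leibler regularization for a diagonal-Gaussian latent variable against a standard-normal prior, and the entropy of the classifier's predictive distribution used for unlabeled data. The function names and plain-Python style are illustrative only, not the authors' implementation.

```python
import math


def gaussian_kl(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dimensions.

    This closed form is the regularization term of the evidence lower bound
    when the approximate posterior over the latent variable is Gaussian.
    """
    return sum(
        0.5 * (math.exp(lv) + m * m - 1.0 - lv)
        for m, lv in zip(mu, log_var)
    )


def categorical_entropy(probs):
    """Entropy of the classifier's predictive distribution q(y|x), the term
    added to the evidence lower bound for unlabeled examples."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)


# When the posterior matches the prior (mu = 0, log_var = 0), the KL term vanishes.
print(gaussian_kl([0.0, 0.0], [0.0, 0.0]))       # 0.0
# A uniform predictive distribution maximizes the entropy term.
print(categorical_entropy([0.5, 0.5]))           # log 2 ≈ 0.6931
```

The experiments in the paper replace the Gaussian assumption and the KL divergence with other latent distributions (e.g., beta, Cauchy) and other divergences (e.g., Neyman, Jeffrey), for which the regularization term generally has no closed form of this kind.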
Keywords : Classification, deep generative network, Kullback-Leibler divergence, semi-supervised learning, variational autoencoder.