

Bayesian analysis of principal component regression model
Journal of the Korean Data & Information Science Society 2019;30:247-59
Published online March 31, 2019;
© 2019 Korean Data and Information Science Society.

Minjung Kyung1

1Department of Statistics, Duksung Women’s University
Correspondence to: Associate Professor, Department of Statistics, Duksung Women’s University, Seoul 132-714, Korea. E-mail:
This work was supported by the Duksung Women’s University research grants 3000002995.
Received January 31, 2019; Revised February 27, 2019; Accepted March 10, 2019.
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Principal component analysis (PCA) regression is well known as a tool for data analysis and dimension reduction in applications throughout science and engineering. Although the interpretation and use of PCA regression remain controversial, it is still a useful tool when multicollinearity exists among the explanatory variables in a regression model. Here we introduce Bayesian inference for PCA regression based on general classes of shrinkage priors. We also discuss a method for choosing the number of principal components, accounting for their linear relationship with the dependent variable, based on the Bayesian information criterion. In applications to real datasets, we observe that for data with p > n, the proposed method selects a number of principal components that yields better prediction, and for variable-selection data, the variables it chooses include those identified by previous methods.
Keywords : Bayesian inference, Bayesian information criteria, principal component regression, singular value decomposition.
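To illustrate the setting the abstract describes, the sketch below is a minimal frequentist analogue of principal component regression with BIC-based selection of the number of components, not the paper's Bayesian shrinkage-prior method. The simulated collinear data, the function `bic_for_k`, and all parameter values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data with strong multicollinearity: p correlated predictors
# driven by 3 latent factors (assumed example, not the paper's data).
n, p = 100, 8
z = rng.normal(size=(n, 3))
X = z @ rng.normal(size=(3, p)) + 0.1 * rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(size=n)

# Center, then obtain principal components via the SVD: Xc = U S V^T.
Xc = X - X.mean(axis=0)
yc = y - y.mean()
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

def bic_for_k(k):
    """Regress y on the first k principal component scores; return BIC."""
    T = Xc @ Vt[:k].T                          # n x k score matrix
    gamma, *_ = np.linalg.lstsq(T, yc, rcond=None)
    resid = yc - T @ gamma
    sigma2 = resid @ resid / n                 # MLE of the error variance
    return n * np.log(sigma2) + (k + 1) * np.log(n)

# Choose the number of components that minimizes BIC.
bics = {k: bic_for_k(k) for k in range(1, p + 1)}
k_best = min(bics, key=bics.get)
```

In this sketch the components are ranked by singular value; the paper instead selects components with regard to their linear relationship with the dependent variable, so the two procedures need not agree.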