Prediction is one of the most important practical applications of statistics, and linear regression is the most common approach to building a predictive model. Traditional linear regression, however, tends to retain too many explanatory variables with little explanatory power. Besides stepwise regression, this problem can be addressed by building the model with an appropriate selection criterion. In recent years, criteria such as AIC, BIC, and the Lasso have been widely used; Sung (2011) proposed an L_1-penalized selection criterion that, in addition to the tuning parameter λ, introduces a threshold parameter τ, yielding better predictive performance. This work instead uses an L_{1/2}-penalized selection criterion, exploiting both selection and shrinkage to obtain a more accurate predictive model.
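To illustrate how an L_1-type penalty performs selection and shrinkage simultaneously, the sketch below implements coordinate-descent Lasso with the soft-thresholding operator. This is a generic illustration of penalized least squares, not the thesis's L_{1/2} method or Sung's (2011) criterion; the function names and simulated data are invented for this example.

```python
import numpy as np

def soft_threshold(z, gamma):
    """Soft-thresholding operator: closed-form minimizer of the
    one-dimensional L1-penalized least-squares problem."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent Lasso (illustrative sketch, not the thesis method).

    Minimizes (1/(2n)) * ||y - X b||^2 + lam * ||b||_1,
    assuming the columns of X are standardized.
    """
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with the j-th coordinate removed.
            r = y - X @ b + X[:, j] * b[j]
            rho = X[:, j] @ r / n
            # Shrink toward zero; small coefficients are set exactly to zero,
            # which is how the penalty performs variable selection.
            b[j] = soft_threshold(rho, lam) / col_sq[j]
    return b

# Simulated example: only the first two predictors matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X = (X - X.mean(axis=0)) / X.std(axis=0)
beta_true = np.array([3.0, -2.0, 0.0, 0.0, 0.0])
y = X @ beta_true + rng.normal(scale=0.5, size=200)
b = lasso_cd(X, y, lam=0.1)
```

With a moderate λ, the irrelevant coefficients are driven exactly to zero (selection) while the relevant ones are slightly biased toward zero (shrinkage); an L_{1/2} penalty shrinks large coefficients less aggressively at the cost of a non-convex objective.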
1.Akaike, H. (1973). Information theory and an extension of the maximum likelihood principle. In Second International Symposium on Information Theory. Edited by Petrov, B. N. and Csáki, F. Akadémiai Kiadó, Budapest, 267-281.
2.Akaike, H. (1974). A New Look at the Statistical Model Identification. IEEE Transactions on Automatic Control, 19, 716-723.
3.Schwarz, G. (1978). Estimating the Dimension of a Model. The Annals of Statistics, 6, 461-464.
4.Tibshirani, R. (1996). Regression Shrinkage and Selection via the Lasso. Journal of the Royal Statistical Society, Series B (Methodological), 58(1), 267-288.
5.Sung, P. Y. (2011). Model Selection for Generalized Linear Models Using Penalized Likelihood.
6.Sakamoto, Y., Ishiguro, M., and Kitagawa, G. (1986). Akaike Information Criterion Statistics. Springer, New York.
7.Avriel, M. (1976). Nonlinear Programming: Analysis and Methods. Prentice-Hall, Inc., Englewood Cliffs, New Jersey.