
Graduate Student: Li, Heng (李珩)
Thesis Title: Reconstruction and Forecasting of Feedback Channel State Information Using Deep Learning for Wireless Communication Systems
Advisor: Wang, Chin-Liang (王晉良)
Committee Members: Chen, Yung-Fang (陳永芳); Ku, Sheng-Ju (古聖如); Huang, Yu-Chih (黃昱智)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Institute of Communications Engineering
Year of Publication: 2021
Graduation Academic Year: 109 (2020–2021)
Language: English
Number of Pages: 32
Keywords: SISO (single-input single-output), MIMO (multiple-input multiple-output), CSI (channel state information) feedback, deep learning, artificial intelligence, time series analysis, recurrent neural network


    This thesis presents a deep learning-based scheme that compresses, reconstructs, and forecasts channel state information (CSI) in order to reduce the amount and frequency of CSI feedback in wireless communication systems. The scheme is developed from a composite bidirectional ordered-neurons long short-term memory (CBON-LSTM) model, which adopts a parallel structure considering both present and future data to improve learning performance. Unlike the traditional LSTM, the proposed model introduces a master forget gate and a master input gate in each basic cell to extract data features more precisely. For performance evaluation, experiments are conducted on both time-varying single-input single-output (SISO) and multiple-input multiple-output (MIMO) channels, and the proposed CBON-LSTM scheme is compared with two existing methods for CSI compression and reconstruction: one based on a CSI-sensing neural network (CsiNet) and the other based on a pseudo-3D-A convolutional LSTM CsiNet (ConvlstmCsiNet-A). Most experimental results show that CBON-LSTM achieves considerably better CSI reconstruction performance than CsiNet and ConvlstmCsiNet-A for both SISO and MIMO channels under various CSI compression ratios. Although the CSI forecasting performance of the proposed scheme is inferior to its own reconstruction performance, it still exceeds the CSI reconstruction performance of the other two methods in most cases.
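    As background for the master-gate mechanism mentioned above, the following is a minimal PyTorch sketch of an ordered-neurons LSTM cell in the style of [12], on which the CBON-LSTM cell builds. The class name, tensor shapes, and per-unit (rather than chunk-wise) master gates are illustrative assumptions, not the thesis's implementation; the thesis further composes such cells into a bidirectional, parallel present/future structure.

        import torch
        import torch.nn as nn

        def cumax(x):
            # Cumulative softmax: a monotonically non-decreasing gate in [0, 1]
            # that imposes an ordering (hierarchy) on the hidden units.
            return torch.cumsum(torch.softmax(x, dim=-1), dim=-1)

        class ONLSTMCell(nn.Module):
            # Hypothetical single cell with master forget/input gates, after [12].
            def __init__(self, input_size, hidden_size):
                super().__init__()
                # One linear map yields all six gate pre-activations at once.
                self.linear = nn.Linear(input_size + hidden_size, 6 * hidden_size)

            def forward(self, x, state):
                h, c = state
                f, i, o, g, mf, mi = self.linear(torch.cat([x, h], dim=-1)).chunk(6, dim=-1)
                f, i, o, g = torch.sigmoid(f), torch.sigmoid(i), torch.sigmoid(o), torch.tanh(g)
                master_f = cumax(mf)        # master forget gate (non-decreasing)
                master_i = 1.0 - cumax(mi)  # master input gate (non-increasing)
                overlap = master_f * master_i
                # The master gates confine where the ordinary gates may act, so
                # low-ranked units are erased first and high-ranked units written first.
                f_hat = f * overlap + (master_f - overlap)
                i_hat = i * overlap + (master_i - overlap)
                c = f_hat * c + i_hat * g
                h = o * torch.tanh(c)
                return h, (h, c)

        # Toy usage: one step over a batch of 8 feature vectors.
        cell = ONLSTMCell(input_size=16, hidden_size=32)
        h, state = cell(torch.randn(8, 16), (torch.zeros(8, 32), torch.zeros(8, 32)))

    A bidirectional variant would run a second such cell over the time-reversed sequence and merge the two hidden states, which is roughly the role the "composite bidirectional" structure plays in the scheme described above.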

    Abstract ........................................................ i
    Contents ........................................................ ii
    List of Figures ................................................. iii
    List of Tables .................................................. iv
    I.   Introduction ............................................... 1
    II.  System Model ............................................... 3
    III. CBON-LSTM .................................................. 8
    IV.  Experiments ................................................ 16
         A. Experiment setups of SISO ............................... 17
         B. Experiment setups of MIMO ............................... 19
         C. Result Comparison ....................................... 20
    V.   Conclusion ................................................. 29
    References ...................................................... 30

    List of Figures
    Fig. 1. The concepts of the BON-LSTM ............................ 11
    Fig. 2. The simplified expression of the 4 time steps architecture of the CBON-LSTM method ... 14
    Fig. 3. The 4 time steps architecture of the CBON-LSTM method ... 15

    List of Tables
    TABLE I.    The number of parameters in SISO environment ........ 22
    TABLE II.   The number of parameters in MIMO environments ....... 23
    TABLE III.  The NMSE_rec and ρ_rec in SISO environment .......... 24
    TABLE IV.   The NMSE_rec and ρ_rec in 2×2 MIMO environment ...... 24
    TABLE V.    The NMSE_rec and ρ_rec in 4×4 MIMO environment ...... 25
    TABLE VI.   The NMSE_rec and ρ_rec in 8×8 MIMO environment ...... 26
    TABLE VII.  The NMSE_pre and ρ_pre of predicting CSI in SISO environment ... 27
    TABLE VIII. The NMSE_pre and ρ_pre of predicting CSI in MIMO environments .. 28
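    The tables above report a normalized mean-squared error (NMSE) and a cosine-similarity coefficient ρ for the reconstructed (rec) and predicted (pre) CSI. As a reference for these quantities, here is a small NumPy sketch of the definitions that are standard in the CSI-feedback literature (e.g., [2]); the array shapes and toy data are assumptions for illustration, not the thesis's evaluation code.

        import numpy as np

        def nmse_db(H, H_hat):
            # NMSE = E[ ||H - H_hat||^2 / ||H||^2 ], reported in dB;
            # axis 0 indexes samples, axes 1-2 index the CSI matrix.
            err = np.sum(np.abs(H - H_hat) ** 2, axis=(1, 2))
            pwr = np.sum(np.abs(H) ** 2, axis=(1, 2))
            return 10.0 * np.log10(np.mean(err / pwr))

        def rho(H, H_hat):
            # Average cosine similarity between the true and reconstructed
            # per-subcarrier channel vectors (axis 1 = subcarrier, axis 2 = antenna).
            num = np.abs(np.sum(np.conj(H_hat) * H, axis=2))
            den = np.linalg.norm(H_hat, axis=2) * np.linalg.norm(H, axis=2)
            return np.mean(num / den)

        # Toy check: 100 noisy reconstructions of 64-subcarrier, 4-antenna CSI.
        rng = np.random.default_rng(0)
        H = rng.standard_normal((100, 64, 4)) + 1j * rng.standard_normal((100, 64, 4))
        H_hat = H + 0.1 * (rng.standard_normal(H.shape) + 1j * rng.standard_normal(H.shape))
        print(f"NMSE = {nmse_db(H, H_hat):.2f} dB, rho = {rho(H, H_hat):.4f}")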

    [1] D. J. Love, R. W. Heath, V. K. N. Lau, D. Gesbert, B. D. Rao, and M. Andrews, “An overview of limited feedback in wireless communication systems,” IEEE J. Sel. Areas Commun., vol. 26, no. 8, pp. 1341–1365, Oct. 2008.
    [2] C.-K. Wen, W.-T. Shih, and S. Jin, “Deep learning for massive MIMO CSI feedback,” IEEE Wireless Commun. Lett., vol. 7, no. 5, pp. 748–751, Oct. 2018.
    [3] Z. Lu, J. Wang, and J. Song, “Multi-resolution CSI feedback with deep learning in massive MIMO system,” in Proc. IEEE Int. Conf. Commun. (ICC), Dublin, Ireland, Jun. 2020, pp. 1–6.
    [4] Z. Liu, L. Zhang, and Z. Ding, “Exploiting bi-directional channel reciprocity in deep learning for low rate massive MIMO CSI feedback,” IEEE Wireless Commun. Lett., vol. 8, no. 3, pp. 889–892, Jun. 2019.
    [5] L. Liu, C. Oestges, J. Poutanen, K. Haneda, P. Vainikainen, F. Quitin, F. Tufvesson, and P. D. Doncker, “The COST 2100 MIMO channel model,” IEEE Wireless Commun., vol. 19, no. 6, pp. 92–99, Dec. 2012.
    [6] T. Wang, C.-K. Wen, S. Jin, and G. Y. Li, “Deep learning-based CSI feedback approach for time-varying massive MIMO channels,” IEEE Wireless Commun. Lett., vol. 8, no. 2, pp. 416–419, Apr. 2019.
    [7] X. Li and H. Wu, “Spatio-temporal representation with deep neural recurrent network in MIMO CSI feedback,” IEEE Wireless Commun. Lett., vol. 9, no. 5, pp. 653–657, May 2020.
    [8] R. Pascanu, Ç. Gülçehre, K. Cho, and Y. Bengio, “How to construct deep recurrent neural networks,” in Proc. 2nd Int. Conf. Learn. Represent. (ICLR), Banff, Canada, Apr. 2014, pp. 1–13.
    [9] N. Srivastava, E. Mansimov, and R. Salakhutdinov, “Unsupervised learning of video representations using LSTMs,” in Proc. 32nd Int. Conf. Mach. Learn. (ICML), Lille, France, Jul. 2015, pp. 843–852.
    [10] G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science, vol. 313, no. 5786, pp. 504–507, Jul. 2006.
    [11] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Comput., vol. 9, no. 8, pp. 1735–1780, Nov. 1997.
    [12] Y. Shen, S. Tan, A. Sordoni, and A. Courville, “Ordered neurons: Integrating tree structures into recurrent neural networks,” in Proc. 7th Int. Conf. Learn. Represent. (ICLR), New Orleans, LA, USA, May 2019, pp. 1–14.
    [13] M. Schuster and K. K. Paliwal, “Bidirectional recurrent neural networks,” IEEE Trans. Signal Process., vol. 45, no. 11, pp. 2673–2681, Nov. 1997.
    [14] Z. Cui, R. Ke, and Y. Wang, “Deep bidirectional and unidirectional LSTM recurrent neural network for network-wide traffic speed prediction,” in Proc. 6th Int. Workshop Urban Comput. (UrbComp), Halifax, Canada, Aug. 2017, pp. 1–11.
    [15] A. Graves and J. Schmidhuber, “Framewise phoneme classification with bidirectional LSTM and other neural network architectures,” Neural Netw., vol. 18, no. 5–6, pp. 602–610, Jul.–Aug. 2005.
    [16] A. Graves, N. Jaitly, and A. Mohamed, “Hybrid speech recognition with deep bidirectional LSTM,” in Proc. IEEE Workshop Autom. Speech Recogn. Underst. (ASRU), Olomouc, Czech Republic, Dec. 2013, pp. 273–278.
    [17] K. E. Baddour and N. C. Beaulieu, “Autoregressive modeling for fading channel simulation,” IEEE Trans. Wireless Commun., vol. 4, no. 4, pp. 1650–1662, Apr. 2005.
    [18] A. Jamoos, “Rayleigh fading channel simulation,” MATLAB Central File Exchange, Jun. 2006.
    [19] A. L. Maas, A. Y. Hannun, and A. Y. Ng, “Rectifier nonlinearities improve neural network acoustic models,” in Proc. ICML Workshop Deep Learn. Audio, Speech, Language Process., Atlanta, GA, USA, Jun. 2013, pp. 1–6.
    [20] B. Xu, N. Wang, T. Chen, and M. Li, “Empirical evaluation of rectified activations in convolutional network,” arXiv: 1505.00853v2 [cs.LG], Nov. 2015.
    [21] T. Tieleman and G. Hinton, “Lecture 6.5—RmsProp: Divide the gradient by a running average of its recent magnitude,” COURSERA: Neural Netw. Mach. Learn., 2012.
    [22] S. Ruder, “An overview of gradient descent optimization algorithms,” arXiv: 1609.04747v2 [cs.LG], Jun. 2017.
    [23] S. Merity, N. S. Keskar, and R. Socher, “Regularizing and optimizing LSTM language models,” in Proc. 6th Int. Conf. Learn. Represent. (ICLR), Vancouver, Canada, Apr./May 2018, pp. 1–13.
    [24] Y. Gal and Z. Ghahramani, “A theoretically grounded application of dropout in recurrent neural networks,” in Proc. 30th Int. Conf. Neural Inf. Process. Syst. (NIPS), Barcelona, Spain, Dec. 2016, pp. 1027–1035.
    [25] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” J. Mach. Learn. Res., vol. 15, no. 56, pp. 1929–1958, Jun. 2014.
    [26] M. Soltani, V. Pourahmadi, A. Mirzaei, and H. Sheikhzadeh, “Deep learning-based channel estimation,” IEEE Commun. Lett., vol. 23, no. 4, pp. 652–655, Apr. 2019.
