
Graduate Student: Cheng, Po-Wei (鄭柏偉)
Thesis Title: Deep Learning Aided Sequential Reliability-Boosting Belief Propagation List Decoding for Polar Codes (深度學習輔助的極化碼循序列表可靠性推進置信傳播解碼器)
Advisor: Ueng, Yeong-Luh (翁詠祿)
Committee Members: 王忠炫, 李晃昌
Degree: Master (碩士)
Department: Institute of Communications Engineering, College of Electrical Engineering and Computer Science
Publication Year: 2019
Graduation Academic Year: 107
Language: English
Number of Pages: 57
Keywords (Chinese): 極化碼, 深度學習
Keywords (English): Polar Codes, Deep Learning
  • Polar codes have been a hot topic in 5G communication scenarios. Traditional belief propagation (BP) decoding of polar codes is known to converge slowly at large code lengths and therefore requires many iterations, which limits the achievable throughput. To address this problem, a technique known as the reliability-boosting (RB) decoding scheme was proposed, which improves the convergence speed by exploiting the more reliable messages in the BP decoding process. However, the parameters used in this scheme are sub-optimal. In this thesis, we further improve the RB decoding scheme using deep-learning-based optimization, achieving faster convergence and better error-rate performance. Building on the optimized algorithm, we then propose a cyclic redundancy check (CRC)-aided sequential neural RB-BP list decoding scheme, which achieves lower decoding complexity and better error-rate performance than the previously proposed BP list decoding scheme.
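The core idea of deep-learning-optimized BP decoding, as described in the abstract, is to treat hand-tuned decoder parameters as trainable weights. The following is a minimal numpy sketch of this idea, not the thesis's actual implementation: the check-node update of normalized min-sum decoding gets a scaling weight alpha, which is then fitted by gradient descent to match the exact sum-product update (the LLR distribution, node degree, and training setup here are illustrative assumptions).

```python
import numpy as np

def check_node_sp(llrs):
    # Exact sum-product check-node update: 2 * atanh(prod(tanh(l / 2)))
    prod = np.clip(np.prod(np.tanh(llrs / 2.0)), -0.999999, 0.999999)
    return 2.0 * np.arctanh(prod)

def check_node_nms(llrs, alpha):
    # Normalized min-sum update with a trainable scaling weight alpha
    return alpha * np.prod(np.sign(llrs)) * np.min(np.abs(llrs))

# Training data: random LLR triples; the target is the exact sum-product output
rng = np.random.default_rng(0)
batch = rng.normal(0.0, 2.0, size=(256, 3))
targets = np.array([check_node_sp(row) for row in batch])

def loss(alpha):
    preds = np.array([check_node_nms(row, alpha) for row in batch])
    return np.mean((preds - targets) ** 2)

# Plain gradient descent on alpha, using a central-difference numerical gradient
alpha, lr, eps = 1.0, 0.05, 1e-4
for _ in range(200):
    grad = (loss(alpha + eps) - loss(alpha - eps)) / (2 * eps)
    alpha -= lr * grad

print(f"learned alpha = {alpha:.3f}")  # ends up below 1: min-sum overestimates magnitudes
```

In the thesis's setting the trainable weights would sit inside an unrolled BP decoding graph and be optimized end-to-end (e.g. with Adam in TensorFlow, both of which the work cites); this sketch only illustrates how a fixed decoder parameter becomes a learnable one.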

    Table of Contents
    摘要 (Abstract)
    目錄 (Contents)
    Introduction ----------------------------------------------- 1
    Preliminaries ---------------------------------------------- 4
    Deep Learning Based Reliability-Boosting Belief Propagation - 13
    Sequential Neural RB-BP List Decoding ----------------------- 28
    Conclusion -------------------------------------------------- 53
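The CRC-aided candidate selection underlying the list decoding scheme in the abstract can be sketched as follows. This is a generic illustration, not the thesis's configuration: the CRC polynomial (CRC-3, x^3 + x + 1), the helper names, and the fallback behavior are all assumptions for the example.

```python
# Sketch of CRC-aided selection among list-decoding candidates: each decoder
# in the list outputs a hard-decision bit sequence, and the first candidate
# whose appended CRC checks out is declared the decoding result.

def crc_remainder(bits, poly=(1, 0, 1, 1)):
    # Bitwise long division by the generator polynomial (CRC-3, x^3 + x + 1,
    # chosen only for illustration); returns the remainder bits.
    work = list(bits) + [0] * (len(poly) - 1)
    for i in range(len(bits)):
        if work[i]:
            for j, p in enumerate(poly):
                work[i + j] ^= p
    return work[-(len(poly) - 1):]

def crc_encode(info_bits, poly=(1, 0, 1, 1)):
    # Codeword = info bits followed by the CRC remainder
    return list(info_bits) + crc_remainder(info_bits, poly)

def select_by_crc(candidates, poly=(1, 0, 1, 1)):
    # Return the first candidate passing the CRC, or None if all fail
    # (a real decoder would then fall back, e.g. to the most reliable path).
    for cand in candidates:
        if all(b == 0 for b in crc_remainder(cand, poly)):
            return cand
    return None

codeword = crc_encode([1, 0, 1, 1, 0])                       # valid candidate
corrupted = codeword[:2] + [1 - codeword[2]] + codeword[3:]  # one bit flipped
picked = select_by_crc([corrupted, codeword])
print(picked == codeword)  # True: the CRC rejects the corrupted candidate
```

The "sequential" aspect of the proposed scheme means list candidates are produced and CRC-checked one after another, so decoding can stop as soon as a candidate passes, which is where the complexity saving over running the full list comes from.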

