Graduate Student: 陳家齊 (Chen, Jia-Chi)
Thesis Title: 基於QEMU-virtio CUDA虛擬化解決方案 (CUDA Virtualization Using QEMU and virtio)
Advisor: 李哲榮 (Lee, Che-Rung)
Committee Members: 徐慰中 (Hsu, Wei-Chung), 鍾葉青 (Chung, Yeh-Ching), 洪士灝 (Hung, Shih-Hao)
Degree: Master (碩士)
Department: College of Electrical Engineering and Computer Science, Department of Computer Science
Year of Publication: 2016
Graduation Academic Year: 104
Language: Chinese
Number of Pages: 38
Keywords (Chinese): 虛擬化, CUDA, virtio, qemu
Keywords (English): virtualization, CUDA, virtio, qemu
To address the problem of GPGPU virtualization, this thesis proposes qCUDA: a CUDA virtualization solution for NVIDIA GPUs based on QEMU and virtio. We implemented qCUDA on top of QEMU 2.4.0, NVIDIA CUDA 7.5, and Ubuntu 14.04.3. qCUDA uses API forwarding to let users invoke the NVIDIA CUDA API inside a virtual machine. The qCUDA architecture consists of three parts, from top to bottom: a library, a driver, and a virtual hardware device. For evaluation, we conducted three categories of experiments: memory bandwidth, compute-bound, and memory-bound, comparing qCUDA against native CUDA and against rCUDA, a widely used GPGPU virtualization system. In the memory bandwidth experiment, qCUDA reaches only 50% of native CUDA's performance, but rCUDA reaches only 5%. In the compute-bound experiments, qCUDA is 2000% faster than rCUDA for small data sizes and more than 200% faster for large data sizes. Finally, in the memory-bound experiments, qCUDA is 1000% to 2500% faster than rCUDA.
Virtualization has become a key technology in cloud computing. However, no single GPGPU virtualization solution can satisfy all demands. In this thesis, we propose qCUDA: a GPGPU virtualization method for NVIDIA CUDA based on QEMU and virtio. The architecture of qCUDA consists of three parts: a library, a driver, and a virtual hardware device. The virtualization method of qCUDA is based on API forwarding, which accepts users' invocations of the CUDA API in the virtual machine and forwards them to the physical machine through virtio and QEMU. The experiments evaluate three types of benchmarks, namely bandwidth-bound, compute-bound, and memory-bound, and compare qCUDA with native CUDA and with rCUDA, a popular GPGPU virtualization method. For the bandwidth-bound benchmark, qCUDA reaches 50% of native CUDA's bandwidth performance, while rCUDA reaches only 3%. For the compute-bound benchmark, qCUDA is 2000% faster than rCUDA for small data sizes and 200% faster for large data sizes. For the memory-bound benchmark, qCUDA is 1000% to 2500% faster than rCUDA.