Graduate Student: MingShen Lin (林明森)
Thesis Title: Grid-enabled MPI: PACX-MPI Optimization (網格MPI:PACX-MPI最佳化)
Advisor: Yarsun Hsu (許雅三)
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Publication Year: 2005
Graduation Academic Year: 93 (ROC calendar, 2004-05)
Language: English
Number of Pages: 57
Chinese Keywords: 網格 (Grid), 平行程式 (parallel programming)
Foreign Keywords: Grid, Globus, MPI, PACX-MPI
In recent years computer hardware has grown steadily more capable, while software has become more complex and intelligent; at the same time, network infrastructure has advanced rapidly. Using the network to connect and integrate hardware and software resources located at different sites has therefore begun to attract attention, and large-area resource sharing and management has become an important topic.
This thesis begins by describing the concept of the Grid, followed by an introduction to OGSA (Open Grid Service Architecture), OGSI (Open Grid Service Infrastructure), and the Globus Toolkit, and the relationships among them. OGSA describes the infrastructure needed to integrate many resources into a single unified resource; such an integrated environment is called a Grid. OGSI then defines many of the components referred to in OGSA. The Globus Toolkit is the first software suite to implement the OGSI standard.
Among forms of resource sharing, the sharing of computing power is the principal one. Linking several computers to complete one large job yields what is called a cluster; linking multiple clusters to complete an even larger job yields what is called a cluster of clusters. Grid-enabled MPI implementations arose so that MPI programs originally run on a single cluster can run across multiple clusters; their most important feature is the ability to integrate several existing cluster systems into a cluster of clusters, as the sketch below illustrates.
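As a minimal sketch of what such a program looks like, the token ring below uses only standard MPI calls; a grid-enabled MPI can take the same source, unchanged, and spread its ranks across several clusters. Nothing in the source names a machine; mapping ranks to nodes, including nodes in a remote cluster, is the MPI layer's job.

    /* Minimal sketch: a token ring written against the standard MPI API.
     * Nothing here is specific to any one cluster; a grid-enabled MPI
     * such as PACX-MPI can run the same program across clusters. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size, token;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size > 1) {
            if (rank == 0) {
                token = 1;
                MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&token, 1, MPI_INT, size - 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                printf("token travelled the whole ring\n");
            } else {
                MPI_Recv(&token, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                /* The next hop may be in another cluster; the MPI layer
                 * decides how the message actually gets there. */
                MPI_Send(&token, 1, MPI_INT, (rank + 1) % size, 0,
                         MPI_COMM_WORLD);
            }
        }

        MPI_Finalize();
        return 0;
    }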
This thesis introduces three grid-enabled MPI implementations with different architectures: MPICH-Globus2, MPICH-VMI, and PACX-MPI. It also examines IP tunneling and port forwarding as ways to cope with private-IP environments when running a grid-enabled MPI. Finally, the PACX-MPI architecture is optimized, attempting to reduce the communication latency between cluster systems with multithreading. The optimized result is measured from different angles with the Pallas MPI Benchmark, the NAS Parallel Benchmark, and mpi-POVRAY.
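The port-forwarding approach can be pictured with the following minimal sketch of a user-level relay on a gateway host that has a public address: it accepts one connection on a public port and shuttles bytes to a fixed node inside the private network. The addresses and ports here are hypothetical, and in practice an existing mechanism such as an SSH tunnel or kernel-level forwarding would usually play this role.

    /* Sketch of a one-connection TCP relay on a dual-homed gateway.
     * PUBLIC_PORT is reachable from outside; INNER_ADDR is a
     * private-IP compute node. All values are illustrative. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define PUBLIC_PORT 5000          /* port exposed on the gateway  */
    #define INNER_ADDR  "192.168.0.2" /* node inside the private LAN  */
    #define INNER_PORT  5000          /* MPI daemon port on that node */

    static void relay(int a, int b)   /* copy bytes in both directions */
    {
        char buf[4096];
        for (;;) {
            fd_set fds;
            FD_ZERO(&fds);
            FD_SET(a, &fds);
            FD_SET(b, &fds);
            if (select((a > b ? a : b) + 1, &fds, NULL, NULL, NULL) < 0)
                break;
            int from = FD_ISSET(a, &fds) ? a : b;
            int to   = (from == a) ? b : a;
            ssize_t n = read(from, buf, sizeof buf);
            if (n <= 0 || write(to, buf, n) != n)
                break;                /* either side closed: stop      */
        }
    }

    int main(void)
    {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in pub = {0}, inner = {0};

        pub.sin_family = AF_INET;
        pub.sin_addr.s_addr = htonl(INADDR_ANY);
        pub.sin_port = htons(PUBLIC_PORT);
        if (bind(lfd, (struct sockaddr *)&pub, sizeof pub) < 0 ||
            listen(lfd, 1) < 0) {
            perror("listen");
            return 1;
        }

        int cfd = accept(lfd, NULL, NULL); /* remote cluster calls in */

        int ifd = socket(AF_INET, SOCK_STREAM, 0);
        inner.sin_family = AF_INET;
        inner.sin_port = htons(INNER_PORT);
        inet_pton(AF_INET, INNER_ADDR, &inner.sin_addr);
        if (connect(ifd, (struct sockaddr *)&inner, sizeof inner) < 0) {
            perror("connect");
            return 1;
        }

        relay(cfd, ifd);
        close(cfd); close(ifd); close(lfd);
        return 0;
    }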
As computer hardware grows more powerful, software becomes increasingly complex and intelligent. Meanwhile, network infrastructure has also improved rapidly. These advances have created strong interest in sharing computing resources scattered across many different sites, making large-scale resource sharing and management an important issue.
This thesis first describes the concept of the Grid, followed by an introduction to the OGSA (Open Grid Service Architecture) standard, the OGSI (Open Grid Service Infrastructure) standard, the Globus Toolkit, and the relationships among them. The OGSA standard maps out the path for integrating resources into a Grid. The OGSI standard is the most important component of OGSA. The Globus Toolkit is the first full-scale implementation of the OGSI standard.
The sharing of computational resources is the primary interest of resource sharing. A typical case is the so-called cluster, a group of computers working together to solve a large, computation-intensive application. To solve still larger applications, the cluster of clusters, which consists of a group of clusters, has been developed. To let MPI programs function properly across clusters, grid-enabled MPI extensions such as MPICH-Globus2 and MPICH-VMI have been introduced and are briefly discussed here, along with how these implementations work across clusters behind firewalls. In addition, an optimization of PACX-MPI for data transmission between two clusters has been implemented: communication between the two clusters is studied and handled with multiple threads. Performance is measured with the Pallas MPI Benchmark, the NAS Parallel Benchmark, and the Persistence of Vision Raytracer (mpi-POVRAY).
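The multithreaded handling of inter-cluster traffic can be sketched as follows. This is an illustration of the general producer/consumer idea, not PACX-MPI's actual code; the names (msg_t, enqueue, send_to_remote) are hypothetical, and send_to_remote stands in for the real wide-area send. A queue of outgoing messages is drained by several worker threads, so one slow wide-area transfer no longer serializes all traffic between the clusters.

    /* Sketch: multithreaded drain of an outgoing-message queue. */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define NWORKERS 4

    typedef struct msg { struct msg *next; char *buf; size_t len; } msg_t;

    static msg_t *head, *tail;        /* FIFO of outgoing messages */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;

    static void send_to_remote(const char *buf, size_t len)
    {
        /* Stand-in for the real wide-area send, e.g. write() on a
         * TCP socket to the peer cluster's daemon. Hypothetical. */
        printf("sending %zu bytes: %.*s\n", len, (int)len, buf);
    }

    void enqueue(char *buf, size_t len)   /* called by the MPI side */
    {
        msg_t *m = malloc(sizeof *m);
        m->next = NULL; m->buf = buf; m->len = len;
        pthread_mutex_lock(&lock);
        if (tail) tail->next = m; else head = m;
        tail = m;
        pthread_cond_signal(&ready);
        pthread_mutex_unlock(&lock);
    }

    static void *worker(void *arg)
    {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&lock);
            while (!head)
                pthread_cond_wait(&ready, &lock);
            msg_t *m = head;
            head = m->next;
            if (!head) tail = NULL;
            pthread_mutex_unlock(&lock);

            send_to_remote(m->buf, m->len); /* overlaps with the
                                               other workers' sends */
            free(m->buf);
            free(m);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        for (int i = 0; i < NWORKERS; i++)
            pthread_create(&tid, NULL, worker, NULL);
        enqueue(strdup("hello"), 5);
        enqueue(strdup("world"), 5);
        sleep(1);                 /* let the workers drain the queue */
        return 0;
    }

A real implementation must additionally preserve MPI's pairwise message-ordering guarantees, so messages between the same pair of ranks cannot be reordered arbitrarily across workers; the sketch ignores that constraint.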
1 : Ian Foster and Carl Kesselman, The Grid: Blueprint for a New Computing Infrastructure, Morgan Kaufmann Publishers, San Francisco, California, 1999.
2 : Ian Foster, Carl Kesselman, Steven Tuecke, The Anatomy of the Grid: Enabling Scalable Virtual Organizations, International J. Supercomputer Applications, 15(3), 2001.
3 : Ian Foster, C. Kesselman, J. Nick, S. Tuecke, The Physiology of the Grid: An Open Grid Services Architecture for Distributed Systems Integration, Open Grid Service Infrastructure WG, Global Grid Forum, June 22, 2002.
4 : Ian Foster, What is the Grid? A Three Point Checklist, http://www-fp.mcs.anl.gov/~foster/Articles/WhatIsTheGrid.pdf, July 20, 2002.
5 : Ian Foster and Nicholas T. Karonis, A Grid-Enabled MPI: Message Passing in Heterogeneous Distributed Computing Systems, Proc. 1998 SC Conference, November 1998.
6 : David Booth, Hugo Haas, Francis McCabe, Eric Newcomer, Michael Champion, Chris Ferris, and David Orchard, Web Services Architecture, W3C Working Group Note, http://www.w3.org/TR/2004/NOTE-ws-arch-20040211/, February 11, 2004.
7 : Hariharan Balakrishnan and C. Eric Wu, OGSI-based system management: Manageability services for Linux.
8 : William Gropp and Ewing Lusk, An abstract device definition to support the implementation of a high-level point-to-point message-passing interface, Preprint MCS-P342-1193, Argonne National Laboratory, 1994.
9 : Steve Kleiman, Devang Shah, and Bart Smaalders, Programming with Threads, Prentice Hall.
10 : W. Richard Stevens, UNIX Network Programming Volume 1 Networking APIs: sockets and XTI, Prentice Hall.
11 : W. Richard Stevens, UNIX Network Programming Volume 2 Interprocess Communications, Prentice Hall.
12 : Barry Wilkinson and Michael Allen, Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers, Prentice Hall.
13 : Marlon Pierce and Geoffrey Fox, Making Scientific Applications as Web Services, Computing in Science & Engineering, 6(1), Jan.-Feb. 2004, pp. 93-96.
14 : Scott Pakin and Avneesh Pant, VMI 2.0: A Dynamically Reconfigurable Messaging Layer for Availability, Usability, and Management.
15 : Von Welch, Globus Toolkit Firewall Requirements, Globus Project, http://www.globus.org/security/firewalls/Globus%20Firewall%20Requirements-5.pdf, July 22, 2003.
16 : Pradeep Kumar Panjwani, Monitoring and Compression Framework in Virtual Machine Interface 2.0, thesis.
17 : Jay Unger and Matt Haynos, A Visual Tour of Open Grid Services Architecture.
18 : TeraGrid, http://www.teragrid.org/.
19 : MPICH, http://www-unix.mcs.anl.gov/mpi/mpich/.
20 : MiMPI, http://www.arcos.inf.uc3m.es/~mimpi/.
21 : Rainer Keller and Matthias Müller, PACX-MPI, http://www.hlrs.de/organization/pds/projects/pacx-mpi/.
22 : Virtual Machine Interface (VMI), http://vmi.ncsa.uiuc.edu/.
23 : MPICH-Globus2, http://www3.niu.edu/mpi/.
24 : MPICH-V, http://www.lri.fr/~gk/MPICH-V/.
25 : pyGridWare, http://www-itg.lbl.gov/gtg/projects/pyGridWare/.
26 : OGSI::Lite, http://www.sve.man.ac.uk/Research/AtoZ/ILCT.
27 : OGSI.Net, http://www.cs.virginia.edu/~humphrey/GCG/ogsi.net.html.
28 : Public Key Infrastructure, http://www.ietf.org/html.charters/pkix-charter.html.
29 : Pallas MPI Benchmark, http://www.pallas.com/e/products/pmb/.
30 : NAS Parallel Benchmarks (NPB), http://www.nas.nasa.gov/Software/NPB/.
31 : POVRAY, http://www.povray.org.
32 : Leon Verrall, POVRAY with MPI patch, http://www.verrall.demon.co.uk/mpipov/.
33 : Nicholas T. Karonis, Michael E. Papka, Justin Binns, John Bresnahan, Joseph A. Insley, David Jones, Joseph M. Link, High-Resolution Remote Rendering of Large Datasets in a Collaborative Environment.
34 : S. Tuecke, K. Czajkowski, I. Foster, J. Frey, S. Graham, C. Kesselman, T. Maguire, T. Sandholm, D. Snelling, and P. Vanderbilt, Open Grid Services Infrastructure (OGSI) Version 1.0, June 27, 2003.
35 : David Bailey, Tim Harris, William Saphir, Rob van der Wijngaart, Alex Woo, and Maurice Yarrow, The NAS Parallel Benchmarks 2.0, Report NAS-95-020, December 1995, http://www.nas.nasa.gov/Research/Reports/Techreports/1995/PDF/nas-95-020.pdf.
36 : William Saphir, Rob Van der Wijngaart, Alex Woo, and Maurice Yarrow, New Implementations and Results for the NAS Parallel Benchmarks 2, http://www.nas.nasa.gov/Software/NPB/Specs/npb2.2_new_implementations.ps.
37 : Rob F. Van der Wijngaart, NAS Parallel Benchmarks Version 2.4, NAS Technical Report NAS-02-007, October 2002, http://www.nas.nasa.gov/Research/Reports/Techreports/2002/PDF/nas-02-007.pdf.
38 : Markus F.X.J. Oberhumer, Lempel-Ziv-Oberhumer (LZO) compression library, http://www.oberhumer.com/opensource/lzo/.
39 : Globus Toolkit, http://www.globus.org.
40 : Java 2 Enterprise Edition (J2EE), http://java.sun.com/j2ee/overview.html.
41 : NPB 2 Results Report 8/96, http://www.nas.nasa.gov/Software/NPB/Reports/NAS-96-010/npb21results.html.