scientists throughout industry and academia are already using CUDA to achieve dramatic speedups on production and research codes. In this paper, we propose a hybrid parallel programming approach using CUDA and MPI, which partitions loop iterations according to the number of C1060 GPU nodes in a GPU cluster consisting of one C1060 and one S1070. Loop iterations assigned to one MPI process are processed in parallel by CUDA, run by the processor cores in the same computational node.

Keywords: CUDA, GPU, MPI, OpenMP, hybrid, parallel programming

I. INTRODUCTION

Nowadays, NVIDIA's CUDA [1], [16] is a general-purpose scalable parallel programming model for writing highly parallel applications. It provides several key abstractions: a hierarchy of thread blocks, shared memory, and barrier synchronization. This model has proven quite successful at programming multithreaded many-core GPUs and scales transparently to hundreds of cores: scientists throughout industry and academia are already using CUDA [1], [16] to achieve dramatic speedups on production and research codes. NVIDIA builds its CUDA chips from hundreds of cores, and here we use NVIDIA hardware as the computing equipment for parallel computing. This paper proposes a solution that not only simplifies the use of hardware acceleration in conventional general-purpose applications, but also keeps the application code portable. We propose a parallel programming approach using hybrid CUDA, OpenMP and MPI [3] programming, which partitions loop iterations according to the performance weighting of the multi-core [4] nodes in a cluster. Because iterations assigned to one MPI process are processed in parallel by OpenMP threads run by the processor cores in the same computational node, the number of loop iterations allocated to one computational node at each scheduling step depends on the number of processor cores in that node.
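The following minimal sketch illustrates this structure. It is our illustration rather than the paper's code: it assumes MPI_Init has already been called, splits the N iterations evenly across ranks, and uses a placeholder loop body compute(i).

    #include <mpi.h>
    #include <omp.h>

    void compute(int i);                      /* hypothetical per-iteration work */

    /* Outer level: each MPI process takes a contiguous block of the loop.
       Inner level: OpenMP threads on that node share the block. */
    void hybrid_loop(int N)
    {
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int chunk = (N + size - 1) / size;    /* iterations per MPI process */
        int begin = rank * chunk;
        int end   = begin + chunk < N ? begin + chunk : N;

        #pragma omp parallel for              /* one thread per processor core */
        for (int i = begin; i < end; i++)
            compute(i);
    }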

In this paper, we propose a general approach that uses performance functions to estimate performance weights for each node. To verify the proposed approach, a heterogeneous cluster and a homogeneous cluster were built. In our implementation, the master node also participates in computation, whereas in previous schemes only slave nodes do computation work. Empirical results show that in both heterogeneous and homogeneous cluster environments, the proposed approach improved performance over all previous schemes.
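A minimal sketch of the allocation step follows; it is ours, and it assumes the weights w[j] have already been produced by some performance function, which is not shown.

    /* Give node j a share of the N loop iterations proportional to its
       performance weight w[j]; count[j] receives its allocation. */
    void partition_by_weight(int N, int nodes, const double w[], int count[])
    {
        double total = 0.0;
        for (int j = 0; j < nodes; j++)
            total += w[j];

        int assigned = 0;
        for (int j = 0; j < nodes; j++) {
            count[j] = (int)(N * w[j] / total);   /* proportional share */
            assigned += count[j];
        }
        count[0] += N - assigned;   /* rounding remainder goes to the master,
                                       which also participates in computation */
    }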

The rest of this paper is organized as follows. In Section 2, we introduce several typical and well-known self-scheduling schemes and a famous benchmark used to analyze computer system performance. In Section 3, we define our model and describe our approach. Our system configuration is then specified in Section 4, and experimental results for three types of application program are presented. Concluding remarks and future work are given in Section 5.

II. BACKGROUND REVIEW

A. History of GPU and CUDA

In the past, we had to use more than one computer with multiple CPUs for parallel computing. As the history of chips shows, early display chips did not need much computing power; gradually, games and 3D graphics created new demands, 3D accelerator cards appeared, and the separate display chip grew into a processor comparable to the CPU, that is, the GPU. We know that GPU computing can give us the answers we want, but why do we choose to use the GPU? A comparison of current CPUs and GPUs makes the case. First, a CPU today has at most about eight cores, but the GPU has grown to 260 cores; even though each GPU core runs at a relatively low frequency, we believe the combined power of so many parallel cores can exceed that of a single-issue CPU. Next, the GPU has its own on-board memory; comparing CPU access to main memory with GPU access to GPU memory, we find that GPU memory access is faster than CPU memory access by about ten times, a gap of fully 90 GB/s. This is quite an alarming gap, and it also means that computations that access large amounts of data have much to gain from the GPU.

The CPU uses advanced flow control, such as branch prediction or delayed branching, and a large cache to reduce memory access latency, while the GPU has a small cache and relatively simple flow control. Instead, the GPU's method is to use a large number of computing units to cover up the problem of memory latency: assume one GPU memory access takes 5 seconds; if 100 threads access memory simultaneously, the total time is still 5 seconds. Assume instead that one CPU memory access takes 0.1 seconds; if 100 threads access memory one after another, the time is 10 seconds. Therefore, GPU parallel processing can hide memory access latency and even beat the CPU's speed. The GPU is designed such that more transistors are devoted to data processing rather than data caching and flow control, as schematically illustrated by Figure 1.
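The arithmetic in this example can be restated compactly: with n threads, an overlappable GPU access latency L, and a serialized CPU access latency l, the figures above give

    T_{\mathrm{GPU}} \approx L = 5\ \mathrm{s}, \qquad
    T_{\mathrm{CPU}} \approx n \cdot l = 100 \times 0.1\ \mathrm{s} = 10\ \mathrm{s},

so the GPU wins whenever enough concurrent threads are available to keep the memory system busy.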

Therefore, we take advantage of the GPU's strength in arithmetic logic and try to use NVIDIA's many cores to help us with heavy computation, with NVIDIA providing the API for programming this large number of cores in parallel. Must we use the form of GPU computing provided by NVIDIA Corporation to run it? Not really. We can use NVIDIA CUDA, ATI CTM, or Apple's OpenCL (Open Computing Language). CUDA is one of the earliest of these languages and the one most people use at this stage, but CUDA supports only NVIDIA's own graphics cards; at this stage, almost all graphics cards used for GPU computation are from NVIDIA. ATI also developed its own language, CTM, and Apple proposed OpenCL (Open Computing Language), which has been supported by both NVIDIA and ATI, while ATI has since given up CTM in its favor. Because of the GPU's graphics heritage, GPUs usually supported only single-precision floating-point operations, and in science, precision is a very important indicator; therefore, the computing graphics cards introduced this year support double-precision floating-point operations.

B. CUDA Programming

CUDA (an acronym for Compute Unified Device Architecture) is a parallel computing [2] architecture developed by NVIDIA. CUDA is the computing engine in NVIDIA graphics processing units (GPUs) that is accessible to software developers through industry-standard programming languages. The CUDA software stack is composed of several layers, as illustrated in Figure 2: a hardware driver, an application programming interface (API) and its runtime, and two higher-level mathematical libraries of common usage, CUFFT [17] and CUBLAS [18]. The hardware has been designed to support lightweight driver and runtime layers, resulting in high performance.
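As an example of using one of these libraries, the sketch below (ours, not code from the paper) multiplies two matrices already resident in GPU memory through the CUBLAS v2 interface; the device buffers d_A, d_B, d_C and their allocation are assumed to exist elsewhere.

    #include <cublas_v2.h>

    /* Compute C = A * B for n x n single-precision matrices whose
       device buffers were allocated and filled elsewhere.  CUBLAS
       assumes column-major (Fortran-style) storage. */
    void gemm_on_gpu(int n, const float *d_A, const float *d_B, float *d_C)
    {
        cublasHandle_t handle;
        cublasCreate(&handle);

        const float alpha = 1.0f, beta = 0.0f;
        cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                    n, n, n, &alpha, d_A, n, d_B, n, &beta, d_C, n);

        cublasDestroy(handle);
    }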

The CUDA architecture supports a range of computational interfaces, including OpenGL [9] and DirectCompute. CUDA's parallel programming model is designed to overcome this challenge while maintaining a low learning curve for programmers familiar with standard programming languages such as C. At its core are three key abstractions (a hierarchy of thread groups, shared memories, and barrier synchronization) that are simply exposed to the programmer as a minimal set of language extensions. These abstractions provide fine-grained data parallelism and thread parallelism, nested within coarse-grained data parallelism and task parallelism. They guide the programmer to partition the problem into coarse sub-problems that can be solved independently in parallel, and then into finer pieces that can be solved cooperatively in parallel. Such a decomposition preserves language expressivity by allowing threads to cooperate when solving each sub-problem, and at the same time enables transparent scalability, since each sub-problem can be scheduled to be solved on any of the available processor cores: a compiled CUDA program can therefore execute on any number of processor cores, and only the runtime system needs to know the physical processor count.
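These abstractions surface in the language as a few extensions. The kernel below is a minimal sketch of ours (not code from the paper), assumed to be launched with 256-thread blocks: the thread hierarchy appears as blockIdx and threadIdx, shared memory as a __shared__ array, and barrier synchronization as __syncthreads().

    /* Each block of 256 threads stages its slice of x[] in shared memory,
       synchronizes, then every thread except the block's first sums its
       own value with its left neighbor's. */
    __global__ void neighbor_sum(const float *x, float *y, int n)
    {
        __shared__ float tile[256];                      /* shared memory    */

        int i = blockIdx.x * blockDim.x + threadIdx.x;   /* thread hierarchy */
        if (i < n)
            tile[threadIdx.x] = x[i];
        __syncthreads();                                 /* barrier          */

        if (i < n && threadIdx.x > 0)
            y[i] = tile[threadIdx.x] + tile[threadIdx.x - 1];
    }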

C. CUDA Processing Flow

The CUDA processing flow is described in Figure 3 [16]. The first step: copy data from main memory to GPU memory; second: the CPU instructs the GPU to process; third: the GPU executes in parallel on each core; finally: copy the result from GPU memory back to main memory.
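A minimal host-side sketch of these four steps using the standard runtime calls (the scale kernel and the buffer contents are illustrative placeholders):

    #include <cuda_runtime.h>

    __global__ void scale(float *d, int n)       /* illustrative computation */
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            d[i] *= 2.0f;
    }

    void run(float *host, int n)
    {
        float *dev;
        size_t bytes = n * sizeof(float);

        cudaMalloc(&dev, bytes);
        cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);  /* 1: main mem -> GPU */
        scale<<<(n + 255) / 256, 256>>>(dev, n);               /* 2, 3: CPU launches,
                                                                  GPU cores execute */
        cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);  /* 4: GPU -> main mem */
        cudaFree(dev);
    }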

III. SYSTEM HARDWARE

A. Tesla C1060 GPU Computing Processor

The NVIDIA Tesla C1060 transforms a workstation into a high-performance computer that outperforms a small cluster. This gives technical professionals a dedicated computing resource at their desk side that is much faster and more energy-efficient than a shared cluster in the data center. The NVIDIA Tesla C1060 computing processor board, which consists of 240 cores, is a PCI Express 2.0 form-factor computing add-in card based on the NVIDIA Tesla T10 graphics processing unit (GPU). This board is targeted as a high-performance computing (HPC) solution for PCI Express systems. The Tesla C1060 [15] is capable of 933 GFLOPS [13] of processing performance and comes standard with 4 GB of GDDR3 memory at 102 GB/s bandwidth. A computer system with an available PCI Express x16 slot is required for the Tesla C1060. For the best system bandwidth between the host processor and the Tesla C1060, it is recommended (but not required) that the Tesla C1060 be installed in a PCI Express x16 Gen2 slot.
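Such board parameters can also be queried at runtime; the sketch below (ours, not the paper's) reads them with cudaGetDeviceProperties. Note that the runtime reports multiprocessors rather than cores: on the Tesla T10, each of the 30 multiprocessors contains 8 scalar cores, giving the 240 total.

    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);       /* device 0, e.g. the C1060 */

        /* 8 cores per multiprocessor holds for this GPU generation. */
        printf("%s: %d multiprocessors (~%d cores), %zu MB of global memory\n",
               prop.name, prop.multiProcessorCount,
               prop.multiProcessorCount * 8,
               prop.totalGlobalMem / (1024 * 1024));
        return 0;
    }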
