Abstract
GPUs are widely used in off-the-shelf high-performance computing systems, since CPUs alone cannot meet increasing throughput demands efficiently. The performance of modern high-performance computing systems is therefore maximized when task scheduling between the CPU and the GPU is optimized. In this paper, we analyze CPU and GPU co-execution from the perspectives of performance, energy efficiency, and temperature, depending on how tasks are scheduled. GPU execution usually provides better performance and energy efficiency than CPU execution when a single application is executed. However, when multiple applications are executed, GPU execution cannot guarantee better performance and energy efficiency than CPU execution, depending on application characteristics. In particular, system behavior becomes more unpredictable when multimedia applications are executed than when computation-intensive applications are executed. We also analyze the performance, energy efficiency, and temperature of computing systems across different GPU types. Experimental results show that high-end GPUs provide better performance and energy efficiency than low-end GPUs, while the temperature of high-end GPUs rises higher than that of low-end GPUs.
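To make the notion of CPU and GPU co-execution concrete, the following is a minimal CUDA sketch, not taken from the paper, that statically splits a vector-add workload between the host CPU and the GPU; the 70/30 split ratio and all identifiers are illustrative assumptions, not the scheduling policy the paper evaluates.

```cuda
// Hypothetical sketch: static CPU/GPU co-execution of a vector add.
// The 70/30 split is an arbitrary illustration, not a measured optimum.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void vecAddGPU(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

static void vecAddCPU(const float *a, const float *b, float *c, int n) {
    for (int i = 0; i < n; ++i) c[i] = a[i] + b[i];
}

int main() {
    const int N = 1 << 20;
    const int gpuN = (int)(N * 0.7f);  // portion scheduled to the GPU
    const int cpuN = N - gpuN;         // remainder handled by the CPU

    float *a = (float *)malloc(N * sizeof(float));
    float *b = (float *)malloc(N * sizeof(float));
    float *c = (float *)malloc(N * sizeof(float));
    for (int i = 0; i < N; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // GPU slice: copy inputs, launch the kernel (asynchronous).
    float *dA, *dB, *dC;
    cudaMalloc(&dA, gpuN * sizeof(float));
    cudaMalloc(&dB, gpuN * sizeof(float));
    cudaMalloc(&dC, gpuN * sizeof(float));
    cudaMemcpy(dA, a, gpuN * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, b, gpuN * sizeof(float), cudaMemcpyHostToDevice);
    vecAddGPU<<<(gpuN + 255) / 256, 256>>>(dA, dB, dC, gpuN);

    // CPU slice: processed while the GPU kernel is still running.
    vecAddCPU(a + gpuN, b + gpuN, c + gpuN, cpuN);

    // Copying the GPU result back implicitly synchronizes with the kernel.
    cudaMemcpy(c, dC, gpuN * sizeof(float), cudaMemcpyDeviceToHost);
    printf("c[0] = %f, c[N-1] = %f\n", c[0], c[N - 1]);

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    free(a); free(b); free(c);
    return 0;
}
```

In a real co-execution scheduler of the kind the abstract discusses, the split ratio would be chosen per application (and adjusted when multiple applications compete for the devices) rather than fixed at compile time.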
Original language | English |
---|---|
Pages (from-to) | 2923-2936 |
Number of pages | 14 |
Journal | Information |
Volume | 15 |
Issue number | 7 |
Publication status | Published - 2012 Jul |
Keywords
- CPU
- CUDA
- GPU
- High-performance computing
- Scheduling
ASJC Scopus subject areas
- Information Systems