A programmable processing array architecture supporting dynamic task scheduling and module-level prefetching

Junghee Lee, Hyung Gyu Lee, Soonhoi Ha, Jongman Kim, Chrysostomos Nicopoulos

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Citations (Scopus)

Abstract

Massively Parallel Processing Arrays (MPPA) constitute programmable hardware accelerators that excel in the execution of applications exhibiting Data-Level Parallelism (DLP). The concept of employing such programmable accelerators as sidekicks to the more traditional, general-purpose processing cores has very recently entered the mainstream; both Intel and AMD have introduced processor architectures integrating a Graphics Processing Unit (GPU) alongside the main CPU cores. These GPU engines are expected to play a pivotal role in the espousal of General-Purpose computing on GPUs (GPGPU). However, the widespread adoption of MPPAs, in general, as hardware accelerators entails the effective tackling of some fundamental obstacles: the expressiveness of the programming model, the debugging capabilities, and the memory hierarchy design. Toward this end, this paper proposes a hardware architecture for MPPA that adopts an event-driven execution model. It supports dynamic task scheduling, which offers better expressiveness to the execution model and improves the utilization of processing elements. Moreover, a novel module-level prefetching mechanism - enabled by the specification of the execution model - hides the access time to memory and the scheduler. The execution model also ensures complete encapsulation of the modules, which greatly facilitates debugging. Finally, the fact that all associated inputs of a module are explicitly known can be exploited by the hardware to hide memory access latency without having to resort to caches and a cache coherence protocol. Results using a cycle-level simulator of the proposed architecture and a variety of real application benchmarks demonstrate the efficacy and efficiency of the proposed paradigm.
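To make the abstract's mechanism concrete, the following is a minimal software sketch of the idea (not the paper's actual hardware design; all class and function names are illustrative): modules declare their full input set up front, an event-driven scheduler fires a module only once every declared input has arrived, and because the complete input set is explicitly known at that moment, all operands could be prefetched before dispatch without caches or a coherence protocol.

```python
from collections import deque

class Module:
    """An encapsulated task: fires only when every declared input event has arrived."""
    def __init__(self, name, inputs, action):
        self.name = name
        self.inputs = set(inputs)   # all inputs are explicitly declared up front
        self.pending = {}           # input name -> payload received so far
        self.action = action

    def deliver(self, event, payload):
        """Record an arriving input; return True once the module is ready to run."""
        if event in self.inputs:
            self.pending[event] = payload
        return self.pending.keys() == self.inputs

class Scheduler:
    """Event-driven scheduler: dispatches any module whose input set is complete."""
    def __init__(self, modules):
        self.subscribers = {}
        for m in modules:
            for ev in m.inputs:
                self.subscribers.setdefault(ev, []).append(m)
        self.ready = deque()
        self.prefetched = set()

    def post(self, event, payload):
        for m in self.subscribers.get(event, []):
            if m.deliver(event, payload):
                # All of m's operands are known here, before dispatch; real
                # hardware could prefetch them now. This model just records it.
                self.prefetched.add(m.name)
                self.ready.append(m)

    def run(self):
        results = []
        while self.ready:
            m = self.ready.popleft()   # dynamic scheduling: any ready module runs
            results.append(m.action(m.pending))
        return results

# Illustrative usage: "square" needs one input, "add" needs two.
sq = Module("square", ["x"], lambda ins: ins["x"] ** 2)
add = Module("add", ["a", "b"], lambda ins: ins["a"] + ins["b"])
sched = Scheduler([sq, add])
sched.post("x", 3)
sched.post("a", 1)   # "add" is not ready yet
sched.post("b", 2)   # now it is
results = sched.run()
```

Because each module touches only its declared inputs, modules are fully encapsulated, which is what the abstract credits for easier debugging and for latency hiding without cache coherence.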

Original language: English
Title of host publication: CF '12 - Proceedings of the ACM Computing Frontiers Conference
Pages: 153-162
Number of pages: 10
DOI: 10.1145/2212908.2212931
Publication status: Published - 2012 Jun 28
Externally published: Yes
Event: ACM Computing Frontiers Conference, CF '12 - Cagliari, Italy
Duration: 2012 May 15 - 2012 May 17

Publication series

Name: CF '12 - Proceedings of the ACM Computing Frontiers Conference

Other

Other: ACM Computing Frontiers Conference, CF '12
Country: Italy
City: Cagliari
Period: 12/5/15 - 12/5/17

Keywords

  • dynamic scheduling
  • hardware accelerator
  • many-core
  • prefetch
  • programmable
  • reconfigurable

ASJC Scopus subject areas

  • Software

Cite this

Lee, J., Lee, H. G., Ha, S., Kim, J., & Nicopoulos, C. (2012). A programmable processing array architecture supporting dynamic task scheduling and module-level prefetching. In CF '12 - Proceedings of the ACM Computing Frontiers Conference (pp. 153-162). (CF '12 - Proceedings of the ACM Computing Frontiers Conference). https://doi.org/10.1145/2212908.2212931

@inproceedings{143c9d0785f044f4b53cea04f07cad4b,
title = "A programmable processing array architecture supporting dynamic task scheduling and module-level prefetching",
abstract = "Massively Parallel Processing Arrays (MPPA) constitute programmable hardware accelerators that excel in the execution of applications exhibiting Data-Level Parallelism (DLP). The concept of employing such programmable accelerators as sidekicks to the more traditional, general-purpose processing cores has very recently entered the mainstream; both Intel and AMD have introduced processor architectures integrating a Graphics Processing Unit (GPU) alongside the main CPU cores. These GPU engines are expected to play a pivotal role in the espousal of General-Purpose computing on GPUs (GPGPU). However, the widespread adoption of MPPAs, in general, as hardware accelerators entails the effective tackling of some fundamental obstacles: the expressiveness of the programming model, the debugging capabilities, and the memory hierarchy design. Toward this end, this paper proposes a hardware architecture for MPPA that adopts an event-driven execution model. It supports dynamic task scheduling, which offers better expressiveness to the execution model and improves the utilization of processing elements. Moreover, a novel module-level prefetching mechanism - enabled by the specification of the execution model - hides the access time to memory and the scheduler. The execution model also ensures complete encapsulation of the modules, which greatly facilitates debugging. Finally, the fact that all associated inputs of a module are explicitly known can be exploited by the hardware to hide memory access latency without having to resort to caches and a cache coherence protocol. Results using a cycle-level simulator of the proposed architecture and a variety of real application benchmarks demonstrate the efficacy and efficiency of the proposed paradigm.",
keywords = "dynamic scheduling, hardware accelerator, many-core, prefetch, programmable, reconfigurable",
author = "Junghee Lee and Lee, {Hyung Gyu} and Soonhoi Ha and Jongman Kim and Chrysostomos Nicopoulos",
year = "2012",
month = "6",
day = "28",
doi = "10.1145/2212908.2212931",
language = "English",
isbn = "9781450312158",
series = "CF '12 - Proceedings of the ACM Computing Frontiers Conference",
pages = "153--162",
booktitle = "CF '12 - Proceedings of the ACM Computing Frontiers Conference",

}

TY - GEN

T1 - A programmable processing array architecture supporting dynamic task scheduling and module-level prefetching

AU - Lee, Junghee

AU - Lee, Hyung Gyu

AU - Ha, Soonhoi

AU - Kim, Jongman

AU - Nicopoulos, Chrysostomos

PY - 2012/6/28

Y1 - 2012/6/28

AB - Massively Parallel Processing Arrays (MPPA) constitute programmable hardware accelerators that excel in the execution of applications exhibiting Data-Level Parallelism (DLP). The concept of employing such programmable accelerators as sidekicks to the more traditional, general-purpose processing cores has very recently entered the mainstream; both Intel and AMD have introduced processor architectures integrating a Graphics Processing Unit (GPU) alongside the main CPU cores. These GPU engines are expected to play a pivotal role in the espousal of General-Purpose computing on GPUs (GPGPU). However, the widespread adoption of MPPAs, in general, as hardware accelerators entails the effective tackling of some fundamental obstacles: the expressiveness of the programming model, the debugging capabilities, and the memory hierarchy design. Toward this end, this paper proposes a hardware architecture for MPPA that adopts an event-driven execution model. It supports dynamic task scheduling, which offers better expressiveness to the execution model and improves the utilization of processing elements. Moreover, a novel module-level prefetching mechanism - enabled by the specification of the execution model - hides the access time to memory and the scheduler. The execution model also ensures complete encapsulation of the modules, which greatly facilitates debugging. Finally, the fact that all associated inputs of a module are explicitly known can be exploited by the hardware to hide memory access latency without having to resort to caches and a cache coherence protocol. Results using a cycle-level simulator of the proposed architecture and a variety of real application benchmarks demonstrate the efficacy and efficiency of the proposed paradigm.

KW - dynamic scheduling

KW - hardware accelerator

KW - many-core

KW - prefetch

KW - programmable

KW - reconfigurable

UR - http://www.scopus.com/inward/record.url?scp=84862690505&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84862690505&partnerID=8YFLogxK

U2 - 10.1145/2212908.2212931

DO - 10.1145/2212908.2212931

M3 - Conference contribution

AN - SCOPUS:84862690505

SN - 9781450312158

T3 - CF '12 - Proceedings of the ACM Computing Frontiers Conference

SP - 153

EP - 162

BT - CF '12 - Proceedings of the ACM Computing Frontiers Conference

ER -