Access region cache: A multi-porting solution for future wide-issue processors

B. S. Thakar, Kyung Ho Lee

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

Wide-issue processors issuing tens of instructions per cycle put heavy stress on the memory system, including the data caches. For a wide-issue architecture, the data cache needs to be heavily multi-ported, with extremely wide data-paths. This paper studies a scalable solution that achieves multi-porting with short data-paths and lower hardware complexity at higher clock rates. Our approach divides the memory stream into multiple independent sub-streams, with the help of a prediction mechanism, before the references enter the reservation stations. The partitioned memory-reference instructions are then fed into separate memory pipelines, each of which is connected to a small data cache called an access region cache. In the ideal case, this separation of independent memory references allows the use of multiple caches with fewer ports each, and thus increases the data bandwidth. We describe and evaluate a wide-issue processor with distinct memory pipelines driven by a prediction mechanism. The potential performance of the proposed design is measured by comparing it with an existing multi-porting solution as well as an ideal multi-ported data cache.
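The partitioning idea described above can be illustrated with a toy model. This sketch is not from the paper: the two-region address split, the last-outcome predictor policy, and all names are illustrative assumptions. It shows the core mechanism of routing memory references into independent sub-streams, each destined for its own small cache, via a PC-indexed region predictor.

```python
# Toy sketch (assumptions, not the paper's design): split a memory-reference
# stream into per-region sub-streams using a PC-indexed access-region
# predictor, so each sub-stream can feed its own small cache.
from collections import defaultdict

NUM_REGIONS = 2      # assumed: two access regions (e.g. low/high addresses)
REGION_SHIFT = 16    # assumed split point between regions


def actual_region(addr):
    """The region an address really belongs to (ground truth)."""
    return (addr >> REGION_SHIFT) & (NUM_REGIONS - 1)


class RegionPredictor:
    """Last-outcome predictor: remember the region each static
    instruction (identified by its PC) touched most recently."""

    def __init__(self):
        self.table = {}

    def predict(self, pc):
        return self.table.get(pc, 0)  # default guess: region 0

    def update(self, pc, region):
        self.table[pc] = region


def route(stream, predictor):
    """Partition (pc, addr) references into per-region sub-streams.
    A real design would recover from mispredictions by re-routing;
    here we simply count them."""
    substreams = defaultdict(list)
    mispredicts = 0
    for pc, addr in stream:
        guess = predictor.predict(pc)
        real = actual_region(addr)
        if guess != real:
            mispredicts += 1
        substreams[real].append((pc, addr))  # reference reaches its cache
        predictor.update(pc, real)
    return substreams, mispredicts
```

Because memory instructions tend to touch the same region repeatedly, a simple last-outcome predictor already separates most references correctly; each sub-stream then needs only a cache with a small number of ports instead of one heavily multi-ported cache.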

Original language: English
Title of host publication: Proceedings - IEEE International Conference on Computer Design: VLSI in Computers and Processors
Pages: 293-300
Number of pages: 8
Publication status: Published - 2001 Jan 1
Externally published: Yes
Event: IEEE International Conference on Computer Design: VLSI in Computers and Processors (ICCD 2001) - Austin, TX, United States
Duration: 2001 Sep 23 - 2001 Sep 26

Other

Other: IEEE International Conference on Computer Design: VLSI in Computers and Processors (ICCD 2001)
Country: United States
City: Austin, TX
Period: 01/9/23 - 01/9/26

Fingerprint

  • Data storage equipment
  • Pipelines
  • Clocks
  • Hardware
  • Bandwidth

ASJC Scopus subject areas

  • Hardware and Architecture
  • Electrical and Electronic Engineering

Cite this

Thakar, B. S., & Lee, K. H. (2001). Access region cache: A multi-porting solution for future wide-issue processors. In Proceedings - IEEE International Conference on Computer Design: VLSI in Computers and Processors (pp. 293-300).

Access region cache: A multi-porting solution for future wide-issue processors. / Thakar, B. S.; Lee, Kyung Ho.

Proceedings - IEEE International Conference on Computer Design: VLSI in Computers and Processors. 2001. p. 293-300.


Thakar, BS & Lee, KH 2001, Access region cache: A multi-porting solution for future wide-issue processors. in Proceedings - IEEE International Conference on Computer Design: VLSI in Computers and Processors. pp. 293-300, IEEE International Conference on Computer Design: VLSI in Computers and Processors (ICCD 2001), Austin, TX, United States, 01/9/23.
Thakar BS, Lee KH. Access region cache: A multi-porting solution for future wide-issue processors. In Proceedings - IEEE International Conference on Computer Design: VLSI in Computers and Processors. 2001. p. 293-300
Thakar, B. S. ; Lee, Kyung Ho. / Access region cache: A multi-porting solution for future wide-issue processors. Proceedings - IEEE International Conference on Computer Design: VLSI in Computers and Processors. 2001. pp. 293-300
@inproceedings{0cb324fc187e48299dc1fbc48c488212,
title = "Access region cache: A multi-porting solution for future wide-issue processors",
abstract = "Wide-issue processors issuing tens of instructions per cycle put heavy stress on the memory system, including the data caches. For a wide-issue architecture, the data cache needs to be heavily multi-ported, with extremely wide data-paths. This paper studies a scalable solution that achieves multi-porting with short data-paths and lower hardware complexity at higher clock rates. Our approach divides the memory stream into multiple independent sub-streams, with the help of a prediction mechanism, before the references enter the reservation stations. The partitioned memory-reference instructions are then fed into separate memory pipelines, each of which is connected to a small data cache called an access region cache. In the ideal case, this separation of independent memory references allows the use of multiple caches with fewer ports each, and thus increases the data bandwidth. We describe and evaluate a wide-issue processor with distinct memory pipelines driven by a prediction mechanism. The potential performance of the proposed design is measured by comparing it with an existing multi-porting solution as well as an ideal multi-ported data cache.",
author = "Thakar, {B. S.} and Lee, {Kyung Ho}",
year = "2001",
month = "1",
day = "1",
language = "English",
pages = "293--300",
booktitle = "Proceedings - IEEE International Conference on Computer Design: VLSI in Computers and Processors",

}

TY - GEN

T1 - Access region cache

T2 - A multi-porting solution for future wide-issue processors

AU - Thakar, B. S.

AU - Lee, Kyung Ho

PY - 2001/1/1

Y1 - 2001/1/1

N2 - Wide-issue processors issuing tens of instructions per cycle put heavy stress on the memory system, including the data caches. For a wide-issue architecture, the data cache needs to be heavily multi-ported, with extremely wide data-paths. This paper studies a scalable solution that achieves multi-porting with short data-paths and lower hardware complexity at higher clock rates. Our approach divides the memory stream into multiple independent sub-streams, with the help of a prediction mechanism, before the references enter the reservation stations. The partitioned memory-reference instructions are then fed into separate memory pipelines, each of which is connected to a small data cache called an access region cache. In the ideal case, this separation of independent memory references allows the use of multiple caches with fewer ports each, and thus increases the data bandwidth. We describe and evaluate a wide-issue processor with distinct memory pipelines driven by a prediction mechanism. The potential performance of the proposed design is measured by comparing it with an existing multi-porting solution as well as an ideal multi-ported data cache.

AB - Wide-issue processors issuing tens of instructions per cycle put heavy stress on the memory system, including the data caches. For a wide-issue architecture, the data cache needs to be heavily multi-ported, with extremely wide data-paths. This paper studies a scalable solution that achieves multi-porting with short data-paths and lower hardware complexity at higher clock rates. Our approach divides the memory stream into multiple independent sub-streams, with the help of a prediction mechanism, before the references enter the reservation stations. The partitioned memory-reference instructions are then fed into separate memory pipelines, each of which is connected to a small data cache called an access region cache. In the ideal case, this separation of independent memory references allows the use of multiple caches with fewer ports each, and thus increases the data bandwidth. We describe and evaluate a wide-issue processor with distinct memory pipelines driven by a prediction mechanism. The potential performance of the proposed design is measured by comparing it with an existing multi-porting solution as well as an ideal multi-ported data cache.

UR - http://www.scopus.com/inward/record.url?scp=0035181791&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0035181791&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:0035181791

SP - 293

EP - 300

BT - Proceedings - IEEE International Conference on Computer Design: VLSI in Computers and Processors

ER -