ECM

Effective Capacity Maximizer for high-performance compressed caching

Seungcheol Baek, Hyung Gyu Lee, Chrysostomos Nicopoulos, Junghee Lee, Jongman Kim

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

20 Citations (Scopus)

Abstract

Compressed Last-Level Cache (LLC) architectures have been proposed to enhance system performance by efficiently increasing the effective capacity of the cache, without physically increasing the cache size. In a compressed cache, the cacheline size varies depending on the achieved compression ratio. We observe that this size information gives a useful hint when selecting a victim, which can lead to increased cache performance. However, no replacement policy tailored to compressed LLCs has been investigated so far. This paper introduces the notion of size-aware compressed cache management as a way to maximize the performance of compressed caches. Toward this end, the Effective Capacity Maximizer (ECM) scheme is introduced, which targets compressed LLCs. The proposed mechanism revolves around three fundamental principles: Size-Aware Insertion (SAI), a Dynamically Adjustable Threshold Scheme (DATS), and Size-Aware Replacement (SAR). By adjusting the eviction criteria, based on the compressed data size, one may increase the effective cache capacity and minimize the miss penalty. Extensive simulations with memory traces from real applications running on a full-system simulator demonstrate significant improvements compared to compressed cache schemes employing the conventional Least-Recently Used (LRU) and Dynamic Re-Reference Interval Prediction (DRRIP) [11] replacement policies. Specifically, ECM shows an average effective capacity increase of 15% over LRU and 18.8% over DRRIP, an average cache miss reduction of 9.4% over LRU and 3.9% over DRRIP, and an average system performance improvement of 6.2% over LRU and 3.3% over DRRIP.
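The core idea — using each line's compressed size as an extra signal when choosing an eviction victim — can be illustrated with a small sketch. This is a hypothetical simplification, not the paper's actual SAI/DATS/SAR algorithms; the `CacheLine` type, the fixed "older half" heuristic, and the `pick_victims` interface are all illustrative assumptions. It shows only the size-aware bias: among sufficiently old lines, evict the largest first so that each eviction frees more effective capacity.

```python
from dataclasses import dataclass

@dataclass
class CacheLine:
    tag: int
    size: int      # compressed size in bytes (varies with compression ratio)
    lru_rank: int  # higher = less recently used

def pick_victims(cache_set, needed_size):
    """Pick eviction victims for an incoming compressed line.

    Size-aware bias (illustrative): among the older half of the set,
    evict the largest lines first so each eviction frees more effective
    capacity; fall back to plain recency order if that is not enough.
    """
    oldest_first = sorted(cache_set, key=lambda l: -l.lru_rank)
    older = oldest_first[: max(1, len(oldest_first) // 2)]
    newer = oldest_first[len(older):]
    older_by_size = sorted(older, key=lambda l: -l.size)

    victims, freed = [], 0
    for line in older_by_size + newer:
        if freed >= needed_size:
            break
        victims.append(line)
        freed += line.size
    return victims
```

Under this sketch, a single large, old line can satisfy a request that pure LRU would serve by evicting several small lines — the intuition behind trading recency against reclaimed capacity that the paper's threshold scheme then tunes dynamically.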

Original language: English
Title of host publication: 19th IEEE International Symposium on High Performance Computer Architecture, HPCA 2013
Pages: 131-142
Number of pages: 12
DOIs: 10.1109/HPCA.2013.6522313
Publication status: Published - 2013 Jul 23
Externally published: Yes
Event: 19th IEEE International Symposium on High Performance Computer Architecture, HPCA 2013 - Shenzhen, China
Duration: 2013 Feb 23 - 2013 Feb 27

Publication series

Name: Proceedings - International Symposium on High-Performance Computer Architecture
ISSN (Print): 1530-0897

Conference

Conference: 19th IEEE International Symposium on High Performance Computer Architecture, HPCA 2013
Country: China
City: Shenzhen
Period: 13/2/23 - 13/2/27


ASJC Scopus subject areas

  • Hardware and Architecture

Cite this

Baek, S., Lee, H. G., Nicopoulos, C., Lee, J., & Kim, J. (2013). ECM: Effective Capacity Maximizer for high-performance compressed caching. In 19th IEEE International Symposium on High Performance Computer Architecture, HPCA 2013 (pp. 131-142). [6522313] (Proceedings - International Symposium on High-Performance Computer Architecture). https://doi.org/10.1109/HPCA.2013.6522313

@inproceedings{68efe0806e2444cf91d9540b3efecbe3,
title = "ECM: Effective Capacity Maximizer for high-performance compressed caching",
abstract = "Compressed Last-Level Cache (LLC) architectures have been proposed to enhance system performance by efficiently increasing the effective capacity of the cache, without physically increasing the cache size. In a compressed cache, the cacheline size varies depending on the achieved compression ratio. We observe that this size information gives a useful hint when selecting a victim, which can lead to increased cache performance. However, no replacement policy tailored to compressed LLCs has been investigated so far. This paper introduces the notion of size-aware compressed cache management as a way to maximize the performance of compressed caches. Toward this end, the Effective Capacity Maximizer (ECM) scheme is introduced, which targets compressed LLCs. The proposed mechanism revolves around three fundamental principles: Size-Aware Insertion (SAI), a Dynamically Adjustable Threshold Scheme (DATS), and Size-Aware Replacement (SAR). By adjusting the eviction criteria, based on the compressed data size, one may increase the effective cache capacity and minimize the miss penalty. Extensive simulations with memory traces from real applications running on a full-system simulator demonstrate significant improvements compared to compressed cache schemes employing the conventional Least-Recently Used (LRU) and Dynamic Re-Reference Interval Prediction (DRRIP) [11] replacement policies. Specifically, ECM shows an average effective capacity increase of 15{\%} over LRU and 18.8{\%} over DRRIP, an average cache miss reduction of 9.4{\%} over LRU and 3.9{\%} over DRRIP, and an average system performance improvement of 6.2{\%} over LRU and 3.3{\%} over DRRIP.",
author = "Seungcheol Baek and Lee, {Hyung Gyu} and Chrysostomos Nicopoulos and Junghee Lee and Jongman Kim",
year = "2013",
month = "7",
day = "23",
doi = "10.1109/HPCA.2013.6522313",
language = "English",
isbn = "9781467355858",
series = "Proceedings - International Symposium on High-Performance Computer Architecture",
pages = "131--142",
booktitle = "19th IEEE International Symposium on High Performance Computer Architecture, HPCA 2013",

}
