Size-aware cache management for compressed cache architectures

Seungcheol Baek, Hyung Gyu Lee, Chrysostomos Nicopoulos, Junghee Lee, Jongman Kim

Research output: Contribution to journal › Article

4 Citations (Scopus)

Abstract

A practical way to increase the effective capacity of a microprocessor's cache, without physically increasing the cache size, is to employ data compression. Last-Level Caches (LLC) are particularly amenable to such compression schemes, since the primary purpose of the LLC is to minimize the miss rate, i.e., it directly benefits from a larger logical capacity. In compressed LLCs, the cacheline size varies depending on the achieved compression ratio. Our observations indicate that this size information gives useful hints when managing the cache (e.g., when selecting a victim), which can lead to increased cache performance. However, there are currently no replacement policies tailored to compressed LLCs; existing techniques focus primarily on locality information. This article introduces the concept of size-aware cache management as a way to maximize the performance of compressed caches. After analyzing the benefits of considering size information in the management of compressed caches, we propose a novel mechanism, called the Effective Capacity Maximizer (ECM), to further enhance the performance and reduce the energy consumption of compressed LLCs. The proposed technique revolves around four fundamental principles: ECM Insertion (ECM-I), ECM Promotion (ECM-P), ECM Eviction Scheduling (ECM-ES), and ECM Replacement (ECM-R). Extensive simulations with memory traces from real applications running on a full-system simulator demonstrate significant improvements compared to compressed cache schemes employing conventional locality-aware cache replacement policies. Specifically, our ECM shows an average effective capacity increase of 18.4 percent over the Least-Recently Used (LRU) policy, and 23.9 percent over the Dynamic Re-Reference Interval Prediction (DRRIP) [1] scheme. This translates into average system performance improvements of 7.2 percent over LRU and 4.2 percent over DRRIP. Moreover, the average energy consumption is also reduced by 5.9 percent over LRU and 3.8 percent over DRRIP.
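
To make the idea of size-aware replacement concrete, the short Python sketch below shows how a compressed cache set might weigh a line's compressed size together with its recency when choosing a victim, so that a single eviction frees more effective capacity. This is only an illustrative sketch under assumed names (CacheLine, SizeAwareSet, and the scoring heuristic are hypothetical); it does not reproduce the paper's actual ECM-I/ECM-P/ECM-ES/ECM-R mechanisms.

# Illustrative sketch only: size-aware victim selection in a compressed cache set.
# NOTE: this is NOT the paper's ECM algorithm. The class names and the scoring
# heuristic are hypothetical; they merely show how a line's compressed size can
# inform replacement decisions alongside recency.

from dataclasses import dataclass
from typing import List


@dataclass
class CacheLine:
    tag: int
    compressed_size: int   # bytes the line occupies after compression
    age: int = 0           # larger value = less recently used


class SizeAwareSet:
    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes      # physical data-array space in this set
        self.lines: List[CacheLine] = []

    def used_bytes(self) -> int:
        return sum(line.compressed_size for line in self.lines)

    def touch(self, tag: int) -> bool:
        """Age every line, then refresh the accessed line; returns True on a hit."""
        for line in self.lines:
            line.age += 1
        for line in self.lines:
            if line.tag == tag:
                line.age = 0
                return True
        return False

    def insert(self, new_line: CacheLine) -> None:
        """Evict lines until the new (compressed) line fits, then insert it."""
        while self.lines and self.used_bytes() + new_line.compressed_size > self.capacity:
            self.lines.remove(self._pick_victim())
        self.lines.append(new_line)

    def _pick_victim(self) -> CacheLine:
        # Size-aware heuristic: prefer lines that are both stale and large, since
        # evicting them frees more effective capacity per eviction.
        # A pure LRU policy would instead use key=lambda line: line.age alone.
        return max(self.lines, key=lambda line: (line.age + 1) * line.compressed_size)


# Minimal usage example (hypothetical line sizes).
if __name__ == "__main__":
    cache_set = SizeAwareSet(capacity_bytes=256)
    for tag, size in [(1, 64), (2, 16), (3, 64), (4, 96)]:
        cache_set.insert(CacheLine(tag=tag, compressed_size=size))
    cache_set.touch(1)                                        # line 1 becomes most recent
    cache_set.insert(CacheLine(tag=5, compressed_size=64))    # evicts the large, stale line 4
    print(sorted(line.tag for line in cache_set.lines))       # -> [1, 2, 3, 5]

The point of the heuristic is the same one the abstract makes: in a compressed cache, the choice of victim determines how many bytes are actually reclaimed, so recency information alone is not sufficient.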

Original language: English
Article number: 6911946
Pages (from-to): 2337-2352
Number of pages: 16
Journal: IEEE Transactions on Computers
ISSN: 0018-9340
Publisher: IEEE Computer Society
Volume: 64
Issue number: 8
DOIs: https://doi.org/10.1109/TC.2014.2360518
Publication status: Published - 1 August 2015
Externally published: Yes

Keywords

  • Cache
  • cache compression
  • cache replacement policy
  • compression
  • data compression

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Software
  • Hardware and Architecture
  • Computational Theory and Mathematics

Cite this

Baek, S., Lee, H. G., Nicopoulos, C., Lee, J., & Kim, J. (2015). Size-aware cache management for compressed cache architectures. IEEE Transactions on Computers, 64(8), 2337-2352. Article 6911946. https://doi.org/10.1109/TC.2014.2360518