HoPE: Hot-cacheline prediction for dynamic early decompression in compressed LLCs

Jaehyun Park, Seungcheol Baek, Hyung Gyu Lee, Chrysostomos Nicopoulos, Vinson Young, Junghee Lee, Jongman Kim

Research output: Contribution to journal › Article

1 Citation (Scopus)

Abstract

Data compression plays a pivotal role in improving system performance and reducing energy consumption, because it increases the logical effective capacity of a compressed memory system without physically increasing the memory size. However, data compression techniques incur non-negligible costs, such as compression and decompression overhead. This overhead becomes more severe when compression is used in the cache. In this article, we aim to minimize the read-hit decompression penalty in compressed Last-Level Caches (LLCs) by speculatively decompressing frequently used cachelines. To this end, we propose a Hot-cacheline Prediction and Early decompression (HoPE) mechanism that consists of three synergistic techniques: Hot-cacheline Prediction (HP), Early Decompression (ED), and Hit-history-based Insertion (HBI). HP and HBI efficiently identify the hot compressed cachelines, while ED selectively decompresses hot cachelines, based on their size information. Unlike previous approaches, the HoPE framework considers the performance tradeoff between the increased effective cache capacity and the decompression penalty. To evaluate the effectiveness of the proposed HoPE mechanism, we run extensive simulations on memory traces obtained from multi-threaded benchmarks running on a full-system simulation framework. We observe significant performance improvements over compressed cache schemes employing the conventional Least-Recently Used (LRU) replacement policy, the Dynamic Re-Reference Interval Prediction (DRRIP) scheme, and the Effective Capacity Maximizer (ECM) compressed cache management mechanism. Specifically, HoPE exhibits system performance improvements of approximately 11%, on average, over LRU, 8% over DRRIP, and 7% over ECM by reducing the read-hit decompression penalty by around 65%, over a wide range of applications.
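The core idea described in the abstract can be illustrated with a toy sketch: a compressed line whose hit count crosses a threshold is predicted "hot" and kept decompressed, so later read hits skip the decompression penalty. The class name, threshold, and latency values below are illustrative assumptions, not taken from the paper; the HBI insertion policy and the size-based admission check of ED are omitted for brevity.

```python
# Toy model of hot-cacheline prediction with early decompression.
# All constants are hypothetical, chosen only to make the idea concrete.

DECOMP_LATENCY = 5   # cycles charged when a read hits a compressed line
HOT_THRESHOLD = 2    # hits before a line is predicted "hot"

class ToyCompressedCache:
    def __init__(self):
        # addr -> {"compressed": bool, "hits": int}
        self.lines = {}

    def insert(self, addr, compressed=True):
        # New lines are stored compressed with no hit history.
        self.lines[addr] = {"compressed": compressed, "hits": 0}

    def read(self, addr):
        """Return the decompression latency charged for this read hit."""
        line = self.lines[addr]
        line["hits"] += 1
        latency = DECOMP_LATENCY if line["compressed"] else 0
        # Early Decompression: once predicted hot, keep the line
        # decompressed so subsequent read hits pay no penalty.
        if line["compressed"] and line["hits"] >= HOT_THRESHOLD:
            line["compressed"] = False
        return latency
```

In this sketch, the first two reads of a line each pay the decompression latency; once the line is predicted hot and stored decompressed, further reads are free, which is the penalty reduction the abstract quantifies.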

Original language: English
Article number: 40
Journal: ACM Transactions on Design Automation of Electronic Systems
Volume: 22
Issue number: 3
DOI: 10.1145/2999538
Publication status: Published - 2017 Apr 1
Externally published: Yes

Keywords

  • Cache
  • Cache management policy
  • Compression

ASJC Scopus subject areas

  • Computer Science Applications
  • Computer Graphics and Computer-Aided Design
  • Electrical and Electronic Engineering

Cite this

HoPE: Hot-cacheline prediction for dynamic early decompression in compressed LLCs. / Park, Jaehyun; Baek, Seungcheol; Lee, Hyung Gyu; Nicopoulos, Chrysostomos; Young, Vinson; Lee, Junghee; Kim, Jongman.

In: ACM Transactions on Design Automation of Electronic Systems, Vol. 22, No. 3, 40, 01.04.2017.

@article{70ec25cf67564c3fbf3bda0cf657bc47,
title = "HoPE: Hot-cacheline prediction for dynamic early decompression in compressed LLCs",
keywords = "Cache, Cache management policy, Compression",
author = "Park, Jaehyun and Baek, Seungcheol and Lee, {Hyung Gyu} and Nicopoulos, Chrysostomos and Young, Vinson and Lee, Junghee and Kim, Jongman",
year = "2017",
month = "4",
day = "1",
doi = "10.1145/2999538",
language = "English",
volume = "22",
journal = "ACM Transactions on Design Automation of Electronic Systems",
issn = "1084-4309",
publisher = "Association for Computing Machinery (ACM)",
number = "3",

}
