
T-CAT: Dynamic Cache Allocation for Tiered Memory Systems with Memory Interleaving

DC Field Value Language
dc.contributor.author Lee, Hwanjun -
dc.contributor.author Lee, Seunghak -
dc.contributor.author Jung, Yeji -
dc.contributor.author Kim, Daehoon -
dc.date.accessioned 2024-02-05T00:10:51Z -
dc.date.available 2024-02-05T00:10:51Z -
dc.date.created 2023-07-20 -
dc.date.issued 2023-07 -
dc.identifier.issn 1556-6056 -
dc.identifier.uri http://hdl.handle.net/20.500.11750/47772 -
dc.description.abstract New memory interconnect technologies, such as Intel's Compute Express Link (CXL), help to expand memory bandwidth and capacity by adding CPU-less NUMA nodes to the main memory system, addressing the growing memory wall challenge. Consequently, modern computing systems embrace heterogeneity in their memory systems, composing them as tiered memory systems with near and far memory (e.g., local DRAM and CXL-DRAM). However, applying NUMA interleaving, which can improve performance by exploiting node-level parallelism and aggregate bandwidth, to tiered memory systems faces challenges due to the difference in access latency between the two types of memory, leading to potential performance degradation for memory-intensive workloads. To tackle these challenges, we first investigate the effects of NUMA interleaving on the performance of tiered memory systems. We observe that while NUMA interleaving is essential for applications demanding high memory bandwidth, it can negatively impact the performance of applications demanding low memory bandwidth. Next, we propose a dynamic cache management scheme, called T-CAT, which partitions the last-level cache between near and far memory, aiming to mitigate the performance degradation caused by far memory accesses. T-CAT attempts to reduce the difference in average access latency between near and far memory by re-sizing the cache partitions. Through this dynamic cache management, T-CAT preserves the performance benefits of NUMA interleaving while mitigating the performance degradation from far memory accesses. Our experimental results show that T-CAT improves performance by up to 17% compared to NUMA interleaving without the cache management. © 2023 IEEE -
dc.language English -
dc.publisher Institute of Electrical and Electronics Engineers -
dc.title T-CAT: Dynamic Cache Allocation for Tiered Memory Systems with Memory Interleaving -
dc.type Article -
dc.identifier.doi 10.1109/LCA.2023.3290197 -
dc.identifier.scopusid 2-s2.0-85163495905 -
dc.identifier.bibliographicCitation Lee, Hwanjun. (2023-07). T-CAT: Dynamic Cache Allocation for Tiered Memory Systems with Memory Interleaving. IEEE Computer Architecture Letters, 22(2), 73–76. doi: 10.1109/LCA.2023.3290197 -
dc.description.isOpenAccess FALSE -
dc.subject.keywordAuthor Cache partitioning -
dc.subject.keywordAuthor memory interleaving -
dc.subject.keywordAuthor non-uniform memory architecture (NUMA) -
dc.subject.keywordAuthor tiered memory systems -
dc.citation.endPage 76 -
dc.citation.number 2 -
dc.citation.startPage 73 -
dc.citation.title IEEE Computer Architecture Letters -
dc.citation.volume 22 -
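
The record's abstract describes T-CAT as dynamically re-sizing last-level cache (LLC) partitions for near and far memory so that their average access latencies converge. The following Python sketch only illustrates that re-sizing idea under stated assumptions; the way counts, the toy latency model in sample_avg_latency, and the rebalance helper are hypothetical stand-ins, not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): a controller that shifts LLC
# ways between a near-memory partition and a far-memory partition so that
# their average access latencies move closer together.

import random

TOTAL_WAYS = 16   # assumed LLC associativity
MIN_WAYS = 1      # never shrink a partition to zero ways


def sample_avg_latency(tier, ways):
    """Toy stand-in for a measured average access latency in nanoseconds.

    More ways roughly means a higher hit rate and thus a lower average
    latency; the base miss latencies (80 ns near, 180 ns far) are
    illustrative numbers only.
    """
    base_miss = 80.0 if tier == "near" else 180.0
    hit_rate = min(0.95, 0.30 + 0.04 * ways)
    return hit_rate * 15.0 + (1.0 - hit_rate) * base_miss + random.uniform(-2.0, 2.0)


def rebalance(near_ways, far_ways):
    """Move one way toward whichever tier currently shows higher latency."""
    near_lat = sample_avg_latency("near", near_ways)
    far_lat = sample_avg_latency("far", far_ways)
    if far_lat > near_lat and near_ways > MIN_WAYS:
        return near_ways - 1, far_ways + 1
    if near_lat > far_lat and far_ways > MIN_WAYS:
        return near_ways + 1, far_ways - 1
    return near_ways, far_ways


if __name__ == "__main__":
    near, far = TOTAL_WAYS // 2, TOTAL_WAYS // 2
    for epoch in range(10):
        near, far = rebalance(near, far)
        print(f"epoch {epoch}: near={near} ways, far={far} ways")
```

In a real deployment the latency samples would come from hardware performance counters, and the resulting way allocations could be applied through a way-partitioning mechanism such as Intel Cache Allocation Technology; the paper's actual mechanism and evaluation setup are not detailed in this record.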

File Downloads

  • There are no files associated with this item.

Related Researcher

Kim, Daehoon (김대훈)

Department of Electrical Engineering and Computer Science

