<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
  <channel>
    <title>Repository Collection: null</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/849</link>
    <description />
    <pubDate>Sun, 05 Apr 2026 14:27:01 GMT</pubDate>
    <dc:date>2026-04-05T14:27:01Z</dc:date>
    <item>
      <title>NCAP: Network-Driven, Packet Context-Aware Power Management for Client-Server Architecture</title>
      <link>https://scholar.dgist.ac.kr/handle/20.500.11750/58977</link>
      <description>Title: NCAP: Network-Driven, Packet Context-Aware Power Management for Client-Server Architecture
Author(s): Alian, Mohammad; Abulila, Ahmed H. M. O.; Jindal, Lokesh; Kim, Daehoon; Kim, Nam Sung
Abstract: The rate of network packets encapsulating requests from clients can significantly affect the utilization, and thus the performance and sleep states, of processors in servers deploying a power management policy. To improve energy efficiency, servers may adopt an aggressive power management policy that frequently transitions a processor to a low-performance or sleep state at low utilization. However, such servers may not respond to a sudden increase in the rate of requests from clients early enough, due to the considerable performance penalty of transitioning a processor from a sleep or low-performance state to a high-performance state. This in turn entails violations of a service level agreement (SLA), discourages server operators from deploying an aggressive power management policy, and thus wastes energy during low-utilization periods. For both fast response time and high energy efficiency, we propose NCAP, Network-driven, packet Context-Aware Power management for client-server architecture. NCAP enhances a network interface card (NIC) and its driver such that it can examine received and transmitted network packets, determine the rate of network packets containing latency-critical requests, and proactively transition a processor to an appropriate performance or sleep state. To demonstrate its efficacy, we evaluate on-line data-intensive (OLDI) applications and show that a server deploying NCAP consumes 37~61% lower processor energy than a baseline server while satisfying a given SLA at various load levels. © 2017 IEEE.</description>
      <pubDate>Sun, 05 Feb 2017 15:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.dgist.ac.kr/handle/20.500.11750/58977</guid>
      <dc:date>2017-02-05T15:00:00Z</dc:date>
    </item>
    <item>
      <title>InnerSP: A Memory Efficient Sparse Matrix Multiplication Accelerator with Locality-Aware Inner Product Processing</title>
      <link>https://scholar.dgist.ac.kr/handle/20.500.11750/46905</link>
      <description>Title: InnerSP: A Memory Efficient Sparse Matrix Multiplication Accelerator with Locality-Aware Inner Product Processing
Author(s): Baek, Daehyeon; Hwang, Soojin; Heo, Taekyung; Kim, Daehoon; Huh, Jaehyuk
Abstract: Sparse matrix multiplication is one of the key computational kernels in large-scale data analytics. However, a naive implementation suffers from the overheads of irregular memory accesses due to the representation of sparsity. To mitigate the memory access overheads, recent accelerator designs advocated outer product processing, which minimizes input accesses but generates intermediate products that must be merged into the final output matrix. Using real-world sparse matrices, this study first identifies the memory bloating problem of the outer product designs due to the unpredictable intermediate products. Such an unpredictable increase in memory requirement during computation can limit the applicability of accelerators. To address the memory bloating problem, this study revisits an alternative inner product approach and proposes a new accelerator design called InnerSP. This study shows that nonzero element distributions in real-world sparse matrices have a certain level of locality. Using a smart caching scheme designed for inner product, the locality is effectively exploited with a modest on-chip cache. However, the row-wise inner product relies on on-chip aggregation of intermediate products. Due to uneven sparsity per row, overflows or underflows of the on-chip storage for aggregation can occur. To maximize parallelism while avoiding costly overflows, the proposed accelerator uses pre-scanning for row splitting and merging. The simulation results show that the performance of InnerSP can exceed or match that of the prior outer product approaches without any memory bloating problem. © 2021 IEEE</description>
      <pubDate>Mon, 27 Sep 2021 15:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.dgist.ac.kr/handle/20.500.11750/46905</guid>
      <dc:date>2021-09-27T15:00:00Z</dc:date>
    </item>
    <item>
      <title>NMAP: Power Management Based on Network Packet Processing Mode Transition for Latency-Critical Workloads</title>
      <link>https://scholar.dgist.ac.kr/handle/20.500.11750/46892</link>
      <description>Title: NMAP: Power Management Based on Network Packet Processing Mode Transition for Latency-Critical Workloads
Author(s): Kang, Ki Dong; Park, Gyeongseo; Kim, Hyosang; Alian, Mohammad; Kim, Nam Sung; Kim, Daehoon
Abstract: Processor power management exploiting Dynamic Voltage and Frequency Scaling (DVFS) plays a crucial role in improving the data-center's energy efficiency. However, we observe that current power management policies in Linux (i.e., governors) often considerably increase tail response time (i.e., violate a given Service Level Objective (SLO)) and energy consumption of latency-critical applications. Furthermore, the previously proposed SLO-aware power management policies oversimplify network request processing and ignore the fact that network requests arrive at the application layer in bursts. Considering the complex interplay between the OS and network devices, we propose a power management framework exploiting network packet processing mode transitions in the OS to quickly react to the processing demands from the received network requests. Our proposed power management framework tracks the transitions between polling and interrupt in the network software stack to detect excessive packet processing on the cores and immediately react to the load changes by updating the voltage and frequency (V/F) states. Our experimental results show that our framework does not violate SLO and reduces energy consumption by up to 35.7% and 14.8% compared to Linux governors and state-of-the-art SLO-aware power management techniques, respectively. © 2021 Association for Computing Machinery.</description>
      <pubDate>Tue, 19 Oct 2021 15:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.dgist.ac.kr/handle/20.500.11750/46892</guid>
      <dc:date>2021-10-19T15:00:00Z</dc:date>
    </item>
    <item>
      <title>GreenDIMM: OS-Assisted DRAM Power Management for DRAM with a Sub-Array Granularity Power-Down State</title>
      <link>https://scholar.dgist.ac.kr/handle/20.500.11750/46891</link>
      <description>Title: GreenDIMM: OS-Assisted DRAM Power Management for DRAM with a Sub-Array Granularity Power-Down State
Author(s): Lee, Seunghak; Kang, Ki Dong; Lee, Hwanjun; Park, Hyungwon; Son, Younghoon; Kim, Nam Sung; Kim, Daehoon
Abstract: Power and energy consumed by DRAM comprising the main memory of data-center servers have increased substantially as the capacity and bandwidth of memory increase. Especially, the fraction of DRAM background power in DRAM total power is already high, and it will continue to increase with the decelerating DRAM technology scaling, as we will have to plug more DRAM modules in servers or stack more DRAM dies in a DRAM package to provide the necessary DRAM capacity in the future. To reduce the background power, we may exploit the low average utilization of the DRAM capacity in data-center servers (i.e., 40~60%) for DRAM power management. Nonetheless, the current DRAM power management supports low-power states only at the rank granularity, which becomes ineffective with memory interleaving techniques devised to disperse memory requests across ranks. That is, ranks need to be frequently woken up from low-power states with aggressive power management, which can significantly degrade system performance, or they do not get a chance to enter low-power states with conservative power management. To tackle such limitations of the current DRAM power management, we propose GreenDIMM, OS-assisted DRAM power management. Specifically, GreenDIMM first takes a memory block in physical address space mapped to a group of DRAM sub-arrays across every channel, rank, and bank as a unit of DRAM power management. This facilitates fine-grained DRAM power management while keeping the benefit of memory interleaving techniques. Second, GreenDIMM exploits memory on-/off-lining operations of the modern OS to dynamically remove/add memory blocks from/to the physical address space, depending on the utilization of memory capacity at run-time. Third, GreenDIMM implements a deep power-down state at the sub-array granularity to reduce the background power of the off-lined memory blocks. 
As the off-lined memory blocks are removed from the physical address space, the sub-arrays will not receive any memory request and stay in the power-down state until the memory blocks are explicitly on-lined by the OS. Our evaluation with a commercial server running diverse workloads shows that GreenDIMM can reduce DRAM and system power by 36% and 20%, respectively, with ~1% performance degradation. © 2021 Association for Computing Machinery.</description>
      <pubDate>Tue, 19 Oct 2021 15:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.dgist.ac.kr/handle/20.500.11750/46891</guid>
      <dc:date>2021-10-19T15:00:00Z</dc:date>
    </item>
  </channel>
</rss>