<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/846">
    <title>Repository Community: null</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/846</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/59306" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/59252" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/58977" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/58750" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-05T09:00:22Z</dc:date>
  </channel>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/59306">
    <title>Dynamic cache allocation method for a hierarchical memory environment using memory interleaving</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/59306</link>
    <description>Title: Dynamic cache allocation method for a hierarchical memory environment using memory interleaving
Author(s): 정진; 소진인; 이종건; 김대훈; 이환준</description>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/59252">
    <title>Memory system and method of operating the memory system</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/59252</link>
    <description>Title: Memory system and method of operating the memory system
Author(s): 김대훈; 이종건; 이환준; 김민호; 소진인; 박형원; 정진; 정예지; 장민우
Abstract: Memory systems and methods for operating the same. A memory system comprises a first memory, a second memory having an operating speed different from that of the first memory, a storage unit configured to store an instruction, a prefetcher configured to update prefetcher data in response to the occurrence of cache hits, and a processor configured to execute the instruction stored in the storage unit. When the instruction is executed, the processor is configured to generate prefetcher friendly data by filtering the prefetcher data, set a prefetcher friendly bit in a first pointer area corresponding to the first memory and a second pointer area corresponding to the second memory based on the prefetcher friendly data, and determine whether to migrate data of the first pointer area and the second pointer area, taking into account the reference bit and the prefetcher friendly bit of the first and second pointer areas.</description>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/58977">
    <title>NCAP: Network-Driven, Packet Context-Aware Power Management for Client-Server Architecture</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/58977</link>
    <description>Title: NCAP: Network-Driven, Packet Context-Aware Power Management for Client-Server Architecture
Author(s): Alian, Mohammad; Abulila, Ahmed H. M. O.; Jindal, Lokesh; Kim, Daehoon; Kim, Nam Sung
Abstract: The rate of network packets encapsulating requests from clients can significantly affect the utilization, and thus the performance and sleep states, of processors in servers deploying a power management policy. To improve energy efficiency, servers may adopt an aggressive power management policy that frequently transitions a processor to a low-performance or sleep state at low utilization. However, such servers may not respond to a sudden increase in the rate of requests from clients early enough, due to the considerable performance penalty of transitioning a processor from a sleep or low-performance state to a high-performance state. This in turn entails violations of a service level agreement (SLA), discourages server operators from deploying an aggressive power management policy, and thus wastes energy during low-utilization periods. For both fast response time and high energy efficiency, we propose NCAP, Network-driven, packet Context-Aware Power management for client-server architecture. NCAP enhances a network interface card (NIC) and its driver such that they can examine received and transmitted network packets, determine the rate of network packets containing latency-critical requests, and proactively transition a processor to an appropriate performance or sleep state. To demonstrate its efficacy, we evaluate on-line data-intensive (OLDI) applications and show that a server deploying NCAP consumes 37~61% lower processor energy than a baseline server while satisfying a given SLA at various load levels. © 2017 IEEE.</description>
    <dc:date>2017-02-05T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/58750">
    <title>Processor, system, and method of operation for dynamic cache allocation</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/58750</link>
    <description>Title: Processor, system, and method of operation for dynamic cache allocation
Author(s): 이종건; 정진; 소진인; 이환준; 김대훈
Abstract: Processors, systems, and methods of operation are provided for dynamic cache allocation. The processor includes: a processing core configured to process each of a plurality of requests by accessing a respective one of a first memory and a second memory; a delay monitor configured to generate first delay information and second delay information, the first delay information including a first access delay to the first memory and the second delay information including a second access delay to the second memory; a plurality of cache lines, the plurality of cache lines being divided into a first partition and a second partition; and a decision engine configured to allocate each of the plurality of cache lines to one of the first partition and the second partition based on the first delay information and the second delay information.</description>
  </item>
</rdf:RDF>

