<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/6125">
    <title>Repository Community: null</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/6125</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/60060" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/59052" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/58953" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/58724" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-21T16:45:29Z</dc:date>
  </channel>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/60060">
    <title>Revisiting Trim for CXL Memory</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/60060</link>
    <description>Title: Revisiting Trim for CXL Memory
Author(s): Lee, Hayan; Kim, Jungwoo; Lee, Wookyung; Park, Juhyung; Jung, Sanghyuk; Han, Jinki; Kim, Bryan S.; Lee, Sungjin; Lee, Eunji
Abstract: The expansion of memory disaggregation, driven by data-centric applications, increases heterogeneity in memory systems. This shift enables inexpensive, yet lifetime-limited, flash memory to be used as a memory expansion module. We argue that TRIM should be introduced into memory management systems to respond effectively to this transition. In this position paper, we explore the potential adoption of flash memory as memory expansion and present an analytical model that offers a straightforward yet rigorous evaluation of TRIM&apos;s effectiveness. Using this model and characteristics extracted from real-world workloads, we evaluate the effectiveness of TRIM in scalable memory systems and prove its necessity.</description>
    <dc:date>2025-07-09T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/59052">
    <title>Key-value storage device, host and host storage system</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/59052</link>
    <description>Title: Key-value storage device, host and host storage system
Author(s): 임준수; 박주형; 이성진; 구진형
Abstract: The invention provides a key-value storage device, a host, and a host storage system. A host in communication with a key-value storage device includes: a host memory storing a computer program; and a host controller including a processor configured to run the computer program. When executed, the computer program causes the processor to process a file or directory related to file data received from an application stored in the host, map the file or directory to a key-value object that can be stored in the key-value storage device, and provide the file or directory to the key-value storage device. In one embodiment, a method includes storing a key-value object, converting a file operation requested by an application stored in the host into a key-value operation executable in the key-value storage device, managing transactions related to the key-value object and the key-value operation, providing the transactions to the key-value storage device, and abstracting a file or directory into a meta-object or a data object.</description>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/58953">
    <title>Lightweight KV-based Distributed Store for Datacenters</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/58953</link>
    <description>Title: Lightweight KV-based Distributed Store for Datacenters
Author(s): Chung, Chanwoo; Koo, Jinhyung; Arvind; Lee, Sungjin
Abstract: A great deal of digital data is generated every day by content providers, end-users, and even IoT sensors. This data is stored in and managed by thousands of distributed storage nodes, each comprised of a power-hungry x86 Xeon server with a huge amount of DRAM and an array of HDDs or SSDs grouped by RAID. Such clusters take up a large amount of space in datacenters and require a lot of electricity and cooling facilities. Therefore, packing as much data as possible into a smaller datacenter space and managing it in an energy- and performance-efficient manner can result in enormous savings.</description>
    <dc:date>2017-07-09T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/58724">
    <title>Artificial Intelligence Inference and Training System and Method Using SSD Offloading</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/58724</link>
    <description>Title: Artificial Intelligence Inference and Training System and Method Using SSD Offloading
Author(s): 이성진; 김정우; 오성균
Abstract: An artificial intelligence inference and training system using SSD offloading, according to one embodiment of the present invention, comprises: storage servers for storing data, comprising a plurality of SSDs each of which includes a first computing device; a transfer learning server connected to the storage servers via a network, which generates an artificial intelligence model updated through periodic training; an inference server for extracting metadata from the data by using the artificial intelligence model received from the transfer learning server; and a database for storing the extracted metadata. The transfer learning server comprises a second computing device; a first part of the training is carried out through the first computing devices, and a second part, comprising the remainder of the training, is carried out through the second computing device.</description>
  </item>
</rdf:RDF>

