<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
  <channel>
    <title>Repository Collection</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/6297</link>
    <description />
    <pubDate>Sat, 04 Apr 2026 15:17:59 GMT</pubDate>
    <dc:date>2026-04-04T15:17:59Z</dc:date>
    <item>
      <title>Storage Abstractions for SSDs: The Past, Present, and Future</title>
      <link>https://scholar.dgist.ac.kr/handle/20.500.11750/58299</link>
      <description>Title: Storage Abstractions for SSDs: The Past, Present, and Future
Author(s): Zhang, Xiangqun; Bhimani, Janki; Pei, Shuyi; Lee, Eunji; Lee, Sungjin; Seong, Yoon Jae; Kim, Eui Jin; Choi, Changho; Nam, Eyee Hyun; Choi, Jongmoo; Kim, Bryan Suk
Abstract: This article traces the evolution of SSD (solid-state drive) interfaces, examining the transition from the block storage paradigm inherited from hard disk drives to SSD-specific standards customized to flash memory. Early SSDs conformed to the block abstraction for compatibility with the existing software storage stack, but studies and deployments show that this limits the performance potential of SSDs. As a result, new SSD-specific interface standards emerged not only to capitalize on the low latency and abundant internal parallelism of SSDs, but also to include new command sets that diverge from the longstanding block abstraction. We first describe flash memory technology in the context of the block storage abstraction and the components within an SSD that provide the block storage illusion. We then describe the genealogy and relationships among academic research and industry standardization efforts for SSDs, along with their rise and fall in popularity. We classify these works into four evolving branches: (1) extending the block abstraction with host-SSD hints/directives; (2) enhancing host-level control over SSDs; (3) offloading host-level management to SSDs; and (4) making SSDs byte-addressable. By dissecting these trajectories, the article also sheds light on emerging challenges and opportunities, providing a roadmap for future research and development in SSD technologies.  © 2025 Copyright held by the owner/author(s).</description>
      <pubDate>Fri, 31 Jan 2025 15:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.dgist.ac.kr/handle/20.500.11750/58299</guid>
      <dc:date>2025-01-31T15:00:00Z</dc:date>
    </item>
    <item>
      <title>An Adaptive Data I/O Management Technique for Maximizing the Real-Time Guarantee Duration of Flash Memory</title>
      <link>https://scholar.dgist.ac.kr/handle/20.500.11750/57471</link>
      <description>Title: An Adaptive Data I/O Management Technique for Maximizing the Real-Time Guarantee Duration of Flash Memory
Author(s): 김경택; 김명석; 이성진; 좌훈승
Abstract: This paper demonstrates that the variation in NAND flash operation latency with P/E cycles must be considered to ensure real-time guarantees. To address this issue, we propose a new schedulability analysis that accounts for I/O execution times that change as the P/E cycle increases, and, based on this analysis, a data I/O management mechanism designed to maximize the duration of real-time guarantees. The proposed technique improves the duration of real-time guarantees compared to conventional flash firmware and wear-leveling, a representative technique for extending the lifespan of flash storage. In experiments on randomly generated task sets, the proposed technique extended the real-time guarantee duration by up to 81% compared to conventional management methods and by up to 63% compared to wear-leveling. Our solution is expected to enhance the reliability and performance of systems requiring real-time data I/O.</description>
      <pubDate>Sat, 30 Nov 2024 15:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.dgist.ac.kr/handle/20.500.11750/57471</guid>
      <dc:date>2024-11-30T15:00:00Z</dc:date>
    </item>
    <item>
      <title>Program context-assisted address translation for high-capacity SSDs</title>
      <link>https://scholar.dgist.ac.kr/handle/20.500.11750/57292</link>
      <description>Title: Program context-assisted address translation for high-capacity SSDs
Author(s): Li, Xiaochang; Kim, Minjae; Lee, Sungjin; Zhai, Zhengjun; Kim, Jihong
Abstract: As the capacity of NAND flash-based SSDs keeps increasing, it becomes crucial to design a memory-efficient address translation algorithm that offers high performance when a translation table cannot be entirely loaded in a controller DRAM. Existing flash translation layers (FTL) employ demand-based address translation which caches popular mapping information in DRAM by leveraging locality of I/O references. Owing to the lack of information about detailed behaviors of applications, however, existing demand-based FTLs often suffer from many translation-table misses and thus result in sub-optimal performance. In this paper, we propose a new Program context-AssisteD Flash Translation Layer, called PADFTL. Unlike existing FTLs which are implemented as the form of firmware, PADFTL is vertically integrated with a host-level I/O classifier which provides useful hints for an FTL in an SSD to make a better decision in managing a translation table. The host-level I/O classifier monitors unique behaviors of applications by analyzing their program contexts and categorizes I/O patterns into four types, (1) Loop, (2) Hot, (3) Sequential, and (4) Random, which are then delivered to an SSD through extended interfaces. The SSD-side module of PADFTL partitions a controller DRAM into four zones and isolates mapping information associated with different I/O patterns into separate zones. By employing cache management strategies optimized for individual zones, PADFTL can lower the overall translation-table miss ratio. To evaluate the effectiveness of PADFTL, we implement the host-level classifier in the Linux kernel and PADFTL&amp;apos;s FTL in a trace-driven FTL simulator. In our experimental results, compared to the state-of-the-art FTL, PADFTL increases the overall table hit ratio by 16% while reducing the address translation time by up to 20% on average. © 2024</description>
      <pubDate>Tue, 31 Dec 2024 15:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.dgist.ac.kr/handle/20.500.11750/57292</guid>
      <dc:date>2024-12-31T15:00:00Z</dc:date>
    </item>
    <item>
      <title>An Integrated Host-Device Address Translation Table Cache Management Technique for Improving Mobile Storage Performance</title>
      <link>https://scholar.dgist.ac.kr/handle/20.500.11750/47697</link>
      <description>Title: An Integrated Host-Device Address Translation Table Cache Management Technique for Improving Mobile Storage Performance
Author(s): 김윤아; 최인혁; 이성진; 김지홍
Abstract: As the capacity of storage devices grows, so does the on-device memory required to manage the address translation table of a NAND flash-based storage device. In Universal Flash Storage (UFS), the mobile storage standard, the built-in SRAM cannot easily be enlarged due to hardware and cost constraints, making it challenging to manage the increased address translation table. To resolve this problem, Host Performance Booster (HPB), which borrows host-side DRAM to hold portions of the address translation table, was introduced. In this paper, we demonstrate that an HPB-enabled system does not manage host memory and device-side SRAM in an integrated manner, thereby wasting the given memory resources, and we propose integrated mapping table management techniques that consider the distinctive features of each cache layer. By adopting these techniques, we minimize wasted cache resources, reduce storage latency, and prevent unnecessary degradation of storage lifetime. In trace-driven experiments on mobile application workloads, the cache hit ratio improved by 5%, the wasted cache space was reduced by 95%, and the number of device-side garbage collections triggered by address translation table updates decreased by 43% compared to the baseline scheme. © 2023 Korean Institute of Information Scientists and Engineers (KIISE)</description>
      <pubDate>Tue, 31 Oct 2023 15:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.dgist.ac.kr/handle/20.500.11750/47697</guid>
      <dc:date>2023-10-31T15:00:00Z</dc:date>
    </item>
  </channel>
</rss>

