<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
  <channel>
    <title>Repository Collection: null</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/12964</link>
    <description />
    <pubDate>Sat, 04 Apr 2026 12:07:48 GMT</pubDate>
    <dc:date>2026-04-04T12:07:48Z</dc:date>
    <item>
      <title>Jack Unit: An Area- and Energy-Efficient Multiply-Accumulate (MAC) Unit Supporting Diverse Data Formats</title>
      <link>https://scholar.dgist.ac.kr/handle/20.500.11750/60123</link>
      <description>Title: Jack Unit: An Area- and Energy-Efficient Multiply-Accumulate (MAC) Unit Supporting Diverse Data Formats
Author(s): Noh, Seock-Hwan; Kim, Sungju; Kim, Seohyun; Kim, Daehoon; Kung, Jaeha; Kim, Yeseong
Abstract: In this work, we introduce an area- and energy-efficient multiply-accumulate (MAC) unit, named Jack Unit, that is a jack-of-all-trades, supporting various data formats such as integer (INT), floating point (FP), and microscaling data format (MX). It provides bit-level flexibility and enhances hardware efficiency by i) replacing the carry-save multiplier (CSM) in the FP multiplier with a precision-scalable CSM, ii) performing the adjustment of significands based on the exponent differences within the CSM, and iii) utilizing 2D sub-word parallelism. To assess effectiveness, we implemented the layout of the Jack unit and three baseline MAC units. Additionally, we designed an AI accelerator equipped with our Jack units to compare with a state-of-the-art AI accelerator supporting various data formats. The proposed MAC unit achieves an area reduction of 14.53∼50.25% and a power reduction of 4.76∼45.65% compared to the baseline MAC units. On five AI benchmarks, the accelerator designed with our Jack units improves energy efficiency by 1.32∼5.41× over the baseline across various data formats.</description>
      <pubDate>Wed, 06 Aug 2025 15:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.dgist.ac.kr/handle/20.500.11750/60123</guid>
      <dc:date>2025-08-06T15:00:00Z</dc:date>
    </item>
    <item>
      <title>Hyperdimensional Computing-Based Federated Learning in Mobile Robots through Synthetic Oversampling</title>
      <link>https://scholar.dgist.ac.kr/handle/20.500.11750/60050</link>
      <description>Title: Hyperdimensional Computing-Based Federated Learning in Mobile Robots through Synthetic Oversampling
Author(s): Lee, Hyunsei; Han, Woongjae; Kim, Hojeong; Kwon, Hyukjun; Jang, Shinhyoung; Suh, Ilhong; Kim, Yeseong
Abstract: Traditional federated learning frameworks, often reliant on deep neural networks, face challenges related to computational demands and privacy risks. In this paper, we present a novel Hyperdimensional (HD) Computing-based federated learning framework designed for resource-constrained mobile robots. Unlike other HD-based learning, our approach introduces dynamic encoding, which improves both model accuracy and privacy by continuously updating hypervector representations. To further address the issue of imbalanced data, especially prevalent in robotics tasks, we propose a hypervector oversampling technique, enhancing model robustness. Extensive evaluations on LiDAR-equipped mobile robots demonstrate that our oversampling method outperforms state-of-the-art HD computing frameworks, achieving up to a 22.9% increase in accuracy while maintaining computational efficiency.</description>
      <pubDate>Sun, 18 May 2025 15:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.dgist.ac.kr/handle/20.500.11750/60050</guid>
      <dc:date>2025-05-18T15:00:00Z</dc:date>
    </item>
    <item>
      <title>A Diffusion-Based Framework for Configurable and Realistic Multi-Storage Trace Generation</title>
      <link>https://scholar.dgist.ac.kr/handle/20.500.11750/60049</link>
      <description>Title: A Diffusion-Based Framework for Configurable and Realistic Multi-Storage Trace Generation
Author(s): Kim, Seohyun; Lee, Junyoung; Park, Jongho; Koo, Jinhyung; Lee, Sungjin; Kim, Yeseong
Abstract: We propose DiTTO, a novel diffusion-based framework for generating realistic, precisely configurable, and diverse multi-device storage traces. Leveraging advanced diffusion techniques, DiTTO enables the synthesis of high-fidelity continuous traces that capture temporal dynamics and inter-device dependencies with user-defined configurations. Our experimental results demonstrate that DiTTO can generate traces with high fidelity and diversity while aligning closely with guided configurations, with an error of only 8%.</description>
      <pubDate>Mon, 19 May 2025 15:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.dgist.ac.kr/handle/20.500.11750/60049</guid>
      <dc:date>2025-05-19T15:00:00Z</dc:date>
    </item>
    <item>
      <title>FlexNeRFer: A Multi-Dataflow, Adaptive Sparsity-Aware Accelerator for On-Device NeRF Rendering</title>
      <link>https://scholar.dgist.ac.kr/handle/20.500.11750/58616</link>
      <description>Title: FlexNeRFer: A Multi-Dataflow, Adaptive Sparsity-Aware Accelerator for On-Device NeRF Rendering
Author(s): Noh, Seock-Hwan; Shin, Banseok; Choi, Jeik; Lee, Seungpyo; Kung, Jaeha; Kim, Yeseong
Abstract: Neural Radiance Fields (NeRF), an AI-driven approach for 3D view reconstruction, has demonstrated impressive performance, sparking active research across fields. As a result, a range of advanced NeRF models has emerged, leading on-device applications to increasingly adopt NeRF for highly realistic scene reconstructions. With the advent of diverse NeRF models, NeRF-based applications leverage a variety of NeRF frameworks, creating the need for hardware capable of efficiently supporting these models. However, GPUs fail to meet the performance, power, and area (PPA) cost demanded by these on-device applications, or are specialized for specific NeRF algorithms, resulting in lower efficiency when applied to other NeRF models. To address this limitation, in this work, we introduce FlexNeRFer, an energy-efficient versatile NeRF accelerator. The key components enabling the enhancement of FlexNeRFer include: i) a flexible network-on-chip (NoC) supporting multi-dataflow and sparsity on precision-scalable MAC array, and ii) efficient data storage using an optimal sparsity format based on the sparsity ratio and precision modes. To evaluate the effectiveness of FlexNeRFer, we performed a layout implementation using 28nm CMOS technology. Our evaluation shows that FlexNeRFer achieves 8.2∼243.3× speedup and 24.1∼520.3× improvement in energy efficiency over a GPU (i.e., NVIDIA RTX 2080 Ti), while demonstrating 4.2∼86.9× speedup and 2.3∼47.5× improvement in energy efficiency compared to a state-of-the-art NeRF accelerator (i.e., NeuRex).  © 2025 Copyright held by the owner/author(s).</description>
      <pubDate>Tue, 24 Jun 2025 15:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://scholar.dgist.ac.kr/handle/20.500.11750/58616</guid>
      <dc:date>2025-06-24T15:00:00Z</dc:date>
    </item>
  </channel>
</rss>

