<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/12962">
    <title>Repository Community: null</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/12962</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/60123" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/60050" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/60049" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/60010" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-04T14:59:01Z</dc:date>
  </channel>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/60123">
    <title>Jack Unit: An Area- and Energy-Efficient Multiply-Accumulate (MAC) Unit Supporting Diverse Data Formats</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/60123</link>
    <description>Title: Jack Unit: An Area- and Energy-Efficient Multiply-Accumulate (MAC) Unit Supporting Diverse Data Formats
Author(s): 노석환; Kim, Sungju; Kim, Seohyun; Kim, Daehoon; Kung, Jaeha; Kim, Yeseong
Abstract: In this work, we introduce an area- and energy-efficient multiply-accumulate (MAC) unit, named Jack Unit, that is a jack-of-all-trades, supporting various data formats such as integer (INT), floating point (FP), and microscaling data format (MX). It provides bit-level flexibility and enhances hardware efficiency by i) replacing the carry-save multiplier (CSM) in the FP multiplier with a precision-scalable CSM, ii) performing the adjustment of significands based on the exponent differences within the CSM, and iii) utilizing 2D sub-word parallelism. To assess effectiveness, we implemented the layout of the Jack Unit and three baseline MAC units. Additionally, we designed an AI accelerator equipped with our Jack Units to compare with a state-of-the-art AI accelerator supporting various data formats. The proposed MAC unit achieves an area reduction of 14.53~50.25% and a power reduction of 4.76~45.65% compared to the baseline MAC units. On five AI benchmarks, the accelerator designed with our Jack Units improves energy efficiency by 1.32~5.41× over the baseline across various data formats.</description>
    <dc:date>2025-08-06T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/60050">
    <title>Hyperdimensional Computing-Based Federated Learning in Mobile Robots through Synthetic Oversampling</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/60050</link>
    <description>Title: Hyperdimensional Computing-Based Federated Learning in Mobile Robots through Synthetic Oversampling
Author(s): Lee, Hyunsei; Han, Woongjae; Kim, Hojeong; Kwon, Hyukjun; Jang, Shinhyoung; Suh, Ilhong; Kim, Yeseong
Abstract: Traditional federated learning frameworks, often reliant on deep neural networks, face challenges related to computational demands and privacy risks. In this paper, we present a novel Hyperdimensional (HD) Computing-based federated learning framework designed for resource-constrained mobile robots. Unlike other HD-based learning, our approach introduces dynamic encoding, which improves both model accuracy and privacy by continuously updating hypervector representations. To further address the issue of imbalanced data, especially prevalent in robotics tasks, we propose a hypervector oversampling technique, enhancing model robustness. Extensive evaluations on LiDAR-equipped mobile robots demonstrate that our oversampling method outperforms state-of-the-art HD computing frameworks, achieving up to a 22.9% increase in accuracy while maintaining computational efficiency.</description>
    <dc:date>2025-05-18T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/60049">
    <title>A Diffusion-Based Framework for Configurable and Realistic Multi-Storage Trace Generation</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/60049</link>
    <description>Title: A Diffusion-Based Framework for Configurable and Realistic Multi-Storage Trace Generation
Author(s): Kim, Seohyun; Lee, Junyoung; Park, Jongho; Koo, Jinhyung; Lee, Sungjin; Kim, Yeseong
Abstract: We propose DiTTO, a novel diffusion-based framework for generating realistic, precisely configurable, and diverse multi-device storage traces. Leveraging advanced diffusion techniques, DiTTO enables the synthesis of high-fidelity continuous traces that capture temporal dynamics and inter-device dependencies with user-defined configurations. Our experimental results demonstrate that DiTTO can generate traces with high fidelity and diversity while aligning closely with guided configurations with only 8% errors.</description>
    <dc:date>2025-05-19T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/60010">
    <title>DeepPM: Predicting Performance and Energy Consumption of Program Binaries Using Transformers</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/60010</link>
    <description>Title: DeepPM: Predicting Performance and Energy Consumption of Program Binaries Using Transformers
Author(s): Shim, Jun S.; Chang, Hyeonji; Kim, Yeseong; Kim, Jihong
Abstract: Accurate estimation of performance and energy consumption is critical for optimizing application efficiency on diverse hardware platforms. Traditional methods often rely on profiling and measurements, requiring at least one execution, making them time-consuming and resource-intensive. This article introduces the Deep Power Meter (DeepPM) framework, leveraging deep learning, specifically the Transformer architecture, to predict performance and energy consumption of basic blocks directly from compiled binaries, eliminating the need for explicit measurement processes. The DeepPM model effectively learns the performance and energy consumption of basic blocks, enabling accurate predictions for each. Furthermore, the framework enhances applicability across different ISAs and microarchitectures, addressing limitations of state-of-the-art ML-based techniques restricted to specific processor architectures. Experimental results using the SPEC CPU 2017 benchmark suite show that DeepPM achieves significantly lower prediction errors compared to state-of-the-art ML-based techniques, with a 24% improvement in performance and an 18% improvement in energy consumption for x86 basic blocks, and similar gains for ARM processors. Fine-tuning with minimal data from the Phoronix Test Suite further validates DeepPM’s robustness, achieving an error of approximately 13.7%, close to the fully trained model’s 13.3% error. These findings demonstrate DeepPM’s ability to enhance the accuracy and efficiency of performance and energy consumption predictions, making it a valuable tool for optimizing computing systems across diverse hardware environments.</description>
    <dc:date>2025-10-31T15:00:00Z</dc:date>
  </item>
</rdf:RDF>

