All-Rounder: A Flexible AI Accelerator With Diverse Data Format Support and Morphable Structure for Multi-DNN Processing

Title
All-Rounder: A Flexible AI Accelerator With Diverse Data Format Support and Morphable Structure for Multi-DNN Processing
Issued Date
2025-05
Citation
Noh, Seock-Hwan. (2025-05). All-Rounder: A Flexible AI Accelerator With Diverse Data Format Support and Morphable Structure for Multi-DNN Processing. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 33(5), 1264–1277. doi: 10.1109/TVLSI.2025.3540346
Type
Article
Author Keywords
multitenant execution; deep neural networks; hardware acceleration; multiple data format support
ISSN
1063-8210
Abstract
Recognizing the explosive increase in the use of artificial intelligence (AI)-based applications, several industrial companies have developed custom application-specific integrated circuits (ASICs) (e.g., Google TPU, IBM RaPiD, and Intel NNP-I/NNP-T) and built hyperscale cloud infrastructure around them. These ASICs perform the inference or training operations of AI models requested by users. Since these AI models use different data formats and operation types, the ASICs need to support diverse data formats and various operation shapes. However, previous ASIC solutions fulfill these requirements only partially or not at all. To overcome these limitations, we first present an area-efficient multiplier, named the all-in-one multiplier, which supports multiple bit-widths for both integer (INT) and floating-point (FP) data types. We then build a multiply-and-accumulate (MAC) array equipped with these multi-format multipliers. In addition, the MAC array can be partitioned into multiple blocks that can be flexibly fused to support various deep neural network (DNN) operation types. We evaluate the practical effectiveness of the proposed MAC array by building an accelerator around it, named All-rounder. According to our evaluation, the proposed all-in-one multiplier occupies 1.49× less area than baselines with dedicated multipliers for each data format. We then compare the performance and energy efficiency of the proposed All-rounder with three different accelerators, showing consistent speedup and higher efficiency across AI benchmarks ranging from vision to large language model (LLM)-based language tasks. © 2025 IEEE. All rights reserved.
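
Note: the abstract does not describe the all-in-one multiplier's internal datapath. Multi-format multipliers of this kind commonly reuse a single integer multiplier core both for INT products and for the mantissa multiplication inside an FP product. The Python sketch below is purely illustrative and is not the All-rounder design; the function names (int_mul, int8_mul, bf16_mul) are assumptions for the example. It shows how an INT8 product and a bfloat16 product can both be mapped onto one shared integer multiply via sign XOR, exponent addition, and a mantissa product.

# Illustrative sketch only (assumed names; not the All-rounder design):
# one shared integer multiplier core serving both INT8 and bfloat16 products.
import struct

def int_mul(a, b):
    # Shared integer multiplier core (stand-in for the hardware multiplier array).
    return a * b

def int8_mul(a, b):
    # INT8 x INT8 -> INT16 product, mapped directly onto the shared core.
    return int_mul(a, b)

def bf16_mul(a, b):
    # bfloat16-style multiply built from sign XOR, exponent addition,
    # and an 8x8-bit mantissa product on the same shared core.
    def decompose(x):
        bits = struct.unpack('>I', struct.pack('>f', x))[0] >> 16  # float32 truncated to bf16
        sign = bits >> 15
        exp = (bits >> 7) & 0xFF
        mant = (bits & 0x7F) | (0x80 if exp != 0 else 0)  # restore hidden bit for normals
        return sign, exp, mant
    sa, ea, ma = decompose(a)
    sb, eb, mb = decompose(b)
    if ea == 0 or eb == 0:            # treat zeros/subnormals as zero for brevity
        return 0.0
    sign = sa ^ sb
    exp = ea + eb - 127               # add exponents, remove one bias
    prod = int_mul(ma, mb)            # mantissa product on the shared integer core
    if prod & (1 << 15):              # normalize: 1.x * 1.x may reach [2.0, 4.0)
        prod >>= 1
        exp += 1
    mant = (prod >> 7) & 0x7F         # truncate back to 7 fraction bits
    bits = (sign << 15) | ((exp & 0xFF) << 7) | mant
    return struct.unpack('>f', struct.pack('>I', bits << 16))[0]

print(int8_mul(-7, 12))   # -84
print(bf16_mul(1.5, 2.5)) # 3.75

Running the sketch, both calls use the same int_mul core, illustrating how one multiplier array can serve multiple data formats.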
URI
http://hdl.handle.net/20.500.11750/58297
DOI
10.1109/TVLSI.2025.3540346
Publisher
Institute of Electrical and Electronics Engineers