Recognizing the explosive growth of artificial intelligence (AI)-based applications, several companies have developed custom application-specific integrated circuits (ASICs) (e.g., Google TPU, IBM RaPiD, and Intel NNP-I/NNP-T) and built hyperscale cloud infrastructure around them. These ASICs execute the inference or training operations of AI models requested by users. Because AI models use different data formats and operation types, the ASICs must support diverse data formats and various operation shapes. However, previous ASIC solutions fulfill these requirements only partially or not at all. To overcome these limitations, we first present an area-efficient multiplier, named the all-in-one multiplier, which supports multiple bit-widths for both integer (INT) and floating-point (FP) data types. We then build a multiply-and-accumulate (MAC) array from these multi-format multipliers. In addition, the MAC array can be partitioned into multiple blocks that can be flexibly fused to support various deep neural network (DNN) operation types. We evaluate the practical effectiveness of the proposed MAC array by building an accelerator around it, named All-rounder. In our evaluation, the proposed all-in-one multiplier occupies 1.49× less area than baselines that use dedicated multipliers for each data format. We also compare the performance and energy efficiency of All-rounder against three accelerators, showing consistent speedups and higher efficiency across AI benchmarks ranging from vision to large language model (LLM)-based language tasks.
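
To make the multi-format MAC idea concrete, the following is a minimal functional sketch in Python of a MAC unit that accepts several data formats and accumulates in a wider type, as such hardware typically does to avoid overflow. The format names, the quantize/mac helpers, and the rounding behavior are illustrative assumptions for this sketch; they are not the paper's actual all-in-one multiplier or All-rounder design.

    import numpy as np

    # Illustrative functional model of a multi-format MAC unit.
    # NOTE: formats, helper names, and rounding choices are assumptions
    # for illustration, not the RTL described in the abstract.

    def quantize(x: np.ndarray, fmt: str) -> np.ndarray:
        """Coerce values to one of several data formats a MAC array might support."""
        if fmt == "int8":
            return np.clip(np.rint(x), -128, 127).astype(np.int32)
        if fmt == "int4":
            return np.clip(np.rint(x), -8, 7).astype(np.int32)
        if fmt == "fp16":
            return x.astype(np.float16)
        raise ValueError(f"unsupported format: {fmt}")

    def mac(a: np.ndarray, b: np.ndarray, fmt: str):
        """Multiply element-wise in the given format, then accumulate in a
        wider type, mirroring hardware MACs that keep a wide accumulator."""
        qa, qb = quantize(a, fmt), quantize(b, fmt)
        # Integer products accumulate into int64; FP16 products into float32.
        acc_dtype = np.int64 if fmt.startswith("int") else np.float32
        return (qa.astype(acc_dtype) * qb.astype(acc_dtype)).sum()

    rng = np.random.default_rng(0)
    a, b = rng.uniform(-4, 4, 64), rng.uniform(-4, 4, 64)
    for fmt in ("int8", "int4", "fp16"):
        print(fmt, mac(a, b, fmt))

In this toy model, switching the fmt argument plays the role that reconfiguring the shared multiplier hardware plays in the paper: one datapath serves several numeric formats instead of dedicating a separate multiplier to each.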