FlexBlock: A Flexible DNN Training Accelerator with Multi-Mode Block Floating Point Support
Issued Date
2023-09
Citation
Noh, Seockhwan. (2023-09). FlexBlock: A Flexible DNN Training Accelerator with Multi-Mode Block Floating Point Support. IEEE Transactions on Computers, 72(9), 2522–2535. doi: 10.1109/TC.2023.3253050
Type
Article
Author Keywords
Block floating point; DNN training accelerator; low precision training; precision scalability
ISSN
0018-9340
Abstract
When training deep neural networks (DNNs), expensive floating point arithmetic units are used in GPUs or custom neural processing units (NPUs). To reduce the burden of floating point arithmetic, the community has started exploring more efficient data representations, e.g., block floating point (BFP). The BFP format allows a group of values to share an exponent, which effectively reduces the memory footprint and enables cheaper fixed point arithmetic for multiply-accumulate (MAC) operations. However, existing BFP-based DNN accelerators target a specific precision, making them less versatile. In this paper, we present FlexBlock, a DNN training accelerator with three BFP modes, which can differ among the activation, weight, and gradient tensors. By configuring FlexBlock to a lower BFP precision, the number of MACs handled by the core increases by up to 4× in 8-bit mode or 16× in 4-bit mode compared to 16-bit mode. To reach this theoretical upper bound, FlexBlock maximizes core utilization across precision levels and layer types, and allows dynamic precision control to keep throughput at its peak without sacrificing training accuracy. We evaluate the effectiveness of FlexBlock using representative DNNs on the CIFAR, ImageNet, and WMT14 datasets. As a result, training in FlexBlock significantly improves training speed by 1.5∼5.3× and energy efficiency by 2.4∼7.0× compared to other training accelerators. © 2023 IEEE
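The abstract's central idea, a block of values sharing one exponent with cheap fixed-point mantissas, can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, block size, and mantissa width are illustrative assumptions, and the quantization simply picks the shared exponent from the block's largest magnitude:

```python
import numpy as np

def to_bfp(values, mantissa_bits=8, block_size=16):
    """Quantize a 1-D array to a block floating point (BFP) approximation:
    each block of `block_size` values shares one exponent, and each value
    is stored as a signed fixed-point mantissa of `mantissa_bits` bits.
    (Illustrative sketch, not FlexBlock's actual hardware scheme.)"""
    values = np.asarray(values, dtype=np.float64)
    out = np.empty_like(values)
    for start in range(0, len(values), block_size):
        block = values[start:start + block_size]
        max_abs = np.max(np.abs(block))
        if max_abs == 0.0:
            out[start:start + block.size] = 0.0
            continue
        # Shared exponent: chosen so the largest magnitude fits the mantissa range.
        shared_exp = int(np.floor(np.log2(max_abs)))
        scale = 2.0 ** (shared_exp - (mantissa_bits - 1))
        # Fixed-point mantissas: round to integers, clamp to the signed range.
        m = np.clip(np.round(block / scale),
                    -(2 ** (mantissa_bits - 1)),
                    2 ** (mantissa_bits - 1) - 1)
        out[start:start + block.size] = m * scale
    return out
```

Because every value in a block reuses the same exponent, a MAC over two BFP blocks reduces to integer multiply-accumulates on the mantissas plus one exponent addition per block, which is the efficiency the abstract refers to.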
URI
http://hdl.handle.net/20.500.11750/47952
DOI
10.1109/TC.2023.3253050
Publisher
IEEE Computer Society

