
On the Improvement of Hardware Utilization of Sparse GEMM Accelerators

Title
On the Improvement of Hardware Utilization of Sparse GEMM Accelerators
Alternative Title
On the Hardware Improvement of Sparse Matrix Multiplication Accelerators
Author(s)
Banseok Shin
DGIST Authors
Banseok Shin; Jaeha Kung; Yeseong Kim
Advisor
Jaeha Kung
Co-Advisor(s)
Yeseong Kim
Issued Date
2023
Awarded Date
2023-02-01
Type
Thesis
Description
SpGEMM, Distribution Network, Reduction Network, Data tiling
Abstract
Deep learning is being used and researched across industries such as image processing, natural language processing, and recommendation services. Model sizes are also growing in tandem with advances in deep learning to increase accuracy, and sparse matrix multiplication accounts for the majority of operations in deep learning models. As a result, there is an increasing need for research on accelerators for sparse matrix multiplication. One accelerator that supports the sparse general matrix-matrix multiplication (SpGEMM) operation is SIGMA (A Sparse and Irregular GEMM Accelerator). However, SIGMA's operation networks and index-matching process are inefficient. We propose improvements in three aspects to address these problems. First, the distribution network's redundant hardware modules are eliminated; when multiple Flex-DPEs are controlled by a NoC (network-on-chip), area and power can be reduced by using a network from which the unnecessary parts have been removed. Second, we propose a new architecture that stores and accumulates the partial sums of the reduction network using only the output flip-flops. Finally, for fast processing, we propose using the sparsity of each matrix, the number of operation elements, and the matrix size as indicators for choosing an efficient partitioning approach via a precomputed look-up table (LUT). The proposed distribution and reduction network enhancements reduce the total hardware area by roughly 21.8% and the power by 37.5%. When the stationary matrix's sparsity is 80% and the streaming matrix's sparsity is 99%, consulting the LUT and tiling by a factor of 2 reduces the clock cycles by around 80%.
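The LUT-based tiling selection from the abstract can be sketched as follows. This is a minimal illustrative sketch only: the sparsity buckets, table contents, and the rule used to fill the table are assumptions for demonstration, not values from the thesis (the real table would be populated from pre-measured clock-cycle counts).

```python
# Hedged sketch of LUT-based tiling-factor selection: the sparsities of the
# stationary and streaming matrices index a precomputed table, so no runtime
# search is needed. All bucket values and table entries are illustrative.

STAT_BUCKETS = (0.5, 0.8, 0.9)    # stationary-matrix sparsity buckets (assumed)
STRM_BUCKETS = (0.9, 0.99)        # streaming-matrix sparsity buckets (assumed)

def build_tiling_lut():
    """Precompute (stationary bucket, streaming bucket) -> tiling factor.
    Toy rule: very sparse operands favor tiling the work by 2."""
    lut = {}
    for stat in STAT_BUCKETS:
        for strm in STRM_BUCKETS:
            lut[(stat, strm)] = 2 if (stat >= 0.8 and strm >= 0.99) else 1
    return lut

def pick_tiling(lut, stat_sparsity, strm_sparsity):
    """Snap measured sparsities to the nearest bucket and look up the
    precomputed tiling factor."""
    stat = min(STAT_BUCKETS, key=lambda b: abs(b - stat_sparsity))
    strm = min(STRM_BUCKETS, key=lambda b: abs(b - strm_sparsity))
    return lut[(stat, strm)]

lut = build_tiling_lut()
print(pick_tiling(lut, 0.80, 0.99))  # the abstract's operating point -> 2
```

At the operating point reported in the abstract (80% stationary sparsity, 99% streaming sparsity) the toy table selects a tiling factor of 2, matching the configuration for which the roughly 80% clock-cycle reduction is reported.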
Table Of Contents
Ⅰ. Introduction
Ⅱ. Background and Prior Work
2.1 Background
2.1.1 Multi-layer Perceptron (MLP)
2.1.2 Convolutional Neural Networks (CNN)
2.1.3 Transformer
2.2 Prior Works: Inner, Outer, and Row-wise Product Based Accelerators
2.3 Prior Work: SIGMA
2.3.1 Dataflow of SIGMA
2.3.2 Distribution Network
2.3.3 Reduction Network
Ⅲ. Proposed Sparse Accelerator Design
3.1 Distribution Network
3.2 Reduction Network
3.2.1 Reorganized Adder Tree
3.3 Data Tiling Strategy
Ⅳ. Evaluation
4.1 Methodology
4.2 Experimental Results
4.2.1 Area / Power Improvements
4.2.2 Performance Improvements
Ⅴ. Conclusion
References
URI
http://hdl.handle.net/20.500.11750/45757

http://dgist.dcollection.net/common/orgView/200000657310
DOI
10.22677/THESIS.200000657310
Degree
Master
Department
Department of Electrical Engineering and Computer Science
Publisher
DGIST
Related Researcher
  • Kung, Jaeha
  • Research Interests: Deep Learning; Hardware Acceleration; Low-power Hardware; High-performance Systems
Files in This Item:

There are no files associated with this item.

Appears in Collections:
Department of Electrical Engineering and Computer Science Theses Master

