GPUTucker: Large-Scale GPU-Based Tucker Decomposition Using Tensor Partitioning

Author(s)
Lee, Jihye; Han, Donghyoung; Kwon, Oh-Kyoung; Chon, Kang-Wook; Kim, Min-Soo
Issued Date
2024-03
Citation
Expert Systems with Applications, v.237, Part A
Type
Article
Author Keywords
Tensor decomposition; Big data; Graphics processing unit (GPU); Scalable algorithm; Memory-efficient method
Keywords
APPROXIMATION; ALGORITHMS
ISSN
0957-4174
Abstract
Tucker decomposition is used extensively for modeling multi-dimensional data represented as tensors. Owing to the increasing number of nonzero values in real-world tensors, demand has grown for fast and scalable Tucker decomposition techniques. Several graphics processing unit (GPU)-accelerated techniques have been proposed for Tucker decomposition to improve the decomposition speed. However, these approaches often struggle to handle large tensors because their memory demands exceed the available GPU memory capacity. This study presents a scalable GPU-based technique for Tucker decomposition called GPUTucker. The proposed method partitions large tensors into smaller sub-tensors, referred to as tensor blocks, and implements an efficient GPU-based data pipeline that processes these tensor blocks asynchronously. Extensive experiments demonstrate that GPUTucker outperforms state-of-the-art Tucker decomposition methods in terms of decomposition speed and scalability. © 2023 Elsevier Ltd
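
To make the tensor-partitioning idea in the abstract concrete, the sketch below groups the nonzeros of a sparse three-way tensor (stored in coordinate/COO form) into sub-tensor blocks by splitting each mode's index range. This is a minimal NumPy sketch of the general technique under stated assumptions, not the authors' GPUTucker implementation; the function name partition_coo and all parameters are illustrative.

    # Minimal sketch (not the GPUTucker API) of partitioning the nonzeros of
    # a sparse 3-way tensor into tensor blocks, so each block is small enough
    # to be staged to GPU memory independently.
    import numpy as np
    from collections import defaultdict

    def partition_coo(indices, dims, grid):
        """Group COO nonzeros (rows of `indices`) into tensor blocks.

        indices : (nnz, 3) integer array of (i, j, k) coordinates
        dims    : (I, J, K) tensor dimensions
        grid    : (P, Q, R) number of partitions along each mode
        Returns a dict mapping a block coordinate to the rows it contains.
        """
        block_size = -(-np.asarray(dims) // np.asarray(grid))  # ceil per mode
        block_ids = np.asarray(indices) // block_size          # block coords
        blocks = defaultdict(list)
        for row, bid in enumerate(block_ids):
            blocks[tuple(int(x) for x in bid)].append(row)
        return blocks

    # Toy example: 100 random nonzeros of a 60 x 40 x 20 tensor, 2x2x2 grid.
    rng = np.random.default_rng(0)
    dims = (60, 40, 20)
    idx = np.column_stack([rng.integers(0, d, size=100) for d in dims])
    for bid, rows in sorted(partition_coo(idx, dims, (2, 2, 2)).items()):
        print(f"block {bid}: {len(rows)} nonzeros")

In a pipeline like the one the abstract describes, each block's nonzeros would be transferred to the GPU and processed asynchronously (e.g., via CUDA streams), overlapping data transfer with computation so that the full tensor never needs to reside in GPU memory at once.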
URI
http://hdl.handle.net/20.500.11750/46563
DOI
10.1016/j.eswa.2023.121445
Publisher
Elsevier
Files in This Item:

There are no files associated with this item.

Appears in Collections:
ETC 1. Journal Articles
