
GPUTucker: Large-Scale GPU-Based Tucker Decomposition Using Tensor Partitioning
Title
GPUTucker: Large-Scale GPU-Based Tucker Decomposition Using Tensor Partitioning
Issued Date
2024-03
Citation
Lee, Jihye. (2024-03). GPUTucker: Large-Scale GPU-Based Tucker Decomposition Using Tensor Partitioning. Expert Systems with Applications, 237(Part A). doi: 10.1016/j.eswa.2023.121445
Type
Article
Author Keywords
Tensor decomposition; Big data; Graphics processing unit (GPU); Scalable algorithm; Memory-efficient method
Keywords
APPROXIMATION; ALGORITHMS
ISSN
0957-4174
Abstract
Tucker decomposition is used extensively for modeling multi-dimensional data represented as tensors. Owing to the increasing number of nonzero values in real-world tensors, a growing demand has emerged for fast and scalable Tucker decomposition techniques. Several graphics processing unit (GPU)-accelerated techniques have been proposed to speed up Tucker decomposition. However, these approaches often encounter difficulties in handling large tensors owing to their huge memory demands, which exceed the available capacity of GPU memory. This study presents a scalable GPU-based technique for Tucker decomposition called GPUTucker. The proposed method partitions large tensors into smaller sub-tensors, referred to as tensor blocks, and implements an efficient GPU-based data pipeline by processing these tensor blocks asynchronously. Extensive experiments demonstrate that GPUTucker outperforms state-of-the-art Tucker decomposition methods in terms of decomposition speed and scalability. © 2023 Elsevier Ltd
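The core idea described in the abstract, splitting a large sparse tensor into smaller tensor blocks so each block fits in GPU memory, can be illustrated with a short sketch. This is not GPUTucker's actual implementation; the function name, the COO input format, and the ceiling-division block-size rule are assumptions for illustration only.

```python
import numpy as np

def partition_tensor_blocks(indices, dims, parts):
    """Group the COO nonzeros of a sparse tensor into tensor blocks.

    indices: (nnz, order) integer array of nonzero coordinates
    dims:    size of each tensor mode
    parts:   number of partitions per mode
    Returns a dict mapping a block coordinate tuple to the list of
    nonzero row indices that fall inside that block.
    """
    indices = np.asarray(indices)
    # Side length of a block along each mode (ceiling division),
    # so parts[m] blocks cover mode m completely.
    block_sizes = np.array([-(-d // p) for d, p in zip(dims, parts)])
    # Block coordinate of every nonzero along every mode.
    block_ids = indices // block_sizes
    blocks = {}
    for row, bid in enumerate(map(tuple, block_ids)):
        blocks.setdefault(bid, []).append(row)
    return blocks

# Toy 4x4x4 tensor, 2 partitions per mode -> up to 2x2x2 = 8 blocks.
idx = np.array([[0, 0, 0], [3, 3, 3], [0, 3, 1]])
blocks = partition_tensor_blocks(idx, dims=(4, 4, 4), parts=(2, 2, 2))
```

In a GPU pipeline of the kind the abstract describes, each block's nonzeros would then be transferred and processed asynchronously, so no single block ever needs to exceed the device's memory capacity.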
URI
http://hdl.handle.net/20.500.11750/46563
DOI
10.1016/j.eswa.2023.121445
Publisher
Elsevier

File Downloads

  • There are no files associated with this item.
