Department of Electrical Engineering and Computer Science > InfoLab > 1. Journal Articles

GMiner: A fast GPU-based frequent itemset mining method for large-scale data
Chon, Kang Wook; Hwang, Sang Hyun; Kim, Min Soo
Department of Electrical Engineering and Computer Science
InfoLab
1. Journal Articles
Citations
WEB OF SCIENCE
Citations
SCOPUS
Metadata Downloads
XML
Excel
Title: GMiner: A fast GPU-based frequent itemset mining method for large-scale data
DGIST Authors: Chon, Kang Wook; Hwang, Sang Hyun; Kim, Min Soo
Issued Date: 2018-05
Citation: Chon, Kang Wook. (2018-05). GMiner: A fast GPU-based frequent itemset mining method for large-scale data. doi: 10.1016/j.ins.2018.01.046
Type: Article
Article Type: Article
Author Keywords: Frequent itemset mining; Graphics processing unit; Parallel algorithm; Workload skewness
Keywords: ALGORITHM
ISSN: 0020-0255
Abstract: Frequent itemset mining is widely used as a fundamental data mining technique. However, as the data size increases, the relatively slow performance of the existing methods hinders its applicability. Although many sequential frequent itemset mining methods have been proposed, there is a clear limit to the performance that can be achieved using a single thread. To overcome this limitation, various parallel methods using multi-core CPU, multiple-machine, or many-core graphics processing unit (GPU) approaches have been proposed. However, these methods still have drawbacks, including relatively slow performance, data size limitations, and poor scalability due to workload skewness. In this paper, we propose a fast GPU-based frequent itemset mining method called GMiner for large-scale data. GMiner achieves very fast performance by fully exploiting the computational power of GPUs and is suitable for large-scale data. The method performs mining tasks in a counterintuitive way: it mines the patterns from the first level of the enumeration tree rather than storing and utilizing the patterns at the intermediate levels of the tree. This approach is quite effective in terms of both performance and memory use in the GPU architecture. In addition, GMiner solves the workload skewness problem from which the existing parallel methods suffer; as a result, its performance increases almost linearly as the number of GPUs increases. Through extensive experiments, we demonstrate that GMiner significantly outperforms other representative sequential and parallel methods in most cases, by orders of magnitude on the tested benchmarks. © 2018 The Authors
URI: http://hdl.handle.net/20.500.11750/5914
DOI: 10.1016/j.ins.2018.01.046
Publisher: Elsevier BV
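
The abstract's key point is that GMiner computes pattern supports directly from the first level of the enumeration tree instead of storing patterns at intermediate levels. The following is a minimal single-threaded Python sketch of that general idea, not the authors' GPU implementation: each frequent single item gets a transaction bitmap, and the support of any longer candidate is obtained by AND-ing the first-level bitmaps of its members, so no tid-lists or other intermediate-level data are kept. The function name mine_from_first_level, the popcount helper, and the toy dataset are invented for illustration.

def popcount(x):
    # Number of set bits; each set bit is one supporting transaction.
    return bin(x).count("1")

def mine_from_first_level(transactions, min_support):
    # One bitmap per item: bit t is set iff the item appears in transaction t.
    bitmaps = {}
    for t, items in enumerate(transactions):
        for item in items:
            bitmaps[item] = bitmaps.get(item, 0) | (1 << t)

    # First level of the enumeration tree: the frequent single items.
    freq_items = sorted(i for i, b in bitmaps.items() if popcount(b) >= min_support)
    frequent = {(i,): popcount(bitmaps[i]) for i in freq_items}

    # Grow itemsets level by level. The support of every candidate comes from
    # AND-ing the *first-level* bitmaps of its members, so nothing produced at
    # intermediate levels (tid-lists, projected databases, ...) is stored.
    level = [(i,) for i in freq_items]
    while level:
        next_level = []
        for itemset in level:
            for item in freq_items:
                if item <= itemset[-1]:
                    continue  # extend in lexicographic order to avoid duplicate candidates
                candidate = itemset + (item,)
                acc = bitmaps[candidate[0]]
                for member in candidate[1:]:
                    acc &= bitmaps[member]
                support = popcount(acc)
                if support >= min_support:
                    frequent[candidate] = support
                    next_level.append(candidate)
        level = next_level
    return frequent

if __name__ == "__main__":
    data = [{"a", "b", "c"}, {"a", "c"}, {"a", "d"}, {"b", "c", "e"}]
    for itemset, support in sorted(mine_from_first_level(data, min_support=2).items()):
        print(itemset, support)

On a GPU, the same AND-and-count work can be split across many threads operating on bitmap chunks; the paper additionally addresses balancing that work so that performance scales almost linearly with the number of GPUs despite workload skewness, which this toy sketch does not attempt.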