Department of Electrical Engineering and Computer Science
Computer Vision Lab.
1. Journal Articles
Flow4D: Leveraging 4D Voxel Network for LiDAR Scene Flow Estimation
Kim, Jaeyeul; Woo, Jungwan; Shin, Ukcheol; Oh, Jean; Im, Sunghoon
Title
Flow4D: Leveraging 4D Voxel Network for LiDAR Scene Flow Estimation
Issued Date
2025-04
Citation
Kim, Jaeyeul. (2025-04). Flow4D: Leveraging 4D Voxel Network for LiDAR Scene Flow Estimation. IEEE Robotics and Automation Letters, 10(4), 3462–3469. doi: 10.1109/LRA.2025.3542327
Type
Article
Author Keywords
Computer vision for transportation; deep learning for visual perception; recognition
ISSN
2377-3766
Abstract
Understanding the motion states of the surrounding environment is critical for safe autonomous driving. These motion states can be accurately derived from scene flow, which captures the three-dimensional motion field of points. Existing LiDAR scene flow methods extract spatial features from each point cloud separately and then fuse them channel-wise, so spatio-temporal features are only extracted implicitly. Furthermore, they rely on a 2D Bird's Eye View representation and process only two frames, missing crucial spatial information along the Z-axis and the broader temporal context, which leads to suboptimal performance. To address these limitations, we propose Flow4D, which temporally fuses multiple point clouds after a 3D intra-voxel feature encoder, enabling more explicit extraction of spatio-temporal features through a 4D voxel network. However, while 4D convolution improves performance, it significantly increases the computational load. For efficiency, we introduce the Spatio-Temporal Decomposition Block (STDB), which combines 3D and 1D convolutions instead of a heavy 4D convolution. In addition, Flow4D further improves performance by using five frames to exploit richer temporal information. As a result, the proposed method achieves 45.9% higher performance than the state of the art while running in real time, and won 1st place in the 2024 Argoverse 2 Scene Flow Challenge. © IEEE.
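The abstract's key efficiency idea, replacing a single 4D convolution over (x, y, z, t) with a 3D spatial convolution followed by a 1D temporal convolution, can be sketched as follows. This is a minimal dense-tensor illustration in PyTorch, assuming a voxelized five-frame input of shape (B, C, T, X, Y, Z); the paper itself operates on sparse voxel grids, and the class name SpatioTemporalDecompBlock, the residual connection, and all parameters here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class SpatioTemporalDecompBlock(nn.Module):
    """Sketch of an STDB-style block: 3D spatial conv + 1D temporal conv."""

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        # Spatial branch: one 3D convolution shared across all frames.
        self.spatial = nn.Conv3d(channels, channels, kernel_size, padding=pad)
        # Temporal branch: a 1D convolution along the frame axis per voxel.
        self.temporal = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, T, X, Y, Z), e.g. a five-frame voxelized LiDAR sequence.
        b, c, t, xs, ys, zs = x.shape
        # 3D spatial convolution: fold the T frames into the batch dim.
        s = x.permute(0, 2, 1, 3, 4, 5).reshape(b * t, c, xs, ys, zs)
        s = self.act(self.spatial(s))
        s = s.reshape(b, t, c, xs, ys, zs).permute(0, 2, 1, 3, 4, 5)
        # 1D temporal convolution: fold the voxels into the batch dim.
        v = s.permute(0, 3, 4, 5, 1, 2).reshape(b * xs * ys * zs, c, t)
        v = self.act(self.temporal(v))
        v = v.reshape(b, xs, ys, zs, c, t).permute(0, 4, 5, 1, 2, 3)
        return v + x  # residual connection (an assumption, not from the paper)


# Toy usage on a small dense grid: 5 frames, 8 channels.
block = SpatioTemporalDecompBlock(channels=8)
out = block(torch.randn(1, 8, 5, 16, 16, 8))
print(out.shape)  # torch.Size([1, 8, 5, 16, 16, 8])
```

Under these assumptions, the decomposition shrinks the kernel footprint from k^4 sample positions for a full 4D convolution to k^3 + k (81 versus 30 for k = 3), which is the kind of computational saving the abstract attributes to the STDB.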
URI
http://hdl.handle.net/20.500.11750/58125
DOI
10.1109/LRA.2025.3542327
Publisher
Institute of Electrical and Electronics Engineers Inc.
Related Researcher
Im, Sunghoon (임성훈)
Department of Electrical Engineering and Computer Science