Department of Electrical Engineering and Computer Science
Computation Efficient Learning Lab.
2. Conference Papers
FlexNeRFer: A Multi-Dataflow, Adaptive Sparsity-Aware Accelerator for On-Device NeRF Rendering
Noh, Seock-Hwan; Shin, Banseok; Choi, Jeik; Lee, Seungpyo; Kung, Jaeha; Kim, Yeseong
Title
FlexNeRFer: A Multi-Dataflow, Adaptive Sparsity-Aware Accelerator for On-Device NeRF Rendering
Issued Date
2025-06-25
Citation
Noh, Seock-Hwan. (2025-06-25). FlexNeRFer: A Multi-Dataflow, Adaptive Sparsity-Aware Accelerator for On-Device NeRF Rendering. ACM/IEEE International Symposium on Computer Architecture, 1894–1909. doi: 10.1145/3695053.3731107
Type
Conference Paper
ISBN
9798400712616
ISSN
1063-6897
Abstract
Neural Radiance Fields (NeRF), an AI-driven approach to 3D view reconstruction, has demonstrated impressive performance, sparking active research across fields. As a result, a range of advanced NeRF models has emerged, and on-device applications increasingly adopt NeRF for highly realistic scene reconstruction. Because these applications leverage a variety of NeRF frameworks, there is a need for hardware capable of efficiently supporting diverse NeRF models. However, GPUs either fail to meet the performance, power, and area (PPA) requirements of on-device applications, or are specialized for specific NeRF algorithms and thus lose efficiency on other models. To address this limitation, we introduce FlexNeRFer, an energy-efficient, versatile NeRF accelerator. Its key components are: i) a flexible network-on-chip (NoC) supporting multiple dataflows and sparsity on a precision-scalable MAC array, and ii) efficient data storage using an optimal sparsity format chosen according to the sparsity ratio and precision mode. To evaluate the effectiveness of FlexNeRFer, we performed a layout implementation in 28nm CMOS technology. Our evaluation shows that FlexNeRFer achieves an 8.2∼243.3× speedup and a 24.1∼520.3× improvement in energy efficiency over a GPU (NVIDIA RTX 2080 Ti), and a 4.2∼86.9× speedup and a 2.3∼47.5× improvement in energy efficiency over a state-of-the-art NeRF accelerator (NeuRex). © 2025 Copyright held by the owner/author(s).
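The abstract's second key component, selecting a sparsity format from the sparsity ratio and precision mode, can be illustrated with a minimal storage-cost comparison. This is a hedged sketch only: the format set (dense, bitmap, coordinate list), the cost model, and the 16-bit index width are illustrative assumptions, not details taken from the paper.

```python
def storage_bits(n, density, value_bits, index_bits=16):
    """Approximate storage cost (in bits) of an n-element tensor
    under three common encodings. Illustrative cost model only;
    the paper's actual format set is not specified here."""
    nnz = int(n * density)  # number of nonzero elements
    return {
        # Dense: every element stored at full precision.
        "dense": n * value_bits,
        # Bitmap: one presence bit per element, plus packed nonzero values.
        "bitmap": n + nnz * value_bits,
        # Coordinate list: an index plus a value per nonzero.
        "coo": nnz * (index_bits + value_bits),
    }

def best_format(n, density, value_bits):
    """Pick the cheapest encoding for a given sparsity ratio and precision."""
    costs = storage_bits(n, density, value_bits)
    return min(costs, key=costs.get)

# Denser tensors favor bitmap or dense storage; very sparse,
# low-precision tensors favor coordinate lists.
print(best_format(4096, 0.05, 8))  # highly sparse, INT8
print(best_format(4096, 0.50, 8))  # moderately sparse, INT8
```

The crossover points shift with the precision mode: at wider value widths, the fixed per-element bitmap overhead is amortized sooner, which is why the choice depends on both the sparsity ratio and the precision, as the abstract states.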
URI
https://scholar.dgist.ac.kr/handle/20.500.11750/58616
DOI
10.1145/3695053.3731107
Publisher
ACM, IEEE
File Downloads
There are no files associated with this item.
Related Researcher
Kim, Yeseong
김예성
Department of Electrical Engineering and Computer Science