FlexNeRFer: A Multi-Dataflow, Adaptive Sparsity-Aware Accelerator for On-Device NeRF Rendering

DC Field Value Language
dc.contributor.author Noh, Seock-Hwan -
dc.contributor.author Shin, Banseok -
dc.contributor.author Choi, Jeik -
dc.contributor.author Lee, Seungpyo -
dc.contributor.author Kung, Jaeha -
dc.contributor.author Kim, Yeseong -
dc.date.accessioned 2025-07-04T10:40:10Z -
dc.date.available 2025-07-04T10:40:10Z -
dc.date.created 2025-07-02 -
dc.date.issued 2025-06-25 -
dc.identifier.isbn 9798400712616 -
dc.identifier.issn 1063-6897 -
dc.identifier.uri https://scholar.dgist.ac.kr/handle/20.500.11750/58616 -
dc.description.abstract Neural Radiance Fields (NeRF), an AI-driven approach to 3D view reconstruction, has demonstrated impressive performance, sparking active research across fields. As a result, a range of advanced NeRF models has emerged, and on-device applications increasingly adopt NeRF for highly realistic scene reconstruction. With the advent of diverse NeRF models, NeRF-based applications leverage a variety of NeRF frameworks, creating the need for hardware capable of efficiently supporting these models. However, GPUs fail to meet the performance, power, and area (PPA) requirements demanded by these on-device applications, while accelerators specialized for specific NeRF algorithms achieve lower efficiency when applied to other NeRF models. To address this limitation, we introduce FlexNeRFer, an energy-efficient and versatile NeRF accelerator. The key components of FlexNeRFer are: i) a flexible network-on-chip (NoC) supporting multi-dataflow execution and sparsity on a precision-scalable MAC array, and ii) efficient data storage using an optimal sparsity format chosen according to the sparsity ratio and precision mode. To evaluate the effectiveness of FlexNeRFer, we performed a layout implementation in 28nm CMOS technology. Our evaluation shows that FlexNeRFer achieves an 8.2∼243.3× speedup and a 24.1∼520.3× improvement in energy efficiency over a GPU (i.e., NVIDIA RTX 2080 Ti), while demonstrating a 4.2∼86.9× speedup and a 2.3∼47.5× improvement in energy efficiency compared to a state-of-the-art NeRF accelerator (i.e., NeuRex). © 2025 Copyright held by the owner/author(s). -
dc.language English -
dc.publisher ACM, IEEE -
dc.relation.ispartof Proceedings of the 52nd Annual International Symposium on Computer Architecture -
dc.title FlexNeRFer: A Multi-Dataflow, Adaptive Sparsity-Aware Accelerator for On-Device NeRF Rendering -
dc.type Conference Paper -
dc.identifier.doi 10.1145/3695053.3731107 -
dc.identifier.scopusid 2-s2.0-105009603225 -
dc.identifier.bibliographicCitation Noh, Seock-Hwan. (2025-06-25). FlexNeRFer: A Multi-Dataflow, Adaptive Sparsity-Aware Accelerator for On-Device NeRF Rendering. ACM/IEEE International Symposium on Computer Architecture, 1894–1909. doi: 10.1145/3695053.3731107 -
dc.identifier.url https://www.iscaconf.org/isca2025/program/index.php -
dc.citation.conferenceDate 2025-06-21 -
dc.citation.conferencePlace JA -
dc.citation.conferencePlace Tokyo -
dc.citation.endPage 1909 -
dc.citation.startPage 1894 -
dc.citation.title ACM/IEEE International Symposium on Computer Architecture -
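To make the abstract's second key component more concrete, the sketch below illustrates one way a storage-format decision driven by sparsity ratio and precision mode could look. It is not taken from the paper: the format set (dense, bitmap, coordinate list), the cost model, the index width, and all function names are assumptions for illustration only.

```python
# Illustrative sketch (assumed, not the paper's implementation): estimate the
# per-tile storage cost of a few common sparse formats and pick the cheapest
# one for a given sparsity ratio and precision mode.

def storage_bits(num_elements: int, density: float, value_bits: int,
                 index_bits: int = 16) -> dict:
    """Rough storage cost in bits for a 1-D tile of `num_elements`.

    density    : fraction of nonzero elements (1.0 - sparsity ratio)
    value_bits : precision mode of the nonzero values (e.g. 4, 8, 16)
    index_bits : bits per coordinate index (assumed; format-dependent in practice)
    """
    nnz = int(num_elements * density)
    return {
        # Dense: every element stored at full precision, no metadata.
        "dense": num_elements * value_bits,
        # Bitmap: one flag bit per element plus packed nonzero values.
        "bitmap": num_elements * 1 + nnz * value_bits,
        # Coordinate list (COO-like): an explicit index per nonzero value.
        "coo": nnz * (index_bits + value_bits),
    }


def pick_format(num_elements: int, density: float, value_bits: int) -> str:
    """Return the format with the smallest estimated footprint."""
    costs = storage_bits(num_elements, density, value_bits)
    return min(costs, key=costs.get)


if __name__ == "__main__":
    # Higher sparsity and lower precision shift the optimum away from dense storage.
    for density in (0.9, 0.5, 0.1, 0.01):
        for bits in (4, 8, 16):
            print(f"density={density:4.2f} bits={bits:2d} -> "
                  f"{pick_format(1024, density, bits)}")
```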

File Downloads

  • There are no files associated with this item.

Related Researcher

Kim, Yeseong (김예성)

Department of Electrical Engineering and Computer Science

