Fusion is Not Enough: Single Modal Attacks on Fusion Models for 3D Object Detection
Title
Fusion is Not Enough: Single Modal Attacks on Fusion Models for 3D Object Detection
Issued Date
2024-05-07
Citation
Cheng, Zhiyuan. (2024-05-07). Fusion is Not Enough: Single Modal Attacks on Fusion Models for 3D Object Detection. International Conference on Learning Representations (poster), 1–25. doi: 10.48550/arXiv.2304.14614
Type
Conference Paper
Abstract
Multi-sensor fusion (MSF) is widely used in autonomous vehicles (AVs) for perception, particularly for 3D object detection with camera and LiDAR sensors. The purpose of fusion is to capitalize on the advantages of each modality while minimizing its weaknesses. Advanced deep neural network (DNN)-based fusion techniques have demonstrated exceptional, industry-leading performance. Because of the redundant information across modalities, MSF is also regarded as a general defense strategy against adversarial attacks. In this paper, we attack fusion models from the camera modality, which is considered of lesser importance in fusion but is more affordable for attackers. We argue that the weakest link of a fusion model is its most vulnerable modality, and we propose an attack framework that targets advanced camera-LiDAR fusion-based 3D object detection models through camera-only adversarial attacks. Our approach employs a two-stage optimization-based strategy that first identifies the image areas most vulnerable to adversarial perturbation, and then applies dedicated attack strategies to different fusion models to generate deployable patches. Evaluations on six advanced camera-LiDAR fusion models and one camera-only model show that our attacks successfully compromise all of them. Our approach can either decrease the mean average precision (mAP) of detection from 0.824 to 0.353, or degrade the detection score of a target object from 0.728 to 0.156, demonstrating the efficacy of the proposed attack framework. Code is available. © 2024 12th International Conference on Learning Representations, ICLR 2024. All rights reserved.
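The two-stage strategy described in the abstract can be illustrated with a minimal toy sketch: stage one probes image regions with random noise to locate the most attack-sensitive area, and stage two runs gradient descent on a patch confined to that region to suppress the detection score. Everything here (the linear "detector" `det_score`, the 8×8 image, the region size) is a hypothetical stand-in for illustration, not the paper's actual models or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a camera-based detection head: the detection score is
# a sigmoid over a fixed linear readout of the image (illustrative only).
W = rng.normal(size=(8, 8))

def det_score(img):
    return 1.0 / (1.0 + np.exp(-np.sum(W * img)))

def score_grad(img):
    # Analytic gradient of sigmoid(sum(W * img)) w.r.t. the image.
    s = det_score(img)
    return s * (1.0 - s) * W

img = rng.uniform(0.4, 0.6, size=(8, 8))
base = det_score(img)

# Stage 1: probe each 4x4 region with random noise and keep the region
# whose perturbation causes the largest score drop (the "vulnerable" area).
best_drop, best_rc = -np.inf, (0, 0)
for r in (0, 4):
    for c in (0, 4):
        probe = img.copy()
        probe[r:r + 4, c:c + 4] += 0.1 * rng.normal(size=(4, 4))
        drop = base - det_score(np.clip(probe, 0.0, 1.0))
        if drop > best_drop:
            best_drop, best_rc = drop, (r, c)

# Stage 2: signed gradient descent on a patch restricted to that region,
# minimizing the detection score (untargeted patch optimization).
r, c = best_rc
adv = img.copy()
for _ in range(200):
    g = score_grad(adv)
    adv[r:r + 4, c:c + 4] -= 0.05 * np.sign(g[r:r + 4, c:c + 4])
    adv = np.clip(adv, 0.0, 1.0)

print(base, det_score(adv))  # the patched score should be lower
```

The real attack optimizes a physically deployable patch against full camera-LiDAR fusion networks; this sketch only shows the structure of the two stages (sensitivity probing, then region-constrained optimization).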
URI
http://hdl.handle.net/20.500.11750/57852
DOI
10.48550/arXiv.2304.14614
Publisher
International Conference on Learning Representations, ICLR
File Downloads

  • There are no files associated with this item.

