<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/16897">
    <title>Repository Collection:</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/16897</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/57852" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/57824" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-21T16:25:34Z</dc:date>
  </channel>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/57852">
    <title>Fusion is Not Enough: Single Modal Attacks on Fusion Models for 3D Object Detection</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/57852</link>
    <description>Title: Fusion is Not Enough: Single Modal Attacks on Fusion Models for 3D Object Detection
Author(s): Cheng, Zhiyuan; Choi, Hongjun; Feng, Shiwei; Liang, James; Tao, Guanhong; Liu, Dongfang; Zuzak, Michael; Zhang, Xiangyu
Abstract: Multi-sensor fusion (MSF) is widely used in autonomous vehicles (AVs) for perception, particularly for 3D object detection with camera and LiDAR sensors. The purpose of fusion is to capitalize on the advantages of each modality while minimizing its weaknesses. Advanced deep neural network (DNN)-based fusion techniques have demonstrated exceptional and industry-leading performance. Due to the redundant information in multiple modalities, MSF is also recognized as a general defense strategy against adversarial attacks. In this paper, we attack fusion models from the camera modality, which is considered to be of lesser importance in fusion but is more affordable for attackers. We argue that the weakest link of fusion models depends on their most vulnerable modality, and propose an attack framework that targets advanced camera-LiDAR fusion-based 3D object detection models through camera-only adversarial attacks. Our approach employs a two-stage optimization-based strategy that first thoroughly evaluates vulnerable image areas under adversarial attacks, and then applies dedicated attack strategies to different fusion models to generate deployable patches. The evaluations with six advanced camera-LiDAR fusion models and one camera-only model indicate that our attacks successfully compromise all of them. Our approach can either decrease the mean average precision (mAP) of detection performance from 0.824 to 0.353, or degrade the detection score of a target object from 0.728 to 0.156, demonstrating the efficacy of our proposed attack framework. Code is available. © 2024 12th International Conference on Learning Representations, ICLR 2024. All rights reserved.</description>
    <dc:date>2024-05-06T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/57824">
    <title>ROCAS: Root Cause Analysis of Autonomous Driving Accidents via Cyber-Physical Co-mutation</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/57824</link>
    <description>Title: ROCAS: Root Cause Analysis of Autonomous Driving Accidents via Cyber-Physical Co-mutation
Author(s): Feng, Shiwei; Ye, Yapeng; Shi, Qingkai; Cheng, Zhiyuan; Xu, Xiangzhe; Cheng, Siyuan; Choi, Hongjun; Zhang, Xiangyu
Abstract: As autonomous driving systems (ADS) have transformed our daily lives, the safety of ADS is of growing significance. While various testing approaches have emerged to enhance ADS reliability, a crucial gap remains in understanding the causes of accidents. Such post-accident analysis is paramount and beneficial for enhancing ADS safety and reliability. Existing cyber-physical system (CPS) root cause analysis techniques are mainly designed for drones and cannot handle the unique challenges introduced by the more complex physical environments and deep learning models deployed in ADS. In this paper, we address the gap by offering a formal definition of the ADS root cause analysis problem and introducing Rocas, a novel ADS root cause analysis framework featuring cyber-physical co-mutation. Our technique uniquely leverages both physical and cyber mutation to precisely identify the accident-trigger entity and pinpoint the misconfiguration of the target ADS responsible for an accident. We further design a differential analysis to identify the responsible module, reducing the search space for the misconfiguration. We study 12 categories of ADS accidents and demonstrate the effectiveness and efficiency of Rocas in narrowing down the search space and pinpointing the misconfiguration. We also present detailed case studies on how the identified misconfiguration helps in understanding the rationale behind accidents. Copyright held by the owner/author(s).</description>
    <dc:date>2024-10-29T15:00:00Z</dc:date>
  </item>
</rdf:RDF>