In image-guided surgery, registration of preoperative 3D computed tomography (CT) and intraoperative digital tomosynthesis (DTS) images is essential, yet it poses significant technical challenges owing to the multi-modal nature of the two images, inherent DTS artifacts, and the lack of ground-truth data. This study therefore proposes a self-supervised, learning-based non-rigid registration framework. The proposed method precisely estimates local deformations through deep-learning-based non-rigid registration, building on a pre-registration of the CT–DTS image pairs. To overcome the lack of ground-truth data, a training-data pipeline was established that generates pairs of synthetic DTS images and their ground-truth deformation fields by applying anatomically constrained virtual deformations to the CT volumes and re-projecting them. In addition, a specialized network architecture incorporating a multi-encoder design and a cross-attention mechanism was devised to effectively fuse the features of the multi-modal images. Experiments on a public dataset show that the proposed method achieves a 3D target registration error of 12.99 mm. This study is expected to contribute to the advancement of surgical navigation systems by offering a new direction for the CT–DTS registration problem.
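To illustrate the fusion step the abstract describes, the sketch below shows a generic single-head cross-attention operation in NumPy, in which tokens from one encoder (here labeled as CT features) attend to tokens from the other (DTS features). The token counts, feature dimensions, and projection weights are all hypothetical; the paper's actual multi-encoder architecture and attention configuration are not specified in the abstract.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, kv_feats, d_k, seed=0):
    """Single-head cross-attention: queries from one modality,
    keys/values from the other (random weights stand in for
    learned projections in this sketch)."""
    rng = np.random.default_rng(seed)
    d = q_feats.shape[1]
    Wq = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wk = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wv = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Q, K, V = q_feats @ Wq, kv_feats @ Wk, kv_feats @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d_k))  # (N_q, N_kv) attention map
    return attn @ V                          # fused features, (N_q, d_k)

# Toy feature tokens standing in for the two encoder outputs
# (shapes are illustrative only).
ct_tokens = np.random.default_rng(1).standard_normal((16, 32))
dts_tokens = np.random.default_rng(2).standard_normal((16, 32))
fused = cross_attention(ct_tokens, dts_tokens, d_k=32)
print(fused.shape)  # (16, 32)
```

In this arrangement, each CT token is reweighted by its similarity to all DTS tokens, which is one common way multi-modal encoders exchange information before a decoder regresses the deformation field.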