Cited 0 times in Web of Science · Cited 9 times in Scopus

Deep Defocus Map Estimation using Domain Adaptation

Title
Deep Defocus Map Estimation using Domain Adaptation
Authors
Lee, Junyong; Lee, Sungkil; Cho, Sunghyun; Lee, Seungyong
DGIST Authors
Cho, Sunghyun
Issue Date
2019-06-20
Citation
32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2019, 12214-12222
Type
Conference
ISBN
9781728132938
ISSN
1063-6919
Abstract
In this paper, we propose the first end-to-end convolutional neural network (CNN) architecture, Defocus Map Estimation Network (DMENet), for spatially varying defocus map estimation. To train the network, we produce a novel depth-of-field (DOF) dataset, SYNDOF, where each image is synthetically blurred with a ground-truth depth map. Due to the synthetic nature of SYNDOF, the feature characteristics of images in SYNDOF can differ from those of real defocused photos. To address this gap, we use domain adaptation that transfers the features of real defocused photos into those of synthetically blurred ones. Our DMENet consists of four subnetworks: blur estimation, domain adaptation, content preservation, and sharpness calibration networks. The subnetworks are connected to each other and jointly trained with their corresponding supervisions in an end-to-end manner. Our method is evaluated on publicly available blur detection and blur estimation datasets, and the results show state-of-the-art performance. © 2019 IEEE.
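
The abstract above describes a four-part architecture. Below is a minimal, hypothetical PyTorch sketch of how such subnetworks could be laid out and connected; it is not the authors' implementation, and all module names, channel sizes, and the toy usage at the bottom are assumptions made only to illustrate the described structure (a blur-estimation backbone whose features feed a domain discriminator and a content-preservation decoder, plus a sharpness-calibration head on the predicted blur map).

# Hypothetical sketch (not the authors' code): a minimal PyTorch layout of the
# four subnetworks named in the abstract -- blur estimation, domain adaptation,
# content preservation, and sharpness calibration. All sizes are assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class BlurEstimator(nn.Module):
    """Predicts a per-pixel defocus (blur) map from an RGB image."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(conv_block(3, 32), conv_block(32, 32))
        self.head = nn.Conv2d(32, 1, 3, padding=1)  # single-channel blur map
    def forward(self, x):
        feat = self.body(x)
        return self.head(feat), feat  # features are shared with the other subnetworks

class DomainDiscriminator(nn.Module):
    """Adversarially aligns features of synthetic (SYNDOF) and real defocused photos."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(conv_block(ch, 32), nn.AdaptiveAvgPool2d(1),
                                 nn.Flatten(), nn.Linear(32, 1))
    def forward(self, feat):
        return self.net(feat)  # real-vs-synthetic logit

class ContentPreserver(nn.Module):
    """Reconstructs the input image from features so adaptation does not discard content."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(conv_block(ch, 32), nn.Conv2d(32, 3, 3, padding=1))
    def forward(self, feat):
        return self.net(feat)

class SharpnessCalibrator(nn.Module):
    """Refines the raw blur map on real images toward a calibrated sharpness estimate."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 16), nn.Conv2d(16, 1, 3, padding=1))
    def forward(self, blur_map):
        return torch.sigmoid(self.net(blur_map))

if __name__ == "__main__":
    x = torch.randn(2, 3, 128, 128)  # a toy batch of images
    B, D, C, S = BlurEstimator(), DomainDiscriminator(), ContentPreserver(), SharpnessCalibrator()
    blur_map, feat = B(x)
    print(blur_map.shape, D(feat).shape, C(feat).shape, S(blur_map).shape)

As stated in the abstract, the subnetworks are connected and trained jointly in an end-to-end manner with their corresponding supervisions; in a sketch like this, that would combine, for example, a supervised blur loss on SYNDOF, an adversarial domain loss, an image-reconstruction loss, and a calibration loss on real photos (the exact loss formulation is not specified in this record).
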
URI
http://hdl.handle.net/20.500.11750/11497
DOI
10.1109/CVPR.2019.01250
Publisher
IEEE Computer Society
Files:
There are no files associated with this item.
Collection:
Department of Information and Communication Engineering > Visual Computing Lab > 2. Conference Papers



