CAD: Co-Adapting Discriminative Features for Improved Few-Shot Classification
Authors
Chikontwe, Philip; Kim, Soopil; Park, Sang Hyun
Collections
Division of Intelligent Robot > 2. Conference Papers
Department of Robotics and Mechatronics Engineering > Medical Image & Signal Processing Lab > 2. Conference Papers
Title
CAD: Co-Adapting Discriminative Features for Improved Few-Shot Classification
Issued Date
2022-06-23
Citation
Chikontwe, Philip. (2022-06-23). CAD: Co-Adapting Discriminative Features for Improved Few-Shot Classification. Computer Vision and Pattern Recognition, 14534–14543. doi: 10.1109/CVPR52688.2022.01415
Type
Conference Paper
ISBN
9781665469463
ISSN
1063-6919
Abstract
Few-shot classification is a challenging problem that aims to learn a model that can adapt to unseen classes given a few labeled samples. Recent approaches pre-train a feature extractor and then fine-tune it for episodic meta-learning. Other methods leverage spatial features to learn pixel-level correspondence while jointly training a classifier. However, results using such approaches show marginal improvements. In this paper, inspired by the transformer-style self-attention mechanism, we propose a strategy to cross-attend and re-weight discriminative features for few-shot classification. Given a base representation of support and query images after global pooling, we introduce a single shared module that projects features and cross-attends in two aspects: (i) query to support, and (ii) support to query. The module computes attention scores between features to produce an attention-pooled representation of features in the same class that is later added to the original representation, followed by a projection head. This effectively re-weights features in both aspects (i & ii) to produce features that better facilitate improved metric-based meta-learning. Extensive experiments on public benchmarks show our approach outperforms state-of-the-art methods by 3%–5%. © 2022 IEEE.
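For readers of this record, the following is a minimal PyTorch sketch of the cross-attention re-weighting the abstract describes: a single shared module that projects globally pooled support and query features, cross-attends in both directions, and adds the attention-pooled representation back to the original features before a projection head. The module name (CoAdaptModule), layer layout, and feature dimension are illustrative assumptions based only on the abstract, not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CoAdaptModule(nn.Module):
    # Single shared module: projects pooled features and cross-attends in
    # two aspects, (i) query -> support and (ii) support -> query,
    # with the same weights. Names and layer sizes are hypothetical.
    def __init__(self, dim: int):
        super().__init__()
        self.proj_q = nn.Linear(dim, dim)  # projection for the attending set
        self.proj_k = nn.Linear(dim, dim)  # projection for the attended set
        self.proj_v = nn.Linear(dim, dim)
        self.head = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def cross_attend(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # Re-weight features `a` using attention over features `b`.
        q, k, v = self.proj_q(a), self.proj_k(b), self.proj_v(b)
        scores = q @ k.t() / (q.size(-1) ** 0.5)  # attention scores between features
        pooled = F.softmax(scores, dim=-1) @ v    # attention-pooled representation
        return self.head(a + pooled)              # add to original, then projection head

    def forward(self, support: torch.Tensor, query: torch.Tensor):
        return self.cross_attend(support, query), self.cross_attend(query, support)

# Hypothetical episode: 5-way support, 75 query images, 640-d pooled features.
module = CoAdaptModule(dim=640)
support = torch.randn(5, 640)
query = torch.randn(75, 640)
support_rw, query_rw = module(support, query)  # re-weighted features in both aspects

The re-weighted support and query features would then feed a metric-based classifier, for example distances to class prototypes; the residual-plus-projection step above follows the abstract's "added to the original representation followed by a projection head" wording.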
URI
http://hdl.handle.net/20.500.11750/46829
DOI
10.1109/CVPR52688.2022.01415
Publisher
IEEE Computer Society
File Downloads
There are no files associated with this item.
Related Researcher
Kim, Soopil (김수필)
Division of Intelligent Robotics