Deep Minimax Classifiers for Imbalanced Datasets With a Small Number of Minority Samples
Title
Deep Minimax Classifiers for Imbalanced Datasets With a Small Number of Minority Samples
Issued Date
2025-04
Citation
Choi, Hansung. (2025-04). Deep Minimax Classifiers for Imbalanced Datasets With a Small Number of Minority Samples. IEEE Journal of Selected Topics in Signal Processing, 19(3), 491–506. doi: 10.1109/JSTSP.2025.3546083
Type
Article
Author Keywords
Minimax training; imbalanced data; adversarial prior
ISSN
1932-4553
Abstract
The concept of a minimax classifier is well established in statistical decision theory, but its implementation via neural networks remains challenging, particularly with imbalanced training data containing only a few samples for the minority classes. To address this issue, we propose a novel minimax learning algorithm designed to minimize the risk of the worst-performing classes. Our algorithm alternates between two steps: a minimization step that trains the model under a selected target prior, and a maximization step that updates the target prior toward the adversarial prior for the trained model. In the minimization step, we introduce a targeted logit-adjustment loss function that efficiently identifies the optimal decision boundaries under the target prior. Moreover, based on a new prior-dependent generalization bound that we derive, we theoretically prove that our loss function generalizes better than existing loss functions. In the maximization step, we refine the target prior by shifting it toward the adversarial prior based on the worst-performing classes rather than on per-class risk estimates, which makes the update particularly robust when only a few samples are available. Additionally, to accommodate overparameterized neural networks, we partition the training dataset into two subsets: one for model training during the minimization step and the other for updating the target prior during the maximization step. Our algorithm has a provable convergence property, and empirical results indicate that it performs better than or comparably to existing methods. All code is publicly available at https://github.com/hansung-choi/TLA-linear-ascent. © IEEE.
URI
http://hdl.handle.net/20.500.11750/58296
DOI
10.1109/JSTSP.2025.3546083
Publisher
Institute of Electrical and Electronics Engineers
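
Editor's note: the abstract above describes an alternating two-step procedure, so a minimal sketch of that loop is given below. It is written under explicit assumptions, not as the paper's implementation: the loss is the standard logit-adjusted cross-entropy re-targeted to the current prior (a stand-in for the paper's targeted logit-adjustment loss), and the prior update is a simple linear-ascent step toward the one-hot prior of the single worst class, a guess suggested by the repository name TLA-linear-ascent. All function names, the step size, and the update rule are illustrative; the held-out prior_loader reflects the paper's two-subset data partition. See the linked GitHub repository for the authors' actual code.

```python
# Minimal sketch (hypothetical) of the alternating minimax loop described
# in the abstract. Not the paper's exact loss, step sizes, or update rule.
import torch
import torch.nn.functional as F

def targeted_logit_adjusted_loss(logits, labels, target_prior, tau=1.0):
    """Cross-entropy with logits shifted by log of the target prior.

    A stand-in for the paper's targeted logit-adjustment loss, modeled on
    the standard logit-adjustment loss (Menon et al., 2021).
    """
    adjusted = logits + tau * torch.log(target_prior).unsqueeze(0)
    return F.cross_entropy(adjusted, labels)

def per_class_risk(model, loader, num_classes, device="cpu"):
    """Estimate the per-class 0-1 risk on a held-out split."""
    errors = torch.zeros(num_classes)
    counts = torch.zeros(num_classes)
    model.eval()
    with torch.no_grad():
        for x, y in loader:
            preds = model(x.to(device)).argmax(dim=1).cpu()
            for c in range(num_classes):
                mask = y == c
                counts[c] += mask.sum()
                errors[c] += (preds[mask] != c).sum()
    return errors / counts.clamp(min=1)

def minimax_train(model, opt, train_loader, prior_loader, num_classes,
                  rounds=10, inner_epochs=1, ascent_step=0.05, device="cpu"):
    # Start from the uniform target prior.
    prior = torch.full((num_classes,), 1.0 / num_classes)
    for _ in range(rounds):
        # Minimization step: fit the model under the current target prior,
        # using the first data partition.
        model.train()
        for _ in range(inner_epochs):
            for x, y in train_loader:
                opt.zero_grad()
                loss = targeted_logit_adjusted_loss(
                    model(x.to(device)), y.to(device), prior.to(device))
                loss.backward()
                opt.step()
        # Maximization step: shift the prior toward the worst-performing
        # class, identified on the held-out second partition. Only the
        # identity of the worst class is used, not the risk values.
        risks = per_class_risk(model, prior_loader, num_classes, device)
        worst = torch.zeros(num_classes)
        worst[risks.argmax()] = 1.0
        prior = (1 - ascent_step) * prior + ascent_step * worst
    return model, prior
```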

Related Researcher

Seo, Daewon (서대원)

Department of Electrical Engineering and Computer Science
