| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Choi, Hansung | - |
| dc.contributor.author | Seo, Daewon | - |
| dc.date.accessioned | 2025-04-16T14:40:16Z | - |
| dc.date.available | 2025-04-16T14:40:16Z | - |
| dc.date.created | 2025-03-13 | - |
| dc.date.issued | 2025-04 | - |
| dc.identifier.issn | 1932-4553 | - |
| dc.identifier.uri | http://hdl.handle.net/20.500.11750/58296 | - |
| dc.description.abstract | The concept of a minimax classifier is well-established in statistical decision theory, but its implementation via neural networks remains challenging, particularly in scenarios with imbalanced training data having a limited number of samples for minority classes. To address this issue, we propose a novel minimax learning algorithm designed to minimize the risk of worst-performing classes. Our algorithm iterates through two steps: a minimization step that trains the model based on a selected target prior, and a maximization step that updates the target prior towards the adversarial prior for the trained model. In the minimization, we introduce a targeted logit-adjustment loss function that efficiently identifies optimal decision boundaries under the target prior. Moreover, based on a new prior-dependent generalization bound that we obtained, we theoretically prove that our loss function has a better generalization capability than existing loss functions. During the maximization, we refine the target prior by shifting it towards the adversarial prior, depending on the worst-performing classes rather than on per-class risk estimates. Our maximization method is particularly robust in the regime of a small number of samples. Additionally, to adapt to overparameterized neural networks, we partition the entire training dataset into two subsets: one for model training during the minimization step and the other for updating the target prior during the maximization step. Our proposed algorithm has a provable convergence property, and empirical results indicate that our algorithm performs better than or is comparable to existing methods. All codes are publicly available at https://github.com/hansung-choi/TLA-linear-ascent. © IEEE. | - |
| dc.language | English | - |
| dc.publisher | Institute of Electrical and Electronics Engineers | - |
| dc.title | Deep Minimax Classifiers for Imbalanced Datasets With a Small Number of Minority Samples | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1109/JSTSP.2025.3546083 | - |
| dc.identifier.wosid | 001490404500006 | - |
| dc.identifier.scopusid | 2-s2.0-85219390636 | - |
| dc.identifier.bibliographicCitation | Choi, Hansung; Seo, Daewon. (2025-04). Deep Minimax Classifiers for Imbalanced Datasets With a Small Number of Minority Samples. IEEE Journal of Selected Topics in Signal Processing, 19(3), 491–506. doi: 10.1109/JSTSP.2025.3546083 | - |
| dc.description.isOpenAccess | FALSE | - |
| dc.subject.keywordAuthor | Minimax training | - |
| dc.subject.keywordAuthor | imbalanced data | - |
| dc.subject.keywordAuthor | adversarial prior | - |
| dc.citation.endPage | 506 | - |
| dc.citation.number | 3 | - |
| dc.citation.startPage | 491 | - |
| dc.citation.title | IEEE Journal of Selected Topics in Signal Processing | - |
| dc.citation.volume | 19 | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
| dc.type.docType | Article | - |
Department of Electrical Engineering and Computer Science
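The abstract describes an alternating two-step procedure: a minimization step that trains the model under a target prior using a logit-adjusted loss, and a maximization step that shifts the target prior toward the worst-performing classes. The sketch below illustrates that loop shape under stated assumptions; the function names, the plain logit-adjustment (adding the log target prior to the logits), and the simple ascent-style prior update are illustrative stand-ins, not the paper's exact targeted logit-adjustment loss or linear-ascent update (see the authors' repository at https://github.com/hansung-choi/TLA-linear-ascent for the actual method).

```python
import numpy as np

def logit_adjusted_loss(logits, labels, target_prior):
    """Cross-entropy after adding log(target_prior) to the logits.

    This is the standard logit-adjustment form; the paper's *targeted*
    variant differs, so treat this as a generic approximation.
    """
    adjusted = logits + np.log(target_prior)                    # (N, C)
    shifted = adjusted - adjusted.max(axis=1, keepdims=True)    # stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def update_prior(prior, per_class_risk, step=0.1):
    """Shift the target prior toward the single worst-performing class.

    A minimal ascent step driven by worst-class identity rather than
    full per-class risk estimates; illustrative only.
    """
    direction = np.zeros_like(prior)
    direction[np.argmax(per_class_risk)] = 1.0
    new_prior = (1.0 - step) * prior + step * direction
    return new_prior / new_prior.sum()
```

In a full training loop, each outer iteration would run several gradient steps on `logit_adjusted_loss` (the minimization) using one data subset, then estimate per-class risks on a held-out subset and call `update_prior` (the maximization), matching the dataset partitioning the abstract describes for overparameterized networks.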