Detail View

AdaLo: Adaptive learning rate optimizer with loss for classification

DC Field Value
dc.contributor.author Jeong, Jae Jin
dc.contributor.author Koo, Gyogwon
dc.date.accessioned 2024-12-08T21:10:13Z
dc.date.available 2024-12-08T21:10:13Z
dc.date.created 2024-11-18
dc.date.issued 2025-02
dc.identifier.issn 0020-0255
dc.identifier.uri http://hdl.handle.net/20.500.11750/57280
dc.description.abstract Gradient-based algorithms are frequently used to optimize neural networks, and various methods have been developed to enhance their performance. Among them, the adaptive moment estimation (Adam) optimizer is well known for its effectiveness and ease of implementation. However, it suffers from poor generalization without a learning rate scheduler, and it carries a large computational burden because of its individual per-parameter learning rate terms, known as the second-order moments of the gradients. In this study, we propose a novel gradient descent algorithm called AdaLo, which stands for Adaptive Learning Rate Optimizer with Loss. AdaLo addresses both problems through its adaptive learning rate (ALR). First, the ALR adjusts the learning rate based on the model's training progress, specifically the loss value, so it effectively replaces a traditional learning rate scheduler. Second, the ALR is a single scalar global learning rate, which reduces the computational burden. In addition, the stability of the proposed method is analyzed from the perspective of the learning rate. The advantages of AdaLo were demonstrated on non-convex functions, and simulation results indicated that the proposed optimizer outperformed Adam, AdaBelief, and diffGrad with regard to training error and test accuracy. © 2024 Elsevier Inc.
dc.language English
dc.publisher Elsevier
dc.title AdaLo: Adaptive learning rate optimizer with loss for classification
dc.type Article
dc.identifier.doi 10.1016/j.ins.2024.121607
dc.identifier.wosid 001371040800001
dc.identifier.scopusid 2-s2.0-85208023325
dc.identifier.bibliographicCitation Jeong, Jae Jin. (2025-02). AdaLo: Adaptive learning rate optimizer with loss for classification. Information Sciences, 690. doi: 10.1016/j.ins.2024.121607
dc.description.isOpenAccess FALSE
dc.subject.keywordAuthor Optimization
dc.subject.keywordAuthor Learning rate scheduler
dc.subject.keywordAuthor CIFAR
dc.subject.keywordAuthor Neural network
dc.subject.keywordAuthor Gradient descent
dc.citation.title Information Sciences
dc.citation.volume 690
dc.description.journalRegisteredClass scie
dc.description.journalRegisteredClass scopus
dc.relation.journalResearchArea Computer Science
dc.relation.journalWebOfScienceCategory Computer Science, Information Systems
dc.type.docType Article
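
The abstract above explains AdaLo's central mechanism: a single scalar learning rate that adapts to the current loss value, standing in for both a hand-tuned learning rate scheduler and Adam's per-parameter second-moment buffers. The record does not reproduce the paper's actual update rule, so the sketch below is only a hypothetical illustration of that idea, assuming a momentum-SGD backbone; the class name AdaLoSketch, the squashing function loss / (1 + loss), and the default hyperparameters are illustrative assumptions, not the published algorithm.

```python
import numpy as np

class AdaLoSketch:
    """Hypothetical sketch of a loss-adaptive optimizer in the spirit of
    the abstract: one scalar learning rate modulated by the current loss,
    plus first-order momentum, and no per-parameter second moments.
    This is NOT the published AdaLo rule (doi:10.1016/j.ins.2024.121607).
    """

    def __init__(self, base_lr=1e-3, beta=0.9):
        self.base_lr = base_lr  # upper bound on the step size (assumed)
        self.beta = beta        # momentum coefficient (assumed)
        self.m = None           # first-moment (momentum) buffers

    def step(self, params, grads, loss):
        # Scalar adaptive learning rate (ALR): near base_lr while the loss
        # is large, shrinking toward zero as training converges, so it
        # plays the role of a learning rate scheduler. This squashing
        # function is an illustrative choice, not the paper's.
        alr = self.base_lr * loss / (1.0 + loss)
        if self.m is None:
            self.m = [np.zeros_like(g) for g in grads]
        for p, g, m in zip(params, grads, self.m):
            m *= self.beta              # decay the momentum buffer
            m += (1.0 - self.beta) * g  # accumulate the new gradient
            p -= alr * m                # one scalar step size for all parameters

# Toy usage: minimize f(w) = ||w||^2; the step size decays with the loss.
w = np.array([3.0, -2.0])
opt = AdaLoSketch(base_lr=0.5)
for _ in range(200):
    loss = float(w @ w)
    opt.step([w], [2.0 * w], loss)
print(w)  # approaches the origin (slowly, since the step size decays with the loss)
```

Because the ALR is one scalar recomputed from the loss at each step, this scheme keeps only the momentum buffer as per-parameter state, whereas Adam additionally stores an element-wise second-moment buffer the size of the model, which is the computational saving the abstract describes.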

File Downloads

  • There are no files associated with this item.


Related Researcher

Koo, Gyogwon (구교권)

Division of Intelligent Robotics

