Full metadata record

DC Field Value Language
dc.contributor.author Choi, Pyeongjun -
dc.contributor.author Ham, Dongho -
dc.contributor.author Kim, Yeongjin -
dc.contributor.author Kwak, Jeongho -
dc.date.accessioned 2024-06-17T11:40:13Z -
dc.date.available 2024-06-17T11:40:13Z -
dc.date.created 2024-01-19 -
dc.date.issued 2024-05 -
dc.identifier.issn 2327-4662 -
dc.identifier.uri http://hdl.handle.net/20.500.11750/56645 -
dc.description.abstract As deep learning technology advances, mobile vision applications such as augmented reality (AR) and autonomous vehicles are becoming prevalent. The performance of such services depends heavily on the computing capability of heterogeneous mobile devices, dynamic service requests, the stochastic mobile network environment, and the learning models. Existing studies have optimized mobile resource allocation and learning model design independently, each taking the other side's parameters and computing/network resources as given. Moreover, they cannot reflect realistic mobile environments, since the time-varying wireless channel and service requests are assumed to follow specific distributions. Without these unrealistic assumptions, we propose an algorithm, termed VisionScaling, that jointly optimizes learning models and processing/network resources while adapting to system dynamics, by leveraging the state-of-the-art online convex optimization (OCO) framework. In every time slot, VisionScaling jointly decides (i) the learning model and the size of the input layer on the learning side, and (ii) the GPU clock frequency, the transmission rate, and the computation offloading policy on the resource side. We theoretically show that VisionScaling asymptotically converges to the offline optimal performance with sublinear regret. Moreover, via real trace-driven simulations, we demonstrate that VisionScaling reduces the dynamic regret, which captures energy consumption and processed frames per second (PFPS) under a mean average precision (mAP) constraint, by at least 24%. Finally, we show that VisionScaling attains 30.8% energy savings and improves PFPS by 39.7% while satisfying the target mAP on a testbed comprising an Nvidia Jetson TX2 and an edge server equipped with a high-end GPU. © 2024 IEEE -
dc.language English -
dc.publisher Institute of Electrical and Electronics Engineers Inc. -
dc.title VisionScaling: Dynamic Deep Learning Model and Resource Scaling in Mobile Vision Applications -
dc.type Article -
dc.identifier.doi 10.1109/JIOT.2024.3349512 -
dc.identifier.wosid 001216833600006 -
dc.identifier.scopusid 2-s2.0-85181556821 -
dc.identifier.bibliographicCitation IEEE Internet of Things Journal, v.11, no.9, pp.15523 - 15539 -
dc.description.isOpenAccess FALSE -
dc.subject.keywordAuthor Computation offloading -
dc.subject.keywordAuthor deep learning -
dc.subject.keywordAuthor dynamic voltage and frequency scaling (DVFS) -
dc.subject.keywordAuthor mobile vision service -
dc.subject.keywordAuthor model scaling -
dc.subject.keywordAuthor online convex optimization (OCO) -
dc.subject.keywordPlus ALLOCATION -
dc.subject.keywordPlus OPTIMIZATION -
dc.citation.endPage 15539 -
dc.citation.number 9 -
dc.citation.startPage 15523 -
dc.citation.title IEEE Internet of Things Journal -
dc.citation.volume 11 -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.relation.journalResearchArea Computer Science; Engineering; Telecommunications -
dc.relation.journalWebOfScienceCategory Computer Science, Information Systems; Engineering, Electrical & Electronic; Telecommunications -
dc.type.docType Article -
Files in This Item:

There are no files associated with this item.

Appears in Collections:
Department of Electrical Engineering and Computer Science Intelligent Computing & Networking Laboratory 1. Journal Articles


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.
