VisionScaling: Dynamic Deep Learning Model and Resource Scaling in Mobile Vision Applications
Title
VisionScaling: Dynamic Deep Learning Model and Resource Scaling in Mobile Vision Applications
Issued Date
2024-05
Citation
Choi, Pyeongjun. (2024-05). VisionScaling: Dynamic Deep Learning Model and Resource Scaling in Mobile Vision Applications. IEEE Internet of Things Journal, 11(9), 15523–15539. doi: 10.1109/JIOT.2024.3349512
Type
Article
Author Keywords
Computation offloading; deep learning; dynamic voltage and frequency scaling (DVFS); mobile vision service; model scaling; online convex optimization (OCO)
Keywords
ALLOCATION; OPTIMIZATION
ISSN
2327-4662
Abstract
As deep learning technology advances, mobile vision applications such as augmented reality (AR) and autonomous vehicles are becoming prevalent. The performance of such services depends heavily on the computing capability of different mobile devices, dynamic service requests, the stochastic mobile network environment, and the learning models. Existing studies have optimized mobile resource allocation and learning model design independently, each taking the other side's parameters and computing/network resources as given. However, they cannot reflect realistic mobile environments, since the time-varying wireless channel and service requests are assumed to follow specific distributions. Without these unrealistic assumptions, we propose an algorithm, namely VisionScaling, that jointly optimizes learning models and processing/network resources while adapting to system dynamics, by leveraging the state-of-the-art online convex optimization (OCO) framework. In every time slot, VisionScaling jointly makes decisions on (i) the learning model and the size of the input layer on the learning side, and (ii) the GPU clock frequency, the transmission rate, and the computation offloading policy on the resource side. We theoretically show that VisionScaling asymptotically converges to the offline optimal performance, satisfying sublinearity. Moreover, we demonstrate via real trace-driven simulations that VisionScaling saves at least 24% of dynamic regret, which captures energy consumption and processed frames per second (PFPS) under a mean average precision (mAP) constraint. Finally, we show that VisionScaling attains 30.8% energy savings and improves PFPS by 39.7% while satisfying the target mAP on a testbed with an Nvidia Jetson TX2 and an edge server equipped with a high-end GPU. © 2024 IEEE
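
The abstract outlines a per-time-slot decision loop built on online convex optimization (OCO). As a minimal sketch of that generic pattern, the Python snippet below runs projected online gradient descent over a relaxed decision vector (GPU clock, transmission rate, offloading fraction, input-layer scale) with a diminishing step size, the standard route to sublinear regret. The bounds, cost model, and variable names are illustrative assumptions, not the paper's actual VisionScaling formulation.

import numpy as np

# Decision vector x = [gpu_clock_GHz, tx_rate_Mbps, offload_fraction, input_scale].
# Bounds are assumed for illustration (roughly Jetson TX2-like ranges).
lo = np.array([0.1, 1.0, 0.0, 0.25])
hi = np.array([1.3, 100.0, 1.0, 1.0])

def cost_gradient(x, t):
    # Hypothetical gradient of the slot-t cost (energy plus a latency penalty).
    # In a real system this would come from measured energy/PFPS/mAP feedback.
    rng = np.random.default_rng(t)       # stand-in for time-varying dynamics
    return rng.normal(size=4) + 0.5 * x  # noisy convex (quadratic) proxy

x = (lo + hi) / 2                        # start mid-range
for t in range(1000):                    # one iteration per time slot
    eta = 0.5 / np.sqrt(t + 1)           # diminishing step size -> sublinear regret
    x = x - eta * cost_gradient(x, t)    # gradient step on the observed cost
    x = np.clip(x, lo, hi)               # project back onto the feasible box

print("final decision (clock, rate, offload, scale):", np.round(x, 3))

In the paper's setting the model choice and offloading policy are discrete, so the continuous iterate would additionally be rounded or sampled; the sketch keeps everything continuous for brevity.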
URI
http://hdl.handle.net/20.500.11750/56645
DOI
10.1109/JIOT.2024.3349512
Publisher
Institute of Electrical and Electronics Engineers Inc.

File Downloads

  • There are no files associated with this item.

Related Researcher

Kwak, Jeongho (곽정호)

Department of Electrical Engineering and Computer Science
