A Deep Reinforcement Learning-Based Policy for Efficient Resource Management in Vehicular Edge Computing

DC Field Value Language
dc.contributor.advisor 최지웅 -
dc.contributor.author Jeeyoo Kim -
dc.date.accessioned 2025-02-28T21:02:22Z -
dc.date.available 2025-03-01T06:00:33Z -
dc.date.issued 2025 -
dc.identifier.uri http://hdl.handle.net/20.500.11750/58051 -
dc.identifier.uri http://dgist.dcollection.net/common/orgView/200000843414 -
dc.description Load balancing, vehicle-to-everything, vehicle edge computing, autonomous driving, deep reinforcement learning -
dc.description.abstract Recently, the rapid development of autonomous vehicles has significantly increased the demand for data processing, making edge computing, which offloads computational tasks from the vehicle to nearby servers, essential. However, excessive offloading can overload network traffic, emphasizing the need for efficient resource management policies. In this paper, we propose a dynamic resource allocation model using a deep Q-network within a three-tier structure of local, edge, and cloud servers to optimize data processing for high-priority vehicles such as emergency vehicles. This approach keeps vehicle queues stable and reduces offloading costs. Although the usage cost of cloud servers is generally higher than that of edge servers, this paper considers several scenarios, including a realistic one in which the usage cost of edge servers is higher due to installation costs. The proposed policy aims to allocate resources efficiently in an autonomous vehicle network by differentiating resource allocation according to vehicle priority and offloading cost. Our analysis shows that the cost structure and vehicle priority have a significant impact on resource allocation in an autonomous vehicle network. Keywords: Load balancing, vehicle-to-everything, vehicle edge computing, autonomous driving, deep reinforcement learning -
dc.description.tableofcontents 1. Introduction 1
2. Background 4
2.1 V2X Technology 4
2.2 Mobile Edge Computing 5
2.3 Vehicular Edge Computing 7
2.4 Deep Reinforcement Learning 8
2.5 Related Works 10
3. Research Method 12
3.1 Scenario Description 12
3.1.1 Overview of the Local-Edge-Cloud 3-Tier System 12
3.1.2 Vehicle Prioritization 14
3.1.3 Costs of Offloading 14
3.1.4 Data Processing Flow 15
3.2 Simulation Setup 16
3.2.1 Queue Management 18
3.2.1.1 Queue Length Update 18
3.2.1.2 Processed Data Calculation 19
3.2.2 Dynamic Determination of Data Processing Location in Vehicle Networks through DRL 20
3.2.2.1 Balancing Exploration and Exploitation: ϵ-Greedy Policy 20
3.2.2.2 Q-Value Update: Bellman Equation 22
3.3 Algorithm Description 23
4. Performance Analysis 27
4.1 Processing Queue of Each Vehicle 27
4.2 Data Processing Ratios 29
4.3 Cost Comparison of Scenarios 35
5. Conclusion 37
-
dc.format.extent 46 -
dc.language eng -
dc.publisher DGIST -
dc.title A Deep Reinforcement Learning-Based Policy for Efficient Resource Management in Vehicular Edge Computing -
dc.title.alternative 차량 엣지 컴퓨팅에서 효율적인 자원 관리를 위한 심층 강화 학습 기반 정책 -
dc.type Thesis -
dc.identifier.doi 10.22677/THESIS.200000843414 -
dc.description.degree Master -
dc.contributor.department Department of Electrical Engineering and Computer Science -
dc.identifier.bibliographicCitation Jeeyoo Kim. (2025). A Deep Reinforcement Learning-Based Policy for Efficient Resource Management in Vehicular Edge Computing. doi: 10.22677/THESIS.200000843414 -
dc.contributor.coadvisor Jeongho Kwak -
dc.date.awarded 2025-02-01 -
dc.publisher.location Daegu -
dc.description.database dCollection -
dc.citation XT.IM 김78 202502 -
dc.date.accepted 2025-01-20 -
dc.contributor.alternativeDepartment 전기전자컴퓨터공학과 -
dc.subject.keyword Load balancing, vehicle-to-everything, vehicle edge computing, autonomous driving, deep reinforcement learning -
dc.contributor.affiliatedAuthor Jeeyoo Kim -
dc.contributor.affiliatedAuthor Ji-Woong Choi -
dc.contributor.affiliatedAuthor Jeongho Kwak -
dc.contributor.alternativeName 김지유 -
dc.contributor.alternativeName Ji-Woong Choi -
dc.contributor.alternativeName 곽정호 -
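
The table of contents above points to an ϵ-greedy policy (§3.2.2.1) and a Bellman-equation Q-value update (§3.2.2.2) for deciding where each vehicle's data is processed. As a rough illustration only, the sketch below implements these two standard steps with a tabular Q-function over three processing tiers; the thesis itself uses a deep Q-network, and the state encoding, reward, and hyperparameter values here are invented for the example.

```python
import random

# Toy stand-in for the thesis's DQN: a tabular Q-function over the
# three processing tiers (local / edge / cloud). States, rewards, and
# hyperparameters are illustrative assumptions, not the thesis's values.
ACTIONS = ["local", "edge", "cloud"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # hypothetical learning rate, discount, exploration rate

def select_action(q_table, state, epsilon=EPSILON):
    """Epsilon-greedy: with probability epsilon pick a random tier (explore),
    otherwise pick the tier with the highest Q-value (exploit)."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    values = q_table.setdefault(state, {a: 0.0 for a in ACTIONS})
    return max(values, key=values.get)

def update_q(q_table, state, action, reward, next_state):
    """One Bellman backup:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    q_table.setdefault(state, {a: 0.0 for a in ACTIONS})
    q_table.setdefault(next_state, {a: 0.0 for a in ACTIONS})
    target = reward + GAMMA * max(q_table[next_state].values())
    q_table[state][action] += ALPHA * (target - q_table[state][action])
```

In a full simulation loop, the reward would trade off queue stability against the tier-dependent offloading cost (higher for cloud in the usual scenario, higher for edge in the installation-cost scenario the abstract describes), and a neural network would replace the lookup table.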

File Downloads

  • There are no files associated with this item.
