<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/12124">
    <title>Repository Collection</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/12124</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/57403" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/57402" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/57401" />
        <rdf:li rdf:resource="https://scholar.dgist.ac.kr/handle/20.500.11750/57400" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-04T12:10:40Z</dc:date>
  </channel>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/57403">
    <title>Dynamic content-cached satellite selection and routing for power minimization in LEO satellite networks</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/57403</link>
    <description>Title: Dynamic content-cached satellite selection and routing for power minimization in LEO satellite networks
Author(s): Seo, Jeongmin; Ham, Dongho; Kwak, Jeongho
Abstract: Efficient delivery of content to areas where terrestrial Internet service is unavailable is possible via content caching at low Earth orbit (LEO) satellites. Content cached across several LEO satellites must be delivered over inter-satellite links (ISLs) with appropriate routing techniques. Until now, content caching and routing have been optimized independently. To tackle this issue, we jointly optimize content-cached satellite selection and routing, using the example of Earth observation data cached across multiple satellites. In this paper, we first formulate a dynamic power minimization problem constrained by the queue stability of all LEO satellites, where the control variables are the selection of the content-cached satellite and the routing decision at every satellite. To solve this long-term time-averaged problem, we leverage the Lyapunov optimization framework to transform the original problem into a series of slot-by-slot problems. Moreover, we prove via theoretical analysis that the average power consumption and the average queue backlog of the proposed algorithm are upper-bounded. Finally, through extensive simulations, we demonstrate that our proposed algorithm surpasses existing independent content-retrieval algorithms in terms of power consumption, queue backlog, and fairness. © 2024 The Author(s)</description>
    <dc:date>2024-11-30T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/57402">
    <title>Cutting-Edge Inference: Dynamic DNN Model Partitioning and Resource Scaling for Mobile AI</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/57402</link>
    <description>Title: Cutting-Edge Inference: Dynamic DNN Model Partitioning and Resource Scaling for Mobile AI
Author(s): Lim, Jeong-A; Lee, Joohyun; Kwak, Jeongho; Kim, Yeongjin
Abstract: Recently, applications using artificial intelligence (AI) techniques on mobile devices, such as augmented reality, have become pervasive. The hardware specifications of mobile devices, dynamic service demands, stochastic network states, and the characteristics of DNN (Deep Neural Network) models all affect the quality of experience (QoE) of such applications. In this paper, we propose CutEdge, which leverages a virtual queue-based Lyapunov optimization framework to jointly optimize DNN model partitioning between a mobile device and a mobile edge computing (MEC) server, together with the processing/networking resources of the mobile device, under internal/external system dynamics. Specifically, CutEdge simultaneously decides (i) the partition point of the DNN model between the mobile device and the MEC server, (ii) the GPU clock frequency, and (iii) the transmission rates of the mobile device. Then, we theoretically derive the optimal trade-off curves among energy consumption, throughput, and end-to-end latency yielded by CutEdge; these QoE metrics have not been jointly addressed in previous studies. Moreover, we show the impact of jointly optimizing the three control parameters on performance via real trace-driven simulations. Finally, we show the superiority of CutEdge over existing algorithms through experiments on a testbed implemented with an embedded AI device and an MEC server. © IEEE.</description>
    <dc:date>2024-10-31T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/57401">
    <title>Satellite Network Slice Planning with Handover Trigger and DRL-Based Virtual Network Embedding</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/57401</link>
    <description>Title: Satellite Network Slice Planning with Handover Trigger and DRL-Based Virtual Network Embedding
Author(s): Kim, Taeyeoun; Kim, Seonghoon; Kwak, Jeongho; Choi, Jihwan P.
Abstract: For satellite network slicing, end-to-end connectivity should be maintained during the service time of slices despite the mobility of low Earth orbit (LEO) satellites. A ground user or station should update its satellite connection at least every 10 minutes, and the routing paths established through inter-satellite links (ISLs) are susceptible to performance degradation as relative satellite distances fluctuate. Therefore, end-to-end connectivity management of a satellite network slice, and its update during the slice service time, are crucial issues. In satellite network slice planning (SNSP), the end-to-end connectivity decision is made by solving a virtual network embedding (VNE) problem, and connectivity is maintained by updating the end-to-end routing path when a satellite-ground handover occurs. Hence, integrated management of VNE and handover is necessary for SNSP. In this paper, we propose an efficient SNSP algorithm leveraging a simple and lightweight deep reinforcement learning (DRL) framework in which the learning actions select appropriate embedding methods and optimal state-action pairs. A handover trigger (HT) mechanism is developed by introducing an SNSP utility, a joint function of end-to-end latency and service available time, so that handover occurs preemptively before significant performance degradation. Moreover, dynamic VNE and re-embedding methods are proposed using a deep Q-network (DQN) framework. Extensive simulation results show that the proposed DQN-HT algorithm achieves approximately 36% lower average end-to-end latency compared with benchmarks. © IEEE.</description>
    <dc:date>2025-03-31T15:00:00Z</dc:date>
  </item>
  <item rdf:about="https://scholar.dgist.ac.kr/handle/20.500.11750/57400">
    <title>Judgement-based Deep Q-Learning Framework for Interference Management in Small Cell Networks</title>
    <link>https://scholar.dgist.ac.kr/handle/20.500.11750/57400</link>
    <description>Title: Judgement-based Deep Q-Learning Framework for Interference Management in Small Cell Networks
Author(s): Yoon, Pildo; Cho, Yunhee; Na, Jeehyeon; Kwak, Jeongho
Abstract: Small cell technology for future 6G networks allows network operators to increase network capacity by reducing the distance between BSs (Base Stations) and users, thereby increasing wireless channel gains. However, it also incurs significant computational complexity to optimally mitigate inter-cell and/or inter-beam interference by dynamically managing beamforming, transmit power, and user scheduling. In this paper, we formulate an optimization problem that maximizes the sum utility of users, where the decision variables are beam pattern selection, user scheduling, and transmit power allocation in small cell networks. Next, we capture the room for performance enhancement and low computational complexity that existing studies have overlooked by proposing i) a novel DQN (Deep Q-Network) decision-making process that jointly learns all decision variables in a single DRL (Deep Reinforcement Learning) model without the curse of dimensionality, by adopting a user-specific state for each agent together with a distributed interference approximation in which the interference from all users in all neighbor BSs is abstracted by a single user, and ii) a novel reward design in which the reward is judged against the result of a practical optimization-based solution. Finally, we show the superiority of the proposed DQL (Deep Q-Learning) algorithm over existing interference management algorithms via simulations, and we provide insights for network providers who will leverage DQL in future small cell networks through in-depth performance analysis against a conventional DQL algorithm and practical optimization algorithms. © IEEE.</description>
    <dc:date>2024-08-31T15:00:00Z</dc:date>
  </item>
</rdf:RDF>