
Judgement-based Deep Q-Learning Framework for Interference Management in Small Cell Networks

DC Field Value Language
dc.contributor.author Yoon, Pildo -
dc.contributor.author Cho, Yunhee -
dc.contributor.author Na, Jeehyeon -
dc.contributor.author Kwak, Jeongho -
dc.date.accessioned 2024-12-23T21:40:22Z -
dc.date.available 2024-12-23T21:40:22Z -
dc.date.created 2024-10-04 -
dc.date.issued 2024-09 -
dc.identifier.issn 2169-3536 -
dc.identifier.uri http://hdl.handle.net/20.500.11750/57400 -
dc.description.abstract Small cell technology for future 6G networks allows network operators to increase network capacity by reducing the distance between BSs (Base Stations) and users, thereby increasing wireless channel gains. However, it also incurs significant computational complexity to optimally mitigate inter-cell and/or inter-beam interference by dynamically managing beamforming, transmit power, and user scheduling. In this paper, we formulate an optimization problem that maximizes the sum utility of users, where the decision variables are beam pattern selection, user scheduling, and transmit power allocation in small cell networks. Next, we capture the room for performance enhancement and low computational complexity that existing studies have overlooked by proposing i) a novel DQN (Deep Q-Network) decision-making process that jointly learns all decision variables in a single DRL (Deep Reinforcement Learning) model without the curse of dimensionality, by assigning a user-specific state to each agent with a distributed interference approximation, meaning that the interference to all users in all neighboring BSs can be abstracted by a single user, and ii) a novel reward design in which the reward is judged against the result of a practical optimization-based solution. Finally, we show the superiority of the proposed DQL (Deep Q-Learning) algorithm over existing interference management algorithms via simulations, and provide insights for network providers who will leverage DQL in future small cell networks through in-depth performance analysis against a conventional DQL algorithm and practical optimization algorithms. © IEEE. -
dc.language English -
dc.publisher Institute of Electrical and Electronics Engineers Inc. -
dc.title Judgement-based Deep Q-Learning Framework for Interference Management in Small Cell Networks -
dc.type Article -
dc.identifier.doi 10.1109/ACCESS.2024.3462987 -
dc.identifier.wosid 001327317100001 -
dc.identifier.scopusid 2-s2.0-85204640700 -
dc.identifier.bibliographicCitation Yoon, Pildo, Cho, Yunhee, Na, Jeehyeon, & Kwak, Jeongho. (2024-09). Judgement-based Deep Q-Learning Framework for Interference Management in Small Cell Networks. IEEE Access, 12, 136771–136782. doi: 10.1109/ACCESS.2024.3462987 -
dc.description.isOpenAccess TRUE -
dc.subject.keywordAuthor beam pattern selection -
dc.subject.keywordAuthor power allocation -
dc.subject.keywordAuthor user scheduling -
dc.subject.keywordAuthor Deep Q-learning -
dc.subject.keywordAuthor judgement-based learning -
dc.subject.keywordPlus POWER ALLOCATION -
dc.subject.keywordPlus USER -
dc.citation.endPage 136782 -
dc.citation.startPage 136771 -
dc.citation.title IEEE Access -
dc.citation.volume 12 -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.relation.journalResearchArea Computer Science; Engineering; Telecommunications -
dc.relation.journalWebOfScienceCategory Computer Science, Information Systems; Engineering, Electrical & Electronic; Telecommunications -
dc.type.docType Article -
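
The abstract's central idea — judging the learner's reward against a practical optimization-based baseline — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: a tabular Q-learning update stands in for the paper's DQN, and the function names and the ±1 reward shaping are assumptions.

```python
def judged_reward(agent_utility, baseline_utility):
    """Judgement-based reward (assumption for illustration): compare
    the agent's achieved sum utility against a practical
    optimization-based baseline, rewarding the agent only when it
    matches or beats that baseline."""
    return 1.0 if agent_utility >= baseline_utility else -1.0

def q_update(q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.9):
    """Standard tabular Q-learning update, standing in for the
    paper's deep Q-network. q maps (state, action) -> value."""
    old = q.get((state, action), 0.0)
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return q[(state, action)]

# Example: an agent whose utility matches or beats the baseline
# receives +1, and its Q-value moves toward that reward.
q = {}
r = judged_reward(agent_utility=5.0, baseline_utility=3.0)
q_update(q, state=0, action=1, reward=r, next_state=0, actions=[0, 1])
```

In the paper's full setting, the state, action, and utility would come from the small cell environment (beam pattern, scheduled user, transmit power), and the baseline from the optimization-based solution described in the abstract.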


Related Researcher

Kwak, Jeongho

Department of Electrical Engineering and Computer Science

