Journal of Engineering Research
DOI: https://doi.org/10.70259/engJER.2025.932024
Abstract
Integrating Device-to-Device (D2D) communication into Heterogeneous Cellular Networks (HCNs) augmented with Millimeter Wave (mmWave) technology is a compelling approach to meeting the escalating demand for ultra-high data throughput in next-generation wireless systems. Although these advancements significantly improve data transmission efficiency and network scalability, the coexistence of D2D and cellular users in a shared spectrum causes considerable interference and complicates network coordination. To mitigate this, the interference problem is formulated as a joint mode-selection and resource-allocation optimization task that aims to maximize the aggregate system throughput while satisfying strict Signal-to-Interference-plus-Noise Ratio (SINR) constraints for both communication tiers. To tackle this non-trivial problem, a decentralized multi-agent Deep Reinforcement Learning (DRL) framework is proposed, with a reward structure tailored to reflect the global system objective. Additionally, to reduce computational overhead, a shared-policy learning approach enables D2D agents to make informed decisions based on selectively observed historical training data. Comparative simulation results show that the proposed DRL strategy outperforms conventional schemes in maximizing the system sum rate.
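To make the stated objective concrete, the sketch below shows one plausible way (not taken from the paper) to encode a sum-rate objective with per-tier SINR constraints as a per-step reward for the D2D agents. The threshold values, penalty, bandwidth normalization, and function names are illustrative assumptions, not the authors' formulation.

# Hypothetical sketch: penalized sum-rate reward for joint mode selection
# and resource allocation, mirroring the global objective in the abstract.
import numpy as np

def sinr(signal_power, interference_powers, noise_power):
    # Signal-to-interference-plus-noise ratio for a single link.
    return signal_power / (np.sum(interference_powers) + noise_power)

def global_reward(cellular_sinrs, d2d_sinrs,
                  cellular_sinr_min=1.0, d2d_sinr_min=1.0,
                  bandwidth_hz=1.0, penalty=-1.0):
    # Shannon sum rate over all links; a flat penalty is returned whenever
    # any cellular or D2D link violates its (assumed) SINR threshold.
    if np.any(cellular_sinrs < cellular_sinr_min) or np.any(d2d_sinrs < d2d_sinr_min):
        return penalty
    all_sinrs = np.concatenate([cellular_sinrs, d2d_sinrs])
    return float(bandwidth_hz * np.sum(np.log2(1.0 + all_sinrs)))

# Toy usage: two cellular users and two D2D pairs sharing spectrum.
cell = np.array([sinr(1e-6, [2e-8, 1e-8], 1e-9),
                 sinr(8e-7, [3e-8],       1e-9)])
d2d  = np.array([sinr(5e-7, [1e-8],       1e-9),
                 sinr(6e-7, [2e-8],       1e-9)])
print(global_reward(cell, d2d))

In a shared-policy multi-agent setup, each D2D agent could evaluate this same reward after selecting its transmission mode and channel, so the locally observed reward tracks the global objective described above.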
Keywords
Device-to-device communication; mmWave communication; spectrum resource allocation; deep reinforcement learning; HCNs
Recommended Citation
Shukry, Suzan Mohamed (2025) "Interference Management for Device-to-Device Communications in Heterogeneous Cellular Networks using Deep Reinforcement Learning," Journal of Engineering Research: Vol. 9, Iss. 3, Article 22.
DOI: https://doi.org/10.70259/engJER.2025.932024
Available at: https://digitalcommons.aaru.edu.jo/erjeng/vol9/iss3/22