
Energy Efficient Multi Hop D2D Communication Using Deep Reinforcement Learning in 5G Networks


Affiliations
1 Faculty of Computer Science, Pacific Academy of Higher Education and Research University, Udaipur, Rajasthan, India
 

Device-to-device (D2D) communication is one of the most promising 5G technologies for wireless networks. It offers peer-to-peer users high data rates, ubiquity, low latency, and high energy and spectrum efficiency. These benefits can be fully realized when D2D communication operates in a multi-hop scenario; however, energy-efficient multi-hop routing is a challenging task. Hence, in this research a deep reinforcement learning-based multi-hop routing protocol is introduced. Energy consumption is taken into account by the proposed double deep Q-learning technique when identifying candidate paths. The optimal path is then selected by the proposed Gannet Chimp Optimization (GCO) algorithm using a multi-objective fitness function. Assessed on measures such as packet delivery ratio, latency, residual energy, throughput, and network lifetime, the proposed method achieved values of 99.89, 1.63, 0.98, 64, and 99.69, respectively.
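The path-discovery stage described above relies on double deep Q-learning, whose key idea is to decouple action selection from action evaluation using two value estimators. As a minimal sketch of that idea, the following tabular double Q-learning example learns an energy-aware route on a toy five-node topology (the graph, energy costs, reward shaping, and hyperparameters are all illustrative assumptions, not the paper's actual setup):

```python
import random

# Toy multi-hop topology: node -> {neighbor: energy cost}  (assumed values)
GRAPH = {
    "S": {"A": 2.0, "B": 1.0},
    "A": {"D": 1.0},
    "B": {"C": 1.0},
    "C": {"D": 4.0},
    "D": {},  # destination
}
DEST, ALPHA, GAMMA, EPS = "D", 0.3, 0.9, 0.2

def double_q_route(episodes=3000, seed=0):
    """Learn a low-energy S->D route with tabular double Q-learning."""
    rng = random.Random(seed)
    qa = {(s, a): 0.0 for s in GRAPH for a in GRAPH[s]}
    qb = dict(qa)

    def best(q, s):
        return max(GRAPH[s], key=lambda a: q[(s, a)])

    for _ in range(episodes):
        s = "S"
        while s != DEST:
            # Epsilon-greedy over the sum of both estimators
            combined = {k: qa[k] + qb[k] for k in qa}
            a = rng.choice(list(GRAPH[s])) if rng.random() < EPS else best(combined, s)
            nxt = a
            # Reward penalizes per-hop energy use, bonus on reaching the destination
            r = -GRAPH[s][a] + (10.0 if nxt == DEST else 0.0)
            # Double Q update: one table picks the next action,
            # the other evaluates it (reduces overestimation bias)
            q1, q2 = (qa, qb) if rng.random() < 0.5 else (qb, qa)
            if GRAPH[nxt]:
                a_star = best(q1, nxt)
                target = r + GAMMA * q2[(nxt, a_star)]
            else:
                target = r
            q1[(s, a)] += ALPHA * (target - q1[(s, a)])
            s = nxt

    # Greedy rollout of the learned route
    path, s = ["S"], "S"
    while s != DEST:
        s = best({k: qa[k] + qb[k] for k in qa}, s)
        path.append(s)
    return path
```

On this toy graph the two-hop route S→A→D costs 3 energy units versus 6 for S→B→C→D, so the learned greedy rollout converges to the cheaper path; the paper's DDQN replaces the table with a neural network to scale to realistic topologies.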

Keywords

5G Networks, D2D Communication, Energy Efficient Routing, Multi-Hop Path, Deep Q Learning, Optimal Path Selection.
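The optimal-path stage scores candidate paths with a multi-objective fitness function. A minimal sketch of such a function is below; the weights, metric names, and [0, 1] normalization are assumptions for illustration (the paper's exact fitness terms are not reproduced here), and the exhaustive `max` stands in for the GCO search over a larger candidate set:

```python
def path_fitness(path_metrics, w_energy=0.4, w_delay=0.3, w_tput=0.3):
    """Weighted multi-objective fitness for one candidate path.
    Metrics are assumed normalized to [0, 1]; higher residual energy
    and throughput are better, lower delay is better."""
    e, d, t = (path_metrics[k] for k in ("energy", "delay", "throughput"))
    return w_energy * e + w_delay * (1.0 - d) + w_tput * t

def select_path(candidates):
    # Pick the candidate maximizing the fitness (the role the GCO
    # metaheuristic plays over a much larger search space)
    return max(candidates, key=lambda c: path_fitness(c["metrics"]))

# Hypothetical candidate paths with assumed normalized metrics
candidates = [
    {"path": ["S", "A", "D"],
     "metrics": {"energy": 0.9, "delay": 0.2, "throughput": 0.7}},
    {"path": ["S", "B", "C", "D"],
     "metrics": {"energy": 0.6, "delay": 0.5, "throughput": 0.8}},
]
# select_path(candidates)["path"] -> ["S", "A", "D"]
```

The weighted-sum form is one common way to scalarize competing objectives (energy, latency, throughput) into a single score a metaheuristic can optimize; the weights trade off energy savings against QoS.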

References

  • Z. Li, C. Guo, and Y. Xuan, “A Multi-Agent Deep Reinforcement Learning Based Spectrum Allocation Framework for D2D Communications,” in 2019 IEEE Global Communications Conference (GLOBECOM), Dec. 2019, pp. 1–6. doi: 10.1109/GLOBECOM38437.2019.9013763.
  • V. Sridhar and S. E. Roslin, “Energy Efficient Device to Device Data Transmission Based on Deep Artificial Learning in 6G Networks,” Int. J. Comput. Networks Appl., vol. 9, no. 5, pp. 568–577, 2022, doi: 10.22247/ijcna/2022/215917.
  • M. Alnakhli, S. Anand, and R. Chandramouli, “Joint Spectrum and Energy Efficiency in Device to Device Communication Enabled Wireless Networks,” IEEE Trans. Cogn. Commun. Netw., vol. 3, no. 2, pp. 217–225, Jun. 2017, doi: 10.1109/TCCN.2017.2689015.
  • M. Waqas et al., “A Comprehensive Survey on Mobility-Aware D2D Communications: Principles, Practice and Challenges,” IEEE Commun. Surv. Tutorials, vol. 22, no. 3, pp. 1863–1886, 2020, doi: 10.1109/COMST.2019.2923708.
  • R. A. Diab, N. Bastaki, and A. Abdrabou, “A Survey on Routing Protocols for Delay and Energy-Constrained Cognitive Radio Networks,” IEEE Access, vol. 8, pp. 198779–198800, 2020, doi: 10.1109/ACCESS.2020.3035325.
  • L. Li, L. Chang, and F. Song, “A Smart Collaborative Routing Protocol for QoE Enhancement in Multi-Hop Wireless Networks,” IEEE Access, vol. 8, pp. 100963–100973, 2020, doi: 10.1109/ACCESS.2020.2997350.
  • X. Zhou, M. Sun, G. Y. Li, and B. H. Fred Juang, “Intelligent wireless communications enabled by cognitive radio and machine learning,” China Commun., vol. 15, no. 12, pp. 16–48, 2018.
  • K. M. Thilina, Kae Won Choi, N. Saquib, and E. Hossain, “Machine Learning Techniques for Cooperative Spectrum Sensing in Cognitive Radio Networks,” IEEE J. Sel. Areas Commun., vol. 31, no. 11, pp. 2209–2221, Nov. 2013, doi: 10.1109/JSAC.2013.131120.
  • R. Joon and P. Tomar, “Energy Aware Q-learning AODV (EAQ-AODV) routing for cognitive radio sensor networks,” J. King Saud Univ. - Comput. Inf. Sci., vol. 34, no. 9, pp. 6989–7000, Oct. 2022, doi: 10.1016/j.jksuci.2022.03.021.
  • J. Ramkumar and R. Vadivel, “Improved Wolf prey inspired protocol for routing in cognitive radio Ad Hoc networks,” Int. J. Comput. Networks Appl., vol. 7, no. 5, pp. 126–136, 2020, doi: 10.22247/ijcna/2020/202977.
  • M. C. Hlophe and B. T. Maharaj, “QoS provisioning and energy saving scheme for distributed cognitive radio networks using deep learning,” J. Commun. Networks, vol. 22, no. 3, pp. 185–204, Jun. 2020, doi: 10.1109/JCN.2020.000013.
  • H. B. Salameh, S. Mahasneh, A. Musa, R. Halloush, and Y. Jararweh, “Effective peer-to-peer routing in heterogeneous half-duplex and full-duplex multi-hop cognitive radio networks,” Peer-to-Peer Netw. Appl., vol. 14, no. 5, pp. 3225–3234, Sep. 2021, doi: 10.1007/s12083-021-01183-6.
  • Y. Zhi, J. Tian, X. Deng, J. Qiao, and D. Lu, “Deep reinforcement learning-based resource allocation for D2D communications in heterogeneous cellular networks,” Digit. Commun. Networks, vol. 8, no. 5, pp. 834–842, Oct. 2022, doi: 10.1016/j.dcan.2021.09.013.
  • S. Yu and J. W. Lee, “Deep Reinforcement Learning Based Resource Allocation for D2D Communications Underlay Cellular Networks,” Sensors, vol. 22, no. 23, p. 9459, Dec. 2022, doi: 10.3390/s22239459.
  • X. Li, G. Chen, G. Wu, Z. Sun, and G. Chen, “Research on Multi-Agent D2D Communication Resource Allocation Algorithm Based on A2C,” Electronics, vol. 12, no. 2, p. 360, Jan. 2023, doi: 10.3390/electronics12020360.
  • S. H. A. Kazmi, F. Qamar, R. Hassan, and K. Nisar, “Routing-Based Interference Mitigation in SDN Enabled Beyond 5G Communication Networks: A Comprehensive Survey,” IEEE Access, vol. 11, pp. 4023–4041, 2023, doi: 10.1109/ACCESS.2023.3235366.
  • J. Zhang, W. Gao, G. Chuai, and Z. Zhou, “An Energy-Effective and QoS-Guaranteed Transmission Scheme in UAV-Assisted Heterogeneous Network,” Drones, vol. 7, no. 2, p. 141, Feb. 2023, doi: 10.3390/drones7020141.
  • X. Li, G. Chen, G. Wu, Z. Sun, and G. Chen, “D2D Communication Network Interference Coordination Scheme Based on Improved Stackelberg,” Sustainability, vol. 15, no. 2, p. 961, Jan. 2023, doi: 10.3390/su15020961.
  • D. Han and J. So, “Energy-Efficient Resource Allocation Based on Deep Q-Network in V2V Communications,” Sensors, vol. 23, no. 3, p. 1295, Jan. 2023, doi: 10.3390/s23031295.
  • P. Tam, R. Corrado, C. Eang, and S. Kim, “Applicability of Deep Reinforcement Learning for Efficient Federated Learning in Massive IoT Communications,” Appl. Sci., vol. 13, no. 5, p. 3083, Feb. 2023, doi: 10.3390/app13053083.
  • L. Nagapuri et al., “Energy Efficient Underlaid D2D Communication for 5G Applications,” Electronics, vol. 11, no. 16, p. 2587, Aug. 2022, doi: 10.3390/electronics11162587.
  • N. Khan, I. A. Khan, J. U. Arshed, M. Afzal, M. M. Ahmed, and M. Arif, “5G-EECC: Energy-Efficient Collaboration-Based Content Sharing Strategy in Device-to-Device Communication,” Secur. Commun. Networks, vol. 2022, pp. 1–13, Jan. 2022, doi: 10.1155/2022/1354238.
  • I. Ioannou, C. Christophorou, V. Vassiliou, and A. Pitsillides, “A novel Distributed AI framework with ML for D2D communication in 5G/6G networks,” Comput. Networks, vol. 211, p. 108987, Jul. 2022, doi: 10.1016/j.comnet.2022.108987.
  • M. K. Chamran, K.-L. A. Yau, M. H. Ling, and Y.-W. Chong, “A Hybrid Route Selection Scheme for 5G Network Scenarios: An Experimental Approach,” Sensors, vol. 22, no. 16, p. 6021, Aug. 2022, doi: 10.3390/s22166021.
  • V. Tilwari, T. Song, and S. Pack, “An Improved Routing Approach for Enhancing QoS Performance for D2D Communication in B5G Networks,” Electronics, vol. 11, no. 24, p. 4118, Dec. 2022, doi: 10.3390/electronics11244118.
  • J.-S. Pan, L.-G. Zhang, R.-B. Wang, V. Snášel, and S.-C. Chu, “Gannet optimization algorithm: A new metaheuristic algorithm for solving engineering optimization problems,” Math. Comput. Simul., vol. 202, pp. 343–373, Dec. 2022, doi: 10.1016/j.matcom.2022.06.007.
  • M. Khishe and M. R. Mosavi, “Chimp optimization algorithm,” Expert Syst. Appl., vol. 149, p. 113338, Jul. 2020, doi: 10.1016/j.eswa.2020.113338.


Authors

Md. Tabrej Khan
Faculty of Computer Science, Pacific Academy of Higher Education and Research University, Udaipur, Rajasthan, India
Ashish Adholiya
Faculty of Computer Science, Pacific Academy of Higher Education and Research University, Udaipur, Rajasthan, India




DOI: https://doi.org/10.22247/ijcna/2023/221897