
An Ensemble Adaptive Reinforcement Learning Based Efficient Load Balancing in Mobile Ad Hoc Networks


Affiliations
1 Department of Information Technology, Hindusthan College of Engineering and Technology, India
2 Department of Computer Science, Bishop Appasamy College of Arts and Science, India



Abstract

This research work introduces an Ensemble Adaptive Reinforcement Learning (EARL) approach for efficient load balancing in Mobile Ad Hoc Networks (MANETs). Traditional methods often fail to adapt to the dynamic nature of MANETs, leading to congestion and inefficient resource use. EARL employs multiple reinforcement learning agents, trained with Q-learning and Deep Q-Networks (DQN), to optimize routing decisions based on real-time network conditions. The ensemble mechanism combines the strengths of the individual agents, improving adaptability and performance. Simulation results show that EARL significantly outperforms traditional protocols such as AODV and DSR, achieving a higher packet delivery ratio, lower end-to-end delay, higher throughput, better energy efficiency, and reduced packet loss, demonstrating its effectiveness in dynamic network environments.
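The paper itself does not publish its algorithm in this abstract, but the core idea it describes, several Q-learning agents whose value estimates are combined by an ensemble to pick the next hop, can be sketched as follows. All names (`QAgent`, `Ensemble`, the toy node labels and reward values) are illustrative assumptions, not the authors' implementation:

```python
class QAgent:
    """Tabular Q-learning agent estimating route quality for (node, next-hop) pairs."""

    def __init__(self, alpha=0.1, gamma=0.9):
        self.q = {}          # (state, action) -> estimated value
        self.alpha = alpha   # learning rate
        self.gamma = gamma   # discount factor

    def value(self, state, action):
        return self.q.get((state, action), 0.0)

    def update(self, state, action, reward, next_state, next_actions):
        # Standard Q-learning update toward reward + discounted best next value.
        best_next = max((self.value(next_state, a) for a in next_actions), default=0.0)
        old = self.value(state, action)
        self.q[(state, action)] = old + self.alpha * (reward + self.gamma * best_next - old)


class Ensemble:
    """Combine agents by averaging their Q-values, then pick the best next hop."""

    def __init__(self, agents):
        self.agents = agents

    def choose(self, state, actions):
        avg = lambda a: sum(ag.value(state, a) for ag in self.agents) / len(self.agents)
        return max(actions, key=avg)


# Toy scenario (hypothetical): node "n0" can forward via "n1" (congested, low
# reward) or "n2" (lightly loaded, high reward). Each agent learns independently.
agents = [QAgent(alpha=0.5) for _ in range(3)]
for _ in range(50):
    for ag in agents:
        ag.update("n0", "n1", reward=0.2, next_state="dst", next_actions=[])
        ag.update("n0", "n2", reward=0.9, next_state="dst", next_actions=[])

ensemble = Ensemble(agents)
print(ensemble.choose("n0", ["n1", "n2"]))  # prefers the less-loaded hop "n2"
```

In practice the reward signal would encode load-balancing objectives (queue length, residual energy, link delay), and the DQN agents mentioned in the abstract would replace the tabular Q dictionary with a neural approximator; the averaging step here is one plausible ensemble rule among several.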

Keywords

Ad Hoc Networks, Load Balancing, Adaptive Learning, Efficient





Authors

G. Rajiv Suresh Kumar
Department of Information Technology, Hindusthan College of Engineering and Technology, India
G. Arul Geetha
Department of Computer Science, Bishop Appasamy College of Arts and Science, India
