
Efficient Spectrum Utilization in Cognitive Radio Through Reinforcement Learning



Abstract

Machine learning schemes can be employed in cognitive radio systems to intelligently locate spectrum holes using knowledge of the operating environment. In this paper, we formulate a variation of the Actor-Critic Learning algorithm known as the Continuous Actor Critic Learning Automaton (CACLA) and compare it with the Actor-Critic Learning scheme and the existing Q-learning scheme. Simulation results show that the proposed CACLA scheme has lower execution time and achieves higher throughput than the other two schemes.
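The abstract does not reproduce the update equations, but the CACLA variant it names is usually characterized by a single rule: the critic learns a state value by temporal-difference (TD) learning, and the actor moves toward an explored action only when the TD error is positive. The sketch below illustrates that rule under stated assumptions; the state/action encoding, reward, and learning-rate names are illustrative, not taken from the paper.

```python
# Illustrative sketch of a CACLA-style update for channel selection.
# ALPHA, BETA, GAMMA, and the dictionary-based V/actor tables are
# assumptions for this example, not parameters from the paper.

ALPHA = 0.1   # critic (value) learning rate
BETA = 0.1    # actor learning rate
GAMMA = 0.9   # discount factor

def cacla_step(V, actor, state, action, reward, next_state):
    """One CACLA update.

    V     : dict mapping state -> estimated value (the critic)
    actor : dict mapping state -> current preferred continuous action
    The critic is updated by TD(0); the actor is pulled toward the
    explored action only if the TD error is positive, i.e. only when
    the action performed better than expected.
    """
    td_error = reward + GAMMA * V[next_state] - V[state]
    V[state] += ALPHA * td_error          # critic: TD(0) update
    if td_error > 0:                      # CACLA's defining rule
        actor[state] += BETA * (action - actor[state])
    return td_error
```

In a cognitive-radio setting, such a step might run once per sensing interval, with the reward reflecting achieved throughput on the chosen channel; because the actor update is a simple move-toward-action rule gated by the sign of the TD error, each step is cheap, which is consistent with the lower execution time reported for CACLA.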

Keywords

Markov Decision Process, Reinforcement Learning, Q-Learning, Actor-Critic Learning, CACLA.

Authors

Dhananjay Kumar
Department of Information Technology, Anna University, MIT Campus, Chennai, India
Pavithra Hari
Department of Information Technology, Anna University, MIT Campus, Chennai, India
Panbhazhagi Selvaraj
Department of Information Technology, Anna University, MIT Campus, Chennai, India
Sharavanti Baskaran
Department of Information Technology, Anna University, MIT Campus, Chennai, India
