
Effective Web Cache Pre-fetching Technique Using Markov Chain




Authors

Urmi Vasishnav
Department of Information Technology, A D Patel Institute of Technology, New Vallabh Vidyanagar, Gujarat, India
Deven Agravat
Department of Information Technology, G H Patel College of Engineering & Technology, New Vallabh Vidyanagar, Gujarat, India
Sanjay Garg
Department of Computer Engineering, Institute of Technology, Nirma University, Ahmedabad, Gujarat, India

Abstract


With the rapid growth of the World Wide Web and the Internet, improving web server performance has become a problem of primary importance. When a web user requests a resource from a web server, he/she experiences some delay, known as User Perceived Latency (UPL). Web caching is an effective and economical technique for reducing UPL. Web server performance can be improved further by predicting a future request from the current page and moving the predicted resource into the web server's cache; this concept is known as web cache pre-fetching. Web cache pre-fetching based on a Markov tree or a multidimensional matrix has limitations in time complexity and memory, respectively. In this paper, an attempt has been made to use a Markov model and the concept of stationary distribution for web cache pre-fetching. The proposed algorithms are compared with Least Recently Used (LRU). Empirical results based on the implementation confirm that the proposed algorithms outperform LRU.

Keywords


Markov Model, User Perceived Latency, Web Cache Pre-Fetching.
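
Illustrative sketch

To illustrate the general idea described in the abstract, the following is a minimal sketch of first-order Markov-chain prefetching: transition probabilities are estimated from an access log, the most probable next pages for the current page are selected as prefetch candidates, and a stationary distribution is approximated to rank pages by long-run popularity. This is an assumed, simplified illustration; the function names, the log format, and the power-iteration approach are not taken from the paper and do not represent the authors' actual algorithms.

from collections import defaultdict

def build_transition_matrix(access_log):
    """Estimate first-order transition probabilities P(next | current) from a page-request log."""
    counts = defaultdict(lambda: defaultdict(int))
    for current, nxt in zip(access_log, access_log[1:]):
        counts[current][nxt] += 1
    matrix = {}
    for page, successors in counts.items():
        total = sum(successors.values())
        matrix[page] = {nxt: c / total for nxt, c in successors.items()}
    return matrix

def predict_prefetch(matrix, current_page, k=2):
    """Return up to k most probable next pages as prefetch candidates for the cache."""
    successors = matrix.get(current_page, {})
    ranked = sorted(successors.items(), key=lambda kv: kv[1], reverse=True)
    return [page for page, _ in ranked[:k]]

def stationary_distribution(matrix, iterations=100):
    """Approximate the stationary distribution by power iteration,
    giving a global popularity ranking usable for cache admission decisions."""
    pages = sorted({p for p in matrix} | {q for succ in matrix.values() for q in succ})
    dist = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: 0.0 for p in pages}
        for p, mass in dist.items():
            successors = matrix.get(p)
            if not successors:  # dangling page with no observed successor: spread mass uniformly
                for q in pages:
                    new[q] += mass / len(pages)
            else:
                for q, prob in successors.items():
                    new[q] += mass * prob
        dist = new
    return dist

if __name__ == "__main__":
    # Hypothetical access log of page identifiers, for illustration only.
    log = ["index", "news", "sports", "index", "news", "weather", "index", "news", "sports"]
    matrix = build_transition_matrix(log)
    print(predict_prefetch(matrix, "news"))       # e.g. ['sports', 'weather']
    print(stationary_distribution(matrix))        # long-run page popularity estimates

In this sketch the predicted pages would be moved into the web server's cache ahead of the next request, whereas an LRU policy reacts only after a request arrives; that difference is the basis of the comparison reported in the paper.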